blog.darkstar.work - a simple url encoder/decoder

 http://blog.darkstar.work


2021-03-31

Semantic web lost in history / herstory around 2010, how to reactivate it & a short contract use case

Current hypes, trends and pushed business models in 2020

Let's take a short look at the current hypes, trends and pushed business models for companies:

  1. IoT (Internet of Things), Industry 4.0
    Machines, units, etc. know and probably send their own status to the manufacturer, a service center, at home or somewhere else over the rainbow (in case of ugly DNS injection, warped routing tables [OSPF, BGP] or man-in-the-middle boxes [squid bump, Bluecoat, ...]).
    Those units normally communicate when they need maintenance, when components soon need to be replaced, when a physical, technical or environmental critical limit is reached, when subsystems fail or completely shut down, when the customer operates them improperly or negligently, or they simply report in intervals that they are alive and everything's OK, etc.

    We can find a lot of use cases for many useful applications here; the added value / surplus is often small but still useful! Security, the flood of data, extracting the relevant events and, above all, reacting to the corresponding message are the challenges here.

  2. Cloud
    I like the cloud, but sometimes the cloud feels cloudy, cloudier: obscure dust and smog of keywords and hype instead of real hard technical features that generate great customer surplus and real cash or quality-of-business benefits.

    What are the most common problems, pitfalls and misunderstandings with any cloud?

    • Lack of specific customer requirements and of exact technical specifications from the cloud provider.

      Practical example (what I really need and am thinking about right now):
      I need to implement a state service that saves the current game status of small multi-platform games (e.g. card game Schnapsen, Archon clone, SUPU) in the cloud. If I play on my Android tablet or smartphone, the current state and course of the game are transferred to a cloud service and persisted in cloud storage or a database; if I continue playing on a Windows desktop, the last state of the game is automatically fetched from the cloud, the game application restores that game state, and the state is transferred back to the cloud after the next move. (A minimal sketch of such a state service follows after this list.)

      For my purposes the cheapest way will be sufficient:
      A simple Amazon Linux 2 AMI (HVM), SSD volume, type t2.nano or t2.micro, with a simple SQL database (no matter whether a hackish install on the virtual image or the smallest Amazon DB instance) and some kind of open-source PHP Swagger API, like https://github.com/andresharpe/quick-api

      With similar requirements, some developers and most managers are probably unsure whether to use Amazon ElastiCache, some kind of session state server ported to Azure SQL, or an entirely different, less well-known cloud service.

      Even fewer people (including myself) know the exact technical limits of the individual services, e.g. the exact performance and scalable elasticity of Amazon ElastiCache, and when the scalability of Amazon ElastiCache is completely irrelevant because network traffic and network data volume will always be the bottleneck in that specific scenario.

    • Nice advertising slogans, but poor performance and poorly configurable options from drive and storage providers; no standard network mount (like SMB or NFS), but a lot of cloudy magic.
      When I look at the beautiful Google Drive and Microsoft OneDrive, I immediately see that this is not a classic network file system mount as I understand one.
      Reading and writing large blocks or deeply, recursively nested directory trees is extremely slow.
      I also didn't find an option to have an incremental version-history backup created after every change (create, delete, change) to the cloud drive, or at least simple midnight-generated backups for the last 3 months at low cost.
      Options like synchronizing parts of the cloud drive locally create horror nightmares.
      In 1997 we booted NFS from the USA via Etherboot with real-time DOS and debugged fast Doom clones and other 3D games in C++ with some ASM routines, and the network performance didn't fail.

    • Difficulty finding the most suitable service in the cloud jungle. Real technical comparisons between the hard facts and the possibilities of the individual cloud services have so far mainly been published by free bloggers, e.g. AWS Lambda vs Azure Service Fabric.
      Decisions are based more on creeds (we have a lot of .NET developers, so we'll use Azure; I like strong Amazon-like power women and Linux, so I'll use Amazon).
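
For the game-state example above, here is a minimal sketch of what such a state service could look like, assuming ASP.NET Core minimal APIs; the routes and the GameState record are made up for illustration, and the in-memory dictionary stands in for the cloud storage or database:

// Hypothetical game-state service sketch (ASP.NET Core minimal API)
using System;
using System.Collections.Concurrent;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// playerId -> last persisted game state (stand-in for the cloud database)
var store = new ConcurrentDictionary<string, GameState>();

// Persist the current state after each move (tablet, phone or desktop client)
app.MapPut("/gamestate/{playerId}", (string playerId, GameState state) =>
{
    store[playerId] = state;
    return Results.NoContent();
});

// Fetch the last persisted state when the player resumes on another device
app.MapGet("/gamestate/{playerId}", (string playerId) =>
    store.TryGetValue(playerId, out var state) ? Results.Ok(state) : Results.NotFound());

app.Run();

// Hypothetical shape of the persisted state; a real game would define its own fields
public record GameState(string Game, string SerializedState, DateTime LastMoveUtc);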



Let's go back to RDF in the years 2010 - 2015

You probably know what semantic web, OWL, RDF, and machine-readable & understandable data are, right?

If not, then here is a little bit of reading:
https://www.w3.org/TR/rdf-mt/
https://www.w3.org/TR/2010/WD-rdb2rdf-ucr-20100608/
https://en.wikipedia.org/wiki/RDF_Schema

What does 'going back in time' mean in a large scope and broader sense?

Before all the new trends, there was an attempt to realize the semantic web, e.g. RDF as an XML extension with semantic, machine-understandable properties.

When we as human beings read a website or analyze BIG data, we automatically recognize the context, perhaps the meaning and probably the significance of this data.

Machines can't do that; until now they have had no semantic understanding.

Information theory distinguishes between 3 levels of information:

  1. Syntax level (compiler construction, scanners, parsers, automata & formal languages, extended grammars, e.g. https://en.wikipedia.org/wiki/Context-free_grammar)
  2. Semantic level (meaningful reading by people, putting things into context, understanding contexts).
  3. Pragmatic level (goal-oriented action based on information by recognizing the meaning at the level of semantics).

I assume / postulate:

=> If computer programs reach level 2 (semantics) => then we are a big step closer to realizing AI.


Why can't computer programs already implement semantic understanding?

Theoretically, and practically in terms of technical frameworks, computer programs are already able to understand data semantically.

The concrete problem here is that too little data is provided in a semantically machine-readable form and in a standardized (meta-)language.

Example: Statistics Austria, Open Data GV, the ECB, etc. certainly provide very good statistics, but none of them provide semantic machine-readable data.

Somebody should found a larger startup here to transform the already provided, commonly needed public statistics data into a semantically machine-readable form.
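
As a hedged illustration of what such a transformation could produce, here is a small sketch assuming the open-source dotNetRDF library; the subject URI, the SDMX measure predicate and the value are made-up examples, not real published figures:

// Sketch: emit one statistic as an RDF triple with dotNetRDF (values are invented)
using System;
using VDS.RDF;
using VDS.RDF.Writing;

class StatsToRdf
{
    static void Main()
    {
        IGraph g = new Graph();

        // A real transformer would read subject and value from the published
        // statistics (Statistics Austria, Open Data GV, ECB, ...)
        INode subj = g.CreateUriNode(UriFactory.Create("http://example.org/stats/at/unemployment/2020"));
        INode pred = g.CreateUriNode(UriFactory.Create("http://purl.org/linked-data/sdmx/2009/measure#obsValue"));
        INode obj = g.CreateLiteralNode("5.4", UriFactory.Create("http://www.w3.org/2001/XMLSchema#decimal"));

        g.Assert(new Triple(subj, pred, obj));

        // Serialize as RDF/XML so any semantic web client can consume it
        new RdfXmlWriter().Save(g, "unemployment2020.rdf");
    }
}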


What does that actually bring in return for a short-term investment?

Not so much immediately!

What could be the fruits in the mid-term or in the long tail?

A lot, because when more and more public open data are available in a semantic machine-readable language, programs can also bring some of them into association, and we'll then be able to implement meta-relational AI rules.

Use case contracts

Semantic programs don't only mean fetching mass data and putting it into relation.
Semantic programming could in principle also support any very formal business process, e.g. contract management for

  • employment contracts
  • supply contracts
  • service provider contracts
  • insurance contracts
  • bank contracts 
  • and all the contracts that people sign with just one click on a checkbox (usually without knowing that this is a legally binding contract consent).


All clear and understandable so far?

I got this idea when I was looking at and inspecting the adopted amendments and newly passed laws of the democratic republic of Austria, and I wanted some semantically automated statistics for some use cases.

For details look at post https://www.facebook.com/heinrich.elsigan.9/posts/156458906318533
or read the post copy below:

2021-02-27

Sleep disorders, nocturnal research and IBM Security

Due to nocturnal sleep disorders I did a little research, since I was far too tired to compile a completely new Android system image anyway.

In the process I found these nice IBM Security products and a few other things:


IBM's still alive


IBM Security Trusteer Mobile SDK

Here there is a trusted mobile device (Android, iPhone, tablet) SDK (Software Development Kit) that detects compromises (root, hacked, jailbroken, foisted root CAs to break open SSL, and possibly state trojans and the like). In any case, developers should be able to use it to implement more secure, highly available distributed applications for the cloud and so on:

https://www.ibm.com/products/trusteer-mobile-sdk

Mobile device management (MDM) solutions

This is another mobile security story, one that aims to guarantee the security of employees' mobile devices: no "phoning home" by foisted third-party apps, classic end-to-endpoint security, and so on.

https://www.ibm.com/security/mobile/mobile-device-management

IBM Security Products (case study fraud detection)

IBM further shows that it can also do standardized fraud detection (probably generically extensible)! I guess this fraud detection engine goes beyond insolvency registers, KSV, casino bans, standard credit checks and dubious exotic prepaid or other wire credit cards, and learns continuously (à la virus scanner):

https://www.ibm.com/security/products

nomachine.com

Last but not least, I found a secure terminal server (this time not from IBM) that is also worth a look:

https://www.nomachine.com/product&p=NoMachine%20Enterprise%20Terminal%20Server




Going back to bed now, good night / good morning.


2021-02-05

No global plan for generic multi-language globalization/localization

There is sadly no generic concept for platform- and language-independent multi-language globalization / localization of applications.

Some case studies


Visual Studio WinForms (C#, VB)

Globalization / localization in Windows Forms is implemented via different resource files (.resx).
ResourceManager and resource accessors are created semi-automatically by Visual Studio.

Visual Studio 2019
Resources in .NET


Microsoft and Stack Overflow links about resources, globalization and culture info.
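
A minimal sketch of how such generated resources are consumed at runtime; the resource base name "MyApp.Strings" and the key "Greeting" are hypothetical:

// Sketch: culture-dependent string lookup via ResourceManager
using System;
using System.Globalization;
using System.Resources;
using System.Threading;

class LocalizationDemo
{
    static void Main()
    {
        // Select the UI culture before resources are looked up
        Thread.CurrentThread.CurrentUICulture = new CultureInfo("de-AT");

        // Resolves Strings.de-AT.resx / Strings.de.resx, falling back to the neutral Strings.resx
        var rm = new ResourceManager("MyApp.Strings", typeof(LocalizationDemo).Assembly);
        Console.WriteLine(rm.GetString("Greeting"));
    }
}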

Android Studio (Java, Kotlin)

Globalization / localization is realized in Android Studio via different language directories beneath the res folder:

Use the Translation Editor inside the Resource Manager to build your translations from the Android Studio GUI, see: http://blog.androidrich.com/2016/11/translation-editor-in-android-studio.html



Android also supports right-to-left layout for Arabic or Farsi.

Documentation about globalization / localization in Android

Razor MVC and Blazor Web

Web pages that are built with the .NET framework and deployed to IIS on Windows or Apache on Linux (no matter whether with mod_mono or something else [1]) use a language and country prefix directly after the FQDN and before the path in the URL to route to different language subpages, e.g.:
https://docs.microsoft.com/en-us/aspnet/core/blazor/globalization-localization
https://docs.microsoft.com/de-at/aspnet/core/blazor/globalization-localization
https://docs.microsoft.com/fr-fr/aspnet/core/blazor/globalization-localization
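
In ASP.NET Core, the cultures served by such pages are typically configured via the request localization middleware. A minimal sketch (the culture list is illustrative; mapping the URL prefix to a culture would additionally need a RouteDataRequestCultureProvider):

// Sketch: request localization for an ASP.NET Core / Blazor app
using Microsoft.AspNetCore.Builder;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Cultures this app serves; the default is used when no provider matches
var localizationOptions = new RequestLocalizationOptions()
    .SetDefaultCulture("en-US")
    .AddSupportedCultures("en-US", "de-AT", "fr-FR")
    .AddSupportedUICultures("en-US", "de-AT", "fr-FR");

// Built-in providers read the query string, cookie and Accept-Language header
app.UseRequestLocalization(localizationOptions);

app.Run();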



Multi-language globalization in reality

In reality, on the world wide web and in desktop or other applications, there are many different approaches to how multi-language support and globalization are implemented. Only a few use an auto-translation API like google.com/translate.

Prospects / outlook for standardized globalization?

I don't know. Let's use the crowd to answer that question!

2020-11-02

Snaps, snapd, Snap Store, snapcraft.io

Tonight in the early morning I tried to write an open-OS-based solution to transform almost any modern personal computer or laptop into a simple generic wifi router.

Then I unexpectedly found a community platform that was still unknown to me:

snapcraft.io

Snaps are defined there as app packages for desktop, cloud and IoT that are easy to install, secure, cross-platform and dependency-free. Snaps are discoverable and installable from the Snap Store, the app store for Linux with an audience of millions.


I first tried the search functionality of that platform, and let's say it clearly:
The layout design looks excellent and the community is well visited, but some functions could be better. Let's look at these 2 examples:
searching the common phrase "file",
searching the single character "d".
Search results could also be sorted in a more meaningful & understandable way (a small ranking sketch follows below this list), e.g.:
  1. first show all results where the search phrase directly corresponds to the package app name (no matter whether SQL LIKE (prefix, suffix, substring), SQL fulltext match or Levenshtein distance),
  2. then show all results where the search phrase is found in the shown package description,
  3. lastly show all results that appear like a magical fog or smoke to the user. I know that the search function is probably not returning totally random or badly fuzzy hits, but I guess the search algorithm searches in fields which are not shown in the search result view.
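
A small sketch of the proposed ordering, assuming a hypothetical package shape (the real store model surely differs):

// Sketch: tiered search ranking - exact name, name substring, then description
using System;
using System.Collections.Generic;
using System.Linq;

public record SnapPackage(string Name, string Description);

public static class SnapSearch
{
    public static IEnumerable<SnapPackage> Rank(IEnumerable<SnapPackage> packages, string phrase) =>
        packages
            .Select(p => new
            {
                Package = p,
                Tier = p.Name.Equals(phrase, StringComparison.OrdinalIgnoreCase) ? 0
                     : p.Name.Contains(phrase, StringComparison.OrdinalIgnoreCase) ? 1
                     : p.Description.Contains(phrase, StringComparison.OrdinalIgnoreCase) ? 2
                     : 3   // matched only in hidden fields
            })
            .Where(x => x.Tier < 3)   // drop the "magical fog" hits
            .OrderBy(x => x.Tier)
            .ThenBy(x => x.Package.Name)
            .Select(x => x.Package);
}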




The Snapcraft web site currently provides the snap app package search page, a store, tutorials, documentation, a developer blog, a forum and an IoT special bulletin.


Conclusion: Snapcraft surely has more potential, but additional functionality must be added and, especially, a clearer vision and road-map need to be specified, followed and permanently filled with life. (From my point of view, without additional improvements, vision and road-map, the project and the community won't grow, and after some time this will remain a small circle-of-developers project or even become orphaned.)
So far I nevertheless see this as a good opportunity for developers and the community to get seen and to receive good job offers.

Simply said: thou must very soon have an idea where this should go and what exactly thou want with it. If thou don't want to expand and would rather keep this project as a nice and gentle private hobby, then you can keep everything and don't have to change anything. But if you want to expand and become at least a small player for a long time, ...


2020-08-10

State management in Blazor

There is sufficient documentation for state management in Blazor.
Here is a quick work-through, with all prerequisites, on how to get state management in Blazor working.

Example with Blazored.LocalStorage package


Install nuget package for your project

# Find packages providing "Blazored.LocalStorage"
Find-Package Blazored.LocalStorage

# Install "Blazored.LocalStorage"
Install-Package Blazored.LocalStorage -Project YourProject

Add Blazored.LocalStorage to ConfigureServices in your Startup.cs

using System;
/* ... */
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.DependencyInjection;
/* ... */
using Blazored.LocalStorage;

public class Startup
{
    /* ... */
    public void ConfigureServices(IServiceCollection services)
    {
        /* ... */
        services.AddRazorPages();
        services.AddServerSideBlazor();
        /* ... */
        services.AddBlazoredLocalStorage();
        /* ... */
    }
}

Using LocalStorage inside a razor page / control

@page "/MyPage"
using Blazored.LocalStorage
@inject ILocalStorageService localStorage
@code {
  /* ... */
  Dictionary<stringstring> sessionDict;
  /* ... */
  protected override async Task OnAfterRenderAsync(bool firstRender)  { 
    
if (firstRender)    {
      sessionDict = 
        await localStorage.GetItemAsync<Dictionary<stringstring>>("SessDict");       StateHasChanged();     }    await base.OnAfterRenderAsync(firstRender);   }          protected async Task PersistSessionDict(<Dictionary<stringstring> persistDict) { if (persistDict != sessionDict) { await localStorage.SetItemAsync("SessDict", persistDict); } } }

Using LocalStorage inside an entity class

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
/* ... */
using Blazored.LocalStorage;
using Microsoft.AspNetCore.Components;

public class MyEntity
{
    private ILocalStorageService _localStorage;

    [Inject]
    public ILocalStorageService LS
    {
        get => _localStorage;
        set => _localStorage = value;
    }

    internal async Task<Dictionary<string, string>> GetSessionDict() =>
        await LS.GetItemAsync<Dictionary<string, string>>("SessDict");
    /* ... */
}
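
Note that [Inject] is only honored for Blazor components; for a plain class like MyEntity, constructor injection plus a DI registration is the more reliable route. A small sketch (class and method names are illustrative):

// Sketch: constructor-injection variant, registered in ConfigureServices
using System.Collections.Generic;
using System.Threading.Tasks;
using Blazored.LocalStorage;
using Microsoft.Extensions.DependencyInjection;

public class MyEntityViaCtor
{
    private readonly ILocalStorageService _localStorage;

    public MyEntityViaCtor(ILocalStorageService localStorage) => _localStorage = localStorage;

    public async Task<Dictionary<string, string>> GetSessionDictAsync() =>
        await _localStorage.GetItemAsync<Dictionary<string, string>>("SessDict");
}

public static class MyEntityRegistration
{
    // Call as services.AddMyEntity() from Startup.ConfigureServices
    public static IServiceCollection AddMyEntity(this IServiceCollection services) =>
        services.AddScoped<MyEntityViaCtor>();
}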

Example with ProtectedBrowserStorage package


Install nuget package for your project


# Find packages providing a "ProtectedBrowserStorage" 
Find-Package ProtectedBrowserStorage 

# install "Microsoft.AspNetCore.ProtectedBrowserStorage"
Install-Package Microsoft.AspNetCore.ProtectedBrowserStorage -Project YourProject
# alternatively install "ProtectedBrowserStorage.NETStandard"  
Install-Package ProtectedBrowserStorage.NETStandard -Project YourProject


Add ProtectedBrowserStorage to ConfigureServices in your Startup.cs

using System;
/* ... */
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.DependencyInjection;
/* ... */
using Microsoft.AspNetCore.ProtectedBrowserStorage;

public class Startup
{
    /* ... */
    public void ConfigureServices(IServiceCollection services)
    {
        /* ... */
        services.AddRazorPages();
        services.AddServerSideBlazor();
        /* ... */
        services.AddProtectedBrowserStorage();
        /* ... */
    }
}

Using ProtectedBrowserStorage inside a razor page / control

@page "/MyPage"
@using Microsoft.AspNetCore.ProtectedBrowserStorage
@inject ProtectedSessionStorage ProtectedSessionStore
@code {   /* ... */   Dictionary<string, string> sessionDict;   /* ... */   protected override async TaskOnAfterRenderAsync(boolfirstRender)   {      if(firstRender)     {       sessionDict =         await ProtectedSessionStore.GetAsync<Dictionary<string, string>>("SessDict");       StateHasChanged();     }     await base.OnAfterRenderAsync(firstRender);   }     protected async Task PersistSessionDict(Dictionary<stringstring> persistDict) { if (persistDict != sessionDict) { await ProtectedSessionStore.SetAsync("SessDict", persistDict); } } }


2020-02-15

Semantic Internet: Trends, Facts, Futures, Verification

[Draft] [Concept] [Prototype]

Semantic Internet (formerly known as Semantic Web, see also RDF) makes it possible to record different semantic trends that occur at different sources with a certain frequency within the publicly accessible internet.
Day after day, month after month, semantic contexts are published on the internet. area23 semantic web filters out all semantic statements that occur with a certain frequency from different sources. Furthermore, not all trends and semantically significant events are relevant for most semantic miners.

With area23 semantic web you can filter by region, topic categories, relevance from different sources and subsequent complexity.

A filter for a region can be set similar to Google Trends, e.g. for the United States or for Germany, etc.

Basic main categories are:

  • politics (Brexit, Sinn Féin, Thüringen, ...)
  • sports (soccer, american football events, ...)
  • entertainment (music, cinema, tv, ...)
  • technology
  • business (stock markets, trading, bonds, central bank news, different economic indicators)
  • health
  • lifestyle (eating, drinking, other events)
  • housing (apartments, flats, hotels, camping / caravan sites, vacation rentals, accommodations, e.g.: Airbnb, Wimdu)
  • infrastructure (traffic reports, flights & airport occupancy, train connections, ship & ferry connections)
  • weather (including unexpected temperatures / weather effects, like ice, heavy rain, storms, dry periods, plus environmental disasters, like hurricanes, floods, earthquakes, volcanic eruptions)
  • and many more
Once you have created your filter environment, you can start collecting & recording semantic events.

After some time, collected semantic events will appear, e.g.: 'coronavirus'

In that example, 'coronavirus', the most common and reliable semantic logical statements are shown (extracted from different internet sites / resources), e.g.: number of infections, behavior to stay healthy, flights canceled to / from China, stock market risk for China in the next year.

Every statement extracted from the data pool that directly makes a statement or an assumption about matters other than the coronavirus is then checked against other data sources as to whether it actually has formal fuzzy truth content. So in that (our) example, the flight connections from and to China will be verified immediately as a result. Chinese economic data and the behavior of futures on the Hang Seng, which changed in the period since the outbreak of the coronavirus, will be checked too.

Warning: formally and epistemologically, an extracted statement is not necessarily true, even if 15 different articles from different countries in different languages on the web claim "Coronavirus has negative effects on the current Chinese fiscal year 2020", and even if the outlook for futures on the Hang Seng and the economic data have deteriorated over the same period.


to be continued...



Links about semantic web and similar topics:
https://www.semantic-mediawiki.org/wiki/Semantic_MediaWiki
https://www.opensemanticdata.org/
http://jena.apache.org/
https://www.nngroup.com/articles/user-need-statements/

2019-10-07

query tools openID bearer

invoke-webrequest

$url = "https://webapi.area23.at/api/bearertest"
$headers = @{} 
$headers.Add("Accept","application/json")  
$headers.Add("Authorization", "bearer myBearer")

invoke-webrequest -Uri $url -Headers $headers  
invoke-webrequest -Uri $url -Method GET -Headers $headers 

curl

curl -X GET \
    -H "accept: application/json" \
    -H "Authorization: bearer myBearer" \
    "https://webapi.area23.at/api/bearertest"

wget

wget \ 
    --header="Authorization: Bearer myBearer" \ 
    --header="Content-Type: application/json" \ 
    --no-check-certificate \ 
    https://webapi.area23.at/api/bearertest
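
For completeness, the same request in C# with HttpClient ("myBearer" is the same placeholder as above):

// Sketch: GET with an Authorization bearer header via HttpClient
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class BearerTest
{
    static async Task Main()
    {
        using var client = new HttpClient();
        client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", "myBearer");

        string body = await client.GetStringAsync("https://webapi.area23.at/api/bearertest");
        Console.WriteLine(body);
    }
}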

Postman


2018-12-25

AWS CodePipeline for an Android GitHub project

A short summary of how to create an Amazon code pipeline and build project using an Android Java GitHub repository as source. (Inspired by j-a.f.)

I have chosen the android subtree of my GitHub schnapslet project for this trial.

Log in to the Amazon Web Services console

https://console.aws.amazon.com/codesuite/codepipeline/pipelines?region=us-east-1#

Click on "Create pipeline"

Choose pipeline settings

Enter a "Pipeline name" and a service role for your new pipeline here. Click "Next".

Choose source provider

Choose GitHub, authorize with your GitHub credentials or choose a public GitHub project, choose repository, choose branch, then click "Next".

Add build stage

Choose AWS CodeBuild and click on "Create project".

Create build project

In section "Project" configuration fill out "Project name" (Description - optional).

In section "Environment", I choosed the simplest way with "Managed image" as environment image, "Ubuntu" as operating system, "Android" as runtime, "aws/codebuild/android-java-8:26.1.1" as runtime version, default new service role.

In subsection "Additional configuration", you can enable a VPC on your virtual Ubuntu build server, e.g. if you want to login with ssh; you can select various performance features here, like "15 GB memory, 8 vCPUs" for your build server, you can set manually environment variables here and so on. We didn't need that here for only a simple proof of concepts.


In section "Buildspec" I choosed "Insert build commands", then switched to source editor and edited the following buildspec.yaml:
version: 0.2
phases:
  #install: #commands: # - command
  #pre_build: #commands: # - command
  build:
    commands:
     - sudo chmod 755 $CODEBUILD_SRC_DIR/android/Schnapslet/gradlew
     - $CODEBUILD_SRC_DIR/android/Schnapslet/gradlew init -i
     - $CODEBUILD_SRC_DIR/android/Schnapslet/gradlew build -i
     - $CODEBUILD_SRC_DIR/android/Schnapslet/gradlew build --build-file $CODEBUILD_SRC_DIR/android/Schnapslet/app/build.gradle -i
#post_build: #commands: # - command
#artifacts: #files: # - location
#cache: #paths: # - paths

Finally click "Continue to CodePipeline".

Now click "Next", when you are back again on "Add build stage" site.

Add deploy stage

I skipped that option for that proof of concept.

Review

Review "Pipeline settings", "Add source stage", "Add build stage" and "Add deploy stage" here, and finally click "Create pipeline".

Release change

Finally "Release change".

You can configure your "Build project" separately now here: https://console.aws.amazon.com/codesuite/codebuild/projects?region=us-east-1
e.g. if you want to change your buildspec.yaml or view different build logs.

2018-12-18

Html-Sql-Injection Detection

A very simple prototype of HTML injection detection in MS SQL Server; please note that real detection is much more complex...

If Exists (Select Top 1 object_id From tempdb.sys.tables Where name = '##InjWatch')
  Delete From ##InjWatch
Else
  Create Table ##InjWatch (
    ctext nvarchar(Max), tab varchar(768), col varchar(768)
  );
GO 

Declare InjectCursor Cursor FAST_FORWARD READ_ONLY For 
  Select 'Cast([' + c.name + '] as nvarchar(max))' as c_cast,
    c.name as c_name, '' + s.name + '.[' +T.name + ']' as sT_name
  From sys.tables T
  Inner Join sys.columns c
    On  c.object_id = T.object_id
    and c.max_length > 16 and c.system_type_id In (Select system_type_id From sys.types Where name In ('varchar', 'nvarchar', 'char', 'nchar', 'text', 'ntext'))
  Inner Join sys.schemas s
    On s.schema_id = T.schema_id

Declare @c_cast varchar(1024), @c_name varchar(768), @sT_name varchar(768)
Open InjectCursor
Fetch Next From InjectCursor Into @c_cast, @c_name, @sT_name

While
 (@@FETCH_STATUS = 0)
Begin
  Declare @execSQL nvarchar(max)
  Set @execSQL = 'insert into ##InjWatch (ctext, tab, col) '+
    'select ' + @c_cast + ' as ctext, ''' + @sT_name + ''' as tab, ''' + @c_name + ''' as col ' +
    ' from ' + @sT_name + ' with (nolock) ' +
    ' where (' + @c_cast + ' like ''%<%'' and ' + @c_cast + ' like ''%>%'') ' +
    ' or ' + @c_cast + ' like ''%script:%'' or ' + @c_cast + ' like ''%://%''' +
    ' or ' + @c_cast + ' like ''%href%'' or ' + @c_cast + ' like ''%return %''' +
    ' or ' + @c_cast + ' like ''%mailto:%'''
  Execute sp_executesql @execSQL;
  Fetch Next From InjectCursor Into @c_cast, @c_name, @sT_name
End
Close InjectCursor
Deallocate InjectCursor

Select Distinct * From ##InjWatch
GO 

2018-05-28

Generate WSDL on the fly, with CodeDom instead WSDL:EXE


C#

using System;
using System.CodeDom;
using System.CodeDom.Compiler;
using System.IO;
using System.Text;
using System.Web.Services.Description;

/* ... */
// Read the WSDL and import it into a CodeDOM compile unit
var wsdlDescription = ServiceDescription.Read(YourWSDLFile);
var wsdlImporter = new ServiceDescriptionImporter();
wsdlImporter.ProtocolName = "Soap12"; // might differ
wsdlImporter.AddServiceDescription(wsdlDescription, null, null);
wsdlImporter.Style = ServiceDescriptionImportStyle.Server;
wsdlImporter.CodeGenerationOptions = System.Xml.Serialization.CodeGenerationOptions.GenerateProperties;

var codeNamespace = new CodeNamespace();
var codeUnit = new CodeCompileUnit();
codeUnit.Namespaces.Add(codeNamespace);
var importWarning = wsdlImporter.Import(codeNamespace, codeUnit);

if (importWarning == 0)
{
    // Generate source code (here VB) from the imported CodeDOM tree
    var stringBuilder = new StringBuilder();
    var stringWriter = new StringWriter(stringBuilder);
    var codeProvider = CodeDomProvider.CreateProvider("Vb");
    codeProvider.GenerateCodeFromCompileUnit(codeUnit, stringWriter, new CodeGeneratorOptions());
    stringWriter.Close();
    File.WriteAllText(WhereYouWantYourClass, stringBuilder.ToString(), Encoding.UTF8);
}
else
{
    Console.WriteLine(importWarning);
}
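
Instead of writing the generated source to disk, the compile unit can also be compiled in-memory so the generated proxy types are usable right away. A hedged sketch (the referenced assemblies are assumptions; codeUnit is the unit imported above):

// Sketch: compile the imported CodeDOM unit in-memory
var compileProvider = CodeDomProvider.CreateProvider("CSharp");
var parameters = new CompilerParameters(new[] { "System.dll", "System.Web.Services.dll", "System.Xml.dll" })
{
    GenerateInMemory = true
};
CompilerResults results = compileProvider.CompileAssemblyFromDom(parameters, codeUnit);
if (!results.Errors.HasErrors)
{
    // The generated SOAP proxy types are now reachable via reflection
    Console.WriteLine(results.CompiledAssembly.FullName);
}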


VB

Dim SoapClient As MSSOAPLib30.SoapClient30
Dim XMLDoc As MSXML2.DOMDocument40
Dim vCol As Collection
Dim abc As Variant

Set SoapClient = New MSSOAPLib30.SoapClient30

Set XMLDoc = New MSXML2.DOMDocument40
SoapClient.ClientProperty("ServerHTTPRequest") = True

Call SoapClient.MSSoapInit("http://169.242.82.87:8080/apex/CurveWebService.wsdl", _
    "CurveWebServiceService", "CurveWebService")

SoapClient.ConnectorProperty("Timeout") = 30000
SoapClient.ConnectorProperty("UseSSL") = 0

abc = SoapClient.getCurve("EMGLN", "YC_EUR_LIBOR", "GDAXML")

XMLDoc.validateOnParse = False
XMLDoc.LoadXml abc


https://weblog.west-wind.com/posts/2009/Feb/12/WSDL-Imports-without-WSDLexe