Channel: .NET Blog

The history of the GC configs


Recently, Nick from Stack Overflow tweeted about his experience of using the .NET Core GC configs – he seemed quite happy with them (minus the fact that they are not documented well, which is something I’m talking to our doc folks about). I thought it’d be fun to tell you about the history of the GC configs ‘cause it’s almost the weekend and I want to contribute to your fun weekend reading.

I started working on the GC toward the end of .NET 2.0. At that time we had really, really few public GC configs. “Public” in this context means “officially supported”. I’m talking just gcServer and gcConcurrent, which you could specify as application configs, and I think the retain-VM one, which was exposed as a startup flag, STARTUP_HOARD_GC_VM, not an app config. Those of you who have only worked with .NET Core may not have come across “application configs” – that’s a concept that only exists on .NET, not .NET Core. If you have an .exe called a.exe, and you have a file called a.exe.config in the same directory, then .NET will look at what you specify in this file under the runtime element for things like gcServer or some other non-GC configs.
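As a sketch of what that file looks like (the gcServer/gcConcurrent elements are the real ones; the rest is just the minimal scaffolding):

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- a.exe.config, sitting next to a.exe -->
<configuration>
  <runtime>
    <!-- opt into Server GC with concurrent collections -->
    <gcServer enabled="true"/>
    <gcConcurrent enabled="true"/>
  </runtime>
</configuration>
```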

At that time the official ways to configure the CLR were:

  • App configs, under the runtime element
  • Startup flags when you load the CLR via the Hosting API. Strictly speaking, you could use some of the hosting APIs to customize other aspects of the GC, like providing memory for the GC instead of having the GC acquire memory via the OS VirtualAlloc/VirtualFree APIs, but I will not get into those – the only customer of those was SQLCLR, AFAIK.

(There were actually also other ways, like the machine config, but I will not get into those either as they follow the same idea as the app configs.)

Of course there have always been the (now (fairly) famous) COMPlus environment variables. We used them only for internal testing – it’s easy to specify env vars, and our testing framework read them, set them and unset them as needed. There were actually not many of those either – one example was GCSegmentSize, which was heavily used in testing (to test the expand heap scenario) but not officially supported, so we never documented it as an app config.

I was told that env vars were not a good customer-facing way to config things because people tend to set them and forget to unset them, and then later they wonder why they are seeing some unexpected behavior. And I did see that happen with some internal teams, so this seemed like a reasonable reason.

Startup flags are just a hosting API thing, and the hosting API was something few people had heard of and way fewer used. You could say things like you want to start the runtime with Server GC and domain neutral. It’s a native API, and most of our customers refused to use it when they were recommended to try. Today I’m aware of only one team who’s actively using it – not surprisingly, many people on that team used to work on SQLCLR 😛

For things you could specify as app configs, you could also specify them with env vars or even registry values, because on .NET our internal API to read these configs always checked all 3 places. While we had a different kind of attitude toward configs you could specify via app config, which were considered officially supported, implementation-wise this was great because devs didn’t need to worry about which place the config would be read from – they knew if they added a new config in clrconfigvalues.h, it could be specified via any of the 3 ways automatically.

During the .NET 4.x timeframe we needed to add public configs for things like CPU groups (we started seeing machines with > 64 procs) or creating objects > 2 GB, due to customer requests. Very few customers used these configs, so they could be thought of as special-case configs; in other words, the majority of scenarios were run with no configs aside from gcServer/gcConcurrent.

I was pretty wary of adding new public configs. Adding internal ones was one thing, but actually telling folks about them meant we’d basically be saying we were supporting them forever – in the older versions of .NET the compatibility bar was ultra high. And tooling was of course not as advanced then, so perf analysis was harder to do (most of the GC configs were for perf).

For a long time folks used the 2 major flavors of the GC, Server and Workstation, mostly according to the way they were designed. But you know how the rest of this story goes – folks didn’t exactly use them “as designed originally” anymore. And as the GC diagnostic space advanced, customers were able to debug and understand GC perf better, and they also used .NET in larger, more stressful and more diverse scenarios. So there was more and more desire from them to do more configuration on their own.

The good thing was that Microsoft internally had plenty of customers with very stressful workloads that called for configuration, so I was able to test on these stressful real-world scenarios. Around the time of .NET 4.6 I started adding configs more aggressively. One of our 1st-party customers was running a scenario with many managed processes. They had configured some to use Server GC and others to use Workstation. But there was nothing in between. This was when configs like GCHeapCount/GCNoAffinitize/GCHeapAffinitizeMask were added.

Around that time we also open sourced coreclr. The distinction between “officially supported” configs and internal-only configs was still there in theory – that line had become a little blurry because our customers could now see what internal configs we had 🙂 – but it also took time for Core adoption, so I wasn’t aware of really anyone who was using internal-only configs. Also, we changed the way config values were read – we no longer had the “one API reads them all” approach, so today on Core, where the “official” configs are specified via runtimeconfig.json, you’d need to use a different API and specify both the name in the json and the name for the env var if you want to read from both.
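As a sketch of the official runtimeconfig.json route (the System.GC.* names are the documented Core counterparts of gcServer/gcConcurrent):

```json
{
  "runtimeOptions": {
    "configProperties": {
      "System.GC.Server": true,
      "System.GC.Concurrent": true
    }
  }
}
```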

My development was still mostly on CLR, just because we had very few workloads on CoreCLR at that time and being able to try things on large/stressful workloads was a very valuable thing. Around this time I added a few more configs for various scenarios – notable ones are GCLOHThreshold and GCHighMemPercent. A team had their product running in a process coexisting with a much larger process on the same machine, which had a lot of memory. So the default memory load that the GC considered a “high memory load situation”, which was 90%, worked well for the much larger process but not for them. When there was 10% physical memory left, that was still a huge amount for their process, so I added this config for them to specify a higher value (they specified 97 or 98), which meant their process didn’t need to do full compacting GCs nearly as often.

Core 3.0 was when I unified the source between .NET and .NET Core, so all the configs (“internal” or not) from .NET were made available on Core as well. The json way is obviously the official way to specify a config, but specifying configs via env vars appeared to be becoming more common, especially with folks who work on scenarios with high perf requirements. I know quite a few internal and external customers use them (and I have yet to hear of any incident that involved setting an env var in an undesirable fashion). A few more GC configs were added during Core 3.0 – GCHeapHardLimit, GCLargePages, GCHeapAffinitizeRanges, etc.

One thing that took folks (who used env vars) by surprise was that the number you specify for a config in an env var is interpreted as a hex number, not decimal. As for why it’s this way – it completely predates my time on the runtime team… Since everyone remembered it for sure after they used it wrong the first time 😛, and it was an internal-only thing, no one bothered to change it.
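For example, assuming you want a 1 GiB heap hard limit via the env var route, the value has to be written as hex, since 0x40000000 bytes = 1 GiB:

```
:: Windows; on Linux/macOS use `export COMPlus_GCHeapHardLimit=40000000`
set COMPlus_GCHeapHardLimit=40000000
```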

I am still of the opinion that the officially supported configs should not require you to have internal GC knowledge. Of course internal is up for interpretation – some people might view anything beyond gcServer as internal knowledge. I’m interpreting “not having internal GC knowledge” in this context as “only having general perf knowledge to influence the GC”. For example, GCHeapHardLimit tells the GC how much memory it’s allowed to use; GCHeapCount tells the GC how many cores it’s allowed to use. Memory/CPU usage are general perf knowledge that one already needs to have if they work on perf. GCLOHThreshold is actually violating this policy somewhat so it’s something we’d like to dynamically tune in GC instead of having users specify a number. But that’s work we haven’t done yet.

I don’t want to have configs that would need users to config things like “if this generation’s free list ratio or survival rate is > some threshold I would choose this particular GC to handle collections on that generation; but use this other GC to collect other generations”. That to me is definitely “requiring GC internal knowledge”.

So there you have it – the history of the GC configs in .NET/.NET Core.

The post The history of the GC configs appeared first on .NET Blog.


Announcing .NET Core 3.1 Preview 2


Today, we’re announcing .NET Core 3.1 Preview 2. .NET Core 3.1 will be a small and short release focused on key improvements in Blazor and Windows desktop, the two big additions in .NET Core 3.0. It will be a long term support (LTS) release with an expected final ship date of December 2019.

You can download .NET Core 3.1 Preview 2 on Windows, macOS, and Linux.

ASP.NET Core and EF Core are also releasing updates today.

Visual Studio 16.4 Preview 3 and Visual Studio for Mac 8.4 Preview 3 are also releasing today. They are required updates to use .NET Core 3.1 Preview 2. Visual Studio 16.4 includes .NET Core 3.1, so just updating Visual Studio will give you both releases.

Details:

Improvements

The biggest improvement in this release is support for C++/CLI (AKA “managed C++”). C++/CLI is only enabled on Windows. The changes for C++/CLI are primarily in Visual Studio. You need to install the “Desktop development with C++” workload and the “C++/CLI support” component in order to use C++/CLI. You can see this component selected (it is the last one displayed) in the image below.

This component adds a couple of templates that you can use:

  • CLR Class Library (.NET Core)
  • CLR Empty Project (.NET Core)

If you cannot find them, just search for them in the New Project dialog.

Closing

The primary goal of .NET Core 3.1 is to polish the features and scenarios we delivered in .NET Core 3.0. .NET Core 3.1 will be a long term support (LTS) release, supported for at least 3 years.

Please install and test .NET Core 3.1 Preview 2 and give us feedback. It is not yet supported or recommended for use in production.

If you missed it, check out the .NET Core 3.0 announcement from last month.

The post Announcing .NET Core 3.1 Preview 2 appeared first on .NET Blog.

.NET Core 3 for Windows Desktop


Intro

In September, we released .NET Core support for building Windows desktop applications, including WPF and Windows Forms. Since then, we have been delighted to see so many developers share their stories of migrating desktop applications (and controls libraries) to .NET Core. We constantly hear stories of .NET Windows desktop developers powering their business with WPF and Windows Forms, especially in scenarios where the desktop shines, including:

  • UI-dense forms over data (FOD) applications
  • Responsive low-latency UI
  • Applications that need to run offline/disconnected
  • Applications with dependencies on custom device drivers

This is just the beginning for Windows application development on .NET Core. Read on to learn more about the benefits of .NET Core for building Windows applications.

Why Windows desktop on .NET Core?

.NET Core (and in the future .NET 5, which is built on top of .NET Core) is the future of .NET. We are committed to supporting .NET Framework for years to come; however, it will not be receiving any new features – those will only be added to .NET Core (and eventually .NET 5). To improve the Windows desktop stacks and enable .NET desktop developers to benefit from all the updates of the future, we brought Windows Forms and WPF to .NET Core. They will still remain Windows-only technologies because of tightly coupled dependencies on Windows APIs. But .NET Core, besides being cross-platform, has many other features that can enhance desktop applications.

First of all, all the runtime improvements and language features will be added only to .NET Core and, in the future, to .NET 5. A good example here is C# 8, which became available in .NET Core 3.0. Besides, the .NET Core versions of Windows Forms and WPF will become part of the .NET 5 platform. So, by porting your application to .NET Core today you are preparing it for .NET 5.

Also, .NET Core brings deployment flexibility for your applications with new options that are not available in .NET Framework, such as:

  • Side-by-side deployment. Now you can have multiple .NET Core versions on the same machine and can choose which version each of your apps should target.
  • Self-contained deployment. You can deploy the .NET Core platform with your applications and become completely independent of your end users’ environment – your app has everything it needs to run on any Windows machine.
  • Smaller app sizes. In .NET Core 3 we introduced a new feature called the linker (also sometimes referred to as the trimmer), which analyzes your code and includes in your self-contained deployment only those assemblies from .NET Core that are needed by your application. That way, all platform parts that are not used in your case are trimmed out.
  • Single .exe files. You can package your app and the .NET Core platform all in one .exe file.
  • Improved runtime performance. .NET Core has many performance optimizations compared to .NET Framework. When you think about the history of .NET Core, built initially for web and server workloads, it helps to understand if your application may see noticeable benefits from the runtime optimizations. Specifically, desktop applications with heavy dependencies on File I/O, networking, and database operations will likely see improvements to performance for those scenarios. Some areas where you may not notice much change are in UI rendering performance or application startup performance.

By setting the <PublishSingleFile>, <RuntimeIdentifier> and <PublishTrimmed> properties in the publishing profile you’ll be able to deploy a trimmed self-contained application as a single .exe file, as shown in the example below.

<PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.0</TargetFramework>
    <PublishSingleFile>true</PublishSingleFile>
    <RuntimeIdentifier>win-x64</RuntimeIdentifier>
    <PublishTrimmed>true</PublishTrimmed>
</PropertyGroup>
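With those properties set, publishing is a single command (win-x64 here is just an example runtime identifier; use the one matching your target):

```
dotnet publish -c Release -r win-x64
```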

Differences between .NET Framework desktop and .NET Core desktop

While developing desktop applications, you won’t notice much difference between the .NET Framework and .NET Core versions of WPF and Windows Forms. Part of our effort was to provide functional parity between these platforms in the desktop area and to enhance the .NET Core experience going forward. WPF applications are fully supported on .NET Core and ready for you to use, while we work on minor updates and improvements. For Windows Forms, the runtime part is fully ported to .NET Core and the team is working on the Windows Forms Designer. We are planning to get it ready by the fourth quarter of 2020; for now you can check out the preview version of the designer in Visual Studio 16.4 Preview 3 or later. Don’t forget to set the checkbox in Tools->Options->Preview Features->Use the Preview of Windows Forms designer for .NET Core apps, and restart Visual Studio. Please keep in mind that the experience is limited for now since the work is still in progress.

Breaking changes

There are a few breaking changes between .NET Framework and .NET Core, but most of the code related to Windows Forms and WPF was ported to Core as-is. If you were using components such as WCF Client, Code Access Security, App Domains, Interop or Remoting, you will need to refactor your code if you want to switch to .NET Core.

Another thing to keep in mind: the default output paths on .NET Core are different from those on .NET Framework, so if your code makes assumptions about the file/folder structure of the running app, it’ll probably fail at runtime.

Also, there are changes in how you configure .NET features. Instead of the machine.config file, .NET Core uses a <something>.runtimeconfig.json file that comes with the application and has the same general purpose and similar information. Some configuration sections, such as system.diagnostics, system.net, or system.servicemodel, are not supported, so an app config file will fail to load if it contains any of these sections. This change affects System.Diagnostics tracing and WCF client scenarios, which were commonly configured using XML configuration previously. In .NET Core you’ll need to configure them in code instead. To change behaviors without recompiling, consider setting up tracing and WCF types using values loaded from a Microsoft.Extensions.Configuration source or from appSettings.
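As an illustration of configuring tracing in code rather than XML (the listener choice and file name here are made up for the example):

```csharp
using System.Diagnostics;

// Replaces a <system.diagnostics> XML section from app.config:
Trace.Listeners.Add(new TextWriterTraceListener("app-trace.log"));
Trace.AutoFlush = true;
Trace.TraceInformation("Tracing configured in code, no XML required");
```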

You can find more information on differences between .NET Core and .NET Framework in the documentation.

Getting Started

Check out these short video tutorials:

Porting from .NET Framework to .NET Core

First of all, run the Portability Analyzer and, if needed, update your code to get 100% compatibility with .NET Core. Here are instructions on using the Portability Analyzer. We recommend using source control or backing up your code before you make any changes to your application, in case the refactoring doesn’t go the way you want and you decide to go back to your initial state.

When your application is fully compatible with .NET Core, you are ready to port it. As a starting point, you can try out a tool we created to help automate converting your .NET Framework project(s) to .NET Core – try-convert.

It’s important to remember that this tool is just a starting point in your journey to .NET Core. It is also not a supported Microsoft product. Although it can help you with some of the mechanical aspects of migration, it will not handle all scenarios or project types. If your solution has projects that the tool rejects or fails to convert, you’ll have to port by hand. No worries, we have plenty of tutorials on how to do it (at the end of this section).

The try-convert tool will attempt to migrate your old-style project files to the new SDK style and retarget applicable projects to .NET Core. For your libraries, we leave it up to you to make a call regarding the platform: whether you’d like to target .NET Core or .NET Standard. You can specify it in your project file by updating the value of <TargetFramework>. Libraries without .NET Core-specific dependencies like WPF or Windows Forms may benefit from targeting .NET Standard:

<TargetFramework>netstandard2.1</TargetFramework>

so that they can be used by callers targeting many different .NET Platforms. On the other hand, if a library uses a feature that requires .NET Core (like Windows desktop UI APIs), it’s fine for the library to target .NET Core:

<TargetFramework>netcoreapp3.0</TargetFramework>

try-convert is a global tool that you can install on your machine; once installed, you can call it from the CLI:

C:\> try-convert -p <path to your project>

or

C:\> try-convert -w <path to your solution>

As previously mentioned, if the try-convert tool did not work for you, here are materials on how to port your application by hand.

Videos

Documentation

The post .NET Core 3 for Windows Desktop appeared first on .NET Blog.

.NET Core with Jupyter Notebooks – Available today | Preview 1


When you think about Jupyter Notebooks, you probably think about writing your code in Python, R, Julia, or Scala and not .NET. Today we are excited to announce you can write .NET code in Jupyter Notebooks.

Try .NET has grown to support more interactive experiences across the web with runnable code snippets, an interactive documentation generator for .NET Core with the dotnet try global tool, and now .NET in Jupyter Notebooks.

Build .NET Jupyter Notebooks

To get started with .NET Notebooks, you will need the following:

Please note: If you have the dotnet try global tool already installed, you will need to uninstall before grabbing the kernel enabled version.

  • Install the .NET kernel
    dotnet try jupyter install
  • Check that the .NET kernel is installed: jupyter kernelspec list


  • To start a new notebook, you can either type jupyter lab in the Anaconda prompt or launch a notebook using the Anaconda Navigator.
  • Once Jupyter Lab has launched in your preferred browser, you have the option to create a C# or F# notebook.

Features

The initial set of features we released needed to be relevant to developers with Notebook experience, as well as give users new to the experience a useful set of tools they would be eager to try. Let’s have a look at some of the features we have enabled.

The first thing you need to be aware of is that when writing C# or F# in a .NET Notebook, you will be using C# Scripting or F# Interactive.

You can either explore the features listed below locally on your machine or online using the dotnet/try binder image.

For the online documentation, please go to the Docs subfolder located in C# or F# folders.

List of features

Display output : There are several ways to display output in notebooks. You can use any of the methods demonstrated in the image below.

Object formatters : By default, the .NET notebook experience enables users to display useful information about an object in table format.

HTML output : By default .NET notebooks ship with several helper methods for writing HTML. From basic helpers that enable users to write out a string as HTML or output Javascript to more complex HTML with PocketView.

Importing packages : You can load NuGet packages using the following syntax:

#r "nuget:<package name>,<package version>"

For example:

#r "nuget:Octokit, 0.32.0"
#r "nuget:NodaTime, 2.4.6"
using Octokit;
using NodaTime;
using NodaTime.Extensions;
using XPlot.Plotly;

Charts with XPlot

Charts are rendered using XPlot.Plotly. As soon as users import the XPlot.Plotly namespace into their notebooks (using XPlot.Plotly;), they can begin creating rich data visualizations in .NET.
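A minimal sketch of a chart in a C# notebook cell (the data points are made up; display is the notebook’s built-in output helper):

```csharp
using XPlot.Plotly;

var chart = Chart.Plot(
    new Graph.Scatter
    {
        x = new[] { 1, 2, 3, 4 },
        y = new[] { 10, 15, 13, 17 },
        mode = "lines+markers"
    });
chart.WithTitle("A simple line chart");
display(chart);
```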

Please check the .NET Notebook online for more documentation and samples.

.NET Notebooks are perfect for ML.NET and .NET for Apache® Spark™

.NET notebooks bring iterative, interactive experiences popular in the worlds of machine learning and big data to .NET.

ML.NET

ML.NET with Jupyter Notebooks

.NET notebooks open up several compelling scenarios for ML.NET, like exploring and documenting model training experiments, data distribution exploration, data cleaning, plotting data charts, and learning.

For more details on how you can leverage ML.NET in Jupyter notebooks, check out this blog post on Using ML.NET in Jupyter notebooks. The ML.NET team has put together several online samples for you to get started with.

.NET for Apache® Spark™

Big Data for .NET

Having notebooks support is indispensable when you are dealing with Big data use cases. Notebooks allow data scientists, machine learning engineers, analysts, and anyone else interested in big data to prototype, run, and analyze queries rapidly.

So how can .NET developers and major .NET shops keep up with our data-oriented future? The answer is .NET for Apache Spark, which you can now use from within notebooks!

Today, .NET developers have two options for running .NET for Apache Spark queries in notebooks: Azure Synapse Analytics Notebooks and Azure HDInsight Spark + Jupyter Notebooks. Both experiences allow you to write and run quick ad-hoc queries in addition to developing complete, end-to-end big data scenarios, such as reading in data, transforming it, and visualizing it.

Option 1: Azure Synapse Analytics ships with out-of-the-box .NET support for Apache Spark (C#).

Option 2: Check out the guide on the .NET for Apache Spark GitHub repo to learn how to get started with .NET for Apache Spark in HDInsight + Jupyter notebooks. The experience will look like the image below.

Get Started with the .NET Jupyter Notebooks Today!

The .NET kernel brings the interactive developer experience of Jupyter Notebooks to the .NET ecosystem. We hope you have fun creating .NET notebooks. Please check out our repo to learn more, and let us know what you build.

The post .NET Core with Jupyter Notebooks – Available today | Preview 1 appeared first on .NET Blog.

Announcing ML.NET 1.4 general availability (Machine Learning for .NET)


Coinciding with the Microsoft Ignite 2019 conference, we are thrilled to announce the GA release of ML.NET 1.4 and updates to Model Builder in Visual Studio, with exciting new machine learning features that will allow you to innovate your .NET applications.

ML.NET is an open-source and cross-platform machine learning framework for .NET developers. ML.NET also includes Model Builder (an easy-to-use UI tool in Visual Studio) and the CLI (command-line interface) to make it super easy to build custom machine learning (ML) models using Automated Machine Learning (AutoML).

Using ML.NET, developers can leverage their existing tools and skillsets to develop and infuse custom ML into their applications by creating custom machine learning models for common scenarios like Sentiment Analysis, Price Prediction, Sales Forecast prediction, Customer segmentation, Image Classification and more!

Following are some of the key highlights in this update:

ML.NET Updates

In ML.NET 1.4 GA we have released many exciting improvements and new features that are described in the following sections.

Image classification based on deep neural network retraining with GPU support (GA release)

ML.NET, TensorFlow, NVIDIA-CUDA

This feature enables native DNN (Deep Neural Network) transfer learning with ML.NET targeting image classification.

For instance, with this feature you can create your own custom image classifier model by natively training a TensorFlow model from ML.NET API with your own images.

Image classifier scenario – Train your own custom deep learning model with ML.NET

Image Classification Training diagram

ML.NET uses TensorFlow through the low-level bindings provided by the Tensorflow.NET library. The advantage ML.NET provides is a very simple, high-level API, so with just a couple of lines of C# code you define and train an image classification model. A comparable action using the low-level Tensorflow.NET library would need hundreds of lines of code.

The Tensorflow.NET library is an open source and low-level API library that provides the .NET Standard bindings for TensorFlow. That library is part of the open source SciSharp stack libraries.

The below stack diagram shows how ML.NET is implementing these new features on DNN training.

DNN stack diagram

As the first main scenario for high-level APIs, we are currently providing image classification, but the goal for this new API in the future is to allow easy-to-use DNN training for additional scenarios, such as object detection and other DNN scenarios, by providing a powerful yet simple API.

This image classification feature was initially released in v1.4-preview. Now we’re releasing it as GA, plus we’ve added the following new capabilities for this GA release:

Improvements in v1.4 GA for Image Classification

The main new capabilities in this feature added since v1.4-preview are:

  • GPU support on Windows and Linux. GPU support is based on NVIDIA CUDA. Check the hardware/software prerequisites and the GPU prerequisites installation procedure here. You can also train on CPU if you cannot meet the requirements for GPU.

    • SciSharp TensorFlow redistributable supported for CPU or GPU: ML.NET is compatible with SciSharp.TensorFlow.Redist (CPU training), SciSharp.TensorFlow.Redist-Windows-GPU (GPU training on Windows) and SciSharp.TensorFlow.Redist-Linux-GPU (GPU training on Linux).
  • Predictions on in-memory images: You can make predictions with in-memory images instead of file paths, so you have better flexibility in your app. See a sample web app using in-memory images here.

  • Training early stopping: It stops the training when optimal accuracy has been reached and is no longer improving with additional training cycles (epochs).

  • Learning rate scheduling: The learning rate is an integral and potentially difficult part of deep learning. By providing learning rate schedulers, we give users a way to optimize the learning rate, with high initial values which decay over time. A high initial learning rate helps to introduce randomness into the system, allowing the loss function to better find the global minimum, while the decayed learning rate helps to stabilize the loss over time. We have implemented an exponential decay learning rate scheduler and a polynomial decay learning rate scheduler.

  • Added additional supported DNN architectures to the image classifier: The supported DNN architectures (pre-trained TensorFlow models), used internally as the base for transfer learning, have grown to the following list:

    • Inception V3 (Was available in Preview)
    • ResNet V2 101 (Was available in Preview)
    • Resnet V2 50 (Added in GA)
    • Mobilenet V2 (Added in GA)

Those pre-trained TensorFlow models (DNN architectures) are widely used image recognition models trained on very large image sets such as the ImageNet dataset, and they are the culmination of many ideas developed by multiple researchers over the years. You can now take advantage of them by using our easy-to-use API in .NET.

Example code using the new ImageClassification trainer

The API code example below shows how easily you can train a new TensorFlow model.

Image classifier high level API code example:

// Define model's pipeline with ImageClassification defaults (simplest way)
var pipeline = mlContext.MulticlassClassification.Trainers
      .ImageClassification(featureColumnName: "Image",
                            labelColumnName: "LabelAsKey",
                            validationSet: testDataView)
   .Append(mlContext.Transforms.Conversion.MapKeyToValue(outputColumnName: "PredictedLabel",
                                                         inputColumnName: "PredictedLabel"));

// Train the model
ITransformer trainedModel = pipeline.Fit(trainDataView);

The important line in the above code is the one using the ImageClassification trainer, which, as you can see, is a high-level API where you just need to provide the column that has the images, the column with the labels (the column to predict), and a validation dataset used to calculate quality metrics while training so the model can tune itself (change internal hyper-parameters) as it trains.

There’s another overloaded method for advanced users where you can also specify optional hyper-parameters such as epochs, batchSize, learningRate and other typical DNN parameters, but most users can get started with the simplified API.

Under the covers this model training is based on a native TensorFlow DNN transfer learning from a default architecture (pre-trained model) such as Resnet V2 50. You can also select the one you want to derive from by configuring the optional hyper-parameters.
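As a sketch of that advanced overload (the option names below follow the ImageClassificationTrainer.Options shape in ML.NET 1.4, but treat the exact values as illustrative, not recommendations):

```csharp
var options = new ImageClassificationTrainer.Options
{
    FeatureColumnName = "Image",
    LabelColumnName = "LabelAsKey",
    // Pick the pre-trained base model to transfer-learn from
    Arch = ImageClassificationTrainer.Architecture.ResnetV250,
    Epoch = 50,
    BatchSize = 10,
    LearningRate = 0.01f,
    ValidationSet = testDataView
};

var pipeline = mlContext.MulticlassClassification.Trainers.ImageClassification(options)
    .Append(mlContext.Transforms.Conversion.MapKeyToValue("PredictedLabel"));
```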

For further learning read the following resources:

Database Loader (GA Release)

Database Loader diagram

This feature was previously introduced as a preview and is now released as generally available in v1.4.

The database loader enables loading data from databases into an IDataView and therefore enables model training directly against relational databases. This loader supports any relational database provider supported by System.Data in .NET Core or .NET Framework, meaning that you can use any RDBMS such as SQL Server, Azure SQL Database, Oracle, SQLite, PostgreSQL, MySQL, Progress, etc.

In previous ML.NET releases, you could also train against a relational database by providing data through an IEnumerable collection via the LoadFromEnumerable() API, where the data could come from a relational database or any other source. However, with that approach, you as a developer are responsible for the code that reads from the relational database (such as using Entity Framework or any other approach), and that code must be implemented carefully so data is streamed while training the ML model, as in this previous sample using LoadFromEnumerable().

However, this new database loader provides a much simpler code implementation, since the way it reads from the database and makes data available through the IDataView is provided out-of-the-box by the ML.NET framework. You just need to specify your database connection string, the SQL statement that selects the dataset columns, and the data class to use when loading the data. It is that simple!

Here’s example code on how easily you can now configure your code to load data directly from a relational database into an IDataView which will be used later on when training your model.

// Lines of code for loading data from a database into an IDataView for later model training
//...
string connectionString = @"Data Source=YOUR_SERVER;Initial Catalog=YOUR_DATABASE;Integrated Security=True";

string commandText = "SELECT * FROM SentimentDataset";

// The generic overload maps the query's columns onto the data class below
DatabaseLoader loader = mlContext.Data.CreateDatabaseLoader<SentimentData>();
DbProviderFactory providerFactory = DbProviderFactories.GetFactory("System.Data.SqlClient");
DatabaseSource dbSource = new DatabaseSource(providerFactory, connectionString, commandText);

IDataView trainingDataView = loader.Load(dbSource);

// ML.NET model training code using the training IDataView
//...

public class SentimentData
{
    [LoadColumn(0)]
    public string FeedbackText { get; set; }

    [LoadColumn(1)]
    public string Label { get; set; }
}

It is important to highlight that, in the same way as when training from files, when training with a database ML.NET also supports data streaming. This means the whole database doesn’t need to fit into memory; ML.NET reads from the database as it needs to, so you can handle very large databases (i.e. 50GB, 100GB or larger).

Resources for the DatabaseLoader:

PredictionEnginePool for scalable deployments released as GA


When deploying an ML model into multithreaded and scalable .NET Core web applications and services (such as ASP.NET Core web apps, WebAPIs or an Azure Function) it is recommended to use the PredictionEnginePool instead of directly creating the PredictionEngine object on every request due to performance and scalability reasons.

The PredictionEnginePool comes as part of the Microsoft.Extensions.ML NuGet package which is being released as GA as part of the ML.NET 1.4 release.
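A sketch of the registration in an ASP.NET Core app's Startup.ConfigureServices; the model file path and the SentimentData/SentimentPrediction types are illustrative:

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.ML;

public void ConfigureServices(IServiceCollection services)
{
    // Registers a pool of prediction engines for thread-safe, scalable reuse.
    services.AddPredictionEnginePool<SentimentData, SentimentPrediction>()
        .FromFile(modelName: "SentimentModel", filePath: "MLModels/SentimentModel.zip");
}

// Later, inject PredictionEnginePool<SentimentData, SentimentPrediction> into a
// controller or function and call:
// var prediction = predictionEnginePool.Predict(modelName: "SentimentModel", example: input);
```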

For further details on how to deploy a model with the PredictionEnginePool, read the following resources:

For further background information on why the PredictionEnginePool is recommended, read this blog post.

Enhanced for .NET Core 3.0 – Released as GA


ML.NET now (optionally) builds for .NET Core 3.0. This feature was previously released as preview, but it is now released as GA.

This means ML.NET can take advantage of the new features when running in a .NET Core 3.0 application. The first new feature we are using is the new hardware intrinsics feature, which allows .NET code to accelerate math operations by using processor specific instructions.

Of course, you can still run ML.NET on older versions, but when running on .NET Framework, or .NET Core 2.2 and below, ML.NET uses C++ code that is hard-coded to x86-based SSE instructions. SSE instructions allow for four 32-bit floating-point numbers to be processed in a single instruction.

Modern x86-based processors also support AVX instructions, which allow for processing eight 32-bit floating-point numbers in one instruction. ML.NET’s C# hardware intrinsics code supports both AVX and SSE instructions and will use the best one available. This means when training on a modern processor, ML.NET will now train faster because it can do more concurrent floating-point operations than it could with the existing C++ code that only supported SSE instructions.

Another advantage the C# hardware intrinsics code brings is that when neither SSE nor AVX are supported by the processor, for example on an ARM chip, ML.NET will fall back to doing the math operations one number at a time. This means more processor architectures are now supported by the core ML.NET components. (Note: There are still some components that don’t work on ARM processors, for example FastTree, LightGBM, and OnnxTransformer. These components are written in C++ code that is not currently compiled for ARM processors).

For more information on how ML.NET uses the new hardware intrinsics APIs in .NET Core 3.0, please check out Brian Lui’s blog post Using .NET Hardware Intrinsics API to accelerate machine learning scenarios.

Use ML.NET in Jupyter notebooks


Coinciding with Microsoft Ignite 2019, Microsoft is also announcing new .NET support in Jupyter notebooks, so you can now run any .NET code (C# / F#) in Jupyter notebooks and therefore run ML.NET code in them as well! Under the covers, this is enabled by the new .NET kernel for Jupyter.

The Jupyter Notebook is an open-source web application that allows you to create and share documents that contain live code, visualizations and narrative text.

In terms of ML.NET this is awesome for many scenarios like exploring and documenting model training experiments, data distribution exploration, data cleaning, plotting data charts, learning scenarios such as ML.NET courses, hands-on-labs and quizzes, etc.

You can simply start exploring what kind of data is loaded in an IDataView:

Exploring data in Jupyter

Then you can continue by plotting data distribution in the Jupyter notebook following an Exploratory Data Analysis (EDA) approach:

Plotting in Jupyter

You can also train an ML.NET model and have its training time documented:

Training in Jupyter

Right afterwards you can see the model’s quality metrics in the notebook, and have it documented for later review:

Metrics in Jupyter

Additional examples are ‘plotting the results of predictions vs. actual data’ and ‘plotting a regression line along with the predictions vs. actual data’ for a better and visual analysis:

Jupyter additional

For additional explanation details, check out this detailed blog post:

For a direct “try it out experience”, please go to this Jupyter notebook hosted at MyBinder and simply run the ML.NET code:

Updates for Model Builder in Visual Studio

The Model Builder tool for Visual Studio has been updated to use the latest ML.NET GA version (1.4 GA), plus it includes exciting new features such as the visual experience in Visual Studio for local image classification model training.

Release date: note that at the time of this blog post's publication, this version of Model Builder has not been released yet, but it will be available very soon, just a few days after Microsoft Ignite and the release of ML.NET 1.4 GA.

Model Builder updated to latest ML.NET GA version

Model Builder was updated to use the latest GA version of ML.NET (1.4), and therefore the generated C# code also references ML.NET 1.4 NuGet packages.

Visual and local Image Classification model training in VS

As introduced at the beginning of this blog post, you can locally train an Image Classification model with the ML.NET API. However, when dealing with image files and image folders, the easiest way to do it is with a visual interface like the one provided by Model Builder in Visual Studio, as you can see in the image below:

Model Builder in VS

When using Model Builder to train an image classifier model, you simply select the folder where you have the images to use for training and evaluation (structured as one sub-folder per image class) and start training the model. When training finishes, you get the generated C# code for inference/predictions, and even for training in case you want to train from other environments such as CI pipelines. It's that easy!

Try ML.NET and Model Builder today!


We are excited to release these updates for you and we look forward to seeing what you will build with ML.NET. If you have any questions or feedback, you can ask here at this blog post or at the ML.NET repo at GitHub.

Happy coding!

The ML.NET team.

This blog was authored by Cesar de la Torre, with additional contributions from the ML.NET team.

The post Announcing ML.NET 1.4 general availability (Machine Learning for .NET) appeared first on .NET Blog.

Building Modern Cloud Applications using Pulumi and .NET Core


This is a guest post from the Pulumi team. Pulumi is an open source infrastructure as code tool that helps developers and infrastructure teams work better together to create, deploy, and manage cloud applications using their favorite languages. For more information, see https://pulumi.com/dotnet.

We are excited to announce .NET Core support for Pulumi! This announcement means you can declare cloud infrastructure — including all of Azure, such as Kubernetes, Functions, AppService, Virtual Machines, CosmosDB, and more — using your favorite .NET language, including C#, VB.NET, and F#. This brings the entire cloud to your fingertips without ever having to leave your code editor, while using production-ready “infrastructure as code” techniques.

Infrastructure has become more relevant these days as modern cloud capabilities such as microservices, containers, serverless, and data stores permeate your application’s architecture. The term “infrastructure” covers all of the cloud resources your application needs to run. Modern architectures require thinking deeply about infrastructure while building your application, instead of waiting until afterwards. Pulumi’s approach helps developers and infrastructure teams work together to deliver innovative new functionality that leverages everything the modern cloud has to offer.

Pulumi launched a little over a year ago and recently reached a stable 1.0 milestone. After working with hundreds of companies to get cloud applications into production, .NET has quickly risen to become one of the community's most frequently requested Pulumi features. Especially since many of us on the Pulumi team are early .NET expats, we are thrilled today to make Pulumi available on .NET Core for your cloud engineering needs.

What is Pulumi?

Pulumi lets you use real languages to express your application’s infrastructure needs, using a powerful technique called “infrastructure as code.” Using infrastructure as code, you declare desired infrastructure, and an engine provisions it for you, so that it’s automated, easy to replicate, and robust enough for demanding production requirements. Pulumi takes this approach a step further by leveraging real languages and making modern cloud infrastructure patterns, such as containers and serverless programs, first class and easy.

With Pulumi for .NET you can:

  • Declare infrastructure using C#, VB.NET, or F#.

  • Automatically create, update, or delete cloud resources using Pulumi’s infrastructure as code engine, removing manual point-and-clicking in the Azure UI and ad-hoc scripts.

  • Use your favorite IDEs and tools, including Visual Studio and Visual Studio Code, taking advantage of features like auto-completion, refactoring, and interactive documentation.

  • Catch mistakes early on with standard compiler errors, Roslyn analyzers, and an infrastructure-specific policy engine for enforcing security, compliance, and best practices.

  • Reuse any existing NuGet library, or distribute your own, whether that’s for infrastructure best practices, productivity, or just general programming patterns.

  • Deploy continuously, predictably, and reliably using Azure DevOps Pipelines, GitHub Actions, or one of over a dozen integrations.

  • Build scalable cloud applications by bringing cloud native technologies (Kubernetes, Docker containers, serverless functions, and highly scalable databases such as CosmosDB) into your core development experience, closer to your application code.

Pulumi’s free open source SDK, which includes a CLI and assortment of libraries, enables these capabilities. Pulumi also offers premium features for teams wanting to use Pulumi in production, such as Azure ActiveDirectory integration for identity and advanced policies.

An example: global database with serverless app

Let’s say we want to build a new application that uses Azure CosmosDB for global distribution of data so that performance is great for customers no matter where in the world they are, with a C# serverless application that automatically scales alongside our database. Normally we’d use some other tools to create the infrastructure, such as JSON, YAML, a DSL, or manually point-and-click in the Azure console. This approach is par for the course, but also daunting: it’s complex, unrepeatable, and means we need an infrastructure expert just to get started.

The Pulumi approach is to just write code in our favorite .NET language and let the Pulumi tool handle the rest. For example, this C# code creates an Azure CosmosDB database with a serverless Azure AppService FunctionApp that automatically scales alongside the database:
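The overall shape of such a Pulumi C# program, as a heavily simplified sketch (the resource types and arguments shown are illustrative; the full CosmosDB plus FunctionApp example is considerably richer):

```csharp
using System.Threading.Tasks;
using Pulumi;
using Pulumi.Azure.Core;

class CosmosAppStack : Stack
{
    public CosmosAppStack()
    {
        // Declare desired infrastructure; the Pulumi engine provisions it.
        var resourceGroup = new ResourceGroup("cosmosapp-rg");

        // A CosmosDB account and an auto-scaling FunctionApp would be
        // declared here and wired together through resource outputs.
    }
}

class Program
{
    static Task<int> Main() => Deployment.RunAsync<CosmosAppStack>();
}
```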

After writing this code, you run the Pulumi CLI with the pulumi up command and it will first show you a preview of the infrastructure resources it plans on creating. After confirming, it will deploy your whole application and its required infrastructure in just a few minutes.

Later on, if you need to make any changes, you just edit your program, rerun the CLI, and it will make the necessary incremental changes to update your infrastructure accordingly. A full history of your deployments is recorded so you can easily see what changes have been made.

Why is .NET great for infrastructure too?

Many of us love using .NET to author our applications, so why not use it for infrastructure as code too? In fact, we've already seen some of the advantages of doing so above. Many of these are probably evident if you already know and love .NET; however, let's briefly recap.

By using any .NET language, you get many helpful features for your infrastructure code:

  • Familiarity: No need to learn DSLs or markup templating languages.

  • Expressiveness: Use loops, conditionals, pattern matching, LINQ, async code, and more, to dynamically create infrastructure that meets the target environment’s needs.

  • Abstraction: Encapsulate common patterns into classes and functions to hide complexity and avoid copy-and-pasting the same boilerplate repeatedly.

  • Sharing and reuse: Tap into a community of cloud applications and infrastructure experts, by sharing and reusing NuGet libraries with your team or the global community.

  • Productivity: Use your favorite IDE and get statement completion, go to definition, live error checking, refactoring, static analysis, and interactive documentation.

  • Project organization: Use common code structuring techniques such as assemblies and namespaces to manage your infrastructure across one or more projects.

  • Application lifecycle: Use existing ALM systems and techniques to manage and deploy your infrastructure projects, including source control, code review, testing, and continuous integration (CI) and delivery (CD).

Pulumi unlocks access to the entire .NET ecosystem — something that’s easy to take for granted but is missing from other solutions based on JSON, YAML, DSLs, or CLI scripts. Having access to a full language was essential to enabling the CosmosApp example above, which is a custom component that internally uses classes, loops, lambdas, and even LINQ. This approach also helps developers and operators work better together using a shared foundation. Add all of the above together, and you get things done faster and more reliably.

Join the community and get started

Today we’ve released the first preview of Pulumi for .NET, including support for the entire breadth of Azure services. To give Pulumi a try, visit the Pulumi for .NET homepage.

There you will find instructions on installing and getting started with Pulumi for .NET. The following resources provide additional useful information:

Although Pulumi for .NET is listed in “preview” status, it supports all of the most essential Pulumi programming model features (and the rest is on its way). Our goal is to gather feedback over the next few weeks, and we will be working hard to improve the .NET experience across the board, including more examples and better documentation.

Please join the community in Slack to discuss your scenarios, ideas, and to get any needed assistance from the team and other end users. Pulumi is also open source on GitHub.

This is an exciting day for the Pulumi team and our community. Pulumi was started by some of .NET’s earliest people and so it’s great to get back to our roots and connect with the .NET community, helping developers, infrastructure teams, and operators build better cloud software together.

We look forward to seeing the new and amazing cloud applications you build with Pulumi for .NET!

The post Building Modern Cloud Applications using Pulumi and .NET Core appeared first on .NET Blog.

.NET Framework Repair Tool


Today, we are announcing an update to .NET Framework Repair tool.

In case you are not familiar with previous releases of this tool, here is a bit of background. Occasionally, customers run into issues when deploying a .NET Framework release or its updates, and the issue may not be fixable from within the setup itself.

In such cases, the .NET Framework Repair Tool can help with detecting and fixing some of the common causes of install failures. This version (v1.4) of the repair tool supports all versions of .NET Framework from 3.5 SP1 to 4.8. The latest update adds support for .NET Framework 4.6.2, 4.7, 4.7.1, 4.7.2 and 4.8.

If you had previously downloaded an older version of the tool, we recommend you get the latest version from the following location:

Microsoft .NET Framework Repair Tool

More Information

The tool supports both command line mode for power users as well as a setup wizard interface familiar to most users. You can find more information about how to use the repair tool and its capabilities by visiting the knowledge base article 2698555.

The post .NET Framework Repair Tool appeared first on .NET Blog.

ML.NET Model Builder Updates


ML.NET is a cross-platform, machine learning framework for .NET developers, and Model Builder is the UI tooling in Visual Studio that uses Automated Machine Learning (AutoML) to easily allow you to train and consume custom ML.NET models. With ML.NET and Model Builder, you can create custom machine learning models for scenarios like sentiment analysis, price prediction, and more without any machine learning experience!

ML.NET Model Builder

This release of Model Builder comes with bug fixes and two exciting new features:

  • Image classification scenario – locally train image classification models with your own images
  • Try your model – make predictions on sample input data right in the UI

Image Classification Scenario

We showed off this feature at .NET Conf to classify the weather in images as either sunny, cloudy, or rainy, and now you can locally train image classification models in Model Builder with your own images!

Image classification scenario in Model Builder

For example, say you have a dataset of images of dogs and cats, and you want to use those images to train an ML.NET model that classifies new images as “dog” or “cat.”

Your dataset must contain a parent folder with labelled subfolders for each category (e.g. a folder called Animals that contains two sub-folders: one named Dog, which contains training images of dogs, and one named Cat, which contains training images of cats):

Example Folder Structure

You can use the Next Steps code and projects generated by Model Builder to easily consume the trained image classification model in your end-user application, just like with text scenarios.

Try Your Model

After training a model in Model Builder, you can use the model to make predictions on sample input right in the UI for both text and image scenarios.

For instance, for the dog vs. cat image classification example, you could input an image and see the results in the Evaluate step of Model Builder:

Image try it out

If you have a text scenario, like price prediction for taxi fare, you can also input sample data in the Try your model section:

Text try it out

Give Us Your Feedback

If you run into any issues, feel that something is missing, or really love something about ML.NET Model Builder, let us know by creating an issue in our GitHub repo.

Model Builder is still in Preview, and your feedback is super important in driving the direction we take with this tool!

Get Started with Model Builder

You can download ML.NET Model Builder in the VS Marketplace (or in the Extensions menu of Visual Studio).

Learn more in the ML.NET Docs or get started with this tutorial.

Not currently using Visual Studio? Try out the ML.NET CLI (image classification not yet implemented).

The post ML.NET Model Builder Updates appeared first on .NET Blog.


Announcing .NET Core 3 Preview 2


Today, we are announcing .NET Core 3 Preview 2. It includes new features in .NET Core 3.0 and C# 8, in addition to the large number of new features in Preview 1. ASP.NET Core 3.0 Preview 2  is also released today. C# 8 Preview 2 is part of .NET Core 3 SDK, and was also released last week with Visual Studio 2019 Preview 2.

Download and get started with .NET Core 3 Preview 2 right now on Windows, macOS and Linux.

In case you missed it, we made some big announcements with Preview 1, including adding support for Windows Forms and WPF with .NET Core, on Windows, and that both UI frameworks will be open source. We also announced that we would support Entity Framework 6 on .NET Core, which will come in a later preview. ASP.NET Core is also adding many features including Razor components.

Please provide feedback on the release in the comments below or at dotnet/core #2263. You can see complete details of the release in the .NET Core 3 Preview 2 release notes.

.NET Core 3 will be supported in Visual Studio 2019, Visual Studio for Mac and Visual Studio Code. Visual Studio 2019 Preview 2 was released last week and has support for C# 8. The Visual Studio Code C# Extension (in pre-release channel) was also just updated to support C# 8.

C# 8

C# 8 is a major release of the language, as Mads describes in Do more with patterns in C# 8.0, Take C# 8.0 for a spin and Building C# 8.0. In this post, I’ll cover a few favorites that are new in Preview 2.

Using Declarations

Are you tired of using statements that require indenting your code? No more! You can now write the following code, which attaches a using declaration to the scope of the current statement block and then disposes the object at the end of it.
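For example, a minimal sketch; the reader is disposed automatically when the method returns:

```csharp
using System.IO;

static class FileUtil
{
    public static int CountLines(string path)
    {
        // A using declaration: no braces, no extra indentation; the reader
        // is disposed at the end of the enclosing scope (method exit).
        using var reader = new StreamReader(path);

        int count = 0;
        while (reader.ReadLine() != null)
            count++;
        return count;
    }
}
```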

Switch Expressions

Anyone who uses C# probably loves the idea of a switch statement, but not the syntax. C# 8 introduces switch expressions, which enable the following: terser syntax, returns a value since it is an expression, and fully integrated with pattern matching. The switch keyword is “infix”, meaning the keyword sits between the tested value (here, that’s o) and the list of cases, much like expression lambdas. The following examples use the lambda syntax for methods, which integrates well with switch expressions but isn’t required.

You can see the syntax for switch expressions in the following example:
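A sketch consistent with the description that follows, using a hypothetical Point type:

```csharp
public class Point
{
    public int X { get; set; }
    public int Y { get; set; }
}

public static class Display
{
    public static string Describe(object o) => o switch
    {
        Point { X: 0, Y: 0 }         => "origin",      // type pattern + property pattern
        Point { X: var x, Y: var y } => $"({x}, {y})", // capture the properties
        _                            => "unknown"      // discard pattern (like default)
    };
}
```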

There are two patterns at play in this example. o first matches with the Point type pattern and then with the property pattern inside the {curly braces}. The _ describes the discard pattern, which is the same as default for switch statements.

You can go one step further, and rely on tuple deconstruction and parameter position, as you can see in the following example:
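For example, a sketch in which the tested values are deconstructed into a tuple and matched positionally (the method name and labels are illustrative):

```csharp
public static class Positions
{
    public static string Classify(int x, int y) => (x, y) switch
    {
        (0, 0)                     => "origin",
        (var a, var b) when a == b => "on the diagonal", // tuple capture + guard
        (_, 0)                     => "on the x-axis",   // positional discard
        (0, _)                     => "on the y-axis",
        _                          => "elsewhere"
    };
}
```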

In this example, you can see that you do not need to define a variable or explicit type for each of the cases. Instead, the compiler can match the tuple being tested with the tuples defined for each of the cases.

All of these patterns enable you to write declarative code that captures your intent instead of procedural code that implements tests for it. The compiler becomes responsible for implementing that boring procedural code and is guaranteed to always do it correctly.

There will still be cases where switch statements will be a better choice than switch expressions and patterns can be used with both syntax styles.

Async streams

Async streams are another major improvement in C# 8. They have been changing with each preview and require that the compiler and the framework libraries match to work correctly. You need .NET Core 3.0 Preview 2 to use async streams if you want to develop with either Visual Studio 2019 Preview 2 or the latest preview of the C# extension for Visual Studio Code. If you are using .NET Core 3.0 Preview 2 at the command line, then everything will work as expected.
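A minimal sketch of an async stream, produced with an async iterator and consumed with await foreach:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

public static class Streams
{
    // An async iterator: yields values as asynchronous work completes.
    public static async IAsyncEnumerable<int> RangeAsync(int start, int count)
    {
        for (int i = 0; i < count; i++)
        {
            await Task.Delay(10); // simulate asynchronous work per item
            yield return start + i;
        }
    }

    public static async Task<int> SumAsync()
    {
        int sum = 0;
        await foreach (int n in RangeAsync(1, 3))
            sum += n;
        return sum; // 1 + 2 + 3
    }
}
```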

IEEE Floating-point improvements

Floating point APIs are in the process of being updated to comply with the IEEE 754-2008 revision. The goal of this floating point project is to expose all “required” operations and ensure that they are behaviorally compliant with the IEEE spec.

Parsing and formatting fixes:

  • Correctly parse and round inputs of any length.
  • Correctly parse and format negative zero.
  • Correctly parse Infinity and NaN by performing a case-insensitive check and allowing an optional preceding + where applicable.

New Math APIs:

  • BitIncrement/BitDecrement — corresponds to the nextUp and nextDown IEEE operations. They return the smallest floating-point number that compares greater or lesser than the input (respectively). For example, Math.BitIncrement(0.0) would return double.Epsilon.
  • MaxMagnitude/MinMagnitude — corresponds to the maxNumMag and minNumMag IEEE operations, they return the value that is greater or lesser in magnitude of the two inputs (respectively). For example, Math.MaxMagnitude(2.0, -3.0) would return -3.0.
  • ILogB — corresponds to the logB IEEE operation which returns an integral value, it returns the integral base-2 log of the input parameter. This is effectively the same as floor(log2(x)), but done with minimal rounding error.
  • ScaleB — corresponds to the scaleB IEEE operation which takes an integral value, it returns effectively x * pow(2, n), but is done with minimal rounding error.
  • Log2 — corresponds to the log2 IEEE operation, it returns the base-2 logarithm. It minimizes rounding error.
  • FusedMultiplyAdd — corresponds to the fma IEEE operation, it performs a fused multiply add. That is, it does (x * y) + z as a single operation, thereby minimizing the rounding error. An example would be FusedMultiplyAdd(1e308, 2.0, -1e308) which returns 1e308. The regular (1e308 * 2.0) - 1e308 returns double.PositiveInfinity.
  • CopySign — corresponds to the copySign IEEE operation, it returns the value of x, but with the sign of y.
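A few of these APIs in action; the expected results follow directly from the IEEE definitions above:

```csharp
using System;

public static class IeeeDemo
{
    public static void Run()
    {
        Console.WriteLine(Math.BitIncrement(0.0) == double.Epsilon);  // True
        Console.WriteLine(Math.MaxMagnitude(2.0, -3.0));              // -3
        Console.WriteLine(Math.ILogB(8.0));                           // 3
        Console.WriteLine(Math.ScaleB(3.0, 2));                       // 12
        Console.WriteLine(Math.Log2(8.0));                            // 3
        Console.WriteLine(Math.CopySign(3.0, -1.0));                  // -3

        // Fused multiply-add avoids the intermediate overflow:
        Console.WriteLine(Math.FusedMultiplyAdd(1e308, 2.0, -1e308)); // 1E+308
        Console.WriteLine(double.IsPositiveInfinity((1e308 * 2.0) - 1e308)); // True
    }
}
```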

.NET Platform Dependent Intrinsics

We’ve added APIs that allow access to certain perf-oriented CPU instructions, such as the SIMD or Bit Manipulation instruction sets. These instructions can help achieve big performance improvements in certain scenarios, such as processing data efficiently in parallel. In addition to exposing the APIs for your programs to use, we have begun using these instructions to accelerate the .NET libraries too.
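As a sketch of the style of code this enables, here is a four-float add that uses the SSE intrinsics when the CPU supports them and falls back to scalar code otherwise:

```csharp
using System.Runtime.Intrinsics;
using System.Runtime.Intrinsics.X86;

public static class SimdAdd
{
    // Adds two arrays of four floats, using SSE when supported.
    public static float[] AddFour(float[] left, float[] right)
    {
        var result = new float[4];
        if (Sse.IsSupported)
        {
            Vector128<float> a = Vector128.Create(left[0], left[1], left[2], left[3]);
            Vector128<float> b = Vector128.Create(right[0], right[1], right[2], right[3]);
            Vector128<float> sum = Sse.Add(a, b); // one instruction, four additions
            for (int i = 0; i < 4; i++)
                result[i] = sum.GetElement(i);
        }
        else
        {
            // Scalar fallback, e.g. on processors without SSE
            for (int i = 0; i < 4; i++)
                result[i] = left[i] + right[i];
        }
        return result;
    }
}
```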

The following CoreCLR PRs demonstrate a few of the intrinsics, either via implementation or use:

For more information, take a look at .NET Platform Dependent Intrinsics, which defines an approach for defining this hardware infrastructure, allowing Microsoft, chip vendors or any other company or individual to define hardware/chip APIs that should be exposed to .NET code.

Introducing a fast in-box JSON Writer & JSON Document

Following the introduction of the JSON reader in preview1, we’ve added System.Text.Json.Utf8JsonWriter and System.Text.Json.JsonDocument.

As described in our System.Text.Json roadmap, we plan to provide a POCO serializer and deserializer next.

Utf8JsonWriter

The Utf8JsonWriter provides a high-performance, non-cached, forward-only way to write UTF-8 encoded JSON text from common .NET types like String, Int32, and DateTime. Like the reader, the writer is a foundational, low-level type, that can be leveraged to build custom serializers. Writing a JSON payload using the new Utf8JsonWriter is 30-80% faster than using the writer from Json.NET and does not allocate.

Here is a sample usage of the Utf8JsonWriter that can be used as a starting point:
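A minimal sketch; it uses ArrayBufferWriter<byte>, an array-backed IBufferWriter<byte> implementation (assumed available in System.Buffers; any implementation of the interface works):

```csharp
using System.Buffers;
using System.Text;
using System.Text.Json;

public static class WriterDemo
{
    public static string WriteSample()
    {
        var buffer = new ArrayBufferWriter<byte>();
        using (var writer = new Utf8JsonWriter(buffer))
        {
            writer.WriteStartObject();
            writer.WriteString("name", "temperature");
            writer.WriteNumber("value", 42);
            writer.WriteBoolean("valid", true);
            writer.WriteEndObject();
        } // Dispose flushes any buffered output into the IBufferWriter

        return Encoding.UTF8.GetString(buffer.WrittenSpan);
    }
}
```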

The Utf8JsonWriter accepts IBufferWriter<byte> as the output location to synchronously write the JSON data into, and you, as the caller, need to provide a concrete implementation. The platform does not currently include an implementation of this interface, but we plan to provide one that is backed by a resizable byte array. That implementation would enable synchronous writes, which could then be copied to any stream (either synchronously or asynchronously). If you are writing JSON over the network and include the System.IO.Pipelines package, you can leverage the Pipe-based implementation of the interface called PipeWriter to skip the need to copy the JSON from an intermediary buffer to the actual output.

You can take inspiration from this sample implementation of IBufferWriter<T>. The following is a skeleton array-backed concrete implementation of the interface:
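A sketch of such a skeleton, with deliberately simple capacity management:

```csharp
using System;
using System.Buffers;

// Skeleton array-backed IBufferWriter<byte>: grows a byte[] on demand.
public class ArrayBufferWriterSketch : IBufferWriter<byte>
{
    private byte[] _buffer = new byte[256];
    private int _written;

    public ReadOnlyMemory<byte> WrittenMemory => _buffer.AsMemory(0, _written);

    // The consumer calls Advance after writing into the span/memory it got.
    public void Advance(int count) => _written += count;

    public Memory<byte> GetMemory(int sizeHint = 0)
    {
        EnsureCapacity(sizeHint);
        return _buffer.AsMemory(_written);
    }

    public Span<byte> GetSpan(int sizeHint = 0)
    {
        EnsureCapacity(sizeHint);
        return _buffer.AsSpan(_written);
    }

    private void EnsureCapacity(int sizeHint)
    {
        if (sizeHint < 1) sizeHint = 1;
        if (_buffer.Length - _written < sizeHint)
            Array.Resize(ref _buffer, Math.Max(_buffer.Length * 2, _written + sizeHint));
    }
}
```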

JsonDocument

In preview2, we’ve also added System.Text.Json.JsonDocument, which is built on top of the Utf8JsonReader. The JsonDocument provides the ability to parse JSON data and build a read-only Document Object Model (DOM) that can be queried to support random access and enumeration. The JSON elements that compose the data can be accessed via the JsonElement type, which is exposed by the JsonDocument as a property called RootElement. The JsonElement contains the JSON array and object enumerators along with APIs to convert JSON text to common .NET types. Parsing a typical JSON payload and accessing all its members using the JsonDocument is 2-3x faster than Json.NET, with very few allocations for data that is reasonably sized (i.e. < 1 MB).

Here is a sample usage of the JsonDocument and JsonElement that can be used as a starting point:
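A minimal sketch that parses a small payload and walks it through JsonElement:

```csharp
using System.Text.Json;

public static class DocumentDemo
{
    public static (string Name, int TagCount) ReadSample()
    {
        string json = "{\"name\":\"apple\",\"tags\":[\"fruit\",\"red\"]}";

        // JsonDocument is IDisposable because it pools its internal buffers.
        using (JsonDocument doc = JsonDocument.Parse(json))
        {
            JsonElement root = doc.RootElement;
            string name = root.GetProperty("name").GetString(); // random access

            int count = 0;
            foreach (JsonElement tag in root.GetProperty("tags").EnumerateArray())
                count++; // enumeration of a JSON array

            return (name, count);
        }
    }
}
```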

GPIO Support for Raspberry Pi

We added initial support for GPIO with Preview 1. As part of Preview 2, we have released two NuGet packages that you can use for GPIO programming.

The GPIO package includes APIs for GPIO, SPI, I2C and PWM devices. The IoT bindings package includes device bindings for various chips and sensors, the same ones available at dotnet/iot – src/devices.

Updated serial port APIs were announced as part of Preview 1. They are not part of these packages but are available as part of the .NET Core platform.

Local dotnet tools

Local dotnet tools have been improved in Preview 2. Local tools are similar to dotnet global tools but are associated with a particular location on disk. This enables per-project and per-repository tooling. You can read more about them in the .NET Core 3.0 Preview 1 post.

In this preview, we have added 2 new commands:

  • dotnet new tool-manifest
  • dotnet tool list

To add local tools, you need to add a manifest that will define the tools and versions available. The dotnet new tool-manifest command automates creation of the manifest required by local tools. After this file is created, you can add tools to it via dotnet tool install <packageId>.

The command dotnet tool list lists local tools and their corresponding manifest. The command dotnet tool list -g lists global tools.

There was a change in .NET Core Local Tools between .NET Core 3.0 Preview 1 and .NET Core 3.0 Preview 2. If you tried out local tools in Preview 1 by running a command like dotnet tool restore or dotnet tool install, you need to delete your local tools cache folder before local tools will work correctly in Preview 2. This folder is located at:

On mac, Linux: rm -r $HOME/.dotnet/toolResolverCache

On Windows: rmdir /s %USERPROFILE%\.dotnet\toolResolverCache

If you do not delete this folder, you will receive an error.

Assembly Unloadability

Assembly unloadability is a new capability of AssemblyLoadContext. This new feature is largely transparent from an API perspective, exposed with just a few new APIs. It enables a loader context to be unloaded, releasing all memory for instantiated types, static fields and the assembly itself. An application should be able to load and unload assemblies via this mechanism forever without experiencing a memory leak.

We expect this new capability to be used for the following scenarios:

  • Plugin scenarios where dynamic plugin loading and unloading is required.
  • Dynamically compiling, running and then flushing code. Useful for web sites, scripting engines, etc.
  • Loading assemblies for introspection (like ReflectionOnlyLoad), although MetadataLoadContext (released in Preview 1) will be a better choice in many cases.
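The plugin scenario above can be sketched as follows; the plugin path and type name are hypothetical:

```csharp
using System;
using System.Reflection;
using System.Runtime.Loader;

class PluginHost
{
    static void RunPlugin()
    {
        // Create a collectible load context; only collectible contexts can be unloaded.
        var alc = new AssemblyLoadContext("PluginContext", isCollectible: true);
        try
        {
            // "plugin.dll" and "Plugin.Entry" are illustrative names.
            Assembly asm = alc.LoadFromAssemblyPath(@"C:\plugins\plugin.dll");
            Type entryType = asm.GetType("Plugin.Entry");
            object instance = Activator.CreateInstance(entryType);
            // ... invoke members via reflection ...
        }
        finally
        {
            // Unload is cooperative: collection completes once no references
            // to the context's assemblies, types, or instances remain and
            // the GC has run.
            alc.Unload();
        }
    }
}
```

Keeping all references to the plugin's types scoped inside a method (as above) makes it easier for the runtime to verify that nothing outside the context still roots them.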

More information:

Assembly unloading requires significant care to ensure that all references to managed objects from outside a loader context are understood and managed. When the loader context is requested to be unloaded, any outside references need to have been released so that the loader context is self-contained.

Assembly unloadability was provided in the .NET Framework by Application Domains (AppDomains), which are not supported with .NET Core. AppDomains had both benefits and limitations compared to this new model. We consider this new loader context-based model to be more flexible and higher performance when compared to AppDomains.

Windows Native Interop

Windows offers a rich native API, in the form of flat C APIs, COM and WinRT. We’ve had support for P/Invoke since .NET Core 1.0, and have been adding the ability to CoCreate COM APIs and Activate WinRT APIs as part of the .NET Core 3.0 release. We have had many requests for these capabilities, so we know that they will get a lot of use.

Late last year, we announced that we had managed to automate Excel from .NET Core. That was a fun moment. You can now try this same demo yourself with the Excel demo sample. Under the covers, this demo is using COM interop features like NoPIA, object equivalence and custom marshallers.

Managed C++ and WinRT interop are coming in a later preview. We prioritized getting COM interop built first.

WPF and Windows Forms

The WPF and Windows Forms teams opened up their repositories, dotnet/wpf and dotnet/winforms, respectively, on December 4th, the same day .NET Core 3.0 Preview 1 was released. Much of the last month, beyond holidays, has been spent interacting with the community, merging PRs, and responding to issues. In the background, they’ve been integrating WPF and Windows Forms into the .NET Core build system, including adopting the Arcade SDK. Arcade is an MSBuild SDK that exposes functionality which is needed to build the .NET Platform. The WPF team will be publishing more of the WPF source code over the coming months.

The same teams have been making a final set of changes in .NET Framework 4.8. These same changes have also been added to the .NET Core versions of WPF and Windows Forms.

Visual Studio support

Desktop development on .NET Core 3 requires Visual Studio 2019. We added the WPF and Windows Forms templates to the New Project Dialog to make it easier to start new applications without using the command line.

The WPF and Windows Forms designer teams are continuing to work on an updated designer for .NET Core, which will be part of a Visual Studio 2019 Update.

MSIX Deployment for Desktop apps

MSIX is a new Windows app package format. It can be used to deploy .NET Core 3 desktop applications to Windows 10.

The Windows Application Packaging Project, available in Visual Studio 2019 preview 2, allows you to create MSIX packages with self-contained .NET Core applications.

Note: The .NET Core project file must specify the supported runtimes in the <RuntimeIdentifiers> property:

<RuntimeIdentifiers>win-x86;win-x64</RuntimeIdentifiers>

Install .NET Core 3.0 Previews on Linux with Snap

Snap is the preferred way to install and try .NET Core previews on Linux distributions that support Snap. At present, only X64 builds are supported with Snap. We’ll look at supporting ARM builds, too.

After configuring Snap on your system, run the following command to install the .NET Core SDK 3.0 Preview.

sudo snap install dotnet-sdk --beta --classic

When .NET Core is installed using the Snap package, the default .NET Core command is dotnet-sdk.dotnet, as opposed to just dotnet. The benefit of the namespaced command is that it will not conflict with a globally installed .NET Core version you may have. This command can be aliased to dotnet with:

sudo snap alias dotnet-sdk.dotnet dotnet

Some distros require an additional step to enable access to the SSL certificate. See our Linux Setup for details.

Platform Support

.NET Core 3 will be supported on the following operating systems:

  • Windows Client: 7, 8.1, 10 (1607+)
  • Windows Server: 2012 R2 SP1+
  • macOS: 10.12+
  • RHEL: 6+
  • Fedora: 26+
  • Ubuntu: 16.04+
  • Debian: 9+
  • SLES: 12+
  • openSUSE: 42.3+
  • Alpine: 3.8+

Chip support follows:

  • x64 on Windows, macOS, and Linux
  • x86 on Windows
  • ARM32 on Windows and Linux
  • ARM64 on Linux

For Linux, ARM32 is supported on Debian 9+ and Ubuntu 16.04+. For ARM64, it is the same as ARM32 with the addition of Alpine 3.8. These are the same versions of those distros as are supported for x64. We made a conscious decision to make supported platforms as similar as possible between x64, ARM32 and ARM64.

Docker images for .NET Core 3.0 are available at microsoft/dotnet on Docker Hub. We are in the process of adopting Microsoft Container Registry (MCR). We expect that the final .NET Core 3.0 images will only be published to MCR.

Closing

Take a look at the .NET Core 3.0 Preview 1 post if you missed that. It includes a broader description of the overall release including the initial set of features, which are also included and improved on in Preview 2.

Thanks to everyone that installed .NET Core 3.0 Preview 1. We appreciate you trying out the new version and for your feedback. Please share any feedback you have about Preview 2.

Submit to the Applied F# Challenge!


This post was written by Lena Hall, a Senior Cloud Developer Advocate at Microsoft.

F# Software Foundation has recently announced their new initiative — Applied F# Challenge! We encourage you to participate and send your submissions about F# on Azure through the participation form.

Applied F# Challenge is a new initiative to encourage in-depth educational submissions to reveal more of the interesting, unique, and advanced applications of F#.

The motivation for the challenge is uncovering more advanced and innovative scenarios and applications of F# that we hear about less often:

We primarily hear about people using F# for web development, analytical programming, and scripting. While those are perfect use cases for F#, there are many more brilliant and less covered scenarios where F# has demonstrated its strength. For example, F# is used in quantum computing, cancer research, bioinformatics, IoT, and other domains that are not typically mentioned as often.

You have some time to think about the topic for your submission because the challenge is open from February 1 to May 20 this year.

What should you submit?

Publish a new article or an example code project that covers a scenario where you feel the use of F# is essential or unique. The full eligibility criteria and frequently asked questions are listed in the official announcement.

There are multiple challenge categories you can choose to write about:

  • F# for machine learning and data science.
  • F# for distributed systems.
  • F# in the cloud: web, serverless, containers, etc.
  • F# for desktop and mobile development.
  • F# in your organization or domain: healthcare, finance, games, retail, etc.
  • F# and open-source development.
  • F# for IoT or hardware programming.
  • F# in research: quantum, bioinformatics, security, etc.
  • Out of the box F# topics, scenarios, applications, or examples.

Why should you participate in the challenge?

All submissions will receive F# stickers as a participation reward for contributing to the efforts of improving the F# ecosystem and raising awareness of F# strengths in advanced or uncovered use cases.

Participants with winning submissions in each category will also receive the title of a Recognized F# Expert by F# Software Foundation and a special non-monetary prize.

Each challenge category will be judged by committees that include many notable F# experts and community leaders, including Don Syme, Rachel Blasucci, Evelina Gabasova, Henrik Feldt, Tomas Petricek, and many more.

As the participation form suggests, you will also have an opportunity to be included in a recommended speaker list by F# Software Foundation.

Spread the word

Help us spread the word about the Applied F# Challenge by encouraging others to participate with #AppliedFSharpChallenge hashtag on Twitter!

Announcing ML.NET 0.10 – Machine Learning for .NET



ML.NET is an open-source and cross-platform machine learning framework (Windows, Linux, macOS) for .NET developers. Using ML.NET, developers can leverage their existing tools and skillsets to develop and infuse custom AI into their applications by creating custom machine learning models.

ML.NET allows you to create and use machine learning models targeting common tasks such as classification, regression, clustering, ranking, recommendations and anomaly detection. It also supports the broader open source ecosystem by providing integration with popular deep-learning frameworks like TensorFlow and interoperability through ONNX. Some common use cases of ML.NET are scenarios like sentiment analysis, recommendations, image classification, sales forecasting, etc. Please see our samples for more scenarios.

Today we’re announcing the release of ML.NET 0.10. ( ML.NET 0.1 was released at //Build 2018). Note that ML.NET follows a semantic versioning pattern, so this preview version is 0.10. There will be additional versions such as 0.11 and 0.12 before we release v1.0.

This release focuses on the overall stability of the framework, continuing to refine the API and increase test coverage. As a strategic milestone, we have moved the IDataView components into a new, separate assembly under the Microsoft.Data.DataView namespace, which will favor interoperability in the future.

The main highlights for this blog post are described below in further details:

IDataView as a shared type across libraries in the .NET ecosystem

The IDataView component provides very efficient, compositional processing of tabular data (columns and rows), especially made for machine learning and advanced analytics applications. It is designed to efficiently handle high-dimensional data and large data sets. It is also suitable for single-node processing of data partitions belonging to larger distributed data sets.

For further info on IDataView, read the IDataView design principles.

What’s new in v0.10 for IDataView

In ML.NET 0.10 we have segregated the IDataView component into a single assembly and NuGet package. This is a very important step towards the interoperability with other APIs and frameworks.

Why segregate IDataView from the rest of the ML.NET framework?

This is a very important milestone that will help the ecosystem’s interoperability between multiple frameworks and libraries from Microsoft or third parties. By segregating IDataView, different libraries will be able to reference it and use it from their API, allowing users to pass large volumes of data between two independent libraries.

For example, from ML.NET you can of course consume and produce IDataView instances. But what if you need to integrate with a different framework by creating an IDataView from another API such as any “Data Preparation framework” library? If those frameworks can simply reference a single NuGet package with just the IDataView, then you can directly pass data into ML.NET from those frameworks without having to copy the data into a format that ML.NET consumes. Also, the additional framework wouldn’t depend on the whole ML.NET framework but just reference a very clean package limited to the IDataView.

The image below is an aspirational approach when using IDataView across frameworks in the ecosystem:


Another good example would be any plotting/charting library in the .NET ecosystem that could consume data using IDataView. You could take data that was produced by ML.NET and feed it directly into the plotting library without that library having a direct reference to the whole ML.NET framework. There would be no need to copy, or change the shape of the data at all. And there is no need for this plotting library to know anything about ML.NET.

Basically, IDataView can be an exchange data format which allows producers and consumers to pass large amounts of data in a standardized way.

For additional info check the PR #2220

Support for multiple ‘feature columns’ in recommendations (FFM based)

In previous ML.NET releases, when using the Field-aware Factorization Machine (FFM) trainer (training algorithm), you could only provide a single feature column, as in this sample app.

In the 0.10 release, we’ve added support for multiple ‘feature columns’ in your training dataset when using an FFM trainer, by allowing you to specify those additional column names in the trainer’s ‘Options’ parameter, as shown in the following code snippet:

var ffmArgs = new FieldAwareFactorizationMachineTrainer.Options();

// Create the multiple field names.
ffmArgs.FeatureColumn = nameof(MyObservationClass.MyField1); // First field.
ffmArgs.ExtraFeatureColumns = new[]{ nameof(MyObservationClass.MyField2), nameof(MyObservationClass.MyField3) }; // Additional fields.

var pipeline = mlContext.BinaryClassification.Trainers.FieldAwareFactorizationMachine(ffmArgs);

var model = pipeline.Fit(dataView);

You can see additional code example details in this code

Additional updates in v0.10 timeframe

Support for returning multiple predicted labels

Until ML.NET v0.9, when predicting (for instance, with a multi-class classification model), you could only predict and return a single label. That’s an issue for many business scenarios. For instance, in an eCommerce scenario, you might want to automatically classify a product and assign it to multiple product categories instead of just a single category.

However, when predicting, ML.NET internally already had a list of the multiple possible predictions, with a score/probability for each in the schema’s data, but the API was simply returning a single predicted label instead of the full list.

Therefore, this improvement allows you to access the schema’s data so you can get a list of the predicted labels, which can then be related to their scores/probabilities provided by the float[] Score array in your Prediction class, such as in this sample prediction class.

For additional info check this code example

Minor updates in 0.10

  • Introducing the Microsoft.ML.Recommender NuGet name instead of Microsoft.ML.MatrixFactorization: Microsoft.ML.Recommender is a better name for NuGet packages based on the scenario (recommendations) rather than the trainer’s name (Microsoft.ML.MatrixFactorization).

  • Added support for using text and sparse input in TensorFlow: Specifically, this adds support for loading a map from a file through a dataview by using ValueMapperTransformer. This provides support for additional scenarios, like a text/NLP scenario in TensorFlowTransform where the model’s expected input is a vector of integers.

  • Added support for unfrozen TensorFlow models in GetModelSchema: For a code example loading an unfrozen TensorFlow model, check it out here.

Breaking changes in ML.NET 0.10

For your convenience, if you are moving your code from ML.NET v0.9 to v0.10, you can check out the breaking changes list that impacted our samples.

Instrumented code coverage tools as part of the ML.NET CI systems

We have also instrumented code coverage tools (using https://codecov.io/) as part of our CI systems and will continue to push for stability and quality in the code.

You can check it out here; the link is also on the home page of the ML.NET repo:


Once you click on that link, you’ll see the current code coverage for ML.NET:


Explore the community samples and share yours!!

As part of the ML.NET Samples repo we also have a special Community Samples page pointing to multiple samples provided by the community. These samples are not maintained by Microsoft but are very interesting and cover additional scenarios not covered by us.

Here’s a screenshot of the current community samples:


There are pretty cool samples like the following:

‘Photo-Search’ WPF app running a TensorFlow model exported to ONNX format


UWP app using ML.NET


Other very interesting samples are:

Share your sample with the ML.NET community!


We encourage you to share your ML.NET demos and samples with the community by submitting a brief description and a URL pointing to your GitHub repo or blog post to the repo issue “Request for your samples!”.

We’ll do the rest and publish it at the ML.NET Community Samples page!

Planning to go to production?


If you are using ML.NET in your app and looking to go into production, you can talk to an engineer on the ML.NET team to:

  1. Get help implementing ML.NET successfully in your application.
  2. Demo your app and potentially have it featured on the .NET Blog, dot.net site, or other Microsoft channel.

Fill out this form and someone from the ML.NET team will contact you.

Get started!


If you haven’t already, get started with ML.NET here.

Next, to go further, explore some other resources:

We appreciate your feedback. Please file issues with any suggestions or enhancements in the ML.NET GitHub repo to help us shape ML.NET and make .NET a great platform of choice for machine learning.

Thanks and happy coding with ML.NET!

The ML.NET Team.

This blog was authored by Cesar de la Torre and Eric Erhardt, with additional contributions from the ML.NET team.

.NET Core February 2019 Updates – 1.0.14, 1.1.11, 2.1.8 and 2.2.2


Today, we are releasing the .NET Core February 2019 Update. These updates contain security and reliability fixes. See the individual release notes for details on included reliability fixes.

Security

Microsoft Security Advisory CVE-2019-0657: .NET Core Domain Spoofing Vulnerability

A domain spoofing vulnerability exists in .NET Framework and .NET Core which causes the meaning of a URI to change when International Domain Name encoding is applied. An attacker who successfully exploited the vulnerability could redirect a URI. This issue affects .NET Core 1.0, 1.1, 2.1 and 2.2.

The security update addresses the vulnerability by disallowing certain Unicode characters in the URI.

Getting the Update

The latest .NET Core updates are available on the .NET Core download page. This update is also included in the Visual Studio 15.0.21 (.NET Core 1.0 and 1.1) and 15.9.7 (.NET Core 1.0, 1.1 and 2.1) updates, which are also releasing today.

See the .NET Core release notes ( 1.0.14 | 1.1.11 | 2.1.8 | 2.2.2 ) for details on the release, including issues fixed and affected packages.

Docker Images

.NET Docker images have been updated for today’s release. The following repos have been updated.

microsoft/dotnet (includes the latest aspnetcore release)
microsoft/dotnet-samples
microsoft/aspnetcore (aspnetcore version 1.0 and 1.1)

Note: Look at the “Tags” view in each repository to see the updated Docker image tags.

Note: You must re-pull base images in order to get updates. The Docker client does not pull updates automatically.

Azure App Services deployment

Deployment of these updates to Azure App Services has begun. We’ll keep this section updated as the regions go live. Deployment to all regions is expected to complete in a few days.

Lifecycle Information

The following lifecycle documents describe Microsoft support policies and should be consulted to ensure that you are using supported .NET Core versions.

.NET Framework February 2019 Security and Quality Rollup


Updated: February 19, 2019

  • The advisory is now resolved with no action needed from Microsoft customers. The issue was not applicable to any valid or supported configuration, and there is no consequence for .NET 4.8 Preview customers. We strive to share timely information to protect our customers’ productivity; in this case, our finding was thankfully of no consequence for customers on supported configurations.

Updated: February 15, 2019

  • A new advisory has been released today for issues customers have reported with .NET 4.8 Preview and this security update for Windows 10 update 1809 installed.

Updated: February 14, 2019

  • Added the previously released update information under Quality and Reliability for easier reference.

 

Yesterday, we released the February 2019 Security and Quality Rollup.

Security

CVE-2019-0613 – Remote Code Execution Vulnerability

This security update resolves a vulnerability in .NET Framework software if the software does not check the source markup of a file. An attacker who successfully exploits the vulnerability could run arbitrary code in the context of the current user. If the current user is logged on by using administrative user rights, an attacker could take control of the affected system. An attacker could then install programs; view, change, or delete data; or create new accounts that have full user rights. Users whose accounts are configured to have fewer user rights on the system could be less affected than users who have administrative user rights.

CVE-2019-0657 – Domain Spoofing Vulnerability

This security update resolves a vulnerability in certain .NET Framework APIs that parse URLs. An attacker who successfully exploits this vulnerability could use it to bypass security logic that’s intended to make sure that a user-provided URL belongs to a specific host name or a subdomain of that host name. This could be used to cause privileged communication to be made to an untrusted service as if it were a trusted service.


Quality and Reliability

This release contains the following previously released quality and reliability improvements from the January 2019 Preview of Quality Rollup.

Getting the Update

The Security and Quality Rollup is available via Windows Update, Windows Server Update Services, Microsoft Update Catalog, and Docker.

Microsoft Update Catalog

You can get the update via the Microsoft Update Catalog. For Windows 10 1507, Windows 10 update 1607, Windows 10 update 1703, Windows 10 update 1709 and Windows 10 update 1803, the .NET Framework updates are part of the Windows 10 Monthly Rollup. For Windows 10 update 1809, the .NET Framework updates are part of the Cumulative Update for .NET Framework, which is not part of the Windows 10 Monthly Rollup.

The following table is for Windows 10 and Windows Server 2016+ versions.

Product Version: Security and Quality Rollup KB

Windows 10 1809 (October 2018 Update), Windows Server 2019: Catalog 4483452
  .NET Framework 3.5, 4.7.2: 4483452
Windows 10 1803 (April 2018 Update): Catalog 4487017
  .NET Framework 3.5, 4.7.2: 4487017
Windows 10 1709 (Fall Creators Update): Catalog 4486996
  .NET Framework 3.5, 4.7.1, 4.7.2: 4486996
Windows 10 1703 (Creators Update): Catalog 4487020
  .NET Framework 3.5, 4.7, 4.7.1, 4.7.2: 4487020
Windows 10 1607 (Anniversary Update), Windows Server 2016: Catalog 4487026
  .NET Framework 3.5, 4.6.2, 4.7, 4.7.1, 4.7.2: 4487026
Windows 10 1507: Catalog 4487018
  .NET Framework 3.5, 4.6, 4.6.1, 4.6.2: 4487018

The following table is for earlier Windows and Windows Server versions.

Product Version: Security and Quality Rollup KB | Security Only Update KB

Windows 8.1, Windows RT 8.1, Windows Server 2012 R2: Catalog 4487080 | Catalog 4487124
  .NET Framework 3.5: 4483459 | 4483484
  .NET Framework 4.5.2: 4483453 | 4483472
  .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2: 4483450 | 4483469
Windows Server 2012: Catalog 4487079 | Catalog 4487122
  .NET Framework 3.5: 4483456 | 4483481
  .NET Framework 4.5.2: 4483454 | 4483473
  .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2: 4483449 | 4483468
Windows 7 SP1, Windows Server 2008 R2 SP1: Catalog 4487078 | Catalog 4487121
  .NET Framework 3.5.1: 4483458 | 4483483
  .NET Framework 4.5.2: 4483455 | 4483474
  .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2: 4483451 | 4483470
Windows Server 2008: Catalog 4487081 | Catalog 4487124
  .NET Framework 2.0, 3.0: 4483457 | 4483482
  .NET Framework 4.5.2: 4483455 | 4483474
  .NET Framework 4.6: 4483451 | 4483470

Docker Images

We are updating the following .NET Framework Docker images for today’s release:

Note: Look at the “Tags” view in each repository to see the updated Docker image tags.

Note: Significant changes have been made with Docker images recently. Please look at .NET Docker Announcements for more information.

Previous Monthly Rollups

The last few .NET Framework Monthly updates are listed below for your convenience:

Help us make the .NET Architecture guides better for you!


Over the last couple of years, we worked with experts to create some incredible architecture guides & reference samples for .NET developers. We focused on Microservices Architecture, Modernizing existing .NET apps, DevOps best practices, ASP.NET web apps, Azure cloud apps, Mobile apps with Xamarin and more.

We’re running a survey to find out how you’re using them and what we could improve. It should take 10 mins or less, and your answers will help us make the guides and reference samples better.


Take the survey now! 

Thank You!
.NET Team

Microsoft’s Developer Blogs are Getting an Update


In the coming days, we’ll be moving our developer blogs to a new platform with a modern, clean design and powerful features that will make it easy for you to discover and share great content. This week, you’ll see the Visual Studio, IoTDev, and Premier Developer blogs move to a new URL, while additional developer blogs will transition over the coming weeks. You’ll continue to receive all the information and news from our teams, including access to all past posts. Your RSS feeds will continue to seamlessly deliver posts, and any of your bookmarked links will redirect to the new URL location.

We’d love your feedback

Our most inspirational ideas have come from this community and we want to make sure our developer blogs are exactly what you need. Share your thoughts about the current blog design by taking our Microsoft Developer Blogs survey and tell us what you think! We would also love your suggestions for future topics that our authors could write about. Please use the survey to share what kinds of information, tutorials, or features you would like to see covered in future posts.

Frequently Asked Questions

For the most-asked questions, see the FAQ below. If you have any additional questions, ideas, or concerns that we have not addressed, please let us know by submitting feedback through the Developer Blogs FAQ page.

Will Saved/Bookmarked URLs Redirect?

Yes. All existing URLs will auto-redirect to the new site. If you discover any broken links, please report them through the FAQ feedback page.

Will My Feed Reader Still Send New Posts?

Yes, your RSS feed will continue to work through your current set-up. If you encounter any issues with your RSS feed, please report it with details about your feed reader.

Which Blogs Are Moving?

This migration involves the majority of our Microsoft developer blogs, so expect to see changes to our Visual Studio, IoTDev, and Premier Developer blogs this week. The rest will be migrated in waves over the next few weeks.

We appreciate all of the feedback so far and look forward to showing you what we’ve been working on! For additional information about this blog migration, please refer to our Developer Blogs FAQ.


RESOLVED: Advisory on February 2019 Security update for Windows 10 update 1809


Final Update 2/19/19 @1:30 PM (PST): This advisory is now resolved with no action needed from Microsoft customers. The issue was not applicable to any valid or supported configuration. There is no consequence for .NET 4.8 Preview customers.

We strive to share timely information to protect our customers’ productivity; in this case, our finding was thankfully of no consequence for customers on supported configurations.

Update 2/15/19 @3:35 PM (PST): As we continue our investigation, we are finding the issue to be restricted to a limited and isolated set of test-only systems that are using non-official versions of the .NET 4.8 Preview. As of 2/15/19 around 12:00 pm (PST) we further tightened our delivery mechanisms to ensure that the February .NET security updates are only deployed to their expected target systems.

The February 2019 Security and Quality Rollup for Windows 10 update 1809 was released earlier this week. We have received multiple customer reports of issues when a customer has installed the .NET Framework 4.8 Preview and then installed the February security update, 4483452. These reports are specific only to the scenario in which a customer has installed both the .NET Framework 4.8 Preview and the February security update.

We are actively working on investigating and addressing this issue. If you installed the February 2019 security update and have not seen any negative behavior, we recommend that you leave your system as-is but closely monitor it and ensure that you apply upcoming .NET Framework updates.

We will continue to update this post as we have new information.

Guidance

We are working on guidance and will update this post as we have new information.

Workaround

There are no known workarounds at this time.

Symptoms

After installing .NET Framework 4.8 Preview and then the February 2019 security update, 4483452, Visual Studio will crash with this error message:

“Exception has been thrown by the target of an invocation.”

.NET Core 1.0 and 1.1 will reach End of Life on June 27, 2019


.NET Core 1.0 was released on June 27, 2016 and .NET Core 1.1 was released on November 16, 2016. As an LTS release, .NET Core 1.0 is supported for three years. .NET Core 1.1 fits into the same support timeframe as .NET Core 1.0. .NET Core 1.0 and 1.1 will reach end of life and go out of support on June 27, 2019, three years after the initial .NET Core 1.0 release.

After June 27, 2019, .NET Core patch updates will no longer include updated packages or container images for .NET Core 1.0 and 1.1. You should plan your upgrade from .NET Core 1.x to .NET Core 2.1 or 2.2 now.

Upgrade to .NET Core 2.1

The supported upgrade path for .NET Core 1.x applications is via .NET Core 2.1 or 2.2. Instructions for upgrading can be found in the following documents:

Note: The migration documents are written for .NET Core 2.0 to 2.1 migration, but equally apply to .NET Core 1.x to 2.1 migration.

.NET Core 2.1 is a long-term support (LTS) release. We recommend that you make .NET Core 2.1 your new standard for .NET Core development, particularly for apps that are not updated often.

.NET Core 2.0 has already reached end-of-life, as of October 1, 2018. It is important to migrate applications to at least .NET Core 2.1.

Microsoft Support Policy

Microsoft has a published support policy for .NET Core. It includes policies for two release types: LTS and Current.

.NET Core 1.0, 1.1 and 2.1 are LTS releases. .NET Core 2.2 is a Current release.

  • LTS releases are designed for long-term support. They include features and components that have been stabilized, requiring few updates over a longer support release lifetime. These releases are a good choice for hosting applications that you do not intend to update often.
  • Current releases include new features that may undergo future change based on feedback. These releases are a good choice for applications in active development, giving you access to the latest features and improvements. You need to upgrade to later .NET Core releases more often to stay in support.

Both types of releases receive critical fixes throughout their lifecycle, for security, reliability, or to add support for new operating system versions. You must stay up to date with the latest patches to qualify for support.

The .NET Core Supported OS Lifecycle Policy defines which Windows, macOS and Linux versions are supported for each .NET Core release.

.NET Framework February 2019 Preview of Quality Rollup


Today, we released the February 2019 Preview of Quality Rollup.

Quality and Reliability

This release contains the following quality and reliability improvements.

CLR

  • Addresses an issue in System.Threading.Timer where a single global queue, protected by a single process-wide lock, caused scalability issues when Timers are used frequently on multi-CPU machines. The fix can be opted into with the AppContext switch below. See instructions for enabling the switch. [750048]
    • Switch name: Switch.System.Threading.UseNetCoreTimer
    • Switch value to enable: true
      Don’t rely on applying the setting programmatically – the switch value is read only once per AppDomain at the time when the System.Threading.Timer type is loaded.
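Because the switch value is read only once, it should be set declaratively. A minimal sketch of an application config file enabling the switch (the file name a.exe.config is illustrative; AppContextSwitchOverrides under the runtime element is the standard mechanism for these switches):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <runtime>
    <!-- Opt into the new timer implementation; read once when System.Threading.Timer is loaded -->
    <AppContextSwitchOverrides value="Switch.System.Threading.UseNetCoreTimer=true" />
  </runtime>
</configuration>
```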

SQL

  • Addresses an issue that caused compatibility breaks seen in some System.Data.SqlClient usage scenarios. [721209]

WPF

  • Improved the memory allocation and cleanup scheduling behavior of the weak-event pattern. The fix can be opted into with the AppContext switches below. See instructions for enabling the switches. [676441]
    • Switch name: Switch.MS.Internal.EnableWeakEventMemoryImprovements
    • Switch name: Switch.MS.Internal.EnableCleanupSchedulingImprovements
    • Switch value to enable: true
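Multiple AppContext switches can be enabled together by separating them with semicolons in a single AppContextSwitchOverrides value, as in this app.config sketch:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <runtime>
    <!-- Enable both WPF weak-event improvements at once -->
    <AppContextSwitchOverrides value="Switch.MS.Internal.EnableWeakEventMemoryImprovements=true;Switch.MS.Internal.EnableCleanupSchedulingImprovements=true" />
  </runtime>
</configuration>
```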


Getting the Update

The Preview of Quality Rollup is available via Windows Update, Windows Server Update Services, and Microsoft Update Catalog.

Microsoft Update Catalog

You can get the update via the Microsoft Update Catalog. For Windows 10 update 1607, Windows 10 update 1703, Windows 10 update 1709, and Windows 10 update 1803, the .NET Framework updates are part of the Windows 10 Monthly Rollup.

The following table is for Windows 10 and Windows Server 2016+ versions.

Product Version: Cumulative Update KB

  • Windows 10 1803 (April 2018 Update): Catalog 4487029
    • .NET Framework 3.5, 4.7.2: 4487029
  • Windows 10 1709 (Fall Creators Update): Catalog 4487021
    • .NET Framework 3.5, 4.7.1, 4.7.2: 4487021
  • Windows 10 1703 (Creators Update): Catalog 4487011
    • .NET Framework 3.5, 4.7, 4.7.1, 4.7.2: 4487011
  • Windows 10 1607 (Anniversary Update) and Windows Server 2016: Catalog 4487006
    • .NET Framework 3.5, 4.6.2, 4.7, 4.7.1, 4.7.2: 4487006

The following table is for earlier Windows and Windows Server versions.

Product Version: Preview of Quality Rollup KB

  • Windows 8.1, Windows RT 8.1, Windows Server 2012 R2: Catalog 4487258
    • .NET Framework 3.5: Catalog 4483459
    • .NET Framework 4.5.2: Catalog 4483453
    • .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2: Catalog 4486545
  • Windows Server 2012: Catalog 4487257
    • .NET Framework 3.5: Catalog 4483456
    • .NET Framework 4.5.2: Catalog 4483454
    • .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2: Catalog 4486544
  • Windows 7 SP1, Windows Server 2008 R2 SP1: Catalog 4487256
    • .NET Framework 3.5.1: Catalog 4483458
    • .NET Framework 4.5.2: Catalog 4483455
    • .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2: Catalog 4486546
  • Windows Server 2008: Catalog 4487259
    • .NET Framework 2.0, 3.0: Catalog 4483457
    • .NET Framework 4.5.2: Catalog 4483455
    • .NET Framework 4.6: Catalog 4486546

Previous Monthly Rollups

The last few .NET Framework Monthly updates are listed below for your convenience:

Introducing Orleans 3.0


This is a guest post from the Orleans team. Orleans is a cross-platform framework for building distributed applications with .NET. For more information, see https://github.com/dotnet/orleans.

We are excited to announce the Orleans 3.0 release. Since Orleans 2.0, a great number of improvements and fixes have gone in, as well as several new features. These changes were driven by the experience of many people running Orleans-based applications in production in a wide range of scenarios and environments, and by the ingenuity and passion of the global Orleans community that always strives to make the codebase better, faster, and more flexible. A BIG Thank You to all who contributed to this release in various ways!

Major changes since Orleans 2.0

Orleans 2.0 was released a little over 18 months ago and since then Orleans has made significant strides. Some of the headline changes since 2.0 are:

  • Distributed ACID transactions — multiple grains can join a transaction regardless of where their state is stored
  • A new scheduler, which alone increased performance by over 30% in some cases
  • A new code generator based on Roslyn code analysis
  • Rewritten cluster membership for improved recovery speed
  • Co-hosting support

As well as many, many other improvements and fixes.

Since the days of working on Orleans 2.0, the team has established a virtuous cycle: implementing or integrating certain features, such as the generic host and named options, in close collaboration with the .NET team before those features are ready to be part of .NET Core releases; contributing feedback and improvements “upstream”; and, in later releases, switching to the final implementations shipped with .NET. During development of Orleans 3.0 this cycle continued, with Bedrock code used by Orleans 3.0.0-beta1 before it finally shipped as part of .NET 3.0. Similarly, support for TLS on TCP socket connections was implemented as part of Orleans 3.0 and is intended to become part of a future release of .NET Core. We view this ongoing collaboration as our contribution to the larger .NET ecosystem, in the true spirit of Open Source.

Networking layer replacement with ASP.NET Bedrock

Support for securing communication with TLS has been a major ask for some time, both from the community as well as from internal partners. With the 3.0 release we are introducing TLS support, available via the Microsoft.Orleans.Connections.Security package. For more information, see the TransportLayerSecurity sample.

Implementing TLS support was a major undertaking due to how the networking layer in previous versions of Orleans was implemented: it could not be easily adapted to use SslStream, which is the most common method for implementing TLS. With TLS as our driving force, we embarked upon a journey to rewrite Orleans’ networking layer.

Orleans 3.0 replaces its entire networking layer with one built on top of Project Bedrock, an initiative from the ASP.NET team. The goal of Bedrock is to help developers to build fast and robust network clients and servers.

The ASP.NET team and the Orleans team worked together to design abstractions which support both network clients and servers, are transport-agnostic, and can be customized using middleware. These abstractions allow us to change the network transport via configuration, without modifying internal, Orleans-specific networking code. Orleans’ TLS support is implemented as a Bedrock middleware and our intention is for this to be made generic so that it can be shared with others in the .NET ecosystem.

Although the impetus for this undertaking was to enable TLS support, we see an approximately 30% improvement in throughput on average in our nightly load tests.

The networking layer rewrite also involved replacing our custom buffer pooling with reliance on MemoryPool<byte> and in making this change, serialization now takes more advantage of Span<T>. Some code paths which previously relied on blocking via dedicated threads calling BlockingCollection<T> are now using Channel<T> to pass messages asynchronously. This results in fewer dedicated threads, moving the work to the .NET thread pool instead.
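To illustrate the pattern (this is a sketch only, not Orleans' actual internal code; the MessagePump type and string payload are hypothetical), a dedicated thread blocking on BlockingCollection&lt;T&gt; can be replaced with an asynchronous loop reading from an unbounded Channel&lt;T&gt;, which runs on the thread pool instead of pinning a thread:

```csharp
using System.Threading.Channels;
using System.Threading.Tasks;

public sealed class MessagePump
{
    private readonly Channel<string> _messages =
        Channel.CreateUnbounded<string>(new UnboundedChannelOptions { SingleReader = true });

    // Producers enqueue synchronously; TryWrite always succeeds on an unbounded channel.
    public void Post(string message) => _messages.Writer.TryWrite(message);

    // The consumer awaits new items instead of blocking a dedicated thread.
    public async Task PumpAsync()
    {
        await foreach (var message in _messages.Reader.ReadAllAsync())
        {
            // Process the message here.
        }
    }

    // Signal that no more messages will be written, ending the consumer loop.
    public void Complete() => _messages.Writer.Complete();
}
```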

The core wire protocol for Orleans has remained fixed since its initial release. With Orleans 3.0, we have added support for progressively upgrading the network protocol via protocol negotiation. The protocol negotiation support added in Orleans 3.0 enables future enhancements, such as customizing the core serializer, while maintaining backwards compatibility. One benefit of the new networking protocol is support for full-duplex silo-to-silo connections rather than the simplex connection pairs established between silos previously. The protocol version can be configured via ConnectionOptions.ProtocolVersion.
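As a sketch of that configuration (the enum value shown is an assumption; consult the ConnectionOptions API for the versions available in your Orleans release), the protocol version can be set on the silo builder:

```csharp
// Inside UseOrleans(siloBuilder => ...); opt into the newer network protocol.
siloBuilder.Configure<ConnectionOptions>(options =>
{
    options.ProtocolVersion = NetworkProtocolVersion.Version2; // assumed enum value
});
```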

Co-hosting via the Generic Host

Co-hosting Orleans with other frameworks, such as ASP.NET Core, in the same process is now easier than before thanks to the .NET Generic Host.

Here is an example of adding Orleans alongside ASP.NET Core to a host using UseOrleans:

var host = new HostBuilder()
  .ConfigureWebHostDefaults(webBuilder =>
  {
    // Configure ASP.NET Core
    webBuilder.UseStartup<Startup>();
  })
  .UseOrleans(siloBuilder =>
  {
    // Configure Orleans
    siloBuilder.UseLocalHostClustering();
  })
  .ConfigureLogging(logging =>
  {
    /* Configure cross-cutting concerns such as logging */
  })
  .ConfigureServices(services =>
  {
    /* Configure shared services */
  })
  .UseConsoleLifetime()
  .Build();

// Start the host and wait for it to stop.
await host.RunAsync();

Using the generic host builder, Orleans will share a service provider with other hosted services. This grants these services access to Orleans. For example, a developer can inject IClusterClient or IGrainFactory into an ASP.NET Core MVC controller and call grains directly from their MVC application.
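For example, a controller might look like the following sketch. The IHelloGrain interface and its SayHello method are hypothetical placeholders for an application-defined grain; IClusterClient itself is registered by UseOrleans in the shared service provider:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Orleans;

// Hypothetical grain interface, defined elsewhere in the application.
public interface IHelloGrain : IGrainWithStringKey
{
    Task<string> SayHello();
}

[ApiController]
[Route("[controller]")]
public class HelloController : ControllerBase
{
    private readonly IClusterClient _client;

    // Injected from the service provider shared with the co-hosted Orleans silo.
    public HelloController(IClusterClient client) => _client = client;

    [HttpGet("{name}")]
    public Task<string> Get(string name) =>
        _client.GetGrain<IHelloGrain>(name).SayHello();
}
```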

This functionality can be used to simplify your deployment topology or to add additional functionality to an existing application. Some teams internally use co-hosting to add Kubernetes liveness and readiness probes to their Orleans silos using the ASP.NET Core Health Checks.

Reliability improvements

Clusters now recover more quickly from failures thanks to extended gossiping. In previous versions of Orleans, silos would send membership gossip messages to other silos, instructing them to update membership information. Gossip messages now include versioned, immutable snapshots of cluster membership. This improves convergence time after a silo joins or leaves the cluster (for example during upgrade, scaling, or after a failure) and alleviates contention on the shared membership store, allowing for quicker cluster transitions. Failure detection has also been improved, with more diagnostic messages and refinements to ensure faster, more accurate detection. Failure detection involves silos in a cluster collaboratively monitoring each other, with each silo sending periodic health probes to a subset of other silos. Silos and clients also now proactively disconnect from silos which have been declared defunct and they will deny connections to such silos.

Messaging errors are now handled more consistently, resulting in prompt errors being propagated back to the caller. This helps developers to discover errors more quickly. For example, when a message cannot be fully serialized or deserialized, a detailed exception will be propagated back to the original caller.

Improved extensibility

Streams can now have custom data adapters, allowing them to ingest data in any format. This gives developers greater control over how stream items are represented in storage. It also gives the stream provider control over how data is written, allowing streams to integrate with legacy systems and/or non-Orleans services.

Grain extensions allow additional behavior to be added to a grain at runtime by attaching a new component with its own communication interface. For example, Orleans transactions use grain extensions to add transaction lifecycle methods, such as Prepare, Commit, and Abort, to a grain transparently to the user. Grain extensions are now also available for Grain Services and System Targets.

Custom transactional state can now declare what roles it is able to fulfil in a transaction. For example, a transactional state implementation which writes transaction lifecycle events to a Service Bus queue cannot fulfil the duties of the transaction manager since it is write-only.

The predefined placement strategies are publicly accessible now, so that any placement director can be replaced during configuration time.

Join the effort

Now that Orleans 3.0 is out the door we are turning our attention to future releases — and we have some exciting plans! Come and join our warm, welcoming community on GitHub and Gitter and help us to make these plans a reality.

Orleans Team

The post Introducing Orleans 3.0 appeared first on .NET Blog.

Continuously deploy and monitor your UWP, WPF, and Windows Forms app with App Center


App Center is an integrated developer solution with the mission of helping developers build better apps. Last week, we announced General Availability support of distribute, analytics and diagnostics service for WPF and Windows Forms desktop applications. We also expanded our existing UWP offerings to include crash and error reporting for sideloaded UWP apps.

In this blog, we’ll highlight how App Center can help you continuously ship higher quality apps. Get started in minutes by creating an App Center account.

Managing your releases

App Center Distribute makes releasing apps quick and simple so you can ship more frequently to gather feedback and continuously improve your app.

Invite your testing teams and end users via email and organize them into groups to easily manage your releases. Simply upload an .app, .appxbundle, .appxupload, .msi, .msix, or .msixupload package to App Center and your end users will receive a link to download the latest release.

Whether it’s a new feature or a bug fix, with App Center Distribute, you can quickly deploy a new release to your users in minutes.

Monitoring app analytics

Once you release your app to your users, how do you know what features your users are using? App Center Analytics provides out of the box usage metrics such as active users, sessions, devices, languages, and more.

You can easily define custom events and properties to understand whether your users are using the new features you built and what trends are occurring, so you can ultimately improve the user experience.
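A custom event with properties can be tracked with a single call to the App Center Analytics SDK; the event name and property values below are illustrative:

```csharp
using System.Collections.Generic;
using Microsoft.AppCenter.Analytics;

// Track a custom event with properties to see which features are used and how.
Analytics.TrackEvent("ReportExported", new Dictionary<string, string>
{
    { "Format", "PDF" },
    { "PageCount", "12" }
});
```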

Diagnosing app issues

No matter how much testing is done, your apps will inevitably ship with bugs. App Center Diagnostics helps you monitor, prioritize, and fix issues in your app so you can be proactive about your app health instead of waiting for negative reviews and complaints from frustrated customers.

Simply integrate the App Center Diagnostics SDK and you’ll see your crashes and errors smartly grouped with the number of users affected, and other important information to help you fix your crashes.

Crashes

App Center will process and display your app crashes with complete stack traces and contextual information that helps you pinpoint the root cause faster. You can add attachments or track specific events to better understand user activity before the crash. App Center Diagnostics also integrates seamlessly with all your current bug tracking tools including JIRA, GitHub, and Azure DevOps so your team can stay organized.

Errors

Not every issue will result in an app crash but these can be just as important in helping you detect and prevent issues. If you know where an issue might occur, you can track non fatal errors by using an App Center method inside a try/catch block.
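A sketch of that pattern, using the SDK's Crashes.TrackError method (the parsing scenario and the "Where" property are illustrative):

```csharp
using System;
using System.Collections.Generic;
using Microsoft.AppCenter.Crashes;

public static class InputHandler
{
    public static int? ParseQuantity(string userInput)
    {
        try
        {
            // Code that might fail, e.g. parsing user input.
            return int.Parse(userInput);
        }
        catch (Exception ex)
        {
            // Report the handled exception to App Center with optional context properties.
            Crashes.TrackError(ex, new Dictionary<string, string> { { "Where", "InputParsing" } });
            return null;
        }
    }
}
```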

Other services

For UWP apps, App Center also offers Build and Push services.

Automating app builds

App Center Build provides fast and secure builds on managed, cloud-hosted machines. Just connect a GitHub, Bitbucket, Azure DevOps, or GitLab repo and automate builds in minutes. Learn more here.

Sending push notifications

App Center Push provides an easy solution for you to send push notifications to specific users and devices.

Get started today!

Create an App Center account. You can also follow us @VSAppCenter on Twitter for the latest updates and give us feedback via AppCenter on GitHub to let us know what you’d like to see.

The post Continuously deploy and monitor your UWP, WPF, and Windows Forms app with App Center appeared first on .NET Blog.
