Channel: .NET Blog

.NET Core Source Code Analysis with Intel® VTune™ Amplifier


This post was written by Varun Venkatesan, Li Tian, and Denis Pravdin, engineers at Intel. They are excited to share the .NET Core-specific enhancements that Intel has made to VTune Amplifier 2019. You can use this tool to make .NET Core applications faster on Intel processors.

Last year in the .NET blog, we discussed .NET Core Performance Profiling with Intel® VTune™ Amplifier 2018 including profiling Just-In-Time (JIT) compiled .NET Core code on Microsoft Windows* and Linux* operating systems. This year Intel VTune™ Amplifier 2019 was launched on September 12th, 2018 with improved source code analysis for .NET Core applications. It includes .NET Core support for profiling a remote Linux target and analyzing the results on a Windows host. We will walk you through a few scenarios to see how these new VTune Amplifier features can be used to optimize .NET Core applications.

Note that VTune Amplifier is a commercial product. In some cases, you may be eligible to obtain a free copy of VTune Amplifier under specific terms. To see if you qualify, please refer to https://software.intel.com/en-us/qualify-for-free-software and choose download options at https://software.intel.com/en-us/vtune/choose-download.

Background

Before this release, source code analysis of VTune Amplifier hotspots for JIT-compiled .NET Core code was not supported on Linux and only partially supported on Windows. Hotspot functions were available only at the assembly level, not at the source level, as shown in the figure below.

VTune Amplifier 2019 addresses this issue and provides full source code analysis for JIT-compiled code on both Windows and Linux. It also supports remotely profiling a Linux target from a Windows host. Let's see how these features work using sample .NET Core applications in three scenarios: a local Linux host, a local Windows host, and a remote Linux target analyzed from a Windows host.

Here is the hardware/software configuration for the test system:

  • Processor: Intel(R) Core(TM) i7-5960X CPU @ 3.00GHz
  • Memory: 32 GB
  • Ubuntu* 16.04 LTS (64-bit)
  • Microsoft Windows 10 Pro Version 1803 (64-bit)
  • .NET Core SDK 2.1.401

Profiling .NET Core applications on a local Linux host

Let’s create a sample .NET Core application on Linux that multiplies two matrices using the code available here. Following is the C# source code snippet of interest:
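The snippet itself is not reproduced here, so the following is only a hedged sketch of what a rectangular-array matrix multiplication might look like; the Program.Multiply name follows the hotspot discussed later, while the parameter shapes are assumptions:

```csharp
using System;

public static class Program
{
    // Multiplies two n-by-n matrices stored as rectangular 2-D arrays.
    // The innermost statement is the kind of line VTune flags as a hotspot.
    public static double[,] Multiply(double[,] a, double[,] b, int n)
    {
        var result = new double[n, n];
        for (int i = 0; i < n; i++)
        {
            for (int j = 0; j < n; j++)
            {
                double sum = 0;
                for (int k = 0; k < n; k++)
                {
                    // Each [,] access carries its own bounds computation.
                    sum += a[i, k] * b[k, j];
                }
                result[i, j] = sum;
            }
        }
        return result;
    }
}
```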

Now let’s refer to the instructions from our earlier .NET blog to build and run this application using the .NET Core command-line interface (CLI). Next let’s use VTune Amplifier to profile this application using the Launch Application target type and the Hardware Event-Based Sampling mode as detailed in the following picture.

Here are the hotspots under the Process/Module/Function/Thread/Call Stack grouping:

Now let’s take a look at the source-level hotspots for the Program::Multiply function, which is a major contributor to overall CPU time.

The above figure shows that most of the time is spent in line 62, which performs the matrix arithmetic operations. This source-assembly mapping helps both .NET Core application and compiler developers identify source-level hotspots and determine optimization opportunities.

Now, let’s use the new source code analysis feature to examine the assembly snippets corresponding to the highlighted source line.

From the above profile, it is clear that reducing the time spent in matrix arithmetic operations would help lower overall application time. One of the possible optimizations here would be to replace the rectangular array data structure used to represent individual matrices with jagged arrays. The C# source code snippet below shows how to do this (complete code is available here).
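As a hedged sketch of the jagged-array rewrite (the exact code lives at the linked repository; the row-hoisting below is one common way such a change pays off, since the JIT can keep a row reference in a register and elide more bounds checks than it can for double[,]):

```csharp
using System;

public static class Program
{
    // Same n-by-n multiplication, but with jagged arrays (double[][]).
    public static double[][] Multiply(double[][] a, double[][] b, int n)
    {
        var result = new double[n][];
        for (int i = 0; i < n; i++)
        {
            result[i] = new double[n];
            double[] rowA = a[i];
            double[] rowR = result[i];
            for (int k = 0; k < n; k++)
            {
                double aik = rowA[k];
                double[] rowB = b[k];
                for (int j = 0; j < n; j++)
                {
                    // Single-dimensional indexing on a hoisted row reference.
                    rowR[j] += aik * rowB[j];
                }
            }
        }
        return result;
    }
}
```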

Here is the updated list of hotspot functions from VTune Amplifier:

We can see that the overall application time was reduced by about 21% (from 16.660 s to 13.175 s).

The following figure shows the source-assembly mapping for the Program::Multiply function. We see that there is a corresponding reduction in CPU time for the highlighted source line which performs matrix arithmetic operations. Note that the size of the JIT generated code has been reduced too.

This concludes our brief look at the feature on Linux. A similar analysis with the matrix multiplication samples above could be done on Windows; we leave that as an exercise for you to try. Now, let's use a different example to see how source code analysis works on Windows.

Profiling .NET Core applications on a local Windows host

Let’s create a sample .NET Core application on Windows that reverses an integer array using the code available here. Following is the C# source code snippet of interest:
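The snippet is not reproduced here; as a hedged sketch, an iterative in-place reverse along the lines of the Program::IterativeReverse hotspot discussed below might look like this (the signature is an assumption):

```csharp
public static class Program
{
    // Reverses an integer array in place by swapping elements from both ends.
    public static void IterativeReverse(int[] array)
    {
        for (int i = 0, j = array.Length - 1; i < j; i++, j--)
        {
            // The element re-assignments here are the kind of source line
            // VTune highlights as the hotspot.
            int tmp = array[i];
            array[i] = array[j];
            array[j] = tmp;
        }
    }
}
```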

Now let’s refer to the instructions from our earlier .NET blog to build and run this application using the .NET Core command-line interface (CLI). Next let’s use VTune Amplifier to profile this application using the Launch Application target type and the Hardware Event-Based Sampling mode as detailed in the following picture. Additionally, we need to provide the source file location on Windows using the Search Sources/Binaries button before profiling.

Here are the hotspots under the Process/Module/Function/Thread/Call Stack grouping:

Now let’s take a look at the source-level hotspots for the Program::IterativeReverse function, which is a major contributor to overall CPU time.

The above figure shows that most of the time is being spent in line 48, which performs the array element re-assignment. Now, let's use the new source code analysis feature to examine the assembly snippets corresponding to the highlighted source line.

One of the possible optimizations here would be to reverse the integer array by using recursion, rather than iterating over the array contents. The C# source code snippet below shows how to do this (complete code is available here).
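The recursive version is also not reproduced here; the sketch below is only a plausible shape of the Program::RecursiveReverse function the post profiles (the measured speedup comes from the generated code for this particular sample, not from recursion being inherently faster):

```csharp
public static class Program
{
    // Reverses the sub-range [left, right] of the array recursively:
    // swap the outermost pair, then recurse on the interior.
    public static void RecursiveReverse(int[] array, int left, int right)
    {
        if (left >= right) return;
        int tmp = array[left];
        array[left] = array[right];
        array[right] = tmp;
        RecursiveReverse(array, left + 1, right - 1);
    }
}
```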

Here is the updated list of hotspot functions from VTune Amplifier:

We can see that the overall application time was reduced by about 42% (from 13.095 s to 7.600 s).

The following figure shows the source-assembly mapping for the Program::RecursiveReverse function.

As we can see, the reduction in time is reflected in the source lines above, giving developers a clear picture on how their application performs.

Profiling .NET Core applications on a remote Linux target and analyzing the results on a Windows host

Sometimes .NET Core developers may need to collect performance data on remote target systems and later finalize the data on a different machine in order to work around resource constraints on the target system or to reduce overhead when finalizing the collected data. VTune Amplifier 2019 has added .NET Core support to collect profiling data from a remote Linux target system and analyze the results on a Windows host system. This section illustrates how to leverage this capability using the matrix multiplication .NET Core application discussed earlier (source code is available here).

First let's publish the sample application for an x64 target on either the host or the target with: dotnet publish -c Release -r linux-x64. Then we need to copy the entire folder with sources and binaries to the other machine. Next, let's set up password-less SSH access to the target with PuTTY, using the instructions here. We also need to set /proc/sys/kernel/perf_event_paranoid and /proc/sys/kernel/kptr_restrict to 0 on the target system to enable driverless profiling, so that users do not need to install target packages; VTune Amplifier automatically installs the appropriate collectors on the target system:

echo 0 | sudo tee /proc/sys/kernel/perf_event_paranoid

echo 0 | sudo tee /proc/sys/kernel/kptr_restrict

 

Now let’s use VTune Amplifier on the host machine to start remote profiling the application run on the target. First we need to set the profiling target to Remote Linux (SSH) and provide the necessary details to establish an SSH connection with the target. VTune Amplifier automatically installs the appropriate collectors on the target system in the /tmp/vtune_amplifier_<version>.<package_num> directory.

Then let's select the Launch Application target type and the Hardware Event-Based Sampling mode. Additionally, we need to provide the binary and source file locations on Windows using the Search Sources/Binaries button before profiling.

Here are the hotspots under the Process/Module/Function/Thread/Call Stack grouping:

Let’s look at source code analysis in action by selecting one of the hotspot functions.

Support for remote profiling enables developers to collect low-overhead profiling data on resource-constrained target platforms and then analyze that information on the host.

Summary

The source code analysis feature is a valuable addition for the .NET Core community, especially for developers interested in performance optimization, as they can get insights into hotspots at the source code and assembly levels and then work on targeted optimizations. We continue to look for additional .NET Core scenarios that could benefit from VTune Amplifier feature enhancements. Let us know in the comments below if you have any suggestions in mind.

References

VTune Amplifier Product page: https://software.intel.com/en-us/intel-vtune-amplifier-xe

For more details on using the VTune Amplifier, see the product online help.

For more complete information about compiler optimizations, see our Optimization Notice.

 

No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.

Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.

This document contains information on products, services and/or processes in development.  All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications and roadmaps.

The products and services described may contain defects or errors known as errata which may cause deviations from published specifications. Current characterized errata are available on request.

Copies of documents which have an order number and are referenced in this document may be obtained by calling 1-800-548-4725 or by visiting www.intel.com/design/literature.htm.

Intel, the Intel logo, Intel Core, VTune are trademarks of Intel Corporation in the U.S. and/or other countries.

*Other names and brands may be claimed as the property of others.

© Intel Corporation.


Bringing .NET application performance analysis to Linux


Both the Windows and Linux ecosystems have a swath of battle-hardened performance analysis and investigation tools. But up until recently, developers and platform engineers could use none of these tools with .NET applications on Linux.

Getting them to work with .NET involved collaboration across many open source communities. The .NET team at Microsoft and the LTTng community worked together to bring .NET application performance analysis to Linux. Since one of this project's goals was to avoid reinventing the wheel—and to allow existing workflows to be used for .NET applications on Linux—the .NET team chose to enable popular Linux tools such as LTTng and perf for performance analysis of .NET Core applications.

We worked with the team at EfficiOS to make this LTTng collaboration happen. They wrote and published a deeper dive into our collaboration in the Bringing .NET application performance analysis to Linux post on the lttng.org blog. It covers some of the work involved in enabling performance analysis of .NET Core applications on Linux: what works, what doesn't, and future plans. Take a look at their post to learn more about Linux performance analysis.

Announcing .NET Framework 4.8 Early Access build 3673


We are happy to share the next Early Access build for the .NET Framework 4.8. This includes an updated .NET 4.8 runtime as well as the .NET 4.8 Developer Pack (a single package that bundles the .NET Framework 4.8 runtime, the .NET 4.8 Targeting Pack and the .NET Framework 4.8 SDK).

Please help us ensure this is a high quality and compatible release by trying out this build and exploring the new features.

Next steps:
To explore the new features, download the .NET 4.8 Developer Pack build 3673. If you want to try just the .NET 4.8 runtime, you can download either of these:

Please provide your feedback via the .NET Framework Early Access GitHub repository.

Please note: This release is still under development and you can expect to see more features and fixes in future preview builds. Also, a reminder that this build is not supported for production use.

This preview build 3673 includes a key improvement/fix in the WPF area:

        • [WPF] – High DPI Enhancements

You can see the complete list of improvements in this build here.

.NET Framework build 3673 is also included in the next update for Windows 10. You can sign up for Windows Insiders to validate that your applications work great on the latest .NET Framework included in the latest Windows 10 releases.

WPF – High DPI Enhancements

WPF has added support for Per-Monitor V2 DPI Awareness and Mixed-Mode DPI scaling in .NET 4.8. Additional information about these Windows concepts is available here.

The latest Developer Guide for Per-Monitor application development in WPF states that only pure-WPF applications are expected to work seamlessly in a high-DPI WPF application, and that hosted HWNDs and Windows Forms controls are not fully supported.

.NET 4.8 improves support for hosted HWNDs and Windows Forms interoperation in high-DPI WPF applications on platforms that support Mixed-Mode DPI scaling (Windows 10 v1803). When hosted HWNDs or Windows Forms controls are created as Mixed-Mode DPI-scaled windows (as described in the "Mixed-Mode DPI Scaling and DPI-aware APIs" documentation, by calling the SetThreadDpiHostingBehavior and SetThreadDpiAwarenessContext APIs), it is possible to host such content in a Per-Monitor V2 WPF application and have it sized and scaled appropriately. Such hosted content is not rendered at its native DPI; instead, the OS scales it to the appropriate size.

The support for Per-Monitor V2 DPI awareness mode also allows WPF controls to be hosted (i.e., parented) under a native window in a high-DPI application. Per-Monitor V2 DPI awareness support is available on Windows 10 v1607 (Anniversary Update) onwards. Windows adds support for child HWNDs to receive DPI-change notifications when Per-Monitor V2 DPI awareness mode is enabled via the application manifest.

WPF leverages this support to ensure that controls hosted under a native window can respond to DPI changes and update themselves. For example, a WPF control hosted in a Windows Forms or Win32 application that is manifested as Per-Monitor V2 will now respond correctly to DPI changes and update itself.

Note that Windows supports Mixed-Mode DPI scaling on Windows 10 v1803, whereas Per-Monitor V2 is supported on v1607 onwards.

To try out these features, the following application manifest and AppContext settings must be enabled:

1. Enable Per-Monitor DPI in your application

  • Turn on Per-Monitor V2 in your app.manifest

2. Turn on High DPI support in WPF

  • Target .NET Framework 4.6.2 or greater

3. Set the AppContext switch in your App.config

  • Set Switch.System.Windows.DoNotUsePresentationDpiCapabilityTier2OrGreater=false to enable the Per-Monitor V2 and Mixed-Mode DPI support introduced in .NET 4.8.

The runtime section in the final App.Config might look like this:
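As an illustration (a sketch; AppContextSwitchOverrides is the standard App.config mechanism for AppContext switches, but verify element placement against your project):

```xml
<configuration>
  <runtime>
    <!-- Enables the Per-Monitor V2 / Mixed-Mode DPI support added in .NET 4.8 -->
    <AppContextSwitchOverrides value="Switch.System.Windows.DoNotUsePresentationDpiCapabilityTier2OrGreater=false"/>
  </runtime>
</configuration>
```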

AppContext switches can also be set in the registry. You can refer to the AppContext class documentation for more details.

Closing
We will continue sharing early builds of the next release of the .NET Framework via the Early Access Program on a regular basis for your feedback. As a member of the .NET Framework Early Access community you play a key role in helping us build new and improved .NET Framework products. We will do our best to ensure these early access builds are stable and compatible, but you may see bugs or issues from time to time. It would help us greatly if you would take the time to report these to us on GitHub so we can address these issues before the official release.

Thank you!

Announcing .NET Standard 2.1


Since we shipped .NET Standard 2.0 about a year ago, we’ve shipped two updates to .NET Core 2.1 and are about to release .NET Core 2.2. It’s time to update the standard to include some of the new concepts as well as a number of small improvements that make your life easier across the various implementations of .NET.

Keep reading to learn more about what’s new in this latest release, what you need to know about platform support, governance and coding.

What’s new in .NET Standard 2.1?

In total, about 3k APIs are planned to be added in .NET Standard 2.1. A good chunk of them are brand-new APIs while others are existing APIs that we added to the standard in order to converge the .NET implementations even further.

Here are the highlights:

  • Span<T>. In .NET Core 2.1 we’ve added Span<T> which is an array-like type that allows representing managed and unmanaged memory in a uniform way and supports slicing without copying. It’s at the heart of most performance-related improvements in .NET Core 2.1. Since it allows managing buffers in a more efficient way, it can help in reducing allocations and copying. We consider Span<T> to be a very fundamental type as it requires runtime and compiler support in order to be fully leveraged. If you want to learn more about this type, make sure to read Stephen Toub’s excellent article on Span<T>.
  • Foundational APIs working with spans. While Span<T> is already available as a .NET Standard-compatible NuGet package (System.Memory), adding this package cannot extend the members of .NET Standard types that deal with spans. For example, .NET Core 2.1 added many APIs that allow working with spans, such as Stream.Read(Span<Byte>). Part of the value proposition of adding span to .NET Standard is to add these companion APIs as well.
  • Reflection emit. To boost productivity, the .NET ecosystem has always made heavy use of dynamic features such as reflection and reflection emit. Emit is often used as a tool to optimize performance as well as a way to generate types on the fly for proxying interfaces. As a result, many of you asked for reflection emit to be included in the .NET Standard. Previously, we’ve tried to provide this via a NuGet package but we discovered that we cannot model such a core technology using a package. With .NET Standard 2.1, you’ll have access to Lightweight Code Generation (LCG) as well as Reflection Emit. Of course, you might run on a runtime that doesn’t support running IL via interpretation or compiling it with a JIT, so we also exposed two new capability APIs that allow you to check for the ability to generate code at all (RuntimeFeature.IsDynamicCodeSupported) as well as whether the generated code is interpreted or compiled (RuntimeFeature.IsDynamicCodeCompiled). This will make it much easier to write libraries that can exploit these capabilities in a portable fashion.
  • SIMD. .NET Framework and .NET Core had support for SIMD for a while now. We’ve leveraged them to speed up basic operations in the BCL, such as string comparisons. We’ve received quite a few requests to expose these APIs in .NET Standard as the functionality requires runtime support and thus cannot be provided meaningfully as a NuGet package.
  • ValueTask and ValueTask<T>. In .NET Core 2.1, the biggest feature was improvements in our fundamentals to support high-performance scenarios, which also included making async/await more efficient. ValueTask<T> already exists and allows returning results when the operation completed synchronously, without having to allocate a new Task<T>. With .NET Core 2.1 we've improved this further, which made it useful to have a corresponding non-generic ValueTask that allows reducing allocations even for cases where the operation has to complete asynchronously, a feature that types like Socket and NetworkStream now utilize. Exposing these APIs in .NET Standard 2.1 enables library authors to benefit from these improvements both as consumers and as producers.
  • DbProviderFactories. In .NET Standard 2.0 we added almost all of the primitives in ADO.NET to allow O/R mappers and database implementers to communicate. Unfortunately, DbProviderFactories didn’t make the cut for 2.0 so we’re adding it now. In a nutshell, DbProviderFactories allows libraries and applications to utilize a specific ADO.NET provider without knowing any of its specific types at compile time, by selecting among registered DbProviderFactory instances based on a name, which can be read from, for example, configuration settings.
  • General Goodness. Since .NET Core was open sourced, we’ve added many small features across the base class libraries such as System.HashCode for combining hash codes or new overloads on System.String. There are about 800 new members in .NET Core and virtually all of them got added in .NET Standard 2.1.
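As a small illustration of why Span<T> is in the highlights above, the following sketch slices an array and stack-allocated memory through the same abstraction, without copying:

```csharp
using System;

class SpanDemo
{
    static void Main()
    {
        byte[] buffer = { 1, 2, 3, 4, 5, 6, 7, 8 };

        // Slice the middle of the array without copying; writes through the
        // span are visible in the underlying array.
        Span<byte> middle = buffer.AsSpan(2, 4);
        middle[0] = 42;
        Console.WriteLine(buffer[2]); // 42

        // stackalloc'd memory is handled through the same type.
        Span<byte> scratch = stackalloc byte[16];
        scratch.Fill(0xFF);
        Console.WriteLine(scratch[0]); // 255
    }
}
```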

For more details, you might want to check out the full API diff between .NET Standard 2.1 and .NET Standard 2.0. You can also use apisof.net to quickly check whether a given API will be included with .NET Standard 2.1.

.NET platform support

In case you missed our Update on .NET Core 3.0 and .NET Framework 4.8, we’ve described our support for .NET Framework and .NET Core as follows:

.NET Framework is the implementation of .NET that’s installed on over one billion machines and thus needs to remain as compatible as possible. Because of this, it moves at a slower pace than .NET Core. Even security and bug fixes can cause breaks in applications because applications depend on the previous behavior. We will make sure that .NET Framework always supports the latest networking protocols, security standards, and Windows features.

.NET Core is the open source, cross-platform, and fast-moving version of .NET. Because of its side-by-side nature it can take changes that we can’t risk applying back to .NET Framework. This means that .NET Core will get new APIs and language features over time that .NET Framework cannot. At Build we showed a demo how the file APIs are faster on .NET Core. If we put those same changes into .NET Framework we could break existing applications, and we don’t want to do that.

Given many of the API additions in .NET Standard 2.1 require runtime changes in order to be meaningful, .NET Framework 4.8 will remain on .NET Standard 2.0 rather than implement .NET Standard 2.1. .NET Core 3.0 as well as upcoming versions of Xamarin, Mono, and Unity will be updated to implement .NET Standard 2.1.

Library authors who need to support .NET Framework customers should stay on .NET Standard 2.0. In fact, most libraries should be able to stay on .NET Standard 2.0, as the API additions are largely for advanced scenarios. However, this doesn’t mean that library authors cannot take advantage of these APIs even if they have to support .NET Framework. In those cases they can use multi-targeting to compile for both .NET Standard 2.0 as well as .NET Standard 2.1. This allows writing code that can expose more features or provide a more efficient implementation on runtimes that support .NET Standard 2.1 while not giving up on the bigger reach that .NET Standard 2.0 offers.
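The multi-targeting approach described above might look like this in a library's project file (a sketch; NETSTANDARD2_0 and NETSTANDARD2_1 are the preprocessor symbols the SDK defines for these targets):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- Compile once per target; each build gets its own output assembly. -->
    <TargetFrameworks>netstandard2.0;netstandard2.1</TargetFrameworks>
  </PropertyGroup>
</Project>
```

Inside the library, code can then guard the 2.1-only paths with #if NETSTANDARD2_1 and fall back to a 2.0-compatible implementation otherwise.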

For more recommendations on targeting, check out the brand new documentation on cross-platform targeting.

Governance model

The .NET Standard 1.x and 2.0 releases focused on exposing existing concepts. The bulk of the work was on the .NET Core side, as this platform started with a much smaller API set. Moving forward, we’ll often have to standardize brand-new technologies, which means we need to consider the impact on all .NET implementations, not just .NET Core, and including those managed in other communities such as Mono or Unity. Our governance model has been updated to best include all considerations, including:

A .NET Standard review board. To ensure we don't end up adding large chunks of API surface that cannot be implemented, a review board will sign off on API additions to the .NET Standard. The board comprises representatives from the .NET platform, Xamarin and Mono, Unity, and the .NET Foundation, and will be chaired by Miguel de Icaza. We will continue to strive to make decisions based on consensus and will leverage Miguel's extensive expertise and experience building .NET implementations that are supported by multiple parties when needed.

A formal approval process. The .NET Standard 1.x and 2.0 versions were largely mechanically derived by computing which APIs existing .NET implementations had in common, which means the API sets were effectively a computational outcome. Moving forward, we are implementing an editorial approach:

  • Anyone can submit proposals for API additions to the .NET Standard.
  • New members on standardized types are automatically considered. To prevent accidental fragmentation, we'll automatically consider all members added by any .NET implementation on types that are already in the standard. The rationale here is that divergence at the member level is not desirable, and unless there is something wrong with the API, it's likely a good addition.
  • Acceptance requires:
    • A sponsorship from a review board member. That person will be assigned the issue and is expected to shepherd the issue until it’s either accepted or rejected. If no board member is willing to sponsor the proposal, it will be considered rejected.
    • A stable implementation in at least one .NET implementation. The implementation must be licensed under an open source license that is compatible with MIT. This will allow other .NET implementations to jump-start their own implementations or simply take the feature as-is.
  • .NET Standard updates are planned and will generally follow a set of themes. We avoid releases with a large number of tiny features that aren't part of a common set of scenarios. Instead, we try to define a set of goals that describe what kind of feature areas a particular .NET Standard version provides. This simplifies answering the question of which .NET Standard version a given library should depend on. It also makes it easier for .NET implementations to decide whether it's worth implementing a higher version of .NET Standard.
  • The version number is subject to discussion and is generally a function of how significant the new version is. While we aren’t planning on making breaking changes, we’ll rev the major version if the new version adds large chunks of APIs (like when we doubled the number of APIs in .NET Standard 2.0) or has sizable changes in the overall developer experience (like the added compatibility mode for consuming .NET Framework libraries we added in .NET Standard 2.0).

For more information, take a look at the .NET Standard governance model and the .NET Standard review board.

Summary

The definition of .NET Standard 2.1 is ongoing. You can watch our progress on GitHub and still file requests.

If you want to quickly check whether a specific API is in .NET Standard (or any other .NET platform), you can use apisof.net. You can also use the .NET Portability Analyzer to check whether an existing project or binary can be ported to .NET Standard 2.1.

Happy coding!

Understanding the Whys, Whats, and Whens of ValueTask


The .NET Framework 4 saw the introduction of the System.Threading.Tasks namespace, and with it the Task class. This type and the derived Task<TResult> have long since become a staple of .NET programming, key aspects of the asynchronous programming model introduced with C# 5 and its async / await keywords. In this post, I’ll cover the newer ValueTask/ValueTask<TResult> types, which were introduced to help improve asynchronous performance in common use cases where decreased allocation overhead is important.

Task

Task serves multiple purposes, but at its core it’s a “promise”, an object that represents the eventual completion of some operation. You initiate an operation and get back a Task for it, and that Task will complete when the operation completes, which may happen synchronously as part of initiating the operation (e.g. accessing some data that was already buffered), asynchronously but complete by the time you get back the Task (e.g. accessing some data that wasn’t yet buffered but that was very fast to access), or asynchronously and complete after you’re already holding the Task (e.g. accessing some data from across a network). Since operations might complete asynchronously, you either need to block waiting for the results (which often defeats the purpose of the operation having been asynchronous to begin with) or you need to supply a callback that’ll be invoked when the operation completes. In .NET 4, providing such a callback was achieved via ContinueWith methods on the Task, which explicitly exposed the callback model by accepting a delegate to invoke when the Task completed:

SomeOperationAsync().ContinueWith(task =>
{
    try
    {
        TResult result = task.Result;
        UseResult(result);
    }
    catch (Exception e)
    {
        HandleException(e);
    }
});

But with the .NET Framework 4.5 and C# 5, Tasks could simply be awaited, making it easy to consume the results of an asynchronous operation, and with the generated code being able to optimize all of the aforementioned cases, correctly handling things regardless of whether the operation completes synchronously, completes asynchronously quickly, or completes asynchronously after already (implicitly) providing a callback:

TResult result = await SomeOperationAsync();
UseResult(result);

Task as a class is very flexible and has resulting benefits. For example, you can await it multiple times, by any number of consumers concurrently. You can store one into a dictionary for any number of subsequent consumers to await in the future, which allows it to be used as a cache for asynchronous results. You can block waiting for one to complete should the scenario require that. And you can write and consume a large variety of operations over tasks (sometimes referred to as “combinators”), such as a “when any” operation that asynchronously waits for the first to complete.
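The dictionary-as-cache pattern mentioned above can be sketched as follows (a minimal illustration; the AsyncCache type and its key scheme are hypothetical, not part of any library):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class AsyncCache
{
    private readonly ConcurrentDictionary<string, Task<string>> _cache =
        new ConcurrentDictionary<string, Task<string>>();

    // Because a Task<T> can be awaited any number of times by any number of
    // consumers, the same stored task serves every caller for a given key.
    public Task<string> GetAsync(string key) =>
        _cache.GetOrAdd(key, k => LoadAsync(k));

    private static async Task<string> LoadAsync(string key)
    {
        await Task.Delay(10); // stand-in for real asynchronous work
        return "value-for-" + key;
    }
}
```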

However, that flexibility is not needed for the most common case: simply invoking an asynchronous operation and awaiting its resulting task:

TResult result = await SomeOperationAsync();
UseResult(result);

In such usage, we don’t need to be able to await the task multiple times. We don’t need to be able to handle concurrent awaits. We don’t need to be able to handle synchronous blocking. We don’t need to write combinators. We simply need to be able to await the resulting promise of the asynchronous operation. This is, after all, how we write synchronous code (e.g. TResult result = SomeOperation();), and it naturally translates to the world of async / await.

Further, Task does have a potential downside, in particular for scenarios where instances are created a lot and where high-throughput and performance is a key concern: Task is a class. As a class, that means that any operation which needs to create one needs to allocate an object, and the more objects that are allocated, the more work the garbage collector (GC) needs to do, and the more resources we spend on it that could be spent doing other things.

The runtime and core libraries mitigate this in many situations. For example, if you write a method like the following:

public async Task WriteAsync(byte value)
{
    if (_bufferedCount == _buffer.Length)
    {
        await FlushAsync();
    }
    _buffer[_bufferedCount++] = value;
}

in the common case there will be space available in the buffer and the operation will complete synchronously. When it does, there’s nothing special about the Task that needs to be returned, since there’s no return value: this is the Task-based equivalent of a void-returning synchronous method. Thus, the runtime can simply cache a single non-generic Task and use that over and over again as the result task for any async Task method that completes synchronously (that cached singleton is exposed via `Task.CompletedTask`). Or for example, if you write:

public async Task<bool> MoveNextAsync()
{
    if (_bufferedCount == 0)
    {
        await FillBuffer();
    }
    return _bufferedCount > 0;
}

in the common case, we expect there to be some data buffered, in which case this method simply checks _bufferedCount, sees that it’s larger than 0, and returns true; only if there’s currently no buffered data does it need to perform an operation that might complete asynchronously. And since there are only two possible Boolean results (true and false), there are only two possible Task<bool> objects needed to represent all possible result values, and so the runtime is able to cache two such objects and simply return a cached Task<bool> with a Result of true, avoiding the need to allocate. Only if the operation completes asynchronously does the method then need to allocate a new Task<bool>, because it needs to hand back the object to the caller before it knows what the result of the operation will be, and needs to have a unique object into which it can store the result when the operation does complete.
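The runtime's Task&lt;bool&gt; cache amounts to something like the following sketch (BoolTaskCache is a made-up name for illustration; the real cache lives inside the async method builder infrastructure, not in a public type):

```csharp
using System.Diagnostics;
using System.Threading.Tasks;

// Simplified sketch of the runtime's Task<bool> caching strategy.
// (BoolTaskCache is a hypothetical name; this is not the actual runtime code.)
static class BoolTaskCache
{
    private static readonly Task<bool> s_trueTask = Task.FromResult(true);
    private static readonly Task<bool> s_falseTask = Task.FromResult(false);

    // Every synchronous completion reuses one of the two singletons,
    // so no new Task<bool> is ever allocated on this path.
    public static Task<bool> FromResult(bool value) => value ? s_trueTask : s_falseTask;
}
```

With a cache like this, an async Task&lt;bool&gt; method that completes synchronously hands back one of the same two objects every time it runs, no matter how many times it's invoked.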

The runtime maintains a small such cache for other types as well, but it’s not feasible to cache everything. For example, a method like:

public async Task<int> ReadNextByteAsync()
{
    if (_bufferedCount == 0)
    {
        await FillBuffer();
    }

    if (_bufferedCount == 0)
    {
        return -1;
    }

    _bufferedCount--;
    return _buffer[_position++];
}

will also frequently complete synchronously. But unlike the Boolean case, this method returns an Int32 value, which has ~4 billion possible results, and caching a Task<int> for all such cases would consume potentially hundreds of gigabytes of memory. The runtime does maintain a small cache for Task<int>, but only for a few small result values, so for example if this completes synchronously (there’s data in the buffer) with a value like 4, it’ll end up using a cached task, but if it completes synchronously with a value like 42, it’ll end up allocating a new Task<int>, akin to calling Task.FromResult(42).

Many library implementations attempt to mitigate this further by maintaining their own cache as well. For example, the MemoryStream.ReadAsync overload introduced in the .NET Framework 4.5 always completes synchronously, since it’s just reading data from memory. ReadAsync returns a Task<int>, where the Int32 result represents the number of bytes read. ReadAsync is often used in a loop, often with the number of bytes requested the same on each call, and often with ReadAsync able to fully fulfill that request. Thus, it’s common for repeated calls to ReadAsync to return a Task<int> synchronously with the same result as it did on the previous call. As such, MemoryStream maintains a cache of a single task, the last one it returned successfully. Then on a subsequent call, if the new result matches that of its cached Task<int>, it just returns the cached one again; otherwise, it uses Task.FromResult to create a new one, stores that as its new cached task, and returns it.
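That last-returned-task strategy amounts to something like this sketch (CachedByteReader and ReadCountAsync are hypothetical names used for illustration; this is not the actual MemoryStream code):

```csharp
using System.Diagnostics;
using System.Threading.Tasks;

// Simplified sketch of a MemoryStream-style single-task cache.
class CachedByteReader
{
    private Task<int> _lastReadTask; // the task most recently handed back

    public Task<int> ReadCountAsync(int bytesRead)
    {
        // Reuse the cached task when the new result matches its Result;
        // otherwise allocate a replacement and cache it for next time.
        Task<int> t = _lastReadTask;
        if (t == null || t.Result != bytesRead)
        {
            _lastReadTask = t = Task.FromResult(bytesRead);
        }
        return t;
    }
}
```

In the common case of a loop issuing same-sized reads that are fully satisfied, every iteration after the first returns the cached instance instead of allocating.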

Even so, there are many cases where operations complete synchronously and are forced to allocate a Task<TResult> to hand back.

ValueTask<TResult> and synchronous completion

All of this motivated the introduction of a new type in .NET Core 2.0 and made available for previous .NET releases via a System.Threading.Tasks.Extensions NuGet package: ValueTask<TResult>.

ValueTask<TResult> was introduced in .NET Core 2.0 as a struct capable of wrapping either a TResult or a Task<TResult>. This means it can be returned from an async method, and if that method completes synchronously and successfully, nothing need be allocated: we can simply initialize this ValueTask<TResult> struct with the TResult and return that. Only if the method completes asynchronously does a Task<TResult> need to be allocated, with the ValueTask<TResult> created to wrap that instance (to minimize the size of ValueTask<TResult> and to optimize for the success path, an async method that faults with an unhandled exception will also allocate a Task<TResult>, so that the ValueTask<TResult> can simply wrap that Task<TResult> rather than always having to carry around an additional field to store an Exception).
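In code, the two shapes look like this (a sketch with hypothetical names rather than any real API):

```csharp
using System.Diagnostics;
using System.Threading.Tasks;

class Example
{
    private int? _cachedValue = 42; // stands in for "the result is already available"

    public ValueTask<int> GetValueAsync() =>
        _cachedValue is int v
            ? new ValueTask<int>(v)                    // synchronous: wrap the TResult, no allocation
            : new ValueTask<int>(ComputeValueAsync()); // asynchronous: wrap the Task<int>

    private async Task<int> ComputeValueAsync()
    {
        await Task.Delay(10); // simulate real asynchronous work
        return 42;
    }
}
```

Callers simply await the ValueTask&lt;int&gt; either way; only the asynchronous branch pays for a Task&lt;int&gt; allocation.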

With that, a method like MemoryStream.ReadAsync that instead returns a ValueTask<int> need not be concerned with caching, and can instead be written with code like:

public override ValueTask<int> ReadAsync(Memory<byte> buffer, CancellationToken cancellationToken = default)
{
    try
    {
        int bytesRead = Read(buffer.Span);
        return new ValueTask<int>(bytesRead);
    }
    catch (Exception e)
    {
        return new ValueTask<int>(Task.FromException<int>(e));
    }
}

ValueTask<TResult> and asynchronous completion

Being able to write an async method that can complete synchronously without incurring an additional allocation for the result type is a big win. This is why ValueTask<TResult> was added to .NET Core 2.0, and why new methods that are expected to be used on hot paths are now defined to return ValueTask<TResult> instead of Task<TResult>. For example, when we added a new ReadAsync overload to Stream in .NET Core 2.1 in order to be able to pass in a Memory<byte> instead of a byte[], we made the return type of that method be ValueTask<int>. That way, Streams (which very often have a ReadAsync method that completes synchronously, as in the earlier MemoryStream example) can now be used with significantly less allocation.

However, when working on very high-throughput services, we still care about avoiding as much allocation as possible, and that means thinking about reducing and removing allocations associated with asynchronous completion paths as well.

With the await model, for any operation that completes asynchronously we need to be able to hand back an object that represents the eventual completion of the operation: the caller needs to be able to hand off a callback that’ll be invoked when the operation completes, and that requires having a unique object on the heap that can serve as the conduit for this specific operation. It doesn’t, however, imply anything about whether that object can be reused once an operation completes. If the object can be reused, then an API can maintain a cache of one or more such objects, and reuse them for serialized operations, meaning it can’t use the same object for multiple in-flight async operations, but it can reuse an object for non-concurrent accesses.

In .NET Core 2.1, ValueTask<TResult> was augmented to support such pooling and reuse. Rather than just being able to wrap a TResult or a Task<TResult>, a new interface was introduced, IValueTaskSource<TResult>, and ValueTask<TResult> was augmented to be able to wrap that as well. IValueTaskSource<TResult> provides the core support necessary to represent an asynchronous operation to ValueTask<TResult> in a similar manner to how Task<TResult> does:

public interface IValueTaskSource<out TResult>
{
    ValueTaskSourceStatus GetStatus(short token);
    void OnCompleted(Action<object> continuation, object state, short token, ValueTaskSourceOnCompletedFlags flags);
    TResult GetResult(short token);
}

GetStatus is used to satisfy properties like ValueTask<TResult>.IsCompleted, returning an indication of whether the async operation is still pending or whether it’s completed and how (success or not). OnCompleted is used by the ValueTask<TResult>‘s awaiter to hook up the callback necessary to continue execution from an await when the operation completes. And GetResult is used to retrieve the result of the operation, such that after the operation completes, the awaiter can either get the TResult or propagate any exception that may have occurred.

Most developers should never have a need to see this interface: methods simply hand back a ValueTask<TResult> that may have been constructed to wrap an instance of this interface, and the consumer is none the wiser. The interface is primarily there so that developers of performance-focused APIs are able to avoid allocation.

There are several such APIs in .NET Core 2.1. The most notable are Socket.ReceiveAsync and Socket.SendAsync, with new overloads added in 2.1, e.g.

public ValueTask<int> ReceiveAsync(Memory<byte> buffer, SocketFlags socketFlags, CancellationToken cancellationToken = default);

This overload returns a ValueTask<int>. If the operation completes synchronously, it can simply construct a ValueTask<int> with the appropriate result, e.g.

int result = …;
return new ValueTask<int>(result);

If it completes asynchronously, it can use a pooled object that implements this interface:

IValueTaskSource<int> vts = …;
return new ValueTask<int>(vts);

The Socket implementation maintains one such pooled object for receives and one for sends, such that as long as no more than one of each is outstanding at a time, these overloads will end up being allocation-free, even if they complete operations asynchronously. That’s then further surfaced through NetworkStream. For example, in .NET Core 2.1, Stream exposes:

public virtual ValueTask<int> ReadAsync(Memory<byte> buffer, CancellationToken cancellationToken);

which NetworkStream overrides. NetworkStream.ReadAsync just delegates to Socket.ReceiveAsync, so the wins from Socket translate to NetworkStream, and NetworkStream.ReadAsync effectively becomes allocation-free as well.

Non-generic ValueTask

When ValueTask<TResult> was introduced in .NET Core 2.0, it was purely about optimizing for the synchronous completion case, in order to avoid having to allocate a Task<TResult> to store the TResult already available. That also meant that a non-generic ValueTask wasn’t necessary: for the synchronous completion case, the Task.CompletedTask singleton could simply be returned from a Task-returning method, and is returned implicitly by the runtime for async Task methods that complete synchronously.

With the advent of enabling even asynchronous completions to be allocation-free, however, a non-generic ValueTask becomes relevant again. Thus, in .NET Core 2.1 we also introduced the non-generic ValueTask and IValueTaskSource. These provide direct counterparts to the generic versions, usable in similar ways, just with a void result.

Implementing IValueTaskSource / IValueTaskSource<T>

Most developers should never need to implement these interfaces. They’re also not particularly easy to implement. If you decide you need to, there are several implementations internal to .NET Core 2.1 that can serve as a reference, such as AwaitableSocketAsyncEventArgs in System.Net.Sockets, AsyncOperation<TResult> in System.Threading.Channels, and DefaultPipeReader in System.IO.Pipelines.

To make this easier for developers that do want to do it, in .NET Core 3.0 we plan to introduce all of this logic encapsulated into a ManualResetValueTaskSourceCore<TResult> type, a struct that can be encapsulated into another object that implements IValueTaskSource<TResult> and/or IValueTaskSource, with that wrapper type simply delegating to the struct for the bulk of its implementation. You can learn more about this in the associated issue in the dotnet/corefx repo at https://github.com/dotnet/corefx/issues/32664.
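As a sketch of what such a wrapper might look like (the details here may differ from the final .NET Core 3.0 API, and a real implementation would add pooling/rental logic and thread-safety policies on top):

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;
using System.Threading.Tasks.Sources;

// Sketch of a reusable IValueTaskSource<TResult> built on
// ManualResetValueTaskSourceCore<TResult>; pooling logic omitted.
sealed class ReusableValueTaskSource<TResult> : IValueTaskSource<TResult>
{
    private ManualResetValueTaskSourceCore<TResult> _core;

    public short Version => _core.Version;

    public void SetResult(TResult result) => _core.SetResult(result);
    public void SetException(Exception error) => _core.SetException(error);

    public TResult GetResult(short token)
    {
        TResult result = _core.GetResult(token);
        _core.Reset(); // invalidate outstanding tokens; the instance can be handed out again
        return result;
    }

    public ValueTaskSourceStatus GetStatus(short token) => _core.GetStatus(token);

    public void OnCompleted(Action<object> continuation, object state, short token,
        ValueTaskSourceOnCompletedFlags flags) =>
        _core.OnCompleted(continuation, state, token, flags);
}
```

A producer would hand out new ValueTask&lt;TResult&gt;(source, source.Version) for each operation, completing it later via SetResult or SetException.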

Valid consumption patterns for ValueTasks

From a surface area perspective, ValueTask and ValueTask<TResult> are much more limited than Task and Task<TResult>. That’s ok, even desirable, as the primary method for consumption is meant to simply be awaiting them.

However, because ValueTask and ValueTask<TResult> may wrap reusable objects, there are actually significant constraints on their consumption when compared with Task and Task<TResult>, should someone veer off the desired path of just awaiting them. In general, the following operations should never be performed on a ValueTask / ValueTask<TResult>:

  • Awaiting a ValueTask / ValueTask<TResult> multiple times. The underlying object may have been recycled already and be in use by another operation. In contrast, a Task / Task<TResult> will never transition from a complete to incomplete state, so you can await it as many times as you need to, and will always get the same answer every time.
  • Awaiting a ValueTask / ValueTask<TResult> concurrently. The underlying object expects to work with only a single callback from a single consumer at a time, and attempting to await it concurrently could easily introduce race conditions and subtle program errors. It’s also just a more specific case of the above bad operation: “awaiting a ValueTask / ValueTask<TResult> multiple times.” In contrast, Task / Task<TResult> do support any number of concurrent awaits.
  • Using .GetAwaiter().GetResult() when the operation hasn’t yet completed. The IValueTaskSource / IValueTaskSource<TResult> implementation need not support blocking until the operation completes, and likely doesn’t, so such an operation is inherently a race condition and is unlikely to behave the way the caller intends. In contrast, Task / Task<TResult> do enable this, blocking the caller until the task completes.

If you have a ValueTask or a ValueTask<TResult> and you need to do one of these things, you should use .AsTask() to get a Task / Task<TResult> and then operate on that resulting task object. After that point, you should never interact with that ValueTask / ValueTask<TResult> again.

The short rule is this: with a ValueTask or a ValueTask<TResult>, you should either await it directly (optionally with .ConfigureAwait(false)) or call AsTask() on it directly, and then never use it again, e.g.

// Given this ValueTask<int>-returning method…
public ValueTask<int> SomeValueTaskReturningMethodAsync();
…
// GOOD
int result = await SomeValueTaskReturningMethodAsync();

// GOOD
int result = await SomeValueTaskReturningMethodAsync().ConfigureAwait(false);

// GOOD
Task<int> t = SomeValueTaskReturningMethodAsync().AsTask();

// WARNING
ValueTask<int> vt = SomeValueTaskReturningMethodAsync();
... // storing the instance into a local makes it much more likely it'll be misused,
    // but it could still be ok

// BAD: awaits multiple times
ValueTask<int> vt = SomeValueTaskReturningMethodAsync();
int result = await vt;
int result2 = await vt;

// BAD: awaits concurrently (and, by definition then, multiple times)
ValueTask<int> vt = SomeValueTaskReturningMethodAsync();
Task.Run(async () => await vt);
Task.Run(async () => await vt);

// BAD: uses GetAwaiter().GetResult() when it's not known to be done
ValueTask<int> vt = SomeValueTaskReturningMethodAsync();
int result = vt.GetAwaiter().GetResult();

There is one additional advanced pattern that some developers may choose to use, hopefully only after measuring carefully and finding it provides meaningful benefit. Specifically, ValueTask / ValueTask<TResult> do expose some properties that speak to the current state of the operation, for example the IsCompleted property returning false if the operation hasn’t yet completed, and returning true if it has (meaning it’s no longer running and may have completed successfully or otherwise), and the IsCompletedSuccessfully property returning true if and only if it’s completed and completed successfully (meaning attempting to await it or access its result will not result in an exception being thrown). For very hot paths where a developer wants to, for example, avoid some additional costs only necessary on the asynchronous path, these properties can be checked prior to performing one of the operations that essentially invalidates the ValueTask / ValueTask<TResult>, e.g. await, .AsTask(). For example, in the SocketsHttpHandler implementation in .NET Core 2.1, the code issues a read on a connection, which returns a ValueTask<int>. If that operation completed synchronously, then we don’t need to worry about being able to cancel the operation. But if it completes asynchronously, then while it’s running we want to hook up cancellation such that a cancellation request will tear down the connection. As this is a very hot code path, and as profiling showed it to make a small difference, the code is structured essentially as follows:

int bytesRead;
{
    ValueTask<int> readTask = _connection.ReadAsync(buffer);
    if (readTask.IsCompletedSuccessfully)
    {
        bytesRead = readTask.Result;
    }
    else
    {
        using (_connection.RegisterCancellation())
        {
            bytesRead = await readTask;
        }
    }
}

This pattern is acceptable, because the ValueTask<int> isn’t used again after either .Result is accessed or it’s awaited.

Should every new asynchronous API return ValueTask / ValueTask<TResult>?

In short, no: the default choice is still Task / Task<TResult>.

As highlighted above, Task and Task<TResult> are easier to use correctly than are ValueTask and ValueTask<TResult>, and so unless the performance implications outweigh the usability implications, Task / Task<TResult> are still preferred. There are also some minor costs associated with returning a ValueTask<TResult> instead of a Task<TResult>, e.g. in microbenchmarks it’s a bit faster to await a Task<TResult> than it is to await a ValueTask<TResult>, so if you can use cached tasks (e.g. your API returns Task or Task<bool>), you might be better off performance-wise sticking with Task and Task<bool>. ValueTask / ValueTask<TResult> are also multiple words in size, and so when these are awaited and a field for them is stored in a calling async method’s state machine, they’ll take up a little more space in that state machine object.

However, ValueTask / ValueTask<TResult> are great choices when a) you expect consumers of your API to only await them directly, b) allocation-related overhead is important to avoid for your API, and c) either you expect synchronous completion to be a very common case, or you’re able to effectively pool objects for use with asynchronous completion. When adding abstract, virtual, or interface methods, you also need to consider whether these situations will exist for overrides/implementations of that method.

What’s Next for ValueTask and ValueTask<TResult>?

For the core .NET libraries, we’ll continue to see new Task / Task<TResult>-returning APIs added, but we’ll also see new ValueTask / ValueTask<TResult>-returning APIs added where appropriate. One key example of the latter is for the new IAsyncEnumerator<T> support planned for .NET Core 3.0. IEnumerator<T> exposes a bool-returning MoveNext method, and the asynchronous IAsyncEnumerator<T> counterpart exposes a MoveNextAsync method. When we initially started designing this feature, we thought of MoveNextAsync as returning a Task<bool>, which could be made very efficient via cached tasks for the common case of MoveNextAsync completing synchronously. However, given how wide-reaching we expect async enumerables to be, and given that they’re based on interfaces that could end up with many different implementations (some of which may care deeply about performance and allocations), and given that the vast, vast majority of consumption will be through await foreach language support, we switched to having MoveNextAsync return a ValueTask<bool>. This allows for the synchronous completion case to be fast but also for optimized implementations to use reusable objects to make the asynchronous completion case low-allocation as well. In fact, the C# compiler takes advantage of this when implementing async iterators to make async iterators as allocation-free as possible.

Announcing ML.NET 0.7 (Machine Learning .NET)


We’re excited to announce today the release of ML.NET 0.7 – the latest release of the cross-platform and open source machine learning framework for .NET developers (ML.NET 0.1 was released at //Build 2018). This release focuses on enabling better support for recommendation based ML tasks, enabling anomaly detection, enhancing the customizability of the machine learning pipelines, enabling using ML.NET in x86 apps, and more.

This blog post provides details about the following topics in the ML.NET 0.7 release:

Enhanced support for recommendation tasks with Matrix Factorization

Recommender systems enable producing a list of recommendations for products in a catalog, songs, movies, and more. We have improved support for creating recommender systems in ML.NET by adding Matrix factorization (MF), a common approach to recommendations when you have data on how users rated items in your catalog. For example, you might know how users rated some movies and want to recommend which other movies they are likely to watch next.

We added MF to ML.NET because it is often significantly faster than Field-Aware Factorization Machines (which we added in ML.NET 0.3) and it can support ratings which are continuous number ratings (e.g. 1-5 stars) instead of boolean values (“liked” or “didn’t like”). Even though we just added MF, you might still want to use FFM if you want to take advantage of other information beyond the rating a user assigns to an item (e.g. movie genre, movie release date, user profile). A more in-depth discussion of the differences can be found here.

Sample usage of MF can be found here. The example is general but you can imagine that the matrix rows correspond to users, matrix columns correspond to movies, and matrix values correspond to ratings. This matrix would be quite sparse as users have only rated a small subset of the catalog.

ML.NET’s MF uses LIBMF.

Enabled anomaly detection scenarios – detecting unusual events


Anomaly detection enables identifying unusual values or events. It is used in scenarios such as fraud detection (identifying suspicious credit card transactions) and server monitoring (identifying unusual activity).

ML.NET 0.7 enables detecting two types of anomalous behavior:

  • Spike detection: spikes are attributed to sudden yet temporary bursts in values of the input data. These could be outliers due to outages, cyber-attacks, viral web content, etc.
  • Change point detection: change points mark the beginning of more persistent deviations in the behavior of the data. For example, if product sales are relatively consistent and become more popular (monthly sales double), there is a change point when the trend changes.

These anomalies can be detected on two types of data using different ML.NET components:

  • IidSpikeDetector and IidChangePointDetector are used on data assumed to be from one stationary distribution (each data point is independent of previous data, such as the number of retweets of each tweet).
  • SsaSpikeDetector and SsaChangePointDetector are used on data that has seasonal or trend components (perhaps ordered by time, such as product sales).

Sample code using anomaly detection with ML.NET can be found here.

Improved customizability of ML.NET pipelines

ML.NET offers a variety of data transformations (e.g. processing text, images, categorical features, etc.). However, some use cases require application-specific transformations, such as calculating cosine similarity between two text columns. We have now added support for custom transforms so you can easily include custom business logic.

The CustomMappingEstimator allows you to write your own methods to process data and bring them into the ML.NET pipeline. Here is what it would look like in the pipeline:

var estimator = mlContext.Transforms.CustomMapping<MyInput, MyOutput>(MyLambda.MyAction, "MyLambda")
    .Append(...)
    .Append(...)

Below is the definition of what this custom mapping will do. In this example, we convert the text label (“spam” or “ham”) to a boolean label (true or false).

public class MyInput
{
    public string Label { get; set; }
}

public class MyOutput
{
    public bool Label { get; set; }
}

public class MyLambda
{
    [Export("MyLambda")]
    public ITransformer MyTransformer => ML.Transforms.CustomMappingTransformer<MyInput, MyOutput>(MyAction, "MyLambda");

    [Import]
    public MLContext ML { get; set; }

    public static void MyAction(MyInput input, MyOutput output)
    {
        output.Label = input.Label == "spam";
    }
}

A more complete example of the CustomMappingEstimator can be found here.

x86 support in addition to x64

With this release of ML.NET you can now train and use machine learning models on x86 / 32-bit architecture devices. Previously, ML.NET was limited to x64 devices.
Note that some components that are based on external dependencies (e.g. TensorFlow) will not be available in x86.

NimbusML – experimental Python bindings for ML.NET

NimbusML provides experimental Python bindings for ML.NET. We have seen feedback from the external community and internal teams regarding the use of multiple programming languages. We wanted to enable as many people as possible to benefit from ML.NET and help teams to work together more easily. ML.NET not only enables data scientists to train and use machine learning models in Python (with components that can also be used in scikit-learn pipelines), but it also enables saving models which can be easily used in .NET applications through ML.NET (see here for more details).

In case you missed it: provide your feedback on the new API

ML.NET 0.6 introduced a new set of APIs for ML.NET that provide enhanced flexibility. These APIs in 0.7 and upcoming versions are still evolving and we would love to get your feedback so you can help shape the long-term API for ML.NET.

Want to get involved? Start by providing feedback through issues at the ML.NET GitHub repo!

Additional resources

  • The most important ML.NET concepts for understanding the new API are introduced here.
  • A cookbook (How to guides) that shows how to use these APIs for a variety of existing and new scenarios can be found here.
  • A ML.NET API Reference with all the documented APIs can be found here.

Get started!

If you haven’t already, get started with ML.NET here. Next, explore some other great resources.

We look forward to your feedback and welcome you to file issues with any suggestions or enhancements in the ML.NET GitHub repo.

This blog was authored by Gal Oshri and Cesar de la Torre

Thanks,

The ML.NET Team

Building C# 8.0


The next major version of C# is C# 8.0. It’s been in the works for quite some time, even as we built and shipped the minor releases C# 7.1, 7.2 and 7.3, and I’m quite excited about the new capabilities it will bring.

The current plan is that C# 8.0 will ship at the same time as .NET Core 3.0. However, the features will start to come alive with the previews of Visual Studio 2019 that we are working on. As those come out and you can start trying them out in earnest, we will provide a whole lot more detail about the individual features. The aim of this post is to give you an overview of what to expect, and a heads-up on where to expect it.

New features in C# 8.0

Here’s an overview of the most significant features slated for C# 8.0. There are a number of smaller improvements in the works as well, which will trickle out over the coming months.

Nullable reference types

The purpose of this feature is to help prevent the ubiquitous null reference exceptions that have riddled object-oriented programming for half a century now.

It stops you from putting null into ordinary reference types such as string – it makes those types non-nullable! It does so gently, with warnings, not errors. But on existing code there will be new warnings, so you have to opt in to using the feature (which you can do at the project, file or even source line level).

string s = null; // Warning: Assignment of null to non-nullable reference type

What if you do want null? Then you can use a nullable reference type, such as string?:

string? s = null; // Ok

When you try to use a nullable reference, you need to check it for null first. The compiler analyzes the flow of your code to see if a null value could make it to where you use it:

void M(string? s)
{
    Console.WriteLine(s.Length); // Warning: Possible null reference exception
    if (s != null)
    {
        Console.WriteLine(s.Length); // Ok: You won't get here if s is null
    }
}

The upshot is that C# lets you express your “nullable intent”, and warns you when you don’t abide by it.
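The flow analysis also understands the existing null-handling operators, so idiomatic null checks stay warning-free. A small self-contained sketch (the helper names are hypothetical, and the #nullable directive shows the source-level opt-in mentioned above):

```csharp
#nullable enable

static class NullableDemo
{
    // ?. short-circuits on null and ?? supplies a fallback,
    // so the compiler sees no possible null dereference.
    public static int SafeLength(string? s) => s?.Length ?? 0;

    public static string Describe(string? s)
    {
        if (s == null) return "<none>";
        // After the early return, s is known to be non-null here.
        return s.ToUpperInvariant();
    }
}
```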

Async streams

The async/await feature of C# 5.0 lets you consume (and produce) asynchronous results in straightforward code, without callbacks:

async Task<int> GetBigResultAsync()
{
    var result = await GetResultAsync();
    if (result > 20) return result; 
    else return -1;
}

It is not so helpful if you want to consume (or produce) continuous streams of results, such as you might get from an IoT device or a cloud service. Async streams are there for that.

We introduce IAsyncEnumerable<T>, which is exactly what you’d expect: an asynchronous version of IEnumerable<T>. The language lets you await foreach over these to consume their elements, and yield return to them to produce elements.

async IAsyncEnumerable<int> GetBigResultsAsync()
{
    await foreach (var result in GetResultsAsync())
    {
        if (result > 20) yield return result; 
    }
}
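On the consuming side, await foreach drives the enumerator without blocking a thread between elements. A self-contained sketch, with a toy producer standing in for a device or service (all names here are hypothetical):

```csharp
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading.Tasks;

static class StreamDemo
{
    // Toy producer: each element "arrives" asynchronously.
    public static async IAsyncEnumerable<int> GetResultsAsync()
    {
        for (int i = 1; i <= 3; i++)
        {
            await Task.Delay(10); // simulate waiting for the next reading
            yield return i * 10;
        }
    }

    public static async Task<int> SumResultsAsync()
    {
        int sum = 0;
        await foreach (var result in GetResultsAsync())
        {
            sum += result; // runs as each element arrives
        }
        return sum;
    }
}
```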

Ranges and indices

We’re adding a type Index, which can be used for indexing. You can create one from an int that counts from the beginning, or with a prefix ^ operator that counts from the end:

Index i1 = 3;  // number 3 from beginning
Index i2 = ^4; // number 4 from end
int[] a = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };
Console.WriteLine($"{a[i1]}, {a[i2]}"); // "3, 6"

We’re also introducing a Range type, which consists of two Indexes, one for the start and one for the end, and can be written with a x..y range expression. You can then index with a Range in order to produce a slice:

var slice = a[i1..i2]; // { 3, 4, 5 }
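Ranges can also be open at either end, with the start defaulting to the beginning of the collection and the end defaulting to ^0 (the end):

```csharp
int[] a = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };
int[] firstThree = a[..3];   // { 0, 1, 2 }
int[] lastThree  = a[^3..];  // { 7, 8, 9 }
int[] middle     = a[2..^2]; // { 2, 3, 4, 5, 6, 7 }
```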

Default implementations of interface members

Today, once you publish an interface it’s game over: you can’t add members to it without breaking all the existing implementers of it.

In C# 8.0 we let you provide a body for an interface member. Thus, if somebody doesn’t implement that member (perhaps because it wasn’t there yet when they wrote the code), they will just get the default implementation instead.

interface ILogger
{
    void Log(LogLevel level, string message);
    void Log(Exception ex) => Log(LogLevel.Error, ex.ToString()); // New overload
}

class ConsoleLogger : ILogger
{
    public void Log(LogLevel level, string message) { ... }
    // Log(Exception) gets default implementation
}

The ConsoleLogger class doesn’t have to implement the Log(Exception) overload of ILogger, because it is declared with a default implementation. Now you can add new members to existing public interfaces as long as you provide a default implementation for existing implementors to use.
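Note that the default member is reachable only through the interface, not through the class. A minimal self-contained sketch (the LogLevel stub and the Last field are added purely so the example compiles and is observable on its own):

```csharp
using System;
using System.Diagnostics;

enum LogLevel { Information, Error }

interface ILogger
{
    void Log(LogLevel level, string message);
    // Default implementation that implementers get for free.
    void Log(Exception ex) => Log(LogLevel.Error, ex.ToString());
}

class ConsoleLogger : ILogger
{
    public string Last = ""; // captured for demonstration instead of writing to the console
    public void Log(LogLevel level, string message) => Last = $"{level}: {message}";
}

static class LoggerDemo
{
    public static string Run()
    {
        var logger = new ConsoleLogger();
        ILogger ilogger = logger;           // the default member is part of the
        ilogger.Log(new Exception("oops")); // interface's surface, not the class's
        return logger.Last;
    }
}
```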

Recursive patterns

We’re allowing patterns to contain other patterns:

IEnumerable<string> GetEnrollees()
{
    foreach (var p in People)
    {
        if (p is Student { Graduated: false, Name: string name }) yield return name;
    }
}

The pattern Student { Graduated: false, Name: string name } checks that the Person is a Student, then applies the constant pattern false to their Graduated property to see if they’re still enrolled, and the pattern string name to their Name property to get their name (if non-null). Thus, if p is a Student, has not graduated and has a non-null name, we yield return that name.

Switch expressions

Switch statements with patterns are quite powerful in C# 7.0, but can be cumbersome to write. Switch expressions are a “lightweight” version, where all the cases are expressions:

var area = figure switch 
{
    Line _      => 0,
    Rectangle r => r.Width * r.Height,
    Circle c    => Math.PI * c.Radius * c.Radius,
    _           => throw new UnknownFigureException(figure)
};

Target-typed new-expressions

In many cases, when you’re creating a new object, the type is already given from context. In those situations we’ll let you omit the type:

Point[] ps = { new (1, 4), new (3,-2), new (9, 5) }; // all Points

The implementation of this feature was contributed by a member of the community, Alireza Habibi. Thank you!

Platform dependencies

Most of the C# 8.0 language features will run on any version of .NET. However, a few of them have platform dependencies.

Async streams, indexers and ranges all rely on new framework types that will be part of .NET Standard 2.1. As Immo describes in his post Announcing .NET Standard 2.1, .NET Core 3.0 as well as Xamarin, Unity and Mono will all implement .NET Standard 2.1, but .NET Framework 4.8 will not. This means that the types required to use these features won’t be available when you target C# 8.0 to .NET Framework 4.8.

As always, the C# compiler is quite lenient about the types it depends on. If it can find types with the right names and shapes, it is happy to target them.

Default interface member implementations rely on new runtime enhancements, and we will not make those in the .NET Runtime 4.8 either. So this feature simply will not work on .NET Framework 4.8 and on older versions of .NET.

The need to keep the runtime stable has prevented us from implementing new language features in it for more than a decade. With the side-by-side and open-source nature of the modern runtimes, we feel that we can responsibly evolve them again, and do language design with that in mind. Scott explained in his Update on .NET Core 3.0 and .NET Framework 4.8 that .NET Framework is going to see less innovation in the future, instead focusing on stability and reliability. Given that, we think it is better for it to miss out on some language features than for nobody to get them.

How can I learn more?

The C# language design process is open source, and takes place in the github.com/dotnet/csharplang repo. It can be a bit overwhelming and chaotic if you don’t follow along regularly. The heartbeat of language design is the language design meetings, which are captured in the C# Language Design Notes.

About a year ago I wrote a post Introducing Nullable Reference Types in C#. It should still be an informative read.

You can also watch videos such as The future of C# from Microsoft Build 2018, or What’s Coming to C#? from .NET Conf 2018, which showcase several of the features.

Kathleen has a great post laying out the plans for Visual Basic in .NET Core 3.0.

As we start releasing the features as part of Visual Studio 2019 previews, we will also publish much more detail about the individual features.

Personally I can’t wait to get them into the hands of all of you!

Happy hacking,

Mads Torgersen, Design Lead for C#

Cross-platform Time Zones with .NET Core


Developing applications that span multiple operating systems in .NET Core while working with Time Zone information can lead to unexpected results for developers not familiar with the differences in how operating systems manage Time Zones. In this post, we will explore those differences and the challenges they present.

Reproducing the issue

Suppose you are writing a console application in .NET Core and you want to get information about a specific Time Zone. You might write something like this

static void Main(string[] args)
{
    TimeZoneInfo tzi = TimeZoneInfo.FindSystemTimeZoneById("Central Standard Time");
    Console.WriteLine(tzi.DisplayName);
}

Running this on my Windows 10 development environment, I see the following output

(UTC-06:00) Central Time (US & Canada)

If I take that same block of code over to my Ubuntu 18.04 development environment and run it, I instead see the following exception being thrown

Exception has occurred: CLR/System.TimeZoneNotFoundException
An unhandled exception of type 'System.TimeZoneNotFoundException' occurred in System.Private.CoreLib.dll: 'The time zone ID 'Central Standard Time' was not found on the local computer.'

What’s going on here? Let’s spend a little time digging into that and see what exactly is happening.

Time Zone differences

Windows maintains its list of Time Zones in the Windows registry. You can find a list of those values here.

In contrast, Linux distributions use the Time Zone database curated by the Internet Assigned Numbers Authority (IANA). You can find the latest copy of that database on IANA’s website. Here’s an example of what an IANA Time Zone looks like

America/New_York

The issue comes into play when you write your .NET Core code specifically using one of the two formats and then try to run the application on another operating system. Because the runtime is deferring the Time Zone management to the underlying operating system you will need to handle the differences if that scenario applies to you.
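One straightforward way to handle the difference yourself is to try one ID format and fall back to the other. The sketch below uses only the base class library; `TimeZoneHelper` and `GetTimeZone` are hypothetical names for illustration:

```csharp
using System;

public static class TimeZoneHelper
{
    // Try the Windows registry ID first; fall back to the IANA ID
    // when running on Linux or macOS.
    public static TimeZoneInfo GetTimeZone(string windowsId, string ianaId)
    {
        try
        {
            return TimeZoneInfo.FindSystemTimeZoneById(windowsId);
        }
        catch (TimeZoneNotFoundException)
        {
            return TimeZoneInfo.FindSystemTimeZoneById(ianaId);
        }
    }
}

// Usage:
// TimeZoneInfo tzi = TimeZoneHelper.GetTimeZone("Central Standard Time", "America/Chicago");
```

This works, but it pushes the mapping between the two naming systems onto you, which is exactly what a dedicated library can handle instead.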

How can we work around this?

There is an open source project available on GitHub, TimeZoneConverter, that addresses these differences. Head over and check out the source code contributed by the project’s developer and maintainer. You can grab the package via NuGet with the following command:

Install-Package TimeZoneConverter

Once you have it installed, you are able to work with different operating system Time Zone providers in a uniform way.

TimeZoneInfo windowsTz = TZConvert.GetTimeZoneInfo("Central Standard Time"); // Windows ID
TimeZoneInfo ianaTz = TZConvert.GetTimeZoneInfo("America/New_York");         // IANA ID

Time zone data changes every so often, so as noted in the project documentation – be sure to keep this package up to date.
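TimeZoneConverter can also translate between the two naming systems directly. The `WindowsToIana` and `IanaToWindows` helpers shown below are part of the package’s documented API at the time of writing; treat the exact shapes as subject to change:

```csharp
using TimeZoneConverter;

// Convert a Windows registry ID to an IANA ID, and vice versa.
string iana = TZConvert.WindowsToIana("Central Standard Time"); // "America/Chicago"
string windows = TZConvert.IanaToWindows("America/New_York");   // "Eastern Standard Time"
```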


.NET Core tooling update for Visual Studio 2017 version 15.9


Starting with Visual Studio 2017 version 15.9, we’ve changed how the Visual Studio tooling for .NET consumes .NET Core SDKs. Prior to this change, installing a preview version of the .NET Core SDK would cause all Visual Studio tooling for .NET Core to use that SDK because it had a higher version.

We now have a compatibility check in the .NET Core SDK that allows for a given SDK to mark a minimum required Visual Studio version. This ensures that the Visual Studio tools for .NET Core will not try to use an SDK that requires a newer Visual Studio version.

For stable releases of Visual Studio, the tools will now default to consuming only the latest stable version of the SDK that is installed on your machine. If you install any preview SDKs, the tools will not consume them by default. You can change this setting in Tools > Options > Projects and Solutions > .NET Core:

For preview releases of Visual Studio, the tools will continue to consume the latest preview version of the SDK that is installed on your machine by default. You cannot change the option to turn this off for preview releases of Visual Studio because they usually require a corresponding preview SDK to work correctly.

If you specify an SDK explicitly with a global.json file, the tooling will adhere to normal global.json rules.
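For reference, pinning a project to a specific SDK with global.json looks like the following; the version number here is only an example:

```json
{
  "sdk": {
    "version": "2.1.500"
  }
}
```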

These changes will make the use of .NET Core within Visual Studio more predictable.

Happy hacking!

.NET Framework November 2018 Security and Quality Rollup


Today, we are releasing the November 2018 Security and Quality Rollup.

Security

No new security fixes. See .NET Framework September 2018 Security and Quality Rollup for the latest security updates.

Quality and Reliability

This release contains the following quality and reliability improvements.

CLR

  • Updates Japanese dates that are formatted for the first year in an era and for which the format pattern uses “y年”. The format of the year together with the symbol “元” is supported instead of using year number 1. Also, formatting day numbers that include “元” is supported. [646179]
  • Updates Venezuela currency information for the “es-VE” culture: the currency symbol changed to “Bs.S”, the English currency name to “Bolívar Soberano”, the native currency name to “bolívar soberano”, and the international currency code to “VES”. [616146]
  • Address a situation where the System.Security.Cryptography.Algorithms reference was not correctly loaded on .NET Framework 4.7.1 after the 7B/8B patch. [673870]

WF

  • In some .NET Remoting scenarios, when using TransactionScopeAsyncFlowOption.Enabled, it was possible to have Transaction.Current reset to null after a remoting call. [669153]

Winforms

  • Addressed an issue where an application created numerous Windows Forms text boxes in a FlowLayoutPanel with only a few calls to comctl32.dll. [638365]

WPF

  • Addressed a race condition involving temporary files and some anti-virus scanners. This was causing crashes with the message “The process cannot access the file …”. [638468]
  • Addressed a crash due to TaskCanceledException that can occur during shutdown of some WPF apps. Apps that continue to do work involving weak events or data binding after Application.Run() returns are known to be vulnerable to this crash. [655427]

Note: Additional information on these improvements is not available. The VSTS bug number provided with each improvement is a unique ID that you can give Microsoft Customer Support, include in StackOverflow comments or use in web searches.

Getting the Update

The Security and Quality Rollup is available via Windows Update, Windows Server Update Services, and Microsoft Update Catalog.

Microsoft Update Catalog

You can get the update via the Microsoft Update Catalog. For Windows 10, .NET Framework updates are part of the Windows 10 Monthly Rollup. The following table is for Windows 10 and Windows Server 2016+ versions.

Product Version Security and Quality Rollup KB
Windows 10 1803 (April 2018 Update) Catalog 4467702
.NET Framework 3.5, 4.7.1, 4.7.2 4467702
Windows 10 1709 (Fall Creators Update) Catalog 4467686
.NET Framework 3.5, 4.7.1, 4.7.2 4467686
Windows 10 1703 (Creators Update) Catalog 4467696
.NET Framework 3.5, 4.7, 4.7.1, 4.7.2 4467696
Windows 10 1607 (Anniversary Update) Catalog 4467691
.NET Framework 3.5, 4.6.2, 4.7, 4.7.1, 4.7.2 4467691
Windows 10 RTM Catalog 4467680
.NET Framework 3.5, 4.6, 4.6.1, 4.6.2 4467680

The following table is for earlier Windows and Windows Server versions.

Product Version Security and Quality Rollup KB
Windows 8.1 Windows RT 8.1 Windows Server 2012 R2 Catalog 4467242
.NET Framework 3.5 4459935
.NET Framework 4.5.2 4459943
.NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2 4459941
Windows Server 2012 Catalog 4467241
.NET Framework 3.5 4459932
.NET Framework 4.5.2 4459944
.NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2 4459940
Windows 7 Windows Server 2008 R2 Catalog 4467240
.NET Framework 3.5.1 4459934
.NET Framework 4.5.2 4459945
.NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2 4459942
Windows Server 2008 Catalog 4467243
.NET Framework 2.0, 3.0 4459933
.NET Framework 4.5.2 4459945
.NET Framework 4.6 4459942

Previous Monthly Rollups

The last few .NET Framework Monthly updates are listed below for your convenience:

Handling a new era in the Japanese calendar in .NET


Typically, calendar eras represent long time periods. In the Gregorian calendar, for example, the current era spans (as of this year) 2,018 years. In the Japanese calendar, however, a new era begins with the reign of a new emperor. On April 30, 2019, Emperor Akihito is expected to abdicate, which will bring to an end the Heisei era. On the following day, when his successor becomes emperor, a new era in the Japanese calendar will begin. It is the first transition from one era to another in the history of .NET, and the first change of eras in the Japanese calendar since Emperor Akihito’s accession in January 1989. In this blog post, I’ll discuss how eras work in general in .NET, how you can determine whether your application is affected by the era change, and what you as a developer have to do to make sure your application handles the upcoming Japanese era changes successfully.

Calendars in .NET

.NET supports a number of calendar classes, all of which are derived from the base Calendar class. Calendars can be used in either of two ways in .NET. A supported calendar is a calendar that can be used by a specific culture and that defines the formatting of dates and times for that culture. One supported calendar is the default calendar of a particular culture; it is automatically used as that culture’s calendar for culture-aware operations. Standalone calendars are used apart from a specific culture by calling members of that Calendar class directly. All calendars can be used as standalone calendars. Not all calendars can be used as supported calendars, however.

Each CultureInfo object, which represents a particular culture, has a default calendar, defined by its Calendar property. The OptionalCalendars property defines the set of calendars supported by the culture. Any member of this collection can become the current calendar for the culture by assigning it to the CultureInfo.DateTimeFormat.Calendar property.

Each calendar has a minimum supported date and a maximum supported date. The calendar classes also support eras, which divide the overall time interval supported by the calendar into two or more periods. Most .NET calendars support a single era. The DateTime and DateTimeOffset constructors that create a date using a specific calendar assume that dates belong to the current era. You can instantiate a date in an era other than the current era by calling an overload of the Calendar.ToDateTime method.

The JapaneseCalendar and JapaneseLunisolarCalendar classes

Two calendar classes, JapaneseCalendar and JapaneseLunisolarCalendar, are affected by the introduction of a new Japanese era. These calendars differ from other calendar classes in the .NET Framework in how they calculate calendar years; the reign of a new emperor marks the beginning of a new era, which begins with year 1.

The JapaneseCalendar and JapaneseLunisolarCalendar are the only two calendar classes in .NET that recognize more than one era. Neither is the default calendar of any culture. The JapaneseCalendar class is an optional calendar supported by the Japanese-Japan (ja-JP) culture and is used in some official and cultural contexts. The JapaneseLunisolarCalendar class is a standalone calendar; it cannot be the current calendar of any culture. That neither is the default calendar of the ja-JP culture minimizes the impact that results from the introduction of a new Japanese calendar era. The introduction of a new era in the Japanese calendar affects only:

Note that, with the exception of the “g” or “gg” custom format specifier, any unintended side effects from the change in Japanese eras occur only if you use a Japanese calendar class as a standalone calendar, or if you use the JapaneseCalendar as the current calendar of the ja-JP culture.

Testing era changes on Windows

The best way to determine whether your applications are affected by the new era is to test them in advance with the new era in place. You can do this immediately for .NET Framework 4.x apps and for .NET Core apps running on Windows systems. For .NET Core apps on other platforms, you’ll have to wait until the ICU globalization library is updated; see the Updating data sources section for more information.

For .NET Framework 4.x apps and for .NET Core apps on Windows systems, era information for the Japanese calendars is stored as a set of REG_SZ values in the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Nls\Calendars\Japanese\Eras key of the system registry. For example, the following figure shows the definition of the Heisei era in the Japanese calendar.

You can use the Registry Editor (regedit.exe) to add a definition for the new era. The name defines the era start date, and its value defines the native era name, native abbreviated era name, English era name, and English abbreviated era name, as follows:

Name: yyyy mm dd
Value: <native-full>_<native-abbreviated>_<English-full>_<English-abbreviated>

Since the new era name has not been announced, you can use question marks as a placeholder. For the native full and abbreviated name, you can use the FULLWIDTH QUESTION MARK (U+FF1F), and for the English full and abbreviated name, you can use the QUESTION MARK (U+003F). For example:

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Nls\Calendars\Japanese\Eras]
"2019 05 01"="??_?_??????_?"

Once the new era information is in place on any system running .NET, you can use code like the following to identify instances in which the current string representation of a date and time in a Japanese calendar will differ from its string representation after the introduction of the new era:
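The code sample for this check did not survive extraction here; a minimal sketch of such a comparison, assuming the ja-JP culture with the JapaneseCalendar as its current calendar, might look like this:

```csharp
using System;
using System.Globalization;

static void Main(string[] args)
{
    var culture = new CultureInfo("ja-JP");
    culture.DateTimeFormat.Calendar = new JapaneseCalendar();

    // Dates on either side of the expected transition (May 1, 2019).
    var dates = new[] { new DateTime(2019, 4, 30), new DateTime(2019, 5, 1) };
    foreach (var date in dates)
    {
        // With the placeholder era registered, the second date formats with
        // the "??" era name instead of 平成 (Heisei).
        Console.WriteLine(date.ToString("ggy年M月d日", culture));
    }
}
```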

Note that the new era will begin on May 1, 2019, and the Japanese government is expected to announce the official name of the new era on or about April 1, 2019. A window of approximately one month leaves very little time to test, detect, troubleshoot, and address bugs. It is important that applications be adequately tested well in advance of the beginning of the new era.

.NET changes to support the new era

To ensure that the transition to a new Japanese calendar era is as seamless as possible, the following changes have been or will be introduced in .NET Framework and .NET Core. The changes are made in servicing updates to all versions of .NET Framework from .NET Framework 3.5 through .NET Framework 4.7.2, as well as in .NET Core 2.1. Release of these updates started in September 2018 and is scheduled to go through early 2019.

Updating data sources

Currently, the way in which calendar era information is stored differs across .NET implementations:

  • For .NET Framework 4.0 and later, as well as for .NET Core running on Windows, calendar era information is provided by the Windows operating system and retrieved from the system registry. An update to Windows will add the new era value to the registry once the era name and abbreviated era name are known. .NET on Windows will automatically reflect this update.
  • For .NET Framework 3.5 on Windows, calendar era information is maintained as hard-coded data by the .NET Framework itself. An update to .NET Framework 3.5 will change its source for calendar data from private hard-coded data to the registry. Once this happens, .NET Framework 3.5 will automatically reflect the eras defined in the Windows registry.
  • For .NET Core on non-Windows platforms, calendar information is provided by International Components for Unicode (ICU), an open source globalization library. ICU libraries will be updated once the era name and abbreviated era name are known.
    Because they do not depend on ICU cultural data, applications that use the globalization invariant mode are not affected by this change.

Updates will be released as soon as possible after the new era name is announced.

Relaxed era range checks

In the past, date and time methods that depend on calendar eras threw an ArgumentOutOfRangeException when a date and time was outside the range of a specified era. The following example attempts to instantiate a date in the 65th year of the Showa era, which began on December 25, 1926 and ended on January 7, 1989. This date corresponds to January 9, 1990, which is outside the range of the Showa era in the JapaneseCalendar. As a result, an ArgumentOutOfRangeException results.
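The example code was not preserved here; a sketch of the strict-check behavior, assuming Showa is era 3 in the JapaneseCalendar era numbering, might look like:

```csharp
using System;
using System.Globalization;

static void Main(string[] args)
{
    var calendar = new JapaneseCalendar();
    try
    {
        // Showa 65 would correspond to 1990, but the Showa era ended January 7, 1989.
        DateTime date = calendar.ToDateTime(65, 1, 9, 0, 0, 0, 0, 3);
    }
    catch (ArgumentOutOfRangeException e)
    {
        Console.WriteLine(e.Message);
    }
}
```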

To accommodate the era change, .NET by default uses relaxed range enforcement rules. A date in a particular era can “overflow” into the following era, and no exception is thrown. The following example instantiates a date in the third quarter of year 31 of the Heisei era, which is more than two months after the Heisei era has ended.
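A corresponding sketch of the relaxed behavior, assuming Heisei is era 4, might look like:

```csharp
using System;
using System.Globalization;

static void Main(string[] args)
{
    var calendar = new JapaneseCalendar();
    // Heisei 31 ended on April 30, 2019; month 8 overflows into the next era,
    // but no exception is thrown under relaxed range checks.
    DateTime date = calendar.ToDateTime(31, 8, 1, 0, 0, 0, 0, 4);
    Console.WriteLine(date.ToString("yyyy-MM-dd", CultureInfo.InvariantCulture)); // 2019-08-01
}
```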

If this behavior is undesirable, you can restore strict era range checks as follows:

  • .NET Framework 4.6 or later: You can set the following AppContextSwitchOverrides element switch:
  • .NET Framework 4.5.2 or earlier: You can set the following registry value:
    Key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\AppContext
    Name Switch.System.Globalization.EnforceJapaneseEraYearRanges
    Type REG_SZ
    Value 1
  • .NET Core: You can add the following to the .netcore.runtimeconfig.json config file:
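The configuration fragments themselves were not preserved here; sketches based on the switch name above might look like the following. For .NET Framework, in app.config:

```xml
<configuration>
  <runtime>
    <AppContextSwitchOverrides value="Switch.System.Globalization.EnforceJapaneseEraYearRanges=true" />
  </runtime>
</configuration>
```

And for .NET Core, in the runtimeconfig.json file:

```json
{
  "runtimeOptions": {
    "configProperties": {
      "Switch.System.Globalization.EnforceJapaneseEraYearRanges": true
    }
  }
}
```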

The first year of an era

Traditionally, the first year of a new Japanese calendar era is called Gannen (元年). For example, instead of Heisei 1, the first year of the Heisei era can be described as Heisei Gannen.

As part of its enhanced support for Japanese calendar eras, .NET by default adopts this convention in formatting operations. In parsing operations, .NET successfully handles strings that include “1” or “Gannen” as the year component.

The following example displays a date in the first year of the Heisei era. The output from the example illustrates the difference between the current and future handling of the first year of an era by .NET. As the output from the example illustrates, the .NET formatting routine converts year 1 to Gannen only if the year is followed by the 年 symbol.
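The example itself was lost in extraction; a sketch of the comparison might be:

```csharp
using System;
using System.Globalization;

static void Main(string[] args)
{
    var culture = new CultureInfo("ja-JP");
    culture.DateTimeFormat.Calendar = new JapaneseCalendar();

    var firstHeiseiDay = new DateTime(1989, 1, 8);
    // After the update: 平成元年1月8日 ("Gannen"); before the update: 平成1年1月8日.
    Console.WriteLine(firstHeiseiDay.ToString("ggy年M月d日", culture));
    // Without the 年 symbol, the year still formats as "1":
    Console.WriteLine(firstHeiseiDay.ToString("ggy/M/d", culture));
}
```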

If this behavior is undesirable in formatting operations, you can restore the previous behavior, which always represents the year as “1” rather than “Gannen”, by doing the following:

  • .NET Framework 4.6 or later: You can set the following AppContextSwitchOverrides element switch:
  • .NET Framework 4.5.2 or earlier: You can set the following registry value:
    Key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\AppContext
    Name Switch.System.Globalization.FormatJapaneseFirstYearAsNumber
    Type REG_SZ
    Value 1
  • .NET Core: You can add the following to the .netcore.runtimeconfig.json config file:

Although there rarely should be a need to do this, you can also restore .NET’s previous behavior in parsing operations. This recognizes only “1” as the first year of an era, and does not recognize “Gannen”. You can do this as follows for both .NET Framework and .NET Core:

  • .NET Framework 4.6 or later: You can set the following AppContextSwitchOverrides element switch:
  • .NET Framework 4.5.2 or earlier: You can set the following registry value:
    Key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\AppContext
    Name Switch.System.Globalization.EnforceLegacyJapaneseDateParsing
    Type REG_SZ
    Value 1
  • .NET Core: You can add the following to the .netcore.runtimeconfig.json config file:

 

Handling Japanese calendar eras effectively

The change in Japanese eras poses a number of issues. The following list addresses some of these and proposes workarounds.

Specify an era when instantiating dates

You can instantiate a date using the date values of the Japanese calendar in any of three ways:

The .NET calendar classes include a CurrentEra property, which indicates the current (or default) era used in interpreting dates expressed in a specific calendar. Its value is the constant 0. It is an index into the Eras property, which orders eras in reverse chronological order. In other words, the most recent era is always the default era.

When eras can change unexpectedly, calling a date and time instantiation method that relies on the default era can produce an ambiguous date. In the next example, the call to the JapaneseCalendar.ToDateTime method that uses the default era returns different dates depending on whether or not the new era has been defined in the registry. Note that the output for this and the following example uses the sortable date/time pattern.
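The example code is missing here; the ambiguity can be sketched as follows (assuming era 4 is Heisei):

```csharp
using System;
using System.Globalization;

static void Main(string[] args)
{
    var calendar = new JapaneseCalendar();

    // Ambiguous: CurrentEra (0) means "the most recent era defined",
    // so this date changes meaning once a new era is registered.
    DateTime ambiguous = calendar.ToDateTime(1, 6, 1, 0, 0, 0, 0, JapaneseCalendar.CurrentEra);

    // Unambiguous: name the era explicitly.
    DateTime heisei = calendar.ToDateTime(1, 6, 1, 0, 0, 0, 0, 4); // June 1, 1989

    Console.WriteLine(heisei.ToString("s", CultureInfo.InvariantCulture));
}
```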

You can do either of two things to avoid potential ambiguity:

  • Instantiate dates in the Gregorian calendar. Use the Japanese calendar or the Japanese Lunisolar calendar only for the string representation of dates.
  • Specify the era explicitly when instantiating dates, such as by calling an overload of Calendar.ToDateTime that accepts an era.

Use relaxed era range checks

A basic problem of calendars that can add new eras is that you can’t be certain that a future date will always belong to the current era. If strict range checking is enabled, future dates that are valid before an era change may become invalid after the era change. For this reason, leave relaxed era range checking on (the default value).

Use the era in formatting and parsing operations

Because dates can be ambiguous, you should always format a date value with its era. This is the default behavior of the standard date and time format strings. If you are using a custom date and time format string, be sure to include the “g” or “gg” custom format specifier. Conventionally, the era precedes the other date components in the string representation of a Japanese calendar date.

For parsing operations, also ensure that an era is present unless you want all dates and times to default to the current era.

A call to action

The introduction of a new era in the Japanese calendar poses challenges for any application that uses either the JapaneseCalendar or the JapaneseLunisolarCalendar. We’ve discussed how eras work with calendars and dates and times in .NET, how .NET applications will be updated to use the new era, how .NET APIs are changing to help you handle the Japanese era change, and what you can do as a developer to test your application and minimize the effect of future era changes. Above all, we recommend that you:

  • Determine whether your applications are affected by the Japanese era change. All applications that use the JapaneseCalendar and the JapaneseLunisolarCalendar classes may be affected.
  • Test your application to determine whether it can handle all dates, and particularly dates that exceed the range of the current Japanese calendar era.
  • Adopt the practices outlined in the Handling Japanese calendar eras effectively section to ensure that you can handle era changes effectively.

See also

The Japanese Calendar’s Y2K Moment
Testing for New Japanese Era
Japanese calendar
Japanese era name
List of Japanese era names
Working with Calendars

.NET Framework November 2018 Preview of Quality Rollup


Today, we are releasing the November 2018 Preview of Quality Rollup.

Quality and Reliability

This release contains the following quality and reliability improvements.

CLR

  • Addressed an issue with KB4096417 where we switched to CLR-implemented write-watch for pages. The GC will no longer call VirtualAlloc when running under workstation GC mode. [685611]

SQL

  • Provides an AppContext flag for making the default value of TransparentNetworkIPResolution false in SqlClient connection strings. [690465]

WCF

  • Addressed a System.AccessViolationException caused by accessing a disposed X509Certificate2 instance in a rare race condition, by deferring the service certificate cleanup to the GC. The impacted scenario is WCF NetTcp bindings using reliable sessions with certificate authentication. [657003]

Note: Additional information on these improvements is not available. The VSTS bug number provided with each improvement is a unique ID that you can give Microsoft Customer Support, include in StackOverflow comments or use in web searches.

Getting the Update

The Preview of Quality Rollup is available via Windows Update, Windows Server Update Services, and Microsoft Update Catalog.

Microsoft Update Catalog

You can get the update via the Microsoft Update Catalog. For Windows 10, .NET Framework updates are part of the Windows 10 Monthly Rollup. The following table is for Windows 10 and Windows Server 2016+ versions.

Product Version Preview of Quality Rollup KB
Windows 10 1803 (April 2018 Update) Catalog 4467682
.NET Framework 3.5, 4.7.2 4467682
Windows 10 1709 (Fall Creators Update) Catalog 4467681
.NET Framework 3.5, 4.7.1, 4.7.2 4467681
Windows 10 1703 (Creators Update) Catalog 4467699
.NET Framework 3.5, 4.7, 4.7.1, 4.7.2 4467699
Windows 10 1607 (Anniversary Update) Catalog 4467684
.NET Framework 3.5, 4.6.2, 4.7, 4.7.1, 4.7.2 4467684

The following table is for earlier Windows and Windows Server versions.


Product Version Preview of Quality Rollup KB
Windows 8.1, Windows RT 8.1, Windows Server 2012 R2 Catalog 4467226
.NET Framework 3.5 Catalog 4459935
.NET Framework 4.5.2 Catalog 4459943
.NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2 Catalog 4467087
Windows Server 2012 Catalog 4467225
.NET Framework 3.5 Catalog 4459932
.NET Framework 4.5.2 Catalog 4459944
.NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2 Catalog 4467086
Windows 7, Windows Server 2008 R2 Catalog 4467224
.NET Framework 3.5.1 Catalog 4459934
.NET Framework 4.5.2 Catalog 4459945
.NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2 Catalog 4467088
Windows Server 2008 Catalog 4467227
.NET Framework 2.0, 3.0 Catalog 4459933
.NET Framework 4.5.2 Catalog 4459945
.NET Framework 4.6 Catalog 4467088

Previous Monthly Rollups

The last few .NET Framework Monthly updates are listed below for your convenience:

Announcing .NET Framework 4.8 Early Access build 3694


We are happy to let you know that .NET Framework 4.8 is now feature complete and we have an early access build to share with you all! We will continue to stabilize this release and take more fixes over the coming months, and we would greatly appreciate it if you could help us ensure this is a high-quality release by trying it out and providing feedback on the new features via the .NET Framework Early Access GitHub repository.

This build includes an updated .NET 4.8 runtime as well as the .NET 4.8 Developer Pack (a single package that bundles the .NET Framework 4.8 runtime, the .NET 4.8 Targeting Pack and the .NET Framework 4.8 SDK). Please note: this build is not supported for production use.

Next steps:
To explore the new features, download the .NET 4.8 Developer Pack build 3694. If, instead, you want to try just the .NET 4.8 runtime, you can download either of these:

This preview build 3694 includes improvements/fixes in the following areas:

  • [BCL] – Reducing FIPS Impact on Cryptography
  • [CLR] – Antimalware scanning for all assemblies
  • [WCF] – ServiceHealthBehavior
  • [WPF] – Support for UIAutomation ControllerFor property
  • [WPF] – Tooltips on keyboard access
  • [WPF] – Added Support for SizeOfSet and PositionInSet UIAutomation properties

You can see the complete list of improvements in this build here.

.NET Framework build 3694 is also included in the next update for Windows 10. You can sign up for Windows Insiders to validate that your applications work great on the latest .NET Framework included in the latest Windows 10 releases.

BCL – Reducing FIPS Impact on Cryptography

.NET Framework 2.0+ have cryptographic provider classes such as SHA256Managed, which throw a CryptographicException when the system cryptographic libraries are configured in “FIPS mode”. These exceptions are thrown because the managed versions have not undergone FIPS (Federal Information Processing Standards) 140-2 certification (JIT and NGEN image generation would both invalidate the certificate), unlike the system cryptographic libraries. Few developers have their development machines in “FIPS mode”, which results in these exceptions being raised in production (or on customer systems). The “FIPS mode” setting was also used by .NET Framework to block cryptographic algorithms which were not considered an approved algorithm by the FIPS rules.

For applications built for .NET Framework 4.8, these exceptions will no longer be thrown (by default). Instead, the SHA256Managed class (and the other managed cryptography classes) will redirect the cryptographic operations to a system cryptography library. This policy change effectively removes a potentially confusing difference between developer environments and the production environments in which the code runs and makes native components and managed components operate under the same cryptographic policy.

Applications targeting .NET Framework 4.8 will automatically switch to the newer, relaxed policy and will no longer see exceptions being thrown from MD5Cng, MD5CryptoServiceProvider, RC2CryptoServiceProvider, RIPEMD160Managed, and RijndaelManaged when in “FIPS mode”. Applications which depend on the exceptions from previous versions can return to the previous behavior by setting the AppContext switch “Switch.System.Security.Cryptography.UseLegacyFipsThrow” to “true”.
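For reference, opting back into the legacy throwing behavior via app.config might look like the following sketch, based on the switch name given above:

```xml
<configuration>
  <runtime>
    <AppContextSwitchOverrides value="Switch.System.Security.Cryptography.UseLegacyFipsThrow=true" />
  </runtime>
</configuration>
```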

Runtime – Antimalware Scanning for All Assemblies

In previous versions of .NET Framework, Windows Defender or third-party antimalware software would automatically scan all assemblies loaded from disk for malware. However, assemblies loaded from elsewhere, such as by using Assembly.Load(byte[]), would not be scanned and could potentially carry viruses undetected.

.NET Framework 4.8 on Windows 10 triggers scans for those assemblies by Windows Defender and many other antimalware solutions that implement the Antimalware Scan Interface. We expect that this will make it harder for malware to disguise itself in .NET programs.

WCF – ServiceHealthBehavior

Health endpoints have many benefits and are widely used by orchestration tools to manage the service based on the service health status. Health checks can also be used by monitoring tools to track and alert on the availability and performance of the service, where they serve as early problem indicators. 

ServiceHealthBehavior is a WCF service behavior that extends IServiceBehavior.  When added to the ServiceDescription.Behaviors collection, it will enable the following: 

  • Return service health status with HTTP response codes: One can specify in the query string the HTTP status code for a HTTP/GET health probe request.
  • Publication of service health: Service-specific details, including service state, throttle counts, and capacity, are displayed using an HTTP/GET request with the “?health” query string. Knowing and easily having access to the information displayed is important when troubleshooting a misbehaving WCF service.
Configuring ServiceHealthBehavior:

There are two ways to expose the health endpoint and publish WCF service health information: by using code or by using a configuration file.

    • Enable health endpoint using code
    • Enable health endpoint using config
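As a sketch of the code-based option (the service type and base address are illustrative; ServiceHealthBehavior lives in System.ServiceModel.Description):

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Description;

class Program
{
    static void Main()
    {
        // Illustrative self-hosted service; Service1 and the address are placeholders.
        var host = new ServiceHost(typeof(Service1), new Uri("http://contoso:81/Service1"));

        // Adding ServiceHealthBehavior enables the "?health" endpoint.
        host.Description.Behaviors.Add(new ServiceHealthBehavior());

        host.Open();
        Console.ReadLine();
        host.Close();
    }
}
```

The configuration-based alternative adds a serviceHealth element to the service behavior in the application’s config file.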
Return service health status with HTTP response codes:

Health status can be queried by query parameters (OnServiceFailure, OnDispatcherFailure, OnListenerFailure, OnThrottlePercentExceeded). HTTP response code (200 – 599) can be specified for each query parameter. If the HTTP response code is omitted for a query parameter, a 503 HTTP response code is used by default.

Query parameters and examples:

1. OnServiceFailure:

  • Example: by querying https://contoso:81/Service1?health&OnServiceFailure=450, a 450 HTTP response status code is returned when ServiceHost.State is greater than CommunicationState.Opened.

2. OnDispatcherFailure:

  • Example: by querying https://contoso:81/Service1?health&OnDispatcherFailure=455, a 455 HTTP response status code is returned when the state of any of the channel dispatchers is greater than CommunicationState.Opened.

3. OnListenerFailure:

  • Example: by querying https://contoso:81/Service1?health&OnListenerFailure=465, a 465 HTTP response status code is returned when the state of any of the channel listeners is greater than CommunicationState.Opened. 

4. OnThrottlePercentExceeded: Specifies the percentage {1 – 100} that triggers the response and its HTTP response code {200 – 599}.

  • Example: by querying https://contoso:81/Service1?health&OnThrottlePercentExceeded=70:350,95:500, when the throttle percentage is equal to or greater than 95%, a 500 HTTP response code is returned; when the percentage is equal to or greater than 70% and less than 95%, 350 is returned; otherwise, 200 is returned.
Publication of service health:

After enabling the health endpoint, the service health status can be displayed either in HTML (by specifying the query string https://contoso:81/Service1?health) or XML (by specifying the query string https://contoso:81/Service1?health&Xml) format. https://contoso:81/Service1?health&NoContent returns an empty HTML page.

Note:

It’s best practice to always limit access to the service health endpoint. You can restrict access by using the following mechanisms:

  1. Use a different port for the health endpoint than the one used for the other services, and use a firewall rule to control access.
  2. Add the desirable authentication and authorization to the health endpoint binding.

WPF – Support for UIAutomation ControllerFor property

UIAutomation’s ControllerFor property returns an array of automation elements that are manipulated by the automation element that supports this property. This property is commonly used for Auto-suggest accessibility. ControllerFor is used when an automation element affects one or more segments of the application UI or the desktop. Otherwise, it is hard to associate the impact of the control operation with UI elements. This feature adds the ability for controls to provide a value for ControllerFor property.

A new virtual method has been added to AutomationPeer:

To provide a value for the ControllerFor property, simply override this method and return a list of AutomationPeers for the controls being manipulated by this AutomationPeer:
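A minimal sketch of such an override (the peer class and the _controlledPeers field are illustrative; the new virtual method on AutomationPeer is GetControlledPeersCore):

```csharp
using System.Collections.Generic;
using System.Windows;
using System.Windows.Automation.Peers;

// Sketch: a custom peer reporting the peers of the controls it manipulates.
public class SuggestionBoxAutomationPeer : FrameworkElementAutomationPeer
{
    // Illustrative: peers of the UI elements affected by this control.
    private readonly List<AutomationPeer> _controlledPeers = new List<AutomationPeer>();

    public SuggestionBoxAutomationPeer(FrameworkElement owner) : base(owner) { }

    protected override IList<AutomationPeer> GetControlledPeersCore()
    {
        return _controlledPeers;
    }
}
```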

WPF – Tooltips on keyboard access

Currently tooltips only display when a user hovers the mouse cursor over a control. In .NET Framework 4.8, WPF is adding a feature that enables tooltips to show on keyboard focus, as well as via a keyboard shortcut.

To enable this feature, an application needs to target .NET Framework 4.8 or opt-in via AppContext switch “Switch.UseLegacyAccessibilityFeatures.3” and “Switch.UseLegacyToolTipDisplay”.

Sample App.config file: 
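A sketch of such a file, assuming the two switch names given above (setting them to false opts in to the new tooltip behavior):

```xml
<configuration>
  <runtime>
    <AppContextSwitchOverrides value="Switch.UseLegacyAccessibilityFeatures.3=false;Switch.UseLegacyToolTipDisplay=false"/>
  </runtime>
</configuration>
```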

Once enabled, all controls containing a tooltip will start to display it once the control receives keyboard focus. The tooltip can be dismissed over time or when keyboard focus changes. Users can also dismiss the tooltip manually via a new keyboard shortcut Ctrl + Shift + F10. Once the tooltip has been dismissed it can be displayed again via the same keyboard shortcut.

Note: RibbonToolTips on Ribbon controls won’t show on keyboard focus – they will only show via the keyboard shortcut.

WPF – Added Support for SizeOfSet and PositionInSet UIAutomation properties

Windows 10 introduced new UIAutomation properties SizeOfSet and PositionInSet which are used by applications to describe the count of items in a set. UIAutomation client applications such as screen readers can then query an application for these properties and announce an accurate representation of the application’s UI.

This feature adds support for WPF applications to expose these two properties to UIAutomation. This can be accomplished in two ways:

  1. DependencyProperties 

New DependencyProperties SizeOfSet and PositionInSet have been added to the System.Windows.Automation.AutomationProperties namespace. A developer can set their values via XAML:
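For example (the Button elements and values are illustrative):

```xml
<!-- Three buttons presented as a set of size 3 -->
<Button AutomationProperties.SizeOfSet="3" AutomationProperties.PositionInSet="1" Content="One"/>
<Button AutomationProperties.SizeOfSet="3" AutomationProperties.PositionInSet="2" Content="Two"/>
<Button AutomationProperties.SizeOfSet="3" AutomationProperties.PositionInSet="3" Content="Three"/>
```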

  2. AutomationPeer virtual methods 

Virtual methods GetSizeOfSetCore and GetPositionInSetCore have also been added to the AutomationPeer class. A developer can provide values for SizeOfSet and PositionInSet by overriding these methods:
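A minimal sketch (the peer class and the returned values are illustrative):

```csharp
using System.Windows;
using System.Windows.Automation.Peers;

public class CustomItemAutomationPeer : FrameworkElementAutomationPeer
{
    public CustomItemAutomationPeer(FrameworkElement owner) : base(owner) { }

    // Illustrative values: this item is the first of a set of three.
    protected override int GetSizeOfSetCore() => 3;
    protected override int GetPositionInSetCore() => 1;
}
```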

Automatic values 

Items in ItemsControls will provide a value for these properties automatically without additional action from the developer. If an ItemsControl is grouped, the collection of groups will be represented as a set and each group counted as a separate set, with each item inside that group providing its position inside that group as well as the size of the group. Automatic values are not affected by virtualization. Even if an item is not realized, it is still counted towards the total size of the set and affects the position in the set of its sibling items.

Automatic values are only provided if the developer is targeting .NET Framework 4.8 or has set the AppContext switch “Switch.UseLegacyAccessibilityFeatures.3” – for example via App.config file:
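A sketch of such an App.config entry (setting the switch to false opts in to the new behavior):

```xml
<configuration>
  <runtime>
    <AppContextSwitchOverrides value="Switch.UseLegacyAccessibilityFeatures.3=false"/>
  </runtime>
</configuration>
```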

Previous .NET Framework Early Access Build

Closing

Thanks for your continued support of the Early Access Program. We will do our best to ensure these builds are stable and compatible but if you see bugs or issues please take the time to report these to us on Github so we can address these issues before the official release.

Thank you!

 

Announcing ML.NET 0.8 – Machine Learning for .NET



ML.NET is an open-source and cross-platform framework (Windows, Linux, macOS) which makes machine learning accessible for .NET developers.

ML.NET allows you to create and use machine learning models targeting scenarios to achieve common tasks such as sentiment analysis, issue classification, forecasting, recommendations, fraud detection, image classification, and more. You can check out these common tasks at our GitHub repo with ML.NET samples.

Today we’re happy to announce the release of ML.NET 0.8. (ML.NET 0.1 was released at //Build 2018). This release focuses on adding improved support for recommendation scenarios, model explainability in the form of feature importance, debuggability by previewing your in-memory datasets, API improvements such as caching, filtering, and more.

This blog post provides details about the following topics in the ML.NET 0.8 release:

New Recommendation Scenarios (e.g. Frequently Bought Together)


Recommender systems produce a list of recommendations for products in a product catalog, songs, movies, and more. Products like Netflix, Amazon, and Pinterest have democratized the use of recommendation scenarios over the last decade.

ML.NET uses approaches based on Matrix Factorization and Field-Aware Factorization Machines for recommendation, which enable the following scenarios. In general, Field-Aware Factorization Machines are a generalization of Matrix Factorization and allow passing additional metadata.

With ML.NET 0.8 we have added another scenario for Matrix Factorization which enables recommendations.

Recommendation scenarios, recommended solutions, and samples:

  • Product recommendations based on Product Id, Rating, User Id, and additional metadata such as Product Description and User Demographics (age, country, etc.): use Field-Aware Factorization Machines (available since ML.NET 0.3; sample here).
  • Product recommendations based on Product Id, Rating, and User Id only: use Matrix Factorization (available since ML.NET 0.7; sample here).
  • Product recommendations based on Product Id and co-purchased Product Ids: use One-Class Matrix Factorization (new in ML.NET 0.8; sample here).

Yes! Product recommendations are still possible even if you only have historical purchase data for your store.

This is a popular scenario as in many situations you might not have ratings available to you.

With historical purchasing data you can still build recommendations by providing your users a list of “Frequently Bought Together” product items.

For example, Amazon.com recommends a set of products based on the product selected by the user.


We now support this scenario in ML.NET 0.8, and you can try out this sample which performs product recommendations based upon an Amazon Co-purchasing dataset.

Improved debuggability by previewing the data


In most cases, when you start working with your pipeline and loading your dataset, it is very useful to peek at the data loaded into an ML.NET DataView, and even to look at it after intermediate transformation steps, to ensure the data is transformed as expected.

First, you can review the schema of your DataView.
Simply hover over the IDataView object, expand it, and look for the Schema property.


If you want to take a look at the actual data loaded into the DataView, you can follow the steps below.


The steps are:

  • While debugging, open a Watch window.
  • Enter the variable name of your DataView object (in this case, testDataView) and call the Preview() method on it.
  • Now, click on the rows you want to inspect. That will show you the actual data loaded in the DataView.

By default, the first 100 values are output in ColumnView and RowView. You can change that by passing the number of rows you are interested in as an argument to the Preview() function, such as Preview(500).
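Preview() can also be called from code rather than the debugger. A sketch, assuming the ML.NET 0.8 Preview() extension returning a preview object with a RowView of key/value pairs (testDataView is assumed to be an existing IDataView):

```csharp
using System;
using Microsoft.ML;

// Sketch: inspecting up to 500 rows of a DataView programmatically.
var preview = testDataView.Preview(maxRows: 500);

foreach (var row in preview.RowView)
{
    foreach (var column in row.Values)
        Console.Write($"{column.Key}: {column.Value} | ");
    Console.WriteLine();
}
```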

Model explainability


In ML.NET 0.8 release, we have included APIs for model explainability that we use internally at Microsoft to help machine learning developers better understand the feature importance of models (“Overall Feature Importance”) and create high-capacity models that can be interpreted by others (“Generalized Additive Models”).

Overall feature importance gives a sense of which features are most important to the model overall. When creating machine learning models, it is often not enough to simply make predictions and evaluate their accuracy. Feature importance helps you understand which data features are most valuable to the model for making a good prediction. For instance, when predicting the price of a car, some features, like mileage and make/brand, are more important, while others, like the car’s color, have less impact.

The “Overall feature importance” of a model is enabled through a technique named “Permutation Feature Importance” (PFI). PFI measures feature importance by asking the question, “What would the effect on the model be if the values for a feature were set to a random value (permuted across the set of examples)?”.

The advantage of the PFI method is that it is model agnostic — it works with any model that can be evaluated — and it can use any dataset, not just the training set, to compute feature importance.

You can use PFI to compute feature importance with code like the following:

// Compute the feature importance using PFI
var permutationMetrics = mlContext.Regression.PermutationFeatureImportance(model, data);

// Get the feature names from the training set
var featureNames = data.Schema.GetColumns()
                .Select(tuple => tuple.column.Name) // Get the column names
                .Where(name => name != labelName) // Drop the Label
                .ToArray();

// Write out the feature names and their importance to the model's R-squared value
for (int i = 0; i < featureNames.Length; i++)
  Console.WriteLine($"{featureNames[i]}\t{permutationMetrics[i].rSquared:G4}");

You would get console output similar to the metrics below:

Console output:

    Feature            Model Weight    Change in R - Squared
    --------------------------------------------------------
    RoomsPerDwelling      50.80             -0.3695
    EmploymentDistance   -17.79             -0.2238
    TeacherRatio         -19.83             -0.1228
    TaxRate              -8.60              -0.1042
    NitricOxides         -15.95             -0.1025
    HighwayDistance        5.37             -0.09345
    CrimesPerCapita      -15.05             -0.05797
    PercentPre40s         -4.64             -0.0385
    PercentResidental      3.98             -0.02184
    CharlesRiver           3.38             -0.01487
    PercentNonRetail      -1.94             -0.007231

Note that in current ML.NET v0.8, PFI only works for binary classification and regression based models, but we’ll expand to additional ML tasks in the upcoming versions.

See the sample in the ML.NET repository for a complete example using PFI to analyze the feature importance of a model.

Generalized Additive Models (GAMs) have highly explainable predictions. They are similar to linear models in terms of ease of understanding, but are more flexible, can have better performance, and can also be visualized/plotted for easier analysis.

Example usage of how to train a GAM model, inspect and interpret the results, can be found here.

Additional API improvements in ML.NET 0.8

In this release we have also added other enhancements to our APIs which help with filtering rows in DataViews, caching data, allowing users to save data to the IDataView (IDV) binary format. You can learn about these features here.

Filtering rows in a DataView


Sometimes you might need to filter the data used for training a model. For example, you might need to remove rows where a certain column’s value is lower or higher than certain bounds, such as when filtering out “outlier” data.

This can now be done with additional filters like the FilterByColumn() API, such as in the following code from this sample app at ML.NET samples, where we want to keep only payment rows between $1 and $150: for this particular scenario, fares higher than $150 are considered outliers (extreme data that distorts the model) and fares lower than $1 might be data errors:

IDataView trainingDataView = mlContext.Data.FilterByColumn(baseTrainingDataView, "FareAmount", lowerBound: 1, upperBound: 150);

Thanks to the DataView preview in Visual Studio mentioned earlier, you can now inspect the filtered data in your DataView.

Additional sample code can be checked out here.

Caching APIs


Some estimators iterate over the data multiple times. Instead of always reading from the file, you can choose to cache the data, which can speed up training.

A good example is the following, where training uses an OVA (One-Versus-All) trainer that runs multiple iterations over the same data. By eliminating the need to read data from disk multiple times, you can reduce model training time by up to 50%:

var dataProcessPipeline = mlContext.Transforms.Conversion.MapValueToKey("Area", "Label")
        .Append(mlContext.Transforms.Text.FeaturizeText("Title", "TitleFeaturized"))
        .Append(mlContext.Transforms.Text.FeaturizeText("Description", "DescriptionFeaturized"))
        .Append(mlContext.Transforms.Concatenate("Features", "TitleFeaturized", "DescriptionFeaturized"))
        //Example Caching the DataView 
        .AppendCacheCheckpoint(mlContext) 
        .Append(mlContext.BinaryClassification.Trainers.AveragedPerceptron(DefaultColumnNames.Label,                                  
                                                                          DefaultColumnNames.Features,
                                                                          numIterations: 10));

This example code is implemented and execution time measured in this sample app at the ML.NET Samples repo.

An additional test example can be found here.

Enabled saving and loading data in IDataView (IDV) binary format for improved performance


It is sometimes useful to save data after it has been transformed. For example, you might have featurized all the text into sparse vectors and want to perform repeated experimentation with different trainers without continuously repeating the data transformation.

IDV format is a binary dataview file format provided by ML.NET.

Saving and loading files in IDV format is often significantly faster than using a text format because it is compressed.

In addition, because the file is already schematized, you don’t need to specify the column types as you do when using a regular TextLoader, so the code is simpler as well as faster.

Reading a binary data file can be done using this simple line of code:

mlContext.Data.ReadFromBinary("pathToFile");

Writing a binary data file can be done using this code:

mlContext.Data.SaveAsBinary("pathToFile");

Enabled stateful prediction engine for time series problems such as anomaly detection


ML.NET 0.7 enabled anomaly detection scenarios based on time series. However, the prediction engine was stateless, which meant that every time you wanted to determine whether the latest data point was anomalous, you needed to provide historical data as well. This is unnatural.

The prediction engine can now keep state of time series data seen so far, so you can now get predictions by just providing the latest data point. This is enabled by using CreateTimeSeriesPredictionFunction() instead of CreatePredictionFunction().

Example usage can be found here.

Get started!


If you haven’t already, get started with ML.NET here.

Next, to go further, explore some other resources:

We would appreciate your feedback: please file issues with any suggestions or enhancements in the ML.NET GitHub repo to help us shape ML.NET and make .NET a great platform of choice for Machine Learning.

Thanks,

The ML.NET Team.

This blog was authored by Cesar de la Torre, Gal Oshri, Rogan Carr plus additional contributions from the ML.NET team

Announcing Entity Framework Core 2.2


Today we’re making the final version of EF Core 2.2 available, alongside ASP.NET Core 2.2 and .NET Core 2.2. This is the latest release of our open-source and cross-platform object-database mapping technology.

EF Core 2.2 RTM includes more than a hundred bug fixes and a few new features:

Spatial data support

Spatial data can be used to represent the physical location and shape of objects. Many databases can natively store, index, and query spatial data. Common scenarios include querying for objects within a given distance, and testing if a polygon contains a given location. EF Core 2.2 now supports working with spatial data from various databases using types from the NetTopologySuite (NTS) library.

Spatial data support is implemented as a series of provider-specific extension packages. Each of these packages contributes mappings for NTS types and methods, and the corresponding spatial types and functions in the database. Such provider extensions are now available for SQL Server, SQLite, and PostgreSQL (from the Npgsql project). Spatial types can be used directly with the EF Core in-memory provider without additional extensions.

Once the provider extension is installed, you can add properties of supported types to your entities. For example:

using NetTopologySuite.Geometries;
using System.ComponentModel.DataAnnotations;

namespace MyApp
{
  public class Friend
  {
    [Key]
    public string Name { get; set; }

    [Required]
    public Point Location { get; set; }
  }
}

You can then persist entities with spatial data:

using (var context = new MyDbContext())
{
    context.Add(
        new Friend
        {
            Name = "Bill",
            Location = new Point(-122.34877, 47.6233355) {SRID = 4326 }
        });
    context.SaveChanges();
}

And you can execute database queries based on spatial data and operations:

  var nearestFriends =
      (from f in context.Friends
      orderby f.Location.Distance(myLocation) descending
      select f).Take(5).ToList();

For more information on this feature, see the spatial data documentation.

Collections of owned entities

EF Core 2.0 added the ability to model ownership in one-to-one associations. EF Core 2.2 extends the ability to express ownership to one-to-many associations. Ownership helps constrain how entities are used.

For example, owned entities:

  • Can only ever appear on navigation properties of other entity types.
  • Are automatically loaded, and can only be tracked by a DbContext alongside their owner.

In relational databases, owned collections are mapped to separate tables from the owner, just like regular one-to-many associations. But in document-oriented databases, we plan to nest owned entities (in owned collections or references) within the same document as the owner.

You can use the feature by calling the new OwnsMany() API:

modelBuilder.Entity<Customer>().OwnsMany(c => c.Addresses);

For more information, see the updated owned entities documentation.

Query tags

This feature simplifies the correlation of LINQ queries in code with generated SQL queries captured in logs.

To take advantage of query tags, you annotate a LINQ query using the new TagWith() method. Using the spatial query from a previous example:

  var nearestFriends =
      (from f in context.Friends.TagWith(@"This is my spatial query!")
      orderby f.Location.Distance(myLocation) descending
      select f).Take(5).ToList();

This LINQ query will produce the following SQL output:

-- This is my spatial query!

SELECT TOP(@__p_1) [f].[Name], [f].[Location]
FROM [Friends] AS [f]
ORDER BY [f].[Location].STDistance(@__myLocation_0) DESC

For more information, see the query tags documentation.

Getting EF Core 2.2

The EF Core NuGet packages are available on the NuGet Gallery, and also as part of ASP.NET Core 2.2 and the new .NET Core SDK.

If you want to use EF Core in an application based on ASP.NET Core, we recommend that first you upgrade your application to ASP.NET Core 2.2.

In general, the best way to use EF Core in an application is to install the corresponding NuGet package for the provider your application will use. For example, to add the 2.2 version of the SQL Server provider in a .NET Core project from the command line, use:

$ dotnet add package Microsoft.EntityFrameworkCore.SqlServer -v 2.2.0

Or from the Package Manager Console in Visual Studio:

PM> Install-Package Microsoft.EntityFrameworkCore.SqlServer -Version 2.2.0

For more information on how to add EF Core to your projects, see our documentation on Installing Entity Framework Core.

Compatibility with EF Core 2.1

We spent much time and effort making sure that EF Core 2.2 is backwards compatible with existing EF Core 2.1 providers, and that updating an application to build on EF Core 2.2 won’t cause compatibility issues. We expect most upgrades to be smooth, however if you find any unexpected issues, please report them to our issue tracker.

There is one known change in EF Core 2.2 that could require minor updates in application code. Read the description of the following issue for more details:

  • #13986 Type configured as both owned entity and regular entity requires a primary key to be defined after upgrading from 2.1 to 2.2

We intend to maintain a list of issues that may require adjustments to existing code on our issue tracker.

What’s next: EF Core 3.0

With EF Core 2.2 out the door, our main focus is now EF Core 3.0. We haven’t completed any new features yet, so the EF Core 3.0 Preview 1 packages available on the NuGet Gallery today only contain minor changes made since EF Core 2.2.

In fact, there are several details of the next major release still under discussion, and we plan to share more information in upcoming announcements, but here are some of the themes we know about so far:

  • LINQ improvements: LINQ enables you to write database queries without leaving your language of choice, taking advantage of rich type information to get IntelliSense and compile-time type checking. But LINQ also enables you to write an unlimited number of complicated queries, and that has always been a huge challenge for LINQ providers. In the first few versions of EF Core, we solved that in part by figuring out what portions of a query could be translated to SQL, and then by allowing the rest of the query to execute in memory on the client. This client-side execution can be desirable in some situations, but in many other cases it can result in inefficient queries that may not be identified until an application is deployed to production. In EF Core 3.0, we are planning to make profound changes to how our LINQ implementation works, and how we test it. The goals are to make it more robust (for example, to avoid breaking queries in patch releases), to be able to translate more expressions correctly into SQL, to generate efficient queries in more cases, and to prevent inefficient queries from going undetected.
  • Cosmos DB support: We’re working on a Cosmos DB provider for EF Core, to enable developers familiar with the EF programming model to easily target Azure Cosmos DB as an application database. The goal is to make some of the advantages of Cosmos DB, like global distribution, “always on” availability, elastic scalability, and low latency, even more accessible to .NET developers. The provider will enable most EF Core features, like automatic change tracking, LINQ, and value conversions, against the SQL API in Cosmos DB. We started this effort before EF Core 2.2, and we have made some preview versions of the provider available. The new plan is to continue developing the provider alongside EF Core 3.0.
  • C# 8.0 support: We want our customers to take advantage of some of the new features coming in C# 8.0, like async streams (including await foreach) and nullable reference types, while using EF Core.
  • Reverse engineering database views into query types: In EF Core 2.1, we added support for query types, which can represent data that can be read from the database, but cannot be updated. Query types are a great fit for mapping database views, so in EF Core 3.0, we would like to automate the creation of query types for database views.
  • Property bag entities: This feature is about enabling entities that store data in indexed properties instead of regular properties, and also about being able to use instances of the same .NET class (potentially something as simple as a Dictionary<string, object>) to represent different entity types in the same EF Core model. This feature is a stepping stone to support many-to-many relationships without a join entity, which is one of the most requested improvements for EF Core.
  • EF 6.3 on .NET Core: We understand that many existing applications use previous versions of EF, and that porting them to EF Core only to take advantage of .NET Core can sometimes require a significant effort. For that reason, we will be adapting the next version of EF 6 to run on .NET Core 3.0. We are doing this to facilitate porting existing applications with minimal changes. There are going to be some limitations (for example, it will require new providers, spatial support with SQL Server won’t be enabled), and there are no new features planned for EF 6.

Thank you

The EF team would like to thank everyone for all the community feedback and contributions that went into EF Core 2.2. Once more, you can report any new issues you find on our issue tracker.


Announcing .NET Core 2.2


We’re excited to announce the release of .NET Core 2.2. It includes diagnostic improvements to the runtime, support for ARM32 for Windows and Azure Active Directory for SQL Client. The biggest improvements in this release are in ASP.NET Core.

ASP.NET Core 2.2 and Entity Framework Core 2.2 are also releasing today.

You can download and get started with .NET Core 2.2, on Windows, macOS, and Linux:

.NET Core 2.2 is supported by Visual Studio 15.9, Visual Studio for Mac and Visual Studio Code.

Docker images are available at microsoft/dotnet for .NET Core and ASP.NET Core.

You can see complete details of the release in the .NET Core 2.2 release notes. Related instructions, known issues, and workarounds are included in the release notes. Please report any issues you find in the comments or at dotnet/core #2098.

Thanks to everyone who contributed to .NET Core 2.2. You’ve helped make .NET Core a better product!

Tiered Compilation

Tiered compilation is a feature that enables the runtime to more adaptively use the Just-In-Time (JIT) compiler to get better performance, both at startup and to maximize throughput. It was added as an opt-in feature in .NET Core 2.1 and then was enabled by default in .NET Core 2.2 Preview 2. We decided that we were not quite ready to enable it by default in the final .NET Core 2.2 release, so we switched it back to opt-in, just like .NET Core 2.1. It is enabled by default in .NET Core 3.0 and we expect it to stay in that configuration.

Runtime Events

It is often desirable to monitor runtime services such as the GC, JIT, and ThreadPool of the current process to understand how these services are behaving while running your application. On Windows systems, this is commonly done using ETW and monitoring the ETW events of the current process. While this continues to work well, it is not always easy or possible to use ETW. Whether you’re running in a low-privilege environment or running on Linux or macOS, it may not be possible to use ETW.

Starting with .NET Core 2.2, CoreCLR events can now be consumed using the EventListener class. These events describe the behavior of GC, JIT, ThreadPool, and interop. They are the same events that are exposed as part of the CoreCLR ETW provider on Windows. This allows for applications to consume these events or use a transport mechanism to send them to a telemetry aggregation service.

You can see how to subscribe to events in the following code sample:
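The sample below is a minimal sketch of such a listener. "Microsoft-Windows-DotNETRuntime" is the CoreCLR runtime provider name; the keyword value used here (0x1, assumed to be the GC keyword) and the event filtering are illustrative assumptions.

```csharp
using System;
using System.Diagnostics.Tracing;

// Listens in-process to GC events from the CoreCLR runtime provider.
class GcEventListener : EventListener
{
    protected override void OnEventSourceCreated(EventSource eventSource)
    {
        if (eventSource.Name == "Microsoft-Windows-DotNETRuntime")
        {
            // 0x1 is assumed here to be the GC keyword.
            EnableEvents(eventSource, EventLevel.Informational, (EventKeywords)0x1);
        }
    }

    protected override void OnEventWritten(EventWrittenEventArgs eventData)
    {
        Console.WriteLine($"{eventData.EventName} (id {eventData.EventId})");
    }
}

class Program
{
    static void Main()
    {
        using (var listener = new GcEventListener())
        {
            GC.Collect(); // force some GC events for demonstration
        }
    }
}
```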

Support for AccessToken in SqlConnection

The ADO.NET provider for SQL Server, SqlClient, now supports setting the AccessToken property to authenticate SQL Server connections using Azure Active Directory. In order to use the feature, you can obtain the access token value using Active Directory Authentication Library for .NET, contained in the Microsoft.IdentityModel.Clients.ActiveDirectory NuGet package.

The following sample shows how to authenticate SQL Server connections using Azure Active Directory:
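A sketch along these lines follows; every server, tenant, and credential value is a placeholder, and the client-credential flow shown is just one of the flows ADAL supports.

```csharp
using System;
using System.Data.SqlClient;
using System.Threading.Tasks;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

class Program
{
    static async Task Main()
    {
        // Acquire a token for the Azure SQL resource via ADAL.
        var authContext = new AuthenticationContext(
            "https://login.microsoftonline.com/<tenant>");
        var credential = new ClientCredential("<client-id>", "<client-secret>");
        AuthenticationResult result = await authContext.AcquireTokenAsync(
            "https://database.windows.net/", credential);

        // The connection string must not specify conflicting auth settings.
        using (var connection = new SqlConnection(
            "Server=<server>.database.windows.net;Initial Catalog=<database>;"))
        {
            connection.AccessToken = result.AccessToken;
            await connection.OpenAsync();
            Console.WriteLine("Connected.");
        }
    }
}
```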

For more information see ADAL.NET and the Azure Active Directory documentation.

Injecting code prior to Main

.NET Core now enables injecting code prior to running an application main method via a Startup Hook. Startup hooks make it possible for a host to customize the behavior of applications after they have been deployed, without needing to recompile or change the application.

We expect hosting providers to define custom configuration and policy, including settings that potentially influence load behavior of the main entry point such as the AssemblyLoadContext behavior. The hook could be used to set up tracing or telemetry injection, to set up callbacks for handling, or other environment-dependent behavior. The hook is separate from the entry point, so that user code doesn’t need to be modified.
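As a sketch of the mechanism (assembly name and message are illustrative): a hook is a separate assembly containing a type named StartupHook, outside of any namespace, with a static Initialize method that the runtime invokes before Main. It is registered via the DOTNET_STARTUP_HOOKS environment variable.

```csharp
using System;

// Compiled into its own assembly (say, MyHook.dll) and registered with:
//   DOTNET_STARTUP_HOOKS=/path/to/MyHook.dll dotnet myapp.dll
// The runtime calls StartupHook.Initialize before the app's Main runs.
internal class StartupHook
{
    public static void Initialize()
    {
        Console.WriteLine("Startup hook ran before Main");
    }
}
```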

See Host startup hook for more information.

Windows ARM32

We are adding support for Windows ARM32, similar to the Linux ARM32 support we added in .NET Core 2.1. Windows has had support for ARM32 with Windows IoT Core for some time. As part of the Windows Server 2019 release, ARM32 support was also added for Nanoserver. .NET Core can be used on both Nanoserver and IoT Core.

Docker will be provided for Nanoserver for ARM32 at microsoft/dotnet on Docker Hub.

We ran into a late bug that prevented us from publishing .NET Core builds for Windows ARM32 today. We expect those builds to be in place for .NET Core 2.2.1, in January 2019.

Platform Support

.NET Core 2.2 is supported on the following operating systems:

  • Windows Client: 7, 8.1, 10 (1607+)
  • Windows Server: 2008 R2 SP1+
  • macOS: 10.12+
  • RHEL: 6+
  • Fedora: 26+
  • Ubuntu: 16.04+
  • Debian: 9+
  • SLES: 12+
  • openSUSE: 42.3+
  • Alpine: 3.7+

Chip support follows:

  • x64 on Windows, macOS, and Linux
  • x86 on Windows
  • ARM32 on Linux (Ubuntu 16.04+, Debian 9+)
  • ARM32 on Windows (1809+; available in January)

Closing

.NET Core 2.2 includes key improvements for the product. Please try them out and tell us what you think. Also make sure to check out the improvements in ASP.NET Core 2.2 and Entity Framework 2.2.

Announcing .NET Core 3 Preview 1 and Open Sourcing Windows Desktop Frameworks


Today, we are announcing .NET Core 3 Preview 1. It is the first public release of .NET Core 3. We have some exciting new features to share and would love your feedback. You can develop .NET Core 3 applications with Visual Studio 2019 Preview 1, Visual Studio for Mac and Visual Studio Code.

Download and get started with .NET Core 3 Preview 1 right now on Windows, Mac and Linux.

You can see complete details of the release in the .NET Core 3 Preview 1 release notes. Please report any issues you find in the comments or at dotnet/core #2099.

Visual Studio 2019 will be the release to support building .NET Core 3 applications and the preview was also released today so we also encourage you to check that out.

.NET Core 3 is a major update which adds support for building Windows desktop applications using Windows Presentation Foundation (WPF), Windows Forms, and Entity Framework 6 (EF6). ASP.NET Core 3 enables client-side development with Razor Components. EF Core 3 will have support for Azure Cosmos DB. It will also include support for C# 8 and .NET Standard 2.1 and much more!

.NET Framework 4.8

Before diving into .NET Core 3 let’s take a quick look at .NET Framework. Next year we will ship .NET Framework 4.8. With monitors supporting 4K and 8K resolutions we are adding better support for high DPI to WPF and Windows Forms. Many .NET applications use browser and media controls, which are based on older versions of Internet Explorer and Windows Media player. We are adding new controls that use the latest browser and media players in Windows 10 and support the latest standards. And WPF and Windows Forms applications will have access to Windows UI XAML Library (WinUI) via XAML Islands for modern look and touch support. Visual Studio 2019 is based on .NET Framework and uses many of these features. For more information on .NET Framework 4.8 see our post: Update on .NET Core 3.0 and .NET Framework 4.8.

Windows Desktop Comes to .NET Core

The first two versions of .NET Core focused primarily on supporting web applications, web APIs, IoT and console applications. .NET Core 3 adds support for building Windows desktop applications using WPF and Windows Forms frameworks and modern controls and Fluent styling from the Windows UI XAML Library (WinUI) via XAML Islands.

Many desktop applications today use Entity Framework for data access and so we are supporting Entity Framework 6 on .NET Core 3 as well. These frameworks enable developers building Windows desktop applications to take advantage of the new features in .NET Core such as side by side deployment, self-contained applications (shipping .NET Core inside the application), the latest improvements in CoreFX, and more.

WPF, Windows Forms, and WinUI Open Sourced

On November 12, 2014 we announced the open sourcing of .NET Core. It has been a tremendous success. The .NET platform has received over 60,000 community accepted pull requests from more than 3700 companies outside of Microsoft.

Today, we are excited to announce that we are open sourcing WPF, Windows Forms, and WinUI, so the three major Windows UX technologies will be open sourced. For the first time ever, the community will be able to see the development of WPF, Windows Forms, and WinUI happen in the open and we will take contributions for these frameworks on .NET Core. The first wave of code will be available in GitHub today and more will appear over the next few months.

At the Connect conference today, we merged the first two community PRs on stage. Thanks @onovotny and @dotMorten!

WPF and Windows Forms projects are under the stewardship of the .NET Foundation, which also announced changes today so that the community will directly guide foundation operations. It is also expanding the current sponsors – Red Hat, JetBrains, Google, Unity, Microsoft and Samsung – by welcoming Pivotal, Progress Telerik and Insight. This new structure will help the .NET Foundation scale to meet the needs of the growing .NET open source ecosystem.

Truly an exciting time to be a .NET developer!

WPF and Windows Forms

WPF and Windows Forms can now be used with .NET Core. They ship in a new component called “Windows Desktop” that is part of the Windows version of the SDK.

We’ve been working with community developers who have been making their Windows Desktop applications run on early builds of .NET Core 3. They’ve given us great feedback: the WPF and Windows Forms APIs are compatible, and they’ve been successful in getting their applications running.

You can create new .NET Core projects for WPF and Windows Forms from the command line.

dotnet new wpf
dotnet new winforms

Once a project has been created, you can run it with dotnet run. The following image illustrates what a new WPF app looks like.

 

WPF on .NET Core App

Windows Forms is very similar, displayed in the following image.

 

Windows Forms App on .NET Core

You can also open, launch and debug WPF and Windows Forms projects in Visual Studio 2019 Preview 1. It is currently possible to open .NET Core 3.0 projects in Visual Studio 2017 15.9, however, it is not a supported scenario (and you need to enable previews).

The new projects are the same as existing .NET Core projects, with a couple additions. Here is the comparison of the basic .NET Core console project and a basic Windows Forms and WPF project.

In a .NET Core console project, the project file uses the Microsoft.NET.Sdk SDK and declares a dependency on .NET Core 3.0 via the netcoreapp3.0 target framework:
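A minimal console project file along those lines (a sketch; the exact template output may differ):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.0</TargetFramework>
  </PropertyGroup>
</Project>
```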

The WPF and Windows Forms projects look similar but use a different SDK and also use properties to declare which UI framework is being used:

For WPF:
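A sketch of the WPF project file; it uses the Windows Desktop SDK and sets the UseWPF property:

```xml
<Project Sdk="Microsoft.NET.Sdk.WindowsDesktop">
  <PropertyGroup>
    <OutputType>WinExe</OutputType>
    <TargetFramework>netcoreapp3.0</TargetFramework>
    <UseWPF>true</UseWPF>
  </PropertyGroup>
</Project>
```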

For Windows Forms:
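A sketch of the Windows Forms project file; it uses the same Windows Desktop SDK but sets UseWindowsForms instead:

```xml
<Project Sdk="Microsoft.NET.Sdk.WindowsDesktop">
  <PropertyGroup>
    <OutputType>WinExe</OutputType>
    <TargetFramework>netcoreapp3.0</TargetFramework>
    <UseWindowsForms>true</UseWindowsForms>
  </PropertyGroup>
</Project>
```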

The UseWPF and UseWindowsForms properties allow a project to specify whether it uses Windows Forms, or WPF. This will allow tooling (such as Intellisense, or the toolbox or menus in Visual Studio) to provide an experience tailored to the UI framework(s) being used. Both properties can be set to true if the app uses both frameworks, for example when a Windows Forms dialog is hosting a WPF control.

In the upcoming months, we’re focusing on completing the open sourcing of WPF and Windows Forms, enabling the Visual Studio designers to work with .NET Core, and adding support for APIs that are typically used in Windows Desktop apps. Please share your feedback on the dotnet/winforms, dotnet/wpf and dotnet/core repos.

Applications now have executables by default

.NET Core applications are now built with executables. This is new for applications that use a globally installed version of .NET Core. Until now, only self-contained applications had executables. You can see executables produced in the following examples.

You can expect the same things with these executables as you would other native executables, such as:

  • You can double click on the executable.
  • You can launch the application from a command prompt without using the dotnet tool, using myconsole.exe, on Windows, and ./myconsole, on Linux and macOS, as you can see in the following examples.

On Windows:

C:\Users\rlander\myconsole>dotnet build
C:\Users\rlander\myconsole>cd bin\Debug\netcoreapp3.0
C:\Users\rlander\myconsole\bin\Debug\netcoreapp3.0>dir /b 
myconsole.deps.json
myconsole.dll
myconsole.exe
myconsole.pdb
myconsole.runtimeconfig.dev.json
myconsole.runtimeconfig.json
C:\Users\rlander\myconsole\bin\Debug\netcoreapp3.0>myconsole.exe
Hello World!
C:\Users\rlander\myconsole\bin\Debug\netcoreapp3.0>dotnet myconsole.dll
Hello World!

On Linux (and macOS will be similar):

root@cc08212a1da6:/myconsole# dotnet build
root@cc08212a1da6:/myconsole# cd bin/Debug/netcoreapp3.0/
root@cc08212a1da6:/myconsole/bin/Debug/netcoreapp3.0# ls
myconsole            myconsole.dll  myconsole.runtimeconfig.dev.json
myconsole.deps.json  myconsole.pdb  myconsole.runtimeconfig.json
root@cc08212a1da6:/myconsole/bin/Debug/netcoreapp3.0# ./myconsole
Hello World!
root@cc08212a1da6:/myconsole/bin/Debug/netcoreapp3.0# dotnet myconsole.dll
Hello World!

An executable is provided that matches the environment of the SDK you are using. We have not yet enabled specifying -r arguments for other runtime environments.

dotnet build now copies dependencies

dotnet build now copies NuGet dependencies for your application from the NuGet cache to your build output folder during the build operation. Until this release, those dependencies were only copied as part of dotnet publish. This change allows you to xcopy your build output to different machines.

There are some operations, like linking and Razor page publishing, that will still require publishing.

You can see the new experience in the following example:

C:\Users\rlander\myconsole>dotnet add package Newtonsoft.json
C:\Users\rlander\myconsole>dotnet build
C:\Users\rlander\myconsole>dir /b bin\Debug\netcoreapp3.0\*.dll
myconsole.dll
Newtonsoft.Json.dll

Local dotnet tools

.NET Core tools has been updated to include a local tools scenario. We added global tools in .NET Core 2.1. Global tools are available from any location on the machine for the current user. This is great, but this does not allow the version to be selected per location (usually per repository) and they do not offer an easy way to restore a development or build tool environment. A particular location on disk can now be associated with a set of local tools and their versions. Local tools rely on a tool manifest file named dotnet-tools.json. We suggest supplying the tool manifest file at the root of the repository.

Local tools have a different experience than global tools, both for adding tools to a tool manifest file (usually per repository) and for restoring them after cloning a repository that contains them. If you clone a repo that contains local tools, you simply need to run the following command:

dotnet tool restore

After restoring, you call a local tool with:

dotnet tool run <toolCommandName>

When a local tool is called, dotnet searches up the directory structure for a tool manifest file. When a tool manifest file is found, it is searched for the requested tool. If the tool is found, the manifest includes the information needed to locate the tool in the NuGet global packages location; the correct version of the tool will have been placed in that cache during restore.

If the tool is found in the manifest, but not the cache, the user receives an error. The message will be improved after Preview 1 to request the user run dotnet tool restore.

To add local tools to a directory, you need to first create the tool manifest file. After Preview 1 we will offer a mechanism for creating tool manifest files, probably via a dotnet new template. For Preview 1 you must create a file named dotnet-tools.json with the following contents:
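A sketch of an empty manifest (the exact schema in Preview 1 may differ):

```json
{
  "version": 1,
  "isRoot": true,
  "tools": {}
}
```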

Once the manifest is created, you can add local tools to it using:

dotnet tool install <toolPackageId>

This command installs the latest version of the tool, unless another version is specified. This version is written into the tool manifest file to allow the correct version of the tool to be called. The tool manifest file is designed to allow hand editing – which you might do to update the required version for working with the repository. An example dotnet-tools.json file follows:
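A sketch of a populated manifest follows; the tool name and version shown are illustrative, not a recommendation:

```json
{
  "version": 1,
  "isRoot": true,
  "tools": {
    "dotnetsay": {
      "version": "2.1.4",
      "commands": [ "dotnetsay" ]
    }
  }
}
```

Editing the version field by hand, as described above, is how you pin a repository to a specific tool version.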

To remove a tool from the tool manifest file, run the following command:

dotnet tool uninstall <toolPackageId>

If the tool manifest file is checked into your source control, programmers cloning your repo can gain access to the correct tools as explained above.

For both global and local tools, a compatible version of the runtime is required. Many tools currently on NuGet.org target .NET Core Runtime 2.1. If you only install the preview of .NET Core 3.0, you may also need to manually install the .NET Core 2.1 Runtime.

For more information, see Local Tools Early Preview Documentation.

Introducing a fast in-box JSON Reader

System.Text.Json.Utf8JsonReader is a high-performance, low-allocation, forward-only reader for UTF-8 encoded JSON text, read from a ReadOnlySpan<byte>. The Utf8JsonReader is a foundational, low-level type that can be leveraged to build custom parsers and deserializers. Reading through a JSON payload using the new Utf8JsonReader is 2x faster than using the reader from Json.NET. It does not allocate until you need to materialize JSON tokens as (UTF-16) strings.

This new API will include the following components:

  • In Preview 1: JSON reader (sequential access)
  • Coming next: JSON writer, DOM (random access), POCO serializer, POCO deserializer

Here is the basic reader loop for the Utf8JsonReader that can be used as a starting point:
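A sketch of that loop follows; the payload and the choice to materialize every token as a string are for illustration only:

```csharp
using System;
using System.Text;
using System.Text.Json;

class Program
{
    static void Main()
    {
        byte[] data = Encoding.UTF8.GetBytes("{\"name\":\"reader\",\"count\":2}");

        // isFinalBlock: true because the entire payload is in one buffer.
        var reader = new Utf8JsonReader(data, isFinalBlock: true, state: default);

        while (reader.Read())
        {
            // ValueSpan is a slice of the input buffer; nothing is allocated
            // until we choose to turn it into a string here.
            Console.WriteLine(
                $"{reader.TokenType}: {Encoding.UTF8.GetString(reader.ValueSpan)}");
        }
    }
}
```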

The .NET ecosystem has relied on Json.NET and other popular JSON libraries, which continue to be good choices. Json.NET uses .NET strings as its base datatype, which are UTF-16 under the hood. In .NET Core 2.1 and 3.0, we added new APIs that make it possible to write JSON APIs that require much less memory, based on Span<T> and UTF-8 strings, and better serve the needs of high-throughput applications like Kestrel, the ASP.NET Core web server. That’s what we’ve done with Utf8JsonReader.

You might wonder why we can’t just update Json.NET to include support for parsing JSON using Span<T>? James Newton-King — the author of Json.NET — has the following to say about that:

Json.NET was created over 10 years ago, and since then it has added a wide range of features aimed to help developers work with JSON in .NET. In that time Json.NET has also become far and away NuGet’s most depended on and downloaded package, and is the go-to library for JSON support in .NET. Unfortunately, Json.NET’s wealth of features and popularity works against making major changes to it. Supporting new technologies like Span would require fundamental breaking changes to the library and would disrupt existing applications and libraries that depend on it.

Going forward Json.NET will continue to be worked on and invested in, both addressing known issues today and supporting new platforms in the future. Json.NET has always existed alongside other JSON libraries for .NET, and there will be nothing to prevent you using one or more together, depending on whether you need the performance of the new JSON APIs or the large feature set of Json.NET.

See dotnet/corefx #33115 and System.Text.Json Roadmap for more information.

Ranges and indices

We’re adding a type Index, which can be used for indexing. You can create one from an int that counts from the beginning, or with a prefix ^ operator that counts from the end:

We’re also introducing a Range type, which consists of two Index values, one for the start and one for the end, and can be written with a x..y range expression. You can then index with a Range in order to produce a slice:
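Putting the two together in a short sketch:

```csharp
using System;

class Program
{
    static void Main()
    {
        string[] words = { "the", "quick", "brown", "fox" };

        Index last = ^1;                  // ^1 counts from the end
        Console.WriteLine(words[last]);   // prints "fox"

        Range middle = 1..3;              // start inclusive, end exclusive
        string[] slice = words[middle];   // { "quick", "brown" }
        Console.WriteLine(string.Join(" ", slice)); // prints "quick brown"
    }
}
```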

Note: This feature is also part of C# 8.

Async streams

We are introducing IAsyncEnumerable<T>, which is exactly what you’d expect: an asynchronous version of IEnumerable<T>. The language lets you await foreach over these to consume their elements, and use yield return in them to produce elements.

The following example demonstrates both production and consumption of async streams. The producing method uses yield return inside an async iterator, and the caller consumes the resulting stream with await foreach. This pattern, using yield return, is the recommended model for producing async streams.
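A self-contained sketch along those lines (method names and the simulated delay are illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        // Consume the async stream with await foreach.
        await foreach (int number in RangeAsync(start: 1, count: 3))
        {
            Console.WriteLine(number);
        }
    }

    // Produce an async stream with yield return inside an async iterator.
    static async IAsyncEnumerable<int> RangeAsync(int start, int count)
    {
        for (int i = start; i < start + count; i++)
        {
            await Task.Delay(10); // simulate asynchronous work per element
            yield return i;
        }
    }
}
```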

In addition to being able to await foreach, you can also create async iterators, e.g. an iterator that returns an IAsyncEnumerable/IAsyncEnumerator that you can both await and yield in. For objects that need to be disposed, you can use IAsyncDisposable, which various BCL types implement, such as Stream and Timer.

Note: This feature is also part of C# 8.

System.Buffers.SequenceReader

We added System.Buffers.SequenceReader as a reader for ReadOnlySequence<T>. This allows easy, high performance, low allocation parsing of System.IO.Pipelines data that can cross multiple backing buffers. The following example breaks an input Sequence into valid CR/LF delimited lines:
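A sketch of such a parser; it assumes the CR/LF pair always appears as a unit and simply prints each completed line:

```csharp
using System;
using System.Buffers;
using System.Text;

static class LineParser
{
    static readonly byte[] CrLf = { (byte)'\r', (byte)'\n' };

    // Walks a possibly multi-segment sequence and emits each
    // CR/LF-delimited line, even when a line spans buffer boundaries.
    public static void ProcessLines(in ReadOnlySequence<byte> sequence)
    {
        var reader = new SequenceReader<byte>(sequence);

        while (reader.TryReadTo(out ReadOnlySpan<byte> line, CrLf))
        {
            Console.WriteLine(Encoding.UTF8.GetString(line));
        }
        // Anything left in the reader is an unterminated trailing fragment.
    }
}
```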

Serial Port APIs now supported on Linux

The serial port may seem like an old peripheral that you have not seen on a machine in years. It’s not a commonly used or available port on laptops and desktops, but it’s critical for Internet of Things (IoT) solutions. We now have support for serial ports on Linux. Up until now, support was limited to Windows.

We have been talking to IoT developers about this capability over the last few months. One of the specific requests was to communicate with an Arduino from a Raspberry Pi. The sample and following video demonstrate that scenario.

We’re also working on making it possible to flash an Arduino from a Raspberry Pi with .NET Core APIs. We have had requests from developers that want to use specific standards like RS-485 and protocols such as MODBUS, CANBUS and ccTalk. Please tell us which protocols are needed for your application scenarios.

GPIO, PWM, SPI and I2C APIs now available

IoT devices expose much more than serial ports. They typically expose multiple kinds of pins that can be programmatically used to read sensors, drive LED/LCD/eInk displays and communicate with our devices. .NET Core now has APIs for GPIO, PWM, SPI, and I²C pin types.

These APIs are available via the System.Device.GPIO NuGet package. It will be supported for .NET Core 2.1 and later releases.
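As a sketch of what the programming model looks like, the following blinks an LED over GPIO; the pin number and timing are illustrative assumptions:

```csharp
using System.Device.Gpio;
using System.Threading;

class Blink
{
    static void Main()
    {
        const int ledPin = 17; // illustrative GPIO pin number

        using (var controller = new GpioController())
        {
            controller.OpenPin(ledPin, PinMode.Output);

            for (int i = 0; i < 5; i++)
            {
                controller.Write(ledPin, PinValue.High); // LED on
                Thread.Sleep(500);
                controller.Write(ledPin, PinValue.Low);  // LED off
                Thread.Sleep(500);
            }
        }
    }
}
```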

The previous section shows a Raspberry Pi communicating with an Arduino using pins 8 and 10 (TX and RX, respectively) for serial port communication. That’s one example of programmatically controlling these pins.

The following image demonstrates printing text to an LCD panel.

.NET Core IoT LCD Display

You can see what a “developer installation” looks like for building an IoT device with .NET Core.

.NET Core IoT Display

The packages are built from the dotnet/iot repo. The repo also includes samples and device bindings. We hope that the community will join us by contributing samples and device bindings. We have noticed that the Python community has a rich set of samples and bindings, particularly as provided by Adafruit. In fact, we ported some of those samples (where licenses allowed) to C#. We aspire to provide a similarly rich set of samples and bindings for .NET developers over time.

Most of our effort has been spent on supporting these APIs in Raspberry Pi 3. We plan to support other devices, like the Hummingboard. Please tell us which boards are important to you. We are in the process of testing Mono on the Raspberry Pi Zero.

TLS 1.3 and OpenSSL 1.1.1 now Supported on Linux

.NET Core will now take advantage of TLS 1.3 support in OpenSSL 1.1.1, when it is available in a given environment. There are multiple benefits of TLS 1.3, per the OpenSSL team:

  • Improved connection times due to a reduction in the number of round trips required between the client and server
  • Improved security due to the removal of various obsolete and insecure cryptographic algorithms and encryption of more of the connection handshake

.NET Core 3.0 Preview 1 is capable of utilizing OpenSSL 1.1.1, OpenSSL 1.1.0, or OpenSSL 1.0.2, whichever is the best version found on a Linux system. When OpenSSL 1.1.1 is available, the SslStream and HttpClient types will use TLS 1.3 when using SslProtocols.None (system default protocols), assuming both the client and server support TLS 1.3.

The following sample demonstrates .NET Core 3.0 Preview 1 on Ubuntu 18.10 connecting to https://www.cloudflare.com:

jbarton@jsb-ubuntu1810:~/tlstest$ cat Program.cs
using System;
using System.Net.Security;
using System.Net.Sockets;
using System.Threading.Tasks;

namespace tlstest
{
    class Program
    {
        static async Task Main()
        {
            using (TcpClient tcpClient = new TcpClient())
            {
                string targetHost = "www.cloudflare.com";

                await tcpClient.ConnectAsync(targetHost, 443);

                using (SslStream sslStream = new SslStream(tcpClient.GetStream()))
                {
                    await sslStream.AuthenticateAsClientAsync(targetHost);

                    await Console.Out.WriteLineAsync($"Connected to {targetHost} with {sslStream.SslProtocol}");
                }
            }
        }
    }
}
jbarton@jsb-ubuntu1810:~/tlstest$ dotnet run
Connected to www.cloudflare.com with Tls13
jbarton@jsb-ubuntu1810:~/tlstest$ openssl version
OpenSSL 1.1.1  11 Sep 2018

Note: Windows and macOS do not yet support TLS 1.3. .NET Core will support TLS 1.3 on those operating systems — we expect automatically — when support becomes available.

Cryptography

We added support for AES-GCM and AES-CCM ciphers, implemented via System.Security.Cryptography.AesGcm and System.Security.Cryptography.AesCcm. These algorithms are both Authenticated Encryption with Associated Data (AEAD) algorithms, and the first Authenticated Encryption (AE) algorithms added to .NET Core.

The following code demonstrates using AesGcm cipher to encrypt and decrypt random data. The code for AesCcm would look almost identical (only the class variable names would be different).

// key should be: pre-known, derived, or transported via another channel, such as RSA encryption
byte[] key = new byte[16];
RandomNumberGenerator.Fill(key);

byte[] nonce = new byte[12];
RandomNumberGenerator.Fill(nonce);

// normally this would be your data
byte[] dataToEncrypt = new byte[1234];
byte[] associatedData = new byte[333];
RandomNumberGenerator.Fill(dataToEncrypt);
RandomNumberGenerator.Fill(associatedData);

// these will be filled during the encryption
byte[] tag = new byte[16];
byte[] ciphertext = new byte[dataToEncrypt.Length];

using (AesGcm aesGcm = new AesGcm(key))
{
    aesGcm.Encrypt(nonce, dataToEncrypt, ciphertext, tag, associatedData);
}

// tag, nonce, ciphertext, associatedData should be sent to the other party

byte[] decryptedData = new byte[ciphertext.Length];

using (AesGcm aesGcm = new AesGcm(key))
{
    aesGcm.Decrypt(nonce, ciphertext, tag, decryptedData, associatedData);
}

// do something with the data
// this should always print that data is the same
Console.WriteLine($"AES-GCM: Decrypted data is {(dataToEncrypt.SequenceEqual(decryptedData) ? "the same as" : "different than")} original data.");

Cryptographic Key Import/Export

.NET Core 3.0 Preview 1 now supports the import and export of asymmetric public and private keys from standard formats, without needing to use an X.509 certificate.

All key types (RSA, DSA, ECDsa, ECDiffieHellman) support the X.509 SubjectPublicKeyInfo format for public keys, and the PKCS#8 PrivateKeyInfo and PKCS#8 EncryptedPrivateKeyInfo formats for private keys. RSA additionally supports PKCS#1 RSAPublicKey and PKCS#1 RSAPrivateKey. The export methods all produce DER-encoded binary data, and the import methods expect the same; if a key is stored in the text-friendly PEM format the caller will need to base64-decode the content before calling an import method.

jbarton@jsb-ubuntu1810:~/rsakeyprint$ cat Program.cs
using System;
using System.IO;
using System.Security.Cryptography;

namespace rsakeyprint
{
    class Program
    {
        static void Main(string[] args)
        {
            using (RSA rsa = RSA.Create())
            {
                byte[] keyBytes = File.ReadAllBytes(args[0]);
                rsa.ImportRSAPrivateKey(keyBytes, out int bytesRead);
 
                Console.WriteLine($"Read {bytesRead} bytes, {keyBytes.Length-bytesRead} extra byte(s) in file.");
                RSAParameters rsaParameters = rsa.ExportParameters(true);
                Console.WriteLine(BitConverter.ToString(rsaParameters.D));
            }
        }
    }
}
jbarton@jsb-ubuntu1810:~/rsakeyprint$ echo Making a small key to save on screen space.
Making a small key to save on screen space.
jbarton@jsb-ubuntu1810:~/rsakeyprint$ openssl genrsa 768 | openssl rsa -outform der -out rsa.key
Generating RSA private key, 768 bit long modulus (2 primes)
..+++++++
........+++++++
e is 65537 (0x010001)
writing RSA key
jbarton@jsb-ubuntu1810:~/rsakeyprint$ dotnet run rsa.key
Read 461 bytes, 0 extra byte(s) in file.
0F-D0-82-34-F8-13-38-4A-7F-C7-52-4A-F6-93-F8-FB-6D-98-7A-6A-04-3B-BC-35-8C-7D-AC-A5-A3-6E-AD-C1-66-30-81-2C-2A-DE-DA-60-03-6A-2C-D9-76-15-7F-61-97-57-
79-E1-6E-45-62-C3-83-04-97-CB-32-EF-C5-17-5F-99-60-92-AE-B6-34-6F-30-06-03-AC-BF-15-24-43-84-EB-83-60-EF-4D-3B-BD-D9-5D-56-26-F0-51-CE-F1
jbarton@jsb-ubuntu1810:~/rsakeyprint$ openssl rsa -in rsa.key -inform der -text -noout | grep -A7 private
privateExponent:
    0f:d0:82:34:f8:13:38:4a:7f:c7:52:4a:f6:93:f8:
    fb:6d:98:7a:6a:04:3b:bc:35:8c:7d:ac:a5:a3:6e:
    ad:c1:66:30:81:2c:2a:de:da:60:03:6a:2c:d9:76:
    15:7f:61:97:57:79:e1:6e:45:62:c3:83:04:97:cb:
    32:ef:c5:17:5f:99:60:92:ae:b6:34:6f:30:06:03:
    ac:bf:15:24:43:84:eb:83:60:ef:4d:3b:bd:d9:5d:
    56:26:f0:51:ce:f1

PKCS#8 files can be inspected with the System.Security.Cryptography.Pkcs.Pkcs8PrivateKeyInfo class.

PFX/PKCS#12 files can be inspected and manipulated with System.Security.Cryptography.Pkcs.Pkcs12Info and System.Security.Cryptography.Pkcs.Pkcs12Builder, respectively.

More BCL Improvements

We optimized Span<T>, Memory<T> and related types that were introduced in .NET Core 2.1. Common operations such as span construction, slicing, parsing, and formatting now perform better. Additionally, types like String have seen under-the-cover improvements to make them more efficient when used as keys with Dictionary<TKey, TValue> and other collections. No code changes are required to enjoy these improvements.

The following improvements are also new in .NET Core 3 Preview 1:

  • Brotli support built-in to HttpClient
  • ThreadPool.UnsafeQueueWorkItem(IThreadPoolWorkItem)
  • Unsafe.Unbox
  • CancellationToken.Unregister
  • Complex arithmetic operators
  • Socket APIs for TCP keep alive
  • StringBuilder.GetChunks
  • IPEndPoint parsing
  • RandomNumberGenerator.GetInt32

Default implementations of interface members

Today, once you publish an interface it’s game over: you can’t add members to it without breaking all the existing implementers of it.

In C# 8.0 we let you provide a body for an interface member. Thus, if somebody doesn’t implement that member (perhaps because it wasn’t there yet when they wrote the code), they will just get the default implementation instead.

In this example, the ConsoleLogger class doesn’t have to implement the Log(Exception) overload of ILogger, because it is declared with a default implementation. Now you can add new members to existing public interfaces as long as you provide a default implementation for existing implementors to use.
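A sketch of that example (the member shapes follow the description above and require C# 8):

```csharp
using System;

interface ILogger
{
    void Log(string message);

    // Default implementation: existing implementers pick this up
    // without any code change.
    void Log(Exception exception) => Log(exception.ToString());
}

class ConsoleLogger : ILogger
{
    // Only the member without a body must be implemented.
    public void Log(string message) => Console.WriteLine(message);
}
```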

Tiered Compilation

Tiered compilation is on by default with .NET Core 3.0. It’s a feature that enables the runtime to more adaptively use the Just-In-Time (JIT) compiler to get better performance, both at startup and to maximize throughput. It was added as an opt-in feature in .NET Core 2.1 and then was enabled by default in .NET Core 2.2 Preview 2. We decided that we were not quite ready to enable it by default in the final .NET Core 2.2 release, so we switched it back to opt-in, just like .NET Core 2.1. It is enabled by default in .NET Core 3.0 and we expect it to stay in that configuration.

Assembly Metadata Reading with MetadataLoadContext

We have added the new MetadataLoadContext type that enables reading assembly metadata without affecting the caller’s application domain. Assemblies are read as data, including assemblies built for different architectures and platforms than the current runtime environment. MetadataLoadContext overlaps with the ReflectionOnlyLoad functionality, which is only available in the .NET Framework.

MetadataLoadContext is available in the System.Reflection.MetadataLoadContext package. It is a .NET Standard 2.0 package.

The MetadataLoadContext exposes APIs similar to the AssemblyLoadContext type, but is not based on that type. Much like AssemblyLoadContext, the MetadataLoadContext enables loading assemblies within an isolated assembly loading universe. MetadataLoadContext APIs return Assembly objects, enabling the use of familiar reflection APIs. Execution-oriented APIs, such as MethodBase.Invoke, are not allowed and will throw InvalidOperationException.

The following sample demonstrates how to find concrete types in an assembly that implement a given interface:
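A sketch of that scenario follows; the interface name and paths are placeholders, and note that the resolver must be able to find the core assembly as well as the inspected one:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

static class TypeFinder
{
    // Finds concrete types in an assembly that implement the named
    // interface, loading the assembly as data only (no execution).
    public static IEnumerable<string> FindImplementations(
        string assemblyPath, string interfaceName)
    {
        // Include the core assembly so type resolution can succeed.
        string[] paths = { assemblyPath, typeof(object).Assembly.Location };

        using (var mlc = new MetadataLoadContext(new PathAssemblyResolver(paths)))
        {
            Assembly assembly = mlc.LoadFromAssemblyPath(assemblyPath);
            return assembly.GetTypes()
                .Where(t => t.IsClass && !t.IsAbstract &&
                            t.GetInterfaces().Any(i => i.Name == interfaceName))
                .Select(t => t.FullName)
                .ToList();
        }
    }
}
```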

Scenarios for MetadataLoadContext include design-time features, build-time tooling, and runtime light-up features that need to inspect a set of assemblies as data and have all file locks and memory freed after inspection is performed.

The MetadataLoadContext has a resolver class passed to its constructor. The resolver’s job is to load an Assembly given its AssemblyName. The resolver class derives from the abstract MetadataAssemblyResolver class. An implementation of the resolver for path-based scenarios is provided with PathAssemblyResolver.

The MetadataLoadContext tests demonstrate many use cases. The Assembly tests are a good place to start.

Note: The following (now ancient) article about .NET assembly metadata would have benefited from this new API: Displaying Metadata in .NET EXEs with MetaViewer.

ARM64

We are adding support for ARM64 for Linux this release. For context, we added support for ARM32 for Linux with .NET Core 2.1 and Windows with .NET Core 2.2. The primary use case for ARM64 is currently with IoT scenarios. Some developers we are working with want to deploy on ARM32 environments and others on ARM64.

Alpine, Debian and Ubuntu Docker images are available for .NET Core for ARM64.

Please check .NET Core ARM64 Status for more information.

Platform Support

.NET Core 3 will be supported on the following operating systems:

  • Windows Client: 7, 8.1, 10 (1607+)
  • Windows Server: 2012 R2 SP1+
  • macOS: 10.12+
  • RHEL: 6+
  • Fedora: 26+
  • Ubuntu: 16.04+
  • Debian: 9+
  • SLES: 12+
  • openSUSE: 42.3+
  • Alpine: 3.8+

Chip support follows:

  • x64 on Windows, macOS, and Linux
  • x86 on Windows
  • ARM32 on Windows (coming with preview 2) and Linux
  • ARM64 on Linux

For Linux, ARM32 is supported on Debian 9+ and Ubuntu 16.04+. For ARM64, it is the same with the addition of Alpine 3.8. These are the same versions of those distros as are supported for x64. We made a conscious decision to keep supported platforms as similar as possible across x64, ARM32 and ARM64.

Docker images for .NET Core 3.0 are available at microsoft/dotnet on Docker Hub, including for ARM64.

Summary

We are excited to have the first preview of .NET Core 3 available today! .NET Core 3 Preview 1 also includes features in .NET Core 2.2, which you can read about in Announcing .NET Core 2.2.

We’ll be releasing more detailed posts on .NET Core 3 in the coming weeks, which will also include an update on .NET Standard 2.1, which didn’t make it into Preview 1 yet.

Know that if you have existing .NET Framework apps, there is no pressure to port them to .NET Core. We will be adding features to .NET Framework 4.8 to support new desktop scenarios. While we do recommend that new desktop apps consider targeting .NET Core, the .NET Framework will keep its high compatibility bar and will provide support for your apps for a very long time to come. And with .NET Core 3 we will provide a great experience for applications that want to take advantage of the latest features in .NET Core.

.NET Framework December 5, 2018 Preview of Cumulative Update for Windows 10 version 1809 and Windows Server 2019


Today, we are releasing the December 5, 2018 Preview of .NET Framework Cumulative Update for Windows 10 version 1809 and Windows Server 2019.

For more information about the new Cumulative Updates for .NET Framework for Windows 10 version 1809 and Windows Server 2019 please refer to this recent announcement.

Quality and Reliability

This release contains the following quality and reliability improvements.

CLR

  • Addressed a situation where the System.Security.Cryptography.Algorithms reference was not correctly loaded on .NET Framework 4.7.1 after the 7B/8B patch. [673870]
  • Updated Japanese dates that are formatted for the first year in an era and for which the format pattern uses “y年”. The format of the year together with the symbol “元” is supported instead of using year number 1. Also, formatting day numbers that include “元” is supported. [646179]
  • Updated Venezuela currency information. This change affected the “es-VE” culture in the following ways. [616146]
    1) The currency symbol changed to “Bs.S”
    2) The English currency name changed to “Bolívar Soberano”
    3) The native currency name changed to “bolívar soberano”
    4) The international currency code changed to “VES”
  • Addressed an issue with KB4096417 where we switched to CLR-implemented write-watch for pages. The GC will no longer call VirtualAlloc when running under workstation GC mode. [685611]

SQL

  • Provides an AppContext flag for making the default value of TransparentNetworkIPResolution false in SqlClient connection strings. [690465]

WCF

  • Addressed a System.AccessViolationException caused by accessing a disposed X509Certificate2 instance in a rare race condition, by deferring the service certificate cleanup to the GC. The impacted scenario is WCF NetTcp bindings using reliable sessions with certificate authentication. [657003]

WPF

  • Addressed a crash due to TaskCanceledException that can occur during shutdown of some WPF apps. Apps that continue to do work involving weak events or data binding after Application.Run() returns are known to be vulnerable to this crash. [655427]
  • Addressed a race condition involving temporary files and some anti-virus scanners. This was causing crashes with the message “The process cannot access the file “. [638468]

Winforms

  • Addressed an issue in some .NET Remoting scenarios where, when using TransactionScopeAsyncFlowOption.Enabled, it was possible for Transaction.Current to be reset to null after a remoting call. This occurred when the remoting call did not leave the caller’s AppDomain (with 4.7.2). [669153]

Note: Additional information on these improvements is not available. The VSTS bug number provided with each improvement is a unique ID that you can give Microsoft Customer Support, include in StackOverflow comments or use in web searches.

Getting the Update

The Preview of Quality Rollup is available via Windows Update, Windows Server Update Services, and Microsoft Update Catalog.

Microsoft Update Catalog

You can get the update via the Microsoft Update Catalog. For Windows 10 version 1809 and Windows Server 2019, .NET Framework updates are independent of the Windows 10 Monthly Rollup.

Product Version / Preview of Cumulative Update KB:

  • Windows 10 1809 (October 2018 Update): 4469041 (Catalog)
  • .NET Framework 3.5, 4.7.2: 4469041


Take C# 8.0 for a spin


Yesterday we announced the first preview of both Visual Studio 2019 (Making every developer more productive with Visual Studio 2019) and .NET Core 3.0 (Announcing .NET Core 3 Preview 1 and Open Sourcing Windows Desktop Frameworks).

One of the exciting aspects of that is that you get to play with some of the features coming in C# 8.0! Here I am going to take you on a little guided tour through three new C# features you can try out in the preview. If you want an overview of all the major features, go read the recent post Building C# 8.0, or check the short (13 mins) video "What’s new in C# 8.0" on Channel 9 or YouTube.

Getting ready

First of all, download and install Preview 1 of .NET Core 3.0 and Preview 1 of Visual Studio 2019. In Visual Studio, make sure you select the workload ".NET Core cross-platform development" (if you forgot, you can just add it later by opening the Visual Studio Installer and clicking "Modify" on the Visual Studio 2019 Preview channel).

Launch Visual Studio 2019 Preview, Create a new project, and select "Console App (.NET Core)" as the project type.

Once the project is up and running, change its target framework to .NET Core 3.0 (right click the project in Solution Explorer, select Properties and use the drop down menu on the Application tab). Then select C# 8.0 as the language version (on the Build tab of the project page click "Advanced…" and select "C# 8.0 (beta)").

Now you have all the language features and the supporting framework types ready at your fingertips!

Nullable reference types

The nullable reference types feature intends to warn you about null-unsafe behavior in the code. Since we didn’t do that before, it would be a breaking change to just start now! To avoid that, you need to opt in to the feature.

Before we do turn it on, though, let’s write some really bad code:

using static System.Console;

class Program
{
    static void Main(string[] args)
    {
        string s = null;
        WriteLine($"The first letter of {s} is {s[0]}");
    }
}

If you run it you get, of course, a null reference exception. You’ve fallen into the black hole! How were you supposed to know not to dereference s in that particular place? Well duh, because null was assigned to it on the previous line. But in real life, it’s not on the previous line, but in somebody else’s assembly running on the other side of the planet three years after you wrote your line of code. How could you have known not to write that? That’s the question that nullable reference types set out to answer! So let’s turn them on!

For a new project you should just turn them on right away. In fact I think they should probably be on by default in new projects, but we didn’t do that in the preview. The way to turn them on is to add the following line to your .csproj file, e.g. right after the LanguageVersion that was just inserted when you switched to C# 8.0 above:

    <NullableReferenceTypes>true</NullableReferenceTypes>

Save the .csproj file and return to your program: What happened? You got two warnings! Each represents one "half" of the feature. Let’s look at them in turn. The first one is on the null in this line:

        string s = null;

It complains that you are assigning null to a "non-nullable type": Whaaat?!? When the feature is turned on nulls are no longer welcome in ordinary reference types such as string! Because, you know what, null is not a string! We’ve been pretending for the last fifty years of object-oriented programming, but actually null is in fact not an object: That’s why everything explodes whenever you try to treat it like one!

So no more of that: null is verboten, unless you ask for it. How do you ask for it? By using a nullable reference type, such as string?. The trailing question mark signals that null is allowed:

        string? s = null;

The warning goes away: we have explicitly expressed the intent for this variable to hold null, so now it’s fine.

Until the next line of code! On the line

        WriteLine($"The first letter of {s} is {s[0]}");

It complains about the s in s[0] that you may be dereferencing a null reference. And sure enough: you are! Well done, compiler! How do you fix it, though? Well that’s pretty much up to you – whichever way you would always have fixed it! Let’s try for starters to only execute the line when s is not null:

        if (s != null) WriteLine($"The first letter of {s} is {s[0]}");

The warning goes away! Why? Because the compiler can see that you only go to the offending code when s is not null. It actually does a full flow analysis, tracking every variable across every line of code to keep tabs on where it might be null and where it probably won’t be. It watches your tests and assignments, and does the bookkeeping.
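To see that bookkeeping in action, here is a small sketch (GetValue is just a hypothetical stand-in for any method that might hand you a null):

```csharp
using static System.Console;

class FlowAnalysisDemo
{
    // A stand-in for any code that might return null.
    static string? GetValue() => null;

    static void Main()
    {
        string? s = GetValue();
        if (s == null)
        {
            s = "default"; // after this assignment, s can no longer be null
        }
        // No warning here: on both paths through the if, the compiler has
        // tracked that s is non-null by the time we reach this line.
        WriteLine(s.Length);
    }
}
```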

Let’s try another version:

        WriteLine($"The first letter of {s} is {s?[0] ?? '?'}");

This uses the null conditional indexing operator s?[0] which avoids the dereference and produces a null if s is null. Now we have a nullable char?, but the null-coalescing operator ?? '?' replaces a null value with the char '?'. So all null dereferences are avoided. The compiler is happy, and no warnings are given.

As you can see, the feature keeps you honest while you code: it forces you to express your intent whenever you want null in the system, by using a nullable reference type. And once null is there, it forces you to deal with it responsibly, making you check whenever there’s a risk that a null value may be dereferenced to trigger a null reference exception.

Are you completely null-safe now? No. There are a couple of ways in which a null may slip through and cause a null reference exception:

  • If you call code that didn’t have the nullable reference types feature on (maybe it was compiled before the feature even existed), then we cannot know what the intent of that code was: it doesn’t distinguish between nullable and nonnullable – we say that it is "null-oblivious". So we give it a pass; we simply don’t warn on such calls.
  • The analysis itself has certain holes. Most of them are a trade-off between safety and convenience; if we complained, it would be really hard to fix. For instance, when you write new string[10], we create an array full of nulls, typed as non-null strings. We don’t warn on that, because how would the compiler keep track of you initializing all the array elements?

But on the whole, if you use the feature extensively (i.e. turn it on everywhere) it should take care of the vast majority of null dereferences.

It is definitely our intention that you should start using the feature on existing code! Once you turn it on, you may get a lot of warnings. Some of these actually represent a problem: Yay, you found a bug! Some of them are maybe a bit annoying; your code is clearly null safe, you just didn’t have the tools to express your intent when you wrote it: you didn’t have nullable reference types! For instance, the line we started out with:

        string s = null;

That’s going to be super common in existing code! And as you saw, we did get a warning on the next line, too, where we tried to dereference it. So the assignment warning here is strictly speaking superfluous from a safety standpoint: It keeps you honest in new code, but fixing all occurrences in existing code would not make it any safer. For that kind of situation we are working on a mode where certain warnings are turned off, when it doesn’t impact the null safety, so that it is less daunting to upgrade existing code.

Another feature to help upgrade is that you can turn the feature on or off "locally" in your code, using compiler directives #nullable enable and #nullable disable. That way you can go through your project and deal with annotations and warnings gradually, piece by piece.
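A sketch of what that gradual migration might look like within a single file (the method names here are made up for illustration):

```csharp
using static System.Console;

class Migration
{
#nullable enable
    // Already-migrated code: the compiler enforces the annotations here,
    // so the parameter must be declared string? to accept null.
    static string Describe(string? s) => s ?? "(none)";
#nullable disable
    // Not-yet-migrated code stays null-oblivious: no new warnings,
    // even though null flows through a plain string.
    static string Legacy()
    {
        string s = null;
        return s ?? "(legacy)";
    }

    static void Main()
    {
        WriteLine(Describe(null)); // prints "(none)"
        WriteLine(Legacy());       // prints "(legacy)"
    }
}
```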

To learn more about nullable reference types check out the Overview of Nullable types and the Introduction to nullable tutorial on docs.microsoft.com.

For a deeper design rationale, last year I wrote a post Introducing Nullable Reference Types in C#.

If you want to immerse yourself in the day-to-day of the design work, look at the Language Design Notes on GitHub, or follow along as I try to put together a Nullable Reference Types Specification.

Ranges and indices

C# is getting more expressiveness around working with indexed data structures. Ever wanted simple syntax for slicing out a part of an array, string or span? Now you can!

Go ahead and change your program to the following:

using System.Collections.Generic;
using static System.Console;

class Program
{
    static void Main(string[] args)
    {
        foreach (var name in GetNames())
        {
            WriteLine(name);
        }
    }

    static IEnumerable<string> GetNames()
    {
        string[] names =
        {
            "Archimedes", "Pythagoras", "Euclid", "Socrates", "Plato"
        };
        foreach (var name in names)
        {
            yield return name;
        }
    }
}

Let’s go to that bit of code that iterates over the array of names. Modify the foreach as follows:

        foreach (var name in names[1..4])

It looks like we’re iterating over names 1 to 4. And indeed when you run it that’s what happens! The endpoint is exclusive, i.e. element 4 is not included. 1..4 is actually a range expression, and it doesn’t have to occur like here, as part of an indexing operation. It has a type of its own, called Range. If we wanted, we could pull it out into its own variable, and it would work the same:

        Range range = 1..4; 
        foreach (var name in names[range])

The endpoints of a range expression don’t have to be ints. In fact they’re of a type, Index, that non-negative ints convert to. But you can also create an Index with a new ^ operator, meaning "from end". So ^1 is one from the end:

        foreach (var name in names[1..^1])

This lobs off an element at each end of the array, producing an array with the middle three elements.

Range expressions can be open at either or both ends. ..^1 means the same as 0..^1. 1.. means the same as 1..^0. And .. means the same as 0..^0: beginning to end. Try them all out and see! Try mixing and matching "from beginning" and "from end" Indexes at either end of a Range and see what happens.
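For instance, a little playground along those lines, using the same names array as before:

```csharp
using static System.Console;

class RangePlayground
{
    static void Main()
    {
        string[] names = { "Archimedes", "Pythagoras", "Euclid", "Socrates", "Plato" };

        WriteLine(string.Join(", ", names[..2]));  // Archimedes, Pythagoras
        WriteLine(string.Join(", ", names[2..]));  // Euclid, Socrates, Plato
        WriteLine(string.Join(", ", names[..]));   // all five names
        WriteLine(string.Join(", ", names[^2..])); // Socrates, Plato ("from end" start)
        WriteLine(names[^1]);                      // Plato: a bare Index works too
    }
}
```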

Ranges aren’t just meant for use in indexers. For instance, we plan to have overloads of string.Substring, Span<T>.Slice and the AsSpan extension methods that take a Range. Those aren’t in this Preview of .NET Core 3.0 though.

Asynchronous streams

IEnumerable<T> plays a special role in C#. "IEnumerables" represent all kinds of different sequences of data, and the language has special constructs for consuming and producing them.

As we see in our current program, they are consumed through the foreach statement, which deals with the drudgery of obtaining an enumerator, advancing it repeatedly, extracting the elements along the way, and finally disposing the enumerator. And they can be produced with iterators: Methods that yield return their elements as they are being asked for by a consumer. Both are synchronous, though: the results better be ready when they are asked for, or the thread blocks!

async and await were added to C# to deal with results that are not necessarily ready when you ask for them. They can be asynchronously awaited, and the thread can go do other stuff until they become available. But that works only for single values, not sequences that are gradually and asynchronously produced over time, such as for instance measurements from an IoT sensor or streaming data from a service.

Asynchronous streams bring async and enumerables together in C#! Let’s see how, by gradually "async’ifying" our current program.

First, let’s add another using directive at the top of the file:

using System.Threading.Tasks;

Now let’s simulate that GetNames does some asynchronous work by adding an asynchronous delay before the name is yield returned:

            await Task.Delay(1000);
            yield return name;

Of course we get an error that you can only await in an async method. So let’s make it async:

    static async IEnumerable<string> GetNames()

Now we’re told that we’re not returning the right type for an async method, which is fair. But there’s a new candidate on the list of types it can return besides the usual Task stuff: IAsyncEnumerable<T>. This is our async version of IEnumerable<T>! Let’s return that:

    static async IAsyncEnumerable<string> GetNames()

Just like that we’ve produced an asynchronous stream of strings! In accordance with naming guidelines, let’s rename GetNames to GetNamesAsync.

    static async IAsyncEnumerable<string> GetNamesAsync()

Now we get an error on this line in the Main method:

        foreach (var name in GetNamesAsync())

Which doesn’t know how to foreach over an IAsyncEnumerable<T>. That’s because foreach’ing over asynchronous streams requires explicit use of the await keyword:

        await foreach (var name in GetNamesAsync())

It’s the version of foreach that takes an async stream and awaits every element! Of course it can only do that in an async method, so we have to make our Main method async. Fortunately C# 7.2 added support for that:

    static async Task Main(string[] args)

Now all the squiggles are gone, and the program is correct. But if you try compiling and running it, you get an embarrassing number of errors. That’s because we messed up a bit, and didn’t get the previews of .NET Core 3.0 and Visual Studio 2019 perfectly aligned. Specifically, there’s an implementation type that async iterators leverage that’s different from what the compiler expects.

You can fix this by adding a separate source file to your project, containing this bridging code. Compile again, and everything should work just fine.

Next steps

Please let us know what you think! If you try these features and have ideas for how to improve them, please use the feedback button in the Visual Studio 2019 Preview. The whole purpose of a preview is to have a last chance to course correct, based on how the features play out in the hands of real life users, so please let us know!

Happy hacking,

Mads Torgersen, Design Lead for C#

Open Sourcing XAML Behaviors for WPF


Today, we are excited to announce that we are open sourcing XAML Behaviors for WPF.

In the past, we open sourced XAML Behaviors for UWP which has been a great success and the Behaviors NuGet package has been downloaded over 500k times. One of the top community asks has been to support WPF in the same way. XAML Behaviors for WPF now ships as a NuGet package – Microsoft.Xaml.Behaviors.Wpf . This will allow new features and bug fixes to be addressed faster. When a new Behavior or feature is added to the repo, it can be consumed and used almost immediately. Opening to contributions lets the Behaviors platform grow by empowering the community to set the pace and direction. While you can continue to use the Extension SDK, further development will only take place on GitHub and be published in the NuGet package under the new namespace Microsoft.Xaml.Behaviors.

Start using XAML Behaviors for WPF now!

You can install the latest version of WPF XAML Behaviors in both Visual Studio and Blend using the NuGet Package Manager:

From the package manager console:

PM > Install-Package Microsoft.Xaml.Behaviors.Wpf

From Blend Assets pane:

Like UWP, we have made updates to Blend for Visual Studio 2019. Instead of presenting a pre-populated list of Behaviors in the Assets pane, Blend prompts the user with a link to install the NuGet package. Clicking this link will download and reference the latest NuGet package and populate the list with the latest and greatest Behaviors. Note that if this is an existing project which references the old Behaviors SDK, the list will be pre-populated with the Behaviors from the SDK. See below for steps to migrate to the NuGet package.

Migrating .NET Framework projects from Extension SDK to NuGet

The NuGet package ships with DLLs under the “Microsoft.Xaml.Behaviors” namespace.  Since the APIs for WPF are the same as the original Extension SDK, switching over is as easy as installing the NuGet package and updating the xmlns and the usings. Note that Behaviors are not yet fully supported on .NET Core.

Steps to migrate:

  1. Remove reference to “Microsoft.Expression.Interactions” and “System.Windows.Interactivity”
  2. Install the “Microsoft.Xaml.Behaviors.Wpf” NuGet package.
  3. XAML files – replace the xmlns namespaces “http://schemas.microsoft.com/expression/2010/interactivity” and “http://schemas.microsoft.com/expression/2010/interactions” with “http://schemas.microsoft.com/xaml/behaviors”
  4. C# files – replace the usings “Microsoft.Xaml.Interactivity” and “Microsoft.Xaml.Interactions” with “Microsoft.Xaml.Behaviors”

Conclusion

A big thank you to our MVP leaders for dedicating their time and effort in helping guide this project as WPF XAML Behaviors are opened to the community.

Contributions of new and useful Behaviors are welcomed and encouraged. Have feedback, suggestions, or comments? We would love to hear them – please submit an issue on the GitHub page or email us.
