Continuous Integration for UWP projects – Making Builds Faster

December 3, 2015 Coding 2 comments

Are you developing a UWP app? Are you doing continuous integration? Do you want to improve your CI build times while still generating the .appxupload required for store submission? If so, read on.

Prerequisites

You’ll need VS 2015 with the UWP 1.1 tools installed. The UWP 1.1 tooling has some important fixes for creating app bundles and app upload files for command line/CI builds.

You’ll also need to register your app on the Windows Dev Center and associate your project with it. Follow the docs for linking your project to the Store from within Visual Studio first.

If you’re using VSO, you may need to set up your own VM to run a vNext build agent; I’m not sure VSO’s hosted agents have all the latest tools as of today. I run my builds in an A2 VM on Azure; it’s not the fastest build server, but it’s good enough.

Building on a Server

Now that you have a solution with one or more projects that create an appx (UWP) app, you can start setting up your build scripts. One problem you’ll need to solve is updating your .appxmanifest with an incrementing version on each build. I’ve solved this using the fantastic GitVersion tool. There are a number of ways to use it, but on VSO it sets environment variables as part of a build step, which I then use to update the manifest during the build.

I use a .proj msbuild file with a set of targets the CI server calls, but you can use your favorite build scripting tool.

My code looks like this:

<Target Name="UpdateVersion">
    <PropertyGroup>
      <Version>$(GITVERSION_MAJOR).$(GITVERSION_MINOR).$(GITVERSION_BUILDMETADATA)</Version>
    </PropertyGroup>    
    <ItemGroup>
      <RegexTransform Include="$(SolutionDir)\**\*.appxmanifest">
          <Find><![CDATA[ Version="\d+\.\d+\.\d+\.\d+"]]></Find>
          <ReplaceWith><![CDATA[ Version="$(Version).0"]]></ReplaceWith>
      </RegexTransform>
    </ItemGroup>
    <RegexTransform Items="@(RegexTransform)" />    
    <Message Text="Assm: Ver $(Version)" />
</Target>
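One note on that snippet: RegexTransform isn’t a built-in MSBuild task; it’s a custom inline task defined elsewhere in my .proj file. A minimal sketch of such a task (assuming MSBuild 14’s CodeTaskFactory; the task assembly name differs on older MSBuild versions) might look like this:

<UsingTask TaskName="RegexTransform" TaskFactory="CodeTaskFactory"
           AssemblyFile="$(MSBuildToolsPath)\Microsoft.Build.Tasks.Core.dll">
  <ParameterGroup>
    <Items ParameterType="Microsoft.Build.Framework.ITaskItem[]" />
  </ParameterGroup>
  <Task>
    <Using Namespace="System.IO" />
    <Using Namespace="System.Text.RegularExpressions" />
    <Code Type="Fragment" Language="cs"><![CDATA[
      // For each item, apply its Find/ReplaceWith regex metadata to the file's contents
      foreach (var item in Items)
      {
          var path    = item.GetMetadata("FullPath");
          var find    = item.GetMetadata("Find");
          var replace = item.GetMetadata("ReplaceWith");

          var content = File.ReadAllText(path);
          File.WriteAllText(path, Regex.Replace(content, find, replace));
      }
    ]]></Code>
  </Task>
</UsingTask>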

The idea is to call GitVersion, either by calling GitVersion.exe earlier in the build process, or by using the GitVersion VSO Build Task in a step prior to the build step.

GitVersion can also update your AssemblyInfo files, if you’d like.

Finally, at the end of the build step, you’ll want to collect certain files for the output. In this case, it’s the .appxupload for the store. In VSO, I look for the contents in my app dir, MyApp\AppPackages\**\*.appxupload.
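If you’d rather gather those files from your build script instead of relying on VSO’s artifact publishing, a small target called after the build would do it (the CollectAppxUpload name and the artifacts folder are placeholders, not part of my actual setup):

<Target Name="CollectAppxUpload">
  <ItemGroup>
    <!-- Pick up every .appxupload produced under the app's AppPackages folder -->
    <AppxUploadFiles Include="$(SolutionDir)\MyApp\AppPackages\**\*.appxupload" />
  </ItemGroup>
  <!-- Copy them to a single folder the CI server can publish as build artifacts -->
  <Copy SourceFiles="@(AppxUploadFiles)" DestinationFolder="$(SolutionDir)\artifacts" />
</Target>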

If you set up your build definition to build in Release mode, you should get a successful build with a .appxupload artifact available that you can submit to the store. Remember, we’ve already associated this app with the store, and we enabled building x86, x64, and ARM as part of our initial run-through in Visual Studio.

The problem

For your safety, a CI build will by default only generate the .appxupload file if you’re in Release mode with .NET Native enabled. This is to help you catch compile-time errors that would delay your store submission.

That’s well-intentioned, but it can severely slow down your builds. On one project I’m working on, on that A2 VM, a “normal” debug build takes about 14 min while a Release build takes 81 minutes! That’s too long for CI.

Fortunately, there are a few things we can do to speed things up if you’re willing to live a bit dangerously.

  1. Force MSBuild to create the .appxupload without actually running the .NET Native compilation – yes, it is possible!
    • In your build definition, pass the additional arguments to MSBuild: /p:UseDotNetNativeToolchain=false /p:BuildAppxUploadPackageForUap=true. This overrides two variables that control the use of .NET Native and packaging.
  2. If you have any UWP unit test projects, you can disable package generation for them if you’re not running those unit tests on the CI box. There is a g̶o̶o̶d̶ reason for this — it’s hard. Running UWP CI tests requires your test agent to run as an interactive process, not a service: you need to configure your build box to auto-login on reboot and then start up the agent.

    In your test projects, add the following <PropertyGroup> to your csproj file:

<!-- Don't build an appx for this in TFS/command line msbuild -->
<PropertyGroup>
  <GenerateAppxPackageOnBuild Condition="'$(GenerateAppxPackageOnBuild)' == '' and '$(BuildingInsideVisualStudio)' != 'true'">false</GenerateAppxPackageOnBuild>
</PropertyGroup>

This works because the .appxupload doesn’t actually contain native code. It contains three app bundles (one per platform) with MSIL, which the store compiles to native code in the cloud. The local .NET Native step is only a “safety” check, as is running WACK. If you regularly test your code in Release mode locally, and have run WACK to ensure your code is OK, then there’s no need to run either on every build.

After making those two adjustments, I’m able to generate the .appxupload files on every build and the build takes the same 13 min as debug mode.

Surface Book or Surface Pro 4?

October 6, 2015 Coding 3 comments

This evening, at the Windows 10 Devices fan celebration in NYC, I got to use the Surface Book (and the other devices announced today) and talk to the product team about the Surface Pro 4 and the Surface Book. One of my questions stemmed from a question at work about the split hinge on the Surface Book; I thought the answer was interesting, so here goes (read on for my comparison of the SP4 vs. the SB).

They said the split hinge was a deliberate design decision that stems from the following goals:

  • To keep the base as thin as possible
  • To have a “perfect” keyboard. The travel on the keys is 1.6mm, which is greater than most laptop keyboards
  • When the lid is closed, they didn’t want the keys to scuff up the screen.
    • To address this, the keyboard is sometimes slightly recessed in the case – it is like that on the MacBook Pro I have. The problem is that’s wasted space, and they wanted to make the thing thinner.

Also, when the lid is open, they had to get the balance exactly right so that when you push against the screen (it is a touch screen after all), it doesn’t tip over. Many/most other 2-in-1’s don’t have the balance quite right and are “tipsy”. Having tried the Surface Book, I can say it’s certainly not tipsy. The “dynamic fulcrum hinge” has some role in this too.

When it comes to a choice between a Surface Pro 4 and the Surface Book, I’d have to say that the differences are primarily around usage:

  • Surface Pro 4 is a tablet and can run through its full battery charge without its keyboard
  • Surface Pro 4’s keyboard is better than the previous gen one, but for people who do a lot of typing (developers?), it may not be ideal. In “lap mode”, the SP4 keyboard still has some “bounciness”, as the cover overall could be stiffer.
  • Surface Book’s “clipboard” has three hours of battery life on its own. The remaining 9 hours are in the base (for a total of 12). That’s why they call it a clipboard and not a tablet, because the tablet usage is intended as a secondary/auxiliary mode, not the primary.
  • The Surface Book’s keyboard is really, really nice.
  • The 13.5” screen size feels bigger than it is due to its aspect ratio and the resolution. It also has a very narrow bezel, so the screen goes almost to the edge.

Both devices will have the same memory/storage capabilities, maxing out at 16 GB/1TB. The 1TB storage isn’t available yet (it will be in a month or two) as they are finishing testing those components. They are using Samsung 3D V-NAND modules, so the more storage you get, the faster it actually is. The pen is really nice and has a great feel to it. Even for people with messy handwriting, the friction level on the screen is the right amount to have control and write something legibly.

Both machines are priced at about $2700 fully loaded (16GB/1TB). Which one to get really depends on your usage and needs; I have a feeling most developers would be happiest with the Surface Book while non-developers would probably like the Surface Pro 4 best.

Enabling source code debugging for your NuGet packages with GitLink

September 23, 2015 Coding 3 comments

Recently on Twitter, someone was complaining that their CI builds were failing due to SymbolSource.org either being down or rejecting their packages. Fortunately, there’s a better way than using SymbolSource if you’re using a public Git repo (like GitHub) to host your project — GitLink.

Symbols, SymbolSource and NuGet

Hopefully by now, most of you know that you need to create symbols (PDB’s) for your release libraries in addition to your debug builds. Having symbols helps your users troubleshoot issues that may crop up when they’re using your library. Without symbols, you need to rely on hacks, like using dotPeek as a Symbol Server. It’s a hack because the generated source code usually doesn’t match the original, and it certainly doesn’t include any helpful comments (you do comment your code, right?)

So you’ve updated your project build properties to create symbols for release; now you need someplace to put them so your users can get them. Until recently, the easiest way has been to publish them on SymbolSource. You’d include the pdb files in your NuGet NuSpec, and then run nuget pack MyLibrary.nuspec -symbols. NuGet then creates two packages: one with your library and one with just the symbols. When you run nuget push MyLibrary.1.0.0.nupkg and there’s a symbols package alongside, NuGet will push the symbols package to SymbolSource instead of NuGet.org. If you’re lucky, things will just work. Sometimes, however, SymbolSource doesn’t like your PDB’s and your push will fail.

The issues

While SymbolSource is a great tool, it has some shortcomings:
* It requires manual configuration by the library consumer.
* Consumers have to know to go into VS and add the SymbolSource URL to the symbol search path.
* It slows down your debugging experience. By default, VS checks every configured symbol server for matching PDB’s, which leads many people to either disable symbol loading entirely or load symbols selectively. Even then, loading is slow because VS has no way to know which symbol server a PDB might be on and must check all of them.
* It doesn’t enable source code debugging. PDB’s can be indexed to map original source file metadata into them (the file location, not the contents). If you’ve source-indexed your PDB’s and the user has source server support enabled, VS will automatically download the matching source code. This is great for OSS projects with their code on GitHub.

GitLink to the Rescue

GitLink provides us an elegant solution. When GitLink is run after your build step, it detects the current commit (assuming the sln is in a git repo clone), detects the provider (BitBucket and GitHub are currently supported) and indexes the PDB’s to point to the exact source location online. Of course, there are options to specify commits, remote repo location URLs, etc if you need to override the defaults.

After running GitLink, just include the PDB files in your nuspec/main nupkg alongside your dll files and you’re done. Upload that whole package to NuGet (and don’t use the -symbols parameter with nuget pack). This also means that users don’t need to configure a symbol server as the source-indexed PDB’s will be alongside the dll — the location VS will auto-load them from.
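Concretely, the relevant part of the nuspec might look something like this (MyLibrary and the lib folder are placeholders for your own assembly and target framework):

<!-- Illustrative only: ship the GitLink-indexed PDB right next to the assembly -->
<files>
  <file src="bin\Release\MyLibrary.dll" target="lib\net45" />
  <file src="bin\Release\MyLibrary.pdb" target="lib\net45" />
</files>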

An example

Over at xUnit and xUnit for Devices, we’ve implemented GitLink as part of our builds. xUnit builds are setup to run msbuild on an “outer” .msbuild project with high-level tasks; we have a GitLink task that runs after our main build task.

As we want the build to be fully automated and not rely on exe’s external to the project, we “install” the GitLink NuGet package on build if necessary.

Here’s the gist of our main CI target, which we invoke on the build server with msbuild xunit.msbuild /t:CI (abbreviated for clarity):

<PropertyGroup>
  <SolutionName Condition="'$(SolutionName)' == ''">xunit.vs2015.sln</SolutionName>
  <SolutionDir Condition="'$(SolutionDir)' == '' Or '$(SolutionDir)' == '*Undefined*'">$(MSBuildProjectDirectory)</SolutionDir>
  <NuGetExePath Condition="'$(NuGetExePath)' == ''">$(SolutionDir)\.nuget\nuget.exe</NuGetExePath>
</PropertyGroup>

<Target Name="CI" DependsOnTargets="Clean;PackageRestore;GitLink;Build;Packages" />

<Target Name="PackageRestore" DependsOnTargets="_DownloadNuGet">
  <Message Text="Restoring NuGet packages..." Importance="High" />
  <Exec Command="&quot;$(NuGetExePath)&quot; install gitlink -SolutionDir &quot;$(SolutionDir)&quot; -Verbosity quiet -ExcludeVersion -pre" Condition="!Exists('$(SolutionDir)\packages\gitlink\')" />
  <Exec Command="&quot;$(NuGetExePath)&quot; restore &quot;$(SolutionDir)\$(SolutionName)&quot; -NonInteractive -Source @(PackageSource) -Verbosity quiet" />
</Target>

<Target Name='GitLink'>
  <Exec Command='packages\gitlink\lib\net45\GitLink.exe $(MSBuildThisFileDirectory) -f $(SolutionName) -u https://github.com/xunit/xunit' IgnoreExitCode='true' />
</Target>

<Target Name='Packages'>
  <Exec Command='"$(NuGetExePath)" pack %(NuspecFiles.Identity) -NoPackageAnalysis -NonInteractive -Verbosity quiet' />
</Target>

There are a few things to note from the snippet:
* When installing GitLink, I use the -ExcludeVersion switch so that it’s easier to call later in the script without having to update the path each time a new version ships.
* I’m also currently using -pre, since a number of bugs have been fixed since the last stable release.

The end result

If you use xUnit 2.0+ or xUnit for Devices and have source server support enabled in your VS debug settings, VS will let you step into xUnit code seamlessly.

If you do this for your library, your users will thank you.

UWP NuGet Package Dependencies

August 29, 2015 Coding 4 comments

[Updated: 9/15/15 on the NuGet package contents at the end]

In my last post, Targeting .NET Core, I mentioned that NuGet packages targeting .NET Core and using the dotnet TFM need to list their dependencies. What may not be immediately obvious, as this is new behavior for UWP projects, is that UWP packages need to list their BCL dependencies too, not just “regular” NuGet references.

The reason for this is that UWP projects also use .NET Core and may elect to use newer BCL package versions than the default. While the uap10.0 TFM does imply BCL + Windows Runtime, it doesn’t really say what version of the dependencies you get. Instead, that’s in your project.json file, which by default includes the Microsoft.NETCore.UniversalWindowsPlatform v5.0.0 “meta-package”, which pulls in most of the .NET Core libraries at a particular version. But what happens if newer BCL packages are published? Right now, the OSS BCL .NET Core packages are being worked on and they’re a higher version – System.Runtime is 4.0.21-beta*.

In Windows 8.1 and Windows 8, this wasn’t an issue because those platforms each had a fixed set of BCL references; you knew for sure what BCL versions you’d get for each of them. With UWP, that’s no longer true, so you need to specify them.

Fortunately, you don’t have to figure out all of the dependencies by hand. Instead, you can use my handy NuSpec.ReferenceGenerator tool (NuGet|GitHub) to add those dependencies to your NuSpec file.

The ReadMe is fairly detailed, but for the majority of projects, if you have a NuSpec file whose filename matches your project name (like MyAwesomeLibrary.csproj with a MyAwesomeLibrary.nuspec sitting somewhere under the .sln dir), adding the reference should be all you need.

For a UWP Class Library package, you should do the following (an example dependency group is shown after the list):

  • A dependency group with the uap10.0 TFM
  • In your Project Build options for Release mode, choose “generate library layout”
  • Copy the entire directory structure of the output to your \lib\uap10.0 dir.
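The dependency group itself might look something like this (match the version to whatever your project.json actually references; the meta-package shown here is just the default one):

<!-- Illustrative dependency group for a UWP class library package -->
<dependencies>
  <group targetFramework="uap10.0">
    <dependency id="Microsoft.NETCore.UniversalWindowsPlatform" version="5.0.0" />
  </group>
</dependencies>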

Targeting .NET Core

July 29, 2015 Coding 7 comments

Problem

Since DNX was announced, library authors have been inundated with requests to support .NET Core and the CoreCLR. Up until now, the only real option was to use the DNX-based project.json build system with the Visual Studio xproj projects. Adding these project types into an existing project that already supports a wide range of platform targets can be challenging. There are a few issues with the current approach:
– Not all project types can be built with project.json
– It’s been a moving target as DNX is rightfully still in beta.
– Without proper guidance, authors have been targeting dnxcore50 in their packages intended for .NET Core instead of dotnet
– To be fair, dotnet is a recent update that has been little publicized

Starting today though, there’s a better way. Just make sure to install the Windows developer tooling as it includes this new functionality.

Terminology

If we go back to the .NET Core presentation back in November, you may remember this diagram:

In terms of terminology, .NET Core should be your target; CoreCLR is just a runtime. Referring to the diagram, the dnxcore50 Target Framework Moniker refers to the box in the upper-right — it’s the ASPNet 5 app model. It is BCL + DNX specific libraries. Similarly, uap10.0 is the Windows Universal app model, BCL + Windows Runtime.

Many (most?) libraries do not actually need the DNX or WinRT dependencies. All they really need are the BCL libraries. What then is the target there? The answer is dotnet. By using dotnet, you instead specify your dependencies in your nuget package and your package will then run on any supported runtime, including CoreCLR, .NET Native and .NET 4.6 (assuming you’re using the newest BCL packages.)

Existing Libraries

What has been lost in the commotion around DNX, CoreCLR and .NET Core is the fact that "Profile 259"+ Portable Class Libraries, class libraries that target a minimum of .NET 4.5, Windows 8 and Windows Phone 8, can run on CoreCLR as-is. You do not need to create a new project or target newer contract/BCL references. All you need is to put your existing library into \lib\dotnet in your NuGet package in addition to the \lib\portable-* directory it is in now and list your dependencies in the package.
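As a rough sketch, the files section of such a package could look like this (the assembly name and the exact portable folder are placeholders; use whichever profile folder your package already ships):

<!-- Illustrative: the same PCL binary shipped for both the existing portable TFM and dotnet -->
<files>
  <file src="bin\Release\MyLibrary.dll" target="lib\portable-net45+win8+wp8" />
  <file src="bin\Release\MyLibrary.dll" target="lib\dotnet" />
</files>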

The only time you might need a new project is if you have platform-specific code. In that case, the new UWP tools for Windows 10 have a better option: “Modern PCLs”. Once you install the UWP tools, create a new Class Library (Portable) in your solution and make sure only .NET 4.6, Windows Universal 10 and ASP.NET 5 are checked. When you do that, you’ll get a modern PCL that uses project.json and pulls in the newest .NET Core packages as references. You can then use linked files, shared projects and your existing techniques to build a class library that targets .NET Core. Then, put that in your \lib\dotnet directory and create the dependencies element for it. No magic needed. Using this technique, I was able to adapt several OSS libraries to support .NET Core in very little time.

NuGet Dependencies – the heart of dotnet

As I described in my previous post, the key to making dotnet work is specifying all of your dependencies. This can be a tedious and error-prone process, so I’ve built a tool, NuSpec.ReferenceGenerator, that automates creation of the dependency element for the majority of cases. The tool works with both existing compatible PCL projects and the new “modern PCL” projects.

Just add the NuSpec.ReferenceGenerator NuGet to your package and build. I won’t go over all of the docs, but you can find those on the project site.

At build time, the tool will read the references your assembly requires, determine the source NuGet package and version, and create the <dependencies> element in the NuSpec.
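The generated element ends up looking something like this (the package ids and versions here are purely illustrative; the tool derives the real ones from your assembly’s references):

<!-- Illustrative output; actual ids/versions come from your project's references -->
<dependencies>
  <group targetFramework="dotnet">
    <dependency id="System.Runtime" version="4.0.20" />
    <dependency id="System.Collections" version="4.0.10" />
    <dependency id="System.Linq" version="4.0.0" />
  </group>
</dependencies>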

Call To Action

  • If you maintain a library, review any areas where you are currently targeting dnxcore50 and update your NuGet package to put those bits in dotnet. If you are not using any Microsoft.Dnx references, and the majority of libraries do not, then there’s no reason to target dnxcore50 when dotnet reaches a far broader set of targets.
    • Bonus: by using the “Modern PCL” projects and/or reusing your existing PCL, your dependencies will be the stable versions, not pre-release. That means your package can be stable too and not have to wait until Q1 2016!
  • If you currently have a library that’s a “System.Runtime”-based PCL, one that’s at least portable-win8+net45+wp8, then simply add a copy of the binary to your NuGet package in the dotnet directory. Adding it to \lib\dotnet and leaving a copy in lib\portable-win8+net45+wp8 allows it to work with .NET Core and the existing NuGet v2 clients.
  • Ensure your NuGet package lists all of its dependencies in a <dependencies targetFramework="dotnet"> element. Use the stable package versions, not the DNX pre-release versions. If you don’t want to create and maintain this by hand, use my ReferenceGenerator.
  • Last, but most importantly, make sure your nuget.exe version is up-to-date by running nuget update -self. Version 2.8.6 or later is required to properly package dotnet.

Demystifying PCLs, .NET Core, DNX and UWP (Redux)

June 16, 2015 Coding 1 comment

[Disclaimer: Many of the things I talk about here may not work in the RC of Visual Studio 2015. The information is taken from Microsoft’s public repos on GitHub and from conversations with members of the .NET team. The information herein is accurate at the time of writing but as with everything pre-release, things may change!]

Intro

A few days ago, I posted an article trying to explain my current understanding of how the new .NET Core libraries fit into the existing ecosystem. Since then, I’ve had more conversations with a few people on the .NET Team (many thanks to David Kean and Eric St. John!) that clarify the meaning of the dotnet target framework and how the pieces all fit together. This blog will attempt to explain further.

TL;DR

dotnet is not a specific target framework—it means “I’m compatible with any target framework that my dependencies are compatible with.” Read on for more.

Let’s start at the very beginning (a very good place to start!)

To help explain where things are going, it helps to have some background for context. Before we had any such thing as Portable Class Libraries (PCLs), if we wanted to use a library on multiple frameworks, we had to compile it multiple times. The figure below illustrates the state of the world circa 2010.
Before PCLs

The only real strategy for code sharing was to use linked files and many #ifdefs, as there were wide differences in capabilities between the frameworks. A solution would contain multiple projects, one per target framework. Each project would contain platform-specific references and would generate a binary compatible only with its target platform. This situation was not scalable as future frameworks and platforms would only lead to even more file linking.

The birth of PCLs

In early 2011, Microsoft released the first version of Portable Class Libraries as a toolset for Visual Studio 2010. These tools allowed creation of a single binary targeting the .NET Framework, Silverlight, Windows Phone 7 and Xbox 360. They accomplished this by finding the lowest common denominator of functionality shared among the target frameworks. The available functionality changed to match your selection:
PCL target framework dialog

From this early start, the tools grew over time. Visual Studio 2012 included support for PCLs without the need for an add-in. The list of target frameworks and versions increased; now you could choose .NET Framework 4 or 4.5. You could choose Silverlight 4 or Silverlight 5. Windows Phone gained options for 7.5, 8 and 8.1. We saw support added for additional platforms like Windows 8 and 8.1 Store applications. In 2013, Windows Phone App 8.1 made its first appearance. In early 2014 Xamarin added support for Portable Class Libraries, providing additional target frameworks for their iOS and Android platforms.

Making the sausage

They say that if you enjoy eating sausage, you should never see how it’s made. I personally don’t find ignorance to be bliss and strive to understand how things are made. The same could be said for PCLs—don’t look under the covers unless you’re prepared for what you may see! As one might imagine, there’s quite a bit going on to enable PCLs. In the current system, there are really two main components: contract assemblies and profiles.

Contract Assemblies

Contract assemblies are a special kind of assembly that contains types/metadata but no actual implementation. Think of this as a compile-time reference. A library can reference one or more contract assemblies and the compiler will use the type information in the file. At runtime, when a type is requested from the contract assembly, the loader sees either a TypeForwarder pointing to a concrete implementation or assembly metadata indicating redirection is allowed for the library. The indirection enables types to live in different assemblies in the implementation (think Silverlight vs .NET) but be referenced from a single dll. It also enables the runtime to substitute one type for another even if the assembly versions don’t match.

The best way to think of a contract assembly is like a promise that a specified surface area is present. Your library can reference that assembly and then it’ll run on any target framework that implements that contract. Not all target frameworks support all versions of a particular contract. When working with a least-common-denominator based system, like PCLs, you’ll see fewer types available when you check more/older target frameworks. What Microsoft has done is pre-generate all of the permutations of those checkboxes so that you have a contract assembly for each possible option.

Profiles

That leads us squarely into PCL profiles. These are the things like Profile259 or Profile78 that people most associate with PCLs. In order to support every permutation of target frameworks that you, as a library author, want to choose, Microsoft pre-computed over fifty profiles to date. The profiles are collections of contract assemblies that represent the intersections of the public surface area from the targets. What people really mean by saying Profile259 is that they’re targeting .NET 4.5, Windows 8, Windows Phone 8 Silverlight and Windows Phone 8.1. The number is just a shorthand for spelling out each target framework. It was never really the intent for the profiles to be what people talked about, it was always supposed to be about the target platforms.

What each profile represents, then, is a set of contract assemblies supported by a set of target frameworks. The profiles, in sum, represent every combination of possible contract assemblies. Taken one step further, what ultimately matters to a library isn’t the target framework; rather, what matters to a library are the contracts available to it through the selected set of target frameworks. The profile itself is just a transitive way to get that set of contracts.

Enter the NuGet

It’s not possible to have a complete discussion about PCLs without mentioning NuGet. In parallel to the rise of PCLs, community support was growing around using NuGet (and its package format by extension) as the de facto way of distributing library components. One of NuGet’s key features is the ability to support multiple target platform versions within a single package. NuGet accomplishes this by using Target Framework Monikers (TFMs) that represent each platform. For example, net means .NET Framework, wp is Windows Phone and netcore is Windows Store. NuGet adds a version number to the TFM so that we get the common usage: net45, wp8, netcore451, which translates to .NET 4.5, Windows Phone 8 and .NET Core 4.5.1 (Windows 8.1) respectively. PCLs are supported in NuGet by using the portable TFM combined with the set of supported TFMs that the library targets. Using our earlier example of PCL Profile259, that would be portable-net45+netcore45+wpa81+wp8 inside a NuGet package.

The breaking point

There are two breaking points in this system: 1) Library authors need to update their NuGet packages to specify compatible targets, and 2) Using pre-computed contracts for PCLs is not scalable. This summer, two new runtimes, CoreCLR and .NET Native are being introduced; the desktop .NET Framework has a new 4.6 version coming out too. At the same time, a new application platform, the .NET Execution Environment (DNX), on which ASP.Net 5 is based, and a new version of the Windows “modern” platform, the Universal Windows Platform (UWP), are set to appear. It was time for a change. Adding support for UWP and DNX in combination with CoreCLR, Desktop .NET and .Net Native would be untenable with pre-computing contracts. Further, with .NET Core becoming Open Source and moving to GitHub, .NET 4.6, CoreCLR and .NET Native would support an application-local Base Class Library (BCL). The surface area available to those newer platforms was poised to explode.

To make the issue concrete, let’s look at an example. Most people are likely familiar with the Newtonsoft.Json NuGet package for working with JSON data. The library, Json.NET, aims to support every .NET platform available. In addition to compiling the code many different times with #ifdefs to accommodate older platforms, as new platforms appear, the Json.NET author needs to update the NuGet package too. That means that as new platforms like UWP and DNX appear, despite targeting a set of contract libraries (remember, all libraries really reference contracts, not platforms), the author needs to keep updating packages to add each new platform to the supported platform list.

What we’re experiencing here is an impedance mismatch between what the library cares about and what NuGet supports. The mismatch highlights a fundamentally broken model: one that puts the onus on each library author to keep up-to-date with the available platforms and the contract-to-platform support matrix. Libraries that would otherwise work on a target platform may not be understood as compatible by NuGet. While it is true that NuGet has a set of heuristics to accommodate additional platforms, the heuristics are also not scalable as they’re hard-coded into each NuGet client version.

Fixing the impedance mismatch: dotnet to the rescue

Over the past year, as “One Microsoft” has taken hold, you started to see the NuGet and .NET CLR teams work much more closely together. Based on community feedback, NuGet was chosen as the de facto mechanism to deliver future versions of .NET that can run as self-contained app-local packages. In order to support the ever-increasing complexity placed upon it, NuGet had to evolve. You can read more about NuGet’s evolution to 3.0 on the NuGet team blog in posts from April through November 2014.

One of the most recent changes to NuGet, and the .NET ecosystem by extension, is support for the dotnet TFM. The meaning of dotnet wasn’t clear at first and as reflected in my earlier blog post, it seemed like it was the new target for the “new” portable .NET packages being published to NuGet and consumed by DNX and UWP. The reality isn’t quite like that but is far more interesting. Rather than dotnet representing a particular target like netcore45, dnxcore5 or net46, it really means “I’m compatible with any targets that my dependencies are, check those.” It gets NuGet out of the platform guessing game and instead walks the dependency graph.

Practically speaking, the most common set of dependencies for any package will be its contracts – the assemblies referenced at build time. Today, with the platform-TFMs, those contracts don’t need to be listed in the NuGet package as they’re implied by the TFM. With the dotnet-based TFM, NuGet packages will have to specify their dependencies, even system ones. You can see this today with the project.json file that DNX projects use. By explicitly listing the dependencies (which may be CLR contracts), the mismatch between target framework and supported contracts is removed. Instead, each contract package declares its own support by way of its implementation.

The way this is done is beyond the scope of this post, but you can get a sense of it by looking at the layout of the System.IO.FileSystem package below.
System.IO.FileSystem package layout

In the package, you can see two assemblies in the ref folder, called design-time façades, one for .NET 4.6 and one for everything else (CoreCLR, .NET Native, etc). The surface area is identical, but they function a bit differently. The façades are used at build time to enable portable assemblies that were built against contracts (System.Runtime-based) to resolve those types against the desktop reference assemblies (mscorlib-based). This lets an mscorlib assembly pass its version of string, which lives in mscorlib, to an API in a PCL that takes a string from System.Runtime. The same façades are used at runtime as well. This is something that should usually be considered trivia, as most people need not concern themselves with the minutiae.

The package contains three implementations of the contract, one for dnxcore50, one for net46 and one for netcore50 (UWP). When I said earlier that the new .NET Core packages would only support the newer platforms, this is the how/what/why. One last thing to note in the above picture, you can see that System.IO.FileSystem itself declares many other dependencies. This is expected; with small, granular, libraries the end result is that you pull in only what you need, not the whole framework.

None of this is to say that dotnet explicitly means the newer platforms though. Microsoft may release the existing contract assemblies, the ones currently in the Profile* directories, as NuGet packages. If they do that, then a library that “targets” dotnet could target .NET 4.5/Win8 as well. The key is that the version number of each dependency would be lower than the new ones. The new .NET Core libraries, and their contracts, would all have a higher version number than the existing contracts.

This drives home the point that what dotnet really means is “check my dependencies and I’ll run on any platform my dependencies do.”

The fact that the new .NET Core libraries use this mechanism is actually orthogonal to dotnet’s meaning. dotnet adds its value today with existing code and libraries by changing the question of “what platforms does my library support” to “what dependencies does my library require?”

Coming back to the earlier example of Json.NET, if it were to use dotnet, it would also declare the contracts, with its version, that it needs. It would not have to know or care about what platforms are currently supported by those contracts. In the future, if some new unicorn platform were to appear, so long as newer versions of the contracts were published that supported the unicorn platform, Json.NET would happily run there without any foreknowledge.

Contracts or Dependencies?

Throughout this discussion, I’ve used the terms contracts and dependencies. From the perspective of a library author or consumer, these terms are often used interchangeably, but there is a difference. Contracts are one type of dependency – they are specifically crafted reference assemblies. Contracts are useful if you need to have multiple implementations of a library for different platforms. Aside from the built-in system reference assemblies, the other place you see contracts is in libraries that use the “bait and switch” PCL technique. The vast majority of libraries can be implemented without any platform-specific references and are thus simply dependencies. If this sounds confusing, don’t worry too much about it. This is an advanced technique that most packages don’t need to consider; the only takeaway is that whether contract or “regular” library, they both appear as dependencies in a package.

Wrapping it all up

At first glance, it’s easy to think “whoa, this is complicated!” Upon stepping back though, hopefully the initial complexity melts away with the newfound understanding that what’s happening here is that a layer is being removed. The layer was the platform. Up until NuGet v3 we were trying to cram a round peg into a square hole. We’d gather up an intersection of target frameworks and call it a profile. We’d calculate the contract assemblies for those and the compiler would reference those, but they stayed firmly in the background. Visual Studio intentionally hides the references behind a single .NET entry in a PCL project’s references. This led to the platform support list being encoded within the NuGet package structure, leaving package authors scrambling to update their packages should a new platform emerge. In many cases, the existing code was already compatible, but a package update was still required. NuGet v3 eliminates this problem by removing the platform layer and going “direct to the dependencies.” This is an opt-in approach for packages that use the new dotnet TFM. Packages can contain both dotnet and the existing TFMs; they are not mutually exclusive.

The new version of .NET Core is dependent on these dependency-driven, framework agnostic packages, but the existing PCL profiles could fit into the model too. That said, dotnet doesn’t mean .NET Core any more than it means any other platform. They’re different things.

Demystifying PCL’s, .NET Core, DNX and UWP

June 9, 2015 Coding 6 comments

Since the announcement of .NET Core there’s been confusion around what that means for Portable Class Libraries, runtime support, NuGet support and how these “new” libraries relate to the existing PCLs. At least I was confused.

As ASP.NET 5 started taking shape, we started hearing about new target frameworks for NuGet, like dnxcore50. Other posts mentioned that the new Windows 10 Universal Windows Platform (UWP) would be using the new .NET Core 5 libraries too, but that led to the question: what do we call it in NuGet? dnxcore5 is clearly the wrong one, as that refers to the .NET Execution Environment (DNX).

Current NuGet conventions don’t make things any clearer. Today we have the following target framework names:

  • win: Windows 8 and Windows 8.1
  • net: .NET Framework
  • wpa: Windows Phone App 8.1
  • netcore: also refers to Windows 8 and 8.1
    • netcore and win are used interchangeably and are the same

So far, NuGet has added the following over the course of ASPNet vNext:

  • dnx: DNX (the .NET Execution Environment) running on the .NET Framework
  • dnxcore: DNX running on the .NET Core CLR

Over the past few days, it seems like the .NET Core team has been busy updating the target names to change from dnxcore5 to something new called dotnet. More confusion to ensue.

Brice Lambson was kind enough to explain it this afternoon and it finally all makes sense, so here is (don’t take this as official advice!) my current understanding. The new world distinguishes between the platform (.NET Framework/CoreCLR) and the app model (desktop/aspnet/UWP) cleanly.

  • dotnet: the new .NET Core target, for packages that don’t have any app model requirements
  • net: the existing .NET Framework platform
  • netcore: UWP apps, based on dotnet plus app model specifics
  • dnx: ASP.NET 5 apps based on the .NET Framework
  • dnxcore: ASP.NET 5 apps based on the .NET Core framework

These are the targets you’ll most likely care about going forward. Most libraries will want to target dotnet to hit the widest range of consuming apps. dotnet will run on the .NET 4.6 Framework. If you need specific UWP functionality (like XAML in your library), then you’ll need netcore5. If you need ASP.NET-specific items, then you’ll need dnxcore5. If you need something that’s only part of the full .NET Framework, then you’ll need either net46 or dnx46.

This ties into the existing PCL structure by being a new platform. Today you have libraries that support multiple platforms like this portable-net45+netcore45+wpa81. If you want to also include dotnet, then it simply becomes portable-net45+netcore45+wpa81+dotnet. If you can afford to target just Windows 10, .NET 4.6 and ASPNet 5, then having the older platforms severely limits your available surface area. In that case, better to target just dotnet, which can then be consumed by all of the modern platforms.

What does this all mean?

The table below should help explain things. The columns represent target frameworks and the rows are platforms/apps. That is, if your library targets x it’ll run on y.

xUnit for Xamarin is dead, long live xUnit for Devices!

March 16, 2015 Coding 1 comment

In conjunction with today’s release of xUnit 2.0 RTM, I’m happy to announce the initial release of xUnit for Devices (GitHub | NuGet). This has been a long time coming and I’d like to thank Brad Wilson and James Newkirk for their tremendous efforts over the years.

Project rename

xUnit for Devices started out as xUnit for Xamarin. Over the course of development however, it became apparent that what we really have is an MVVM-based test runner where the view was only an implementation detail. Current support is limited to platforms Xamarin Forms supports, but in the future it’s pretty easy to add a desktop/WPF view and support any additional GUI platform as needed.

Upgrading your projects to RTM

If you’ve been using xUnit for Xamarin, the easiest way to update is via NuGet. There’s a final xunit.runner.xamarin package that pulls in the new xunit.runner.devices package. After upgrading, you can simply remove the old package and keep the new dependencies. Windows Phone 8 users will need to make one additional change in MainPage.xaml to update the assembly name from xunit.runner.xamarin to xunit.runner.devices.

Getting Started

The RTM 1.0 release is available on NuGet and on GitHub.

For iOS and Android, create a new blank Xamarin app project to host the unit test runner. Make sure to give any capabilities/permissions you need in the appropriate manifest.

For WP8, create a new Unit Test Project and then remove the MSTestFramework reference.

Then for all platforms, install/update the xunit.runner.devices package via the GUI or Package Manager Console:
Install-Package xunit.runner.devices

Then look for the .cs.txt and .xaml.txt files that are the templates for your platform and copy/paste the contents into the app. Specifically:
– iOS: replace the contents of AppDelegate.cs with AppDelegate.cs.txt
– Android: replace the contents of MainActivity.cs with MainActivity.cs.txt
– WP8: replace the contents of MainPage.xaml.cs with MainPage.xaml.cs.txt and MainPage.xaml with MainPage.xaml.txt

xUnit Device Runners 1.0 RC3

February 23, 2015 Coding 1 comment

Following the release of xUnit 2.0 RC3, the Xamarin Device Runners have been updated to work with RC3.

One note for Android users: due to a dependence on Xamarin Forms, your runner app project needs to use API level 21 as its target and SDK. You can target down to API level 15 if you wish. You can also reference other MonoAndroid or Portable Class Libraries if you want to keep your unit tests at a different API level. You also might need to specify a default theme on some devices to work around a different Xamarin Forms bug. Please see the updated MainActivity.cs.txt for the specifics.

Wait, what happened to RC2?

If you blinked, you missed it. RC2 of the Device Runners came out Saturday. With xUnit RC3 being a quick update from RC2, it’s best to skip to the latest.

As always, if you run into any issues, feel free to reach out to @onovotny on Twitter or post an issue on GitHub.

Getting Started

RC3 is available on NuGet and on GitHub.

For iOS and Android, create a new blank Xamarin app project to host the unit test runner. Make sure to give any capabilities/permissions you need in the appropriate manifest.

For WP8, create a new Unit Test Project and then remove the MSTestFramework reference.

Then for all platforms, install/update the xUnit.Runner.Xamarin package via the GUI or Package Manager Console:
Install-Package xunit.runner.xamarin -Pre

Then look for the .cs.txt and .xaml.txt files that are the templates for your platform and copy/paste the contents into the app. Specifically:
– iOS: replace the contents of AppDelegate.cs with AppDelegate.cs.txt
– Android: replace the contents of MainActivity.cs with MainActivity.cs.txt
– WP8: replace the contents of MainPage.xaml.cs with MainPage.xaml.cs.txt and MainPage.xaml with MainPage.xaml.txt

Announcing: xUnit Device Runner RC1

January 31, 2015 Coding 1 comment

xUnit Device Runner 1.0 RC1

I’m pleased to announce the release of the xUnit Device Runners Release Candidate 1. This release adds support for the Xamarin.iOS Unified profile, which is required for all new iOS applications now and for app updates starting in July.

Other notable enhancements include a filter for searching test cases by name and status (pass/fail/not run).

To get started, please see the following posts:

If you run into any issues, please file a report in the issue tracker.