Project.json all the things

February 8, 2016 · Coding

One of the less known features of Visual Studio 2015 is that it is possible to use project.json with any project type, not just “modern PCLs,” UWP projects, or xproj projects. Read on to learn why you want to switch and how you can update your existing solution.

Background

Since the beginning of NuGet, installed packages have been tracked in a file named packages.config placed alongside the project file. The package installation process goes something like this:

  1. Determine the full list of packages to install, walking the tree of all dependent packages
  2. Download all of those packages to a \packages directory alongside your solution file
  3. Update your project file with the correct libraries from the package (looking at \lib\TFM)
    • If the package contains a build directory, add any appropriate props or targets files found
  4. Create or update a packages.config file alongside the project that lists each package along with the current target framework (an example follows this list)
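
For reference, a packages.config produced by these steps looks something like the following; the package ids and versions here are purely illustrative:

<?xml version="1.0" encoding="utf-8"?>
<packages>
  <package id="Newtonsoft.Json" version="8.0.2" targetFramework="net452" />
  <package id="Rx-Main" version="2.2.5" targetFramework="net452" />
</packages>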

Terms

  • TFM – Target Framework Moniker. The name that represents a specific Platform (platforms being .NET Framework 4.6, MonoTouch, UWP, etc.)
  • Short Moniker – a short way of referring to a TFM in a NuGet file (e.g., net46). Full list is here.
  • Full Moniker – a longer way of specifying the TFM (e.g., .NETPortable,Version=v4.5,Profile=Profile111). Easiest way to determine this is to compile and let the NuGet error message tell you what to add (see below).

Limitations

The above steps are roughly the same for NuGet up to and including the 2.x series. While this works for basic projects, larger, more complex projects quickly run into issues. I do not consider the raw number of packages that a project has to be an issue by itself – that is merely showing oodles of reuse and componentization of packages into small functional units. What does become an issue is the UI and the time it takes to update everything.

As mentioned, because NuGet modifies the project file with the relative location of the references, every time you update, it has to edit the project file. This is slow and can lead to merge conflicts across branches.

Furthermore, the system is unable to pivot on different compile-time needs. With many projects needing to provide some native support, NuGet v2.0 had no way of providing different dependencies based on build configuration.

One more issue surfaces with the use of “bait and switch” PCLs. Some packages provide a PCL for reference purposes (the bait), and then also provide platform-specific implementations that have the same external surface area (the switch). This enables libraries to take advantage of platform-specific functionality that's not available in a portable class library alone. The catch with these packages is that, to function correctly in a multi-project solution containing a PCL and an application, the application must also add a NuGet reference to all of the packages its PCL libraries use, to ensure that the platform-specific version winds up in the output directory. If you forget, you'll likely get a runtime error due to an incomplete reference assembly being used.

NuGet v3 and Project.json to the rescue

NuGet 3.x introduces a number of new features aimed at addressing the above limitations:

  • Project files are no longer modified to contain the library location. Instead, an MSBuild task and target get auto-included by the build system. This task creates reference and content-file items at build time, enabling the metadata values to be calculated rather than baked into a project file.
    • Per-platform files can exist by using the runtimes directories. See the native light-up section in the docs for the details.
  • Packages are now stored in a per-user cache instead of alongside the solution. This means that common packages do not have to be re-downloaded since they’ll already be present on your machine. Very handy for those packages you use in many different solutions. The MSBuild task enables this as the location is no longer baked into the project file.
  • Reference assemblies are now more formalized with a new ref top-level directory. This would be the “bait” assembly, one that could target a wide range of frameworks via a portable-, dotnet, or netstandard TFM. The implementation library would then reside in \lib\TFM. The version in the ref directory is used as the compile-time reference, while the version in the lib directory is placed in the output location.
  • Transitive references. This is a biggie. Now only the top-level packages you require are listed. The full chain of packages is still downloaded (to the shared per-user cache), but it’s hidden in the tooling and doesn’t get in your way. You can continue to focus on the packages you care about. This also works with project-to-project references. If I have a bait-and-switch package reference in my portable project, and I have an application that references that portable library, the full package list will be evaluated for output in the application and the per-architecture, per-platform assemblies will get put in the output directories. You no longer have to reference each package again in the application.

It is important to note that these features only work when a project is using the new project.json format of package management. Having NuGet v3 alone isn’t enough. The good news is that we can use project.json in any project type with a few manual steps.

Using project.json in your current solution

You can use project.json in your current solution. There are a couple of small caveats here:

  1. Only Visual Studio 2015 with Update 1 currently supports project.json. Xamarin Studio does not yet support it but it is planned. That said, Xamarin projects in Visual Studio do support project.json.
    • If you’re using TFS Team Build, you need TFS 2015 Update 1 on the build agent in addition to VS 2015 Update 1.
  2. Some packages that rely on content files being placed into the project may not work correctly. project.json has a different mechanism for this, so the package would need to be updated. The workaround would be to manually copy the content into your project file.
  3. All projects in your solution would need to be updated for the transitive references to resolve correctly. That’s to say that an application using NuGet v2/packages.config won’t pull in the correct transitive references of a portable project reference that’s using project.json.

With that out of the way, let's get started. If you'd like to skip ahead and see complete examples first, look at projects that have already been converted over, such as Zeroconf and xUnit for Devices. These are all libraries that have a combination of reference assemblies, platform-specific implementations, test applications and unit tests, so the spectrum of scenarios should be covered there.

One last note before diving deep: make sure your .gitignore file contains the following entries:

  • *.lock.json
  • *.nuget.props
  • *.nuget.targets

These files should not generally be checked in. In particular, the .nuget.props/targets files will contain a per-user path to the NuGet cache. These files are created by calling NuGet restore on your solution file.

Diving deep

As you start, have the following blank project.json handy as you’ll need it later:

{
    "dependencies": {        
    },
    "frameworks": {        
        "net452": { }
    },
    "runtimes": {
        "win": { }
    } 
}

This represents an empty project.json for a project targeting .NET 4.5.2. I'm using the short moniker here, but you can also use the full one. The framework string is the thing you'll likely have the most trouble with. Fortunately, when you get it wrong and try to build, you'll get what's probably the most helpful error message of all time:

Your project is not referencing the “.NETPortable,Version=v4.5,Profile=Profile111” framework. Add a reference to “.NETPortable,Version=v4.5,Profile=Profile111” in the “frameworks” section of your project.json, and then re-run NuGet restore.

The error literally tells you how to fix it. Awesome! The fix is to put .NETPortable,Version=v4.5,Profile=Profile111 in your frameworks section to wind up with something like:

{
    "dependencies": {        
    },
    "frameworks": {        
        ".NETPortable,Version=v4.5,Profile=Profile111": { }
    },
    "supports": { }
}

The eagle-eyed reader will notice that the first example had a runtimes section with win in it. This is required for desktop .NET Framework projects and for projects where CopyNuGetImplementations is set to true, like your application (we'll come back to that in a bit), but is not required for other library project types. If you have the runtimes section, then there's rarely, if ever, a reason to also have the supports section.

The easiest way to think about this:

  • For library projects, use supports and not runtimes
  • For your application project, (.exe, .apk, .appx, .ipa, website) use runtimes and not supports
  • If it’s a desktop .NET Framework project, use runtimes for both class libraries and your application
  • If it’s a unit test library executing in-place and you need references copied to its output directory, use runtimes and not supports

Now, take note of any packages with the versions that you already have installed. You might want to copy/paste your packages.config file into a temporary editor window.

The next step is to remove all of your existing packages from your project. There are two ways to do this: via the NuGet package manager console or by hand.

Using the NuGet Package Manager Console

Pull up the NuGet Package Manager Console and ensure the drop-down is set to the project you're working on. Uninstall each package in the project with the following command:
Uninstall-Package <package name> -Force -RemoveDependencies
Repeat this for each package until they’re all gone.
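
If the project has a long list of packages, Get-Package can enumerate the packages installed in a project, and its output can be piped into Uninstall-Package. This one-liner is a sketch (MyProject is a placeholder name), so double-check it on a scratch branch first:

Get-Package -ProjectName MyProject | Uninstall-Package -Force -RemoveDependencies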

By Hand

Delete your packages.config file, save the project file, then right-click the project and choose “Unload project”. Now right-click the project and select “Edit”. We need to clean up a few things in the project file; examples of the kinds of elements to remove follow this list.

  • At the top of the project file, remove any .props files that were added by NuGet (look for the ones pointing to a \packages directory).
  • Find any <Reference> element where the HintPath points to a NuGet package library. Remove all of them.
  • At the bottom of the file, remove any .targets files that NuGet added. Also remove any NuGet targets or Tasks that NuGet added (might be a target that starts with the following line <Target Name="EnsureNuGetPackageBuildImports" BeforeTargets="PrepareForBuild">).
  • If you have any packages that contain Roslyn Analyzers, make sure to remove any analyzer items that come from them.
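
As an illustration, the elements to remove look something like this, where Foo stands in for a real package name and the paths will vary by solution layout:

<Import Project="..\packages\Foo.1.0.0\build\Foo.props" Condition="Exists('..\packages\Foo.1.0.0\build\Foo.props')" />

<Reference Include="Foo">
  <HintPath>..\packages\Foo.1.0.0\lib\net45\Foo.dll</HintPath>
</Reference>

<Import Project="..\packages\Foo.1.0.0\build\Foo.targets" Condition="Exists('..\packages\Foo.1.0.0\build\Foo.targets')" />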

Save your changes, right-click the project in Solution Explorer, and reload the project.

Adding the project.json

In your project, add a new blank project.json file using one of the templates above. Ensure that the Build Action is set to None (which should be the default). Once it's present, you may need to save the project, then unload and reload it for NuGet to recognize it.

Now you can either use the Manage NuGet Packages UI to re-add your packages or add them to the project.json by hand. Remember, you don’t necessarily have to re-add every package, only the top-level ones. For example, if you use Reactive Extensions, you only need Rx-Main, not the four other packages that it pulls in.
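
For example, if your old packages.config listed Rx-Main (plus the packages it pulled in), the project.json for a desktop project needs only the top-level entry. The version below is illustrative; use the one you noted earlier:

{
    "dependencies": {
        "Rx-Main": "2.2.5"
    },
    "frameworks": {
        "net452": { }
    },
    "runtimes": {
        "win": { }
    }
}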

Build your project. If there are any errors related to NuGet, the error messages should guide you to the answer.

What you'll notice for projects other than desktop .NET executables or UWP appx's is that the output directory will no longer contain every referenced library. This saves disk space and helps the build be faster by eliminating extra file copying. If you want the files to be in the output directory, as for unit test libraries that need to execute in-place, or for an application itself, there are two extra steps to take:

  1. Unload the project once more and edit it to add the following to the first <PropertyGroup> at the top of the project file: <CopyNuGetImplementations>true</CopyNuGetImplementations>. This tells NuGet to copy all required implementation files to the output directory.
  2. Save and reload the project file. You'll next need to add that runtimes section from above. The exact contents will depend on your project type; a sketch for a Windows Store project follows this list, and see Zeroconf or xUnit for Devices for full examples.
    • For an AnyCPU Desktop .NET project win is sufficient
    • For Windows Store projects, you’ll need more
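
As a sketch of that Windows Store case, the default UWP template declares runtime identifiers along these lines; the exact set depends on which architectures you build for:

"runtimes": {
    "win10-arm": { },
    "win10-arm-aot": { },
    "win10-x86": { },
    "win10-x86-aot": { },
    "win10-x64": { },
    "win10-x64-aot": { }
}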

Once you repeat this for all of your projects, you’ll hopefully still have a working build(!) but now one where the projects are using the rich NuGet v3 capabilities. If you have a CI build system, you need to ensure that you’re using the latest nuget.exe to call restore on your solution prior to build. My preference is to always download the latest stable version from the dist link here: https://dist.nuget.org/win-x86-commandline/latest/nuget.exe.

Edge Cases

There may be some edge cases you hit when it comes to the transitive references. If you need to prevent any of the automatic project-to-project propagation of dependencies, the NuGet Docs can help.

In some rare cases, if you start getting compile errors due to missing System references, you may be hitting this bug, currently scheduled to be fixed in the upcoming 3.4 release. This happens if a NuGet package contains a <frameworkAssembly /> dependency that contains a System.* assembly. The workaround for now is to add <IncludeFrameworkReferencesFromNuGet>false</IncludeFrameworkReferencesFromNuGet> to your project file.
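
For instance, the workaround property can go in any PropertyGroup of the affected project file:

<PropertyGroup>
  <IncludeFrameworkReferencesFromNuGet>false</IncludeFrameworkReferencesFromNuGet>
</PropertyGroup>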

What this doesn’t do

There is often confusion between the use of project.json and its relation to the DNX/CLI project tooling that enables cross-compilation to different sets of targets. Visual Studio 2015 uses a new project type (.xproj) as a wrapper for these. This post isn't about enabling an existing .csproj or .vbproj project type (the ones most people have been using for “regular” projects) to start cross-compiling. Converting an existing project to use .xproj is a topic for another day, and not all project types are supported by .xproj.

What this does do is enable the NuGet v3 features to be used by the existing project types today. If you have a .NET 4.6 desktop project, this will not change that. Likewise if your project is using the Xamarin Android 6 SDK, this won’t alter that either. It’ll simply make package management easier.

Acknowledgments

I would like to thank Andrew Arnott for his persistence in figuring out how to make this all work. He explained it to me as he was figuring it out and then recently helped to review this post. Thanks Andrew! A shout out is also due to Scott Dorman and Jason Malinowski for their valuable feedback reviewing this post.

Syntax highlighting on WordPress with Prism and Markdown

February 5, 2016 · Coding

Using the JetPack plugin, WordPress supports using Markdown natively in the editor. This makes it much easier to write posts, but one feature has been a bit wonky — syntax highlighting.

Out of the box, there were no syntax highlighting plugins that met my needs. Prism.js is a popular highlighter, and while there are a few plugins to support it, the languages they pre-selected didn't include what I wanted. They also didn't seem to be regularly updated.

Fortunately, it wasn’t hard to create a child theme that contained a custom-downloaded version of the Prism JavaScript and CSS. I won’t walk through that part as it’s well documented elsewhere (see the previous link). What I did with the base child theme was to place my configured prism.js and prism.css files into a sub-directory and register them to be loaded by WordPress.

That almost worked. The trick is that Prism expects the syntax highlighting to be in tags matching <code class="language-foo"> where foo is whatever syntax rules it should apply. The trouble is that, by default, JetPack's Markdown -> HTML processor emits tags like <code class="json">, without the language- prefix. Seeing that, I hacked together a little fix-up script to inject prior to the Prism code executing:

jQuery(function($) {
    // Ensure each code block's class carries the "language-" prefix Prism expects
    $("code").each(function() {
        var className = $(this).attr('class');
        // lastIndexOf(prefix, 0) === 0 is an old-school startsWith check
        if (className && className.lastIndexOf('language-', 0) !== 0) {
            $(this).attr('class', 'language-' + className);
        }
    });
});

Caveat: I am not a “real” JavaScript programmer; you might have a better way to do this!

With it all assembled and uploaded to WordPress, I can now use the normal Markdown syntax and things get highlighted as expected.

I’ve zipped up my child theme that only contains these changes here.

You’re welcome to use that as a starting point; you’ll probably need to rename the child theme and specify the correct parent. You can also merge it with your current child theme.

As the Prism JavaScript and CSS is highly customizable, you may wish to generate your own from their site and use those in place.

Announcing Humanizer 2.0

January 30, 2016 · Coding

Earlier today we finalized and published the next major release of Humanizer. This version includes many fixes and new features, many of them coming directly from the community. A huge thank you to all those who have contributed!

You can find the latest Humanizer on NuGet and the website contains the latest documentation. The release notes contain the full details of the changes.

I wanted to call out a few things though:

  • The Humanizer package now supports selecting locales to install. This was done by using a little-known feature of NuGet called satellite packages. The main Humanizer package is now a meta-package that pulls in all language packages plus the core library; this is the existing behavior of Humanizer today.
    • To install English only, you may elect to install Humanizer.Core directly
    • To install a specific language or set of languages, you can install Humanizer.Core.<locale>, where <locale> represents a supported language package (a sketch follows this list)
  • There is currently a known issue with DNX with satellite packages. It might affect CLI too; track that one here.
  • For best results, using project.json/NuGet v3 is highly recommended over packages.config/NuGet v2. The key difference is that all of the child packages are transitively included instead of directly referenced in your packages.config file. project.json is supported in any project type, not just .NET Core or UWP projects.
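
As a sketch of the locale packages in use, a project.json dependencies section pulling in English plus German only might look like this (the version shown is illustrative):

{
    "dependencies": {
        "Humanizer.Core": "2.0.1",
        "Humanizer.Core.de": "2.0.1"
    }
}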

Finally, I wanted to thank Mehdi Khalili for trusting me with the stewardship of the project. Mehdi did a fantastic job building Humanizer up and getting the community involved to contribute back. I also would like to thank Alexander I. Zaytsev and Max Malook for their efforts in coordinating the community contributions and guiding the project forward.

Xamarin MVP

January 25, 2016 · Coding

Xamarin just announced their newest round of MVP awards, and I am very honored to have received one. I have always thought that C# and .NET are the best, most productive way to create applications for iOS and Android, and Xamarin's tools make this possible.

Hope to see you at Xamarin Evolve later this April!

Continuous Integration for UWP projects – Making Builds Faster

December 3, 2015 · Coding

Are you developing a UWP app? Are you doing continuous integration? Do you want to improve your CI build times while still generating the .appxupload required for store submission? If so, read-on.

Prerequisites

You’ll need VS 2015 with the UWP 1.1 tools installed. The UWP 1.1 tooling has some important fixes for creating app bundles and app upload files for command line/CI builds.

You'll also need to register your app on the Windows Dev Center and associate your project with it. Follow the docs for linking your project to the store from within VS first.

If you’re using VSO, you may need to setup your own VM to run a vNext build agent. I’m not sure VSO’s hosted agents have all the latest tools as of today. I run my builds in an A2 VM on Azure; it’s not the fastest build server but it’s good enough.

Building on a Server

Now that you have a solution with one or more projects that create an appx (UWP) app, you can start setting up your build scripts. One problem you’ll need to solve is updating your .appxmanifest with an incrementing version each time. I’ve solved this using the fantastic GitVersion tool. There’s a number of different ways to use it, but on VSO it sets environment variables as part of a build step that I use to update the manifest on build.

I use a .proj msbuild file with a set of targets the CI server calls, but you can use your favorite build scripting tool.

My code looks like this:

<Target Name="UpdateVersion">
    <PropertyGroup>
      <Version>$(GITVERSION_MAJOR).$(GITVERSION_MINOR).$(GITVERSION_BUILDMETADATA)</Version>
    </PropertyGroup>    
    <ItemGroup>
      <RegexTransform Include="$(SolutionDir)\**\*.appxmanifest">
          <Find><![CDATA[ Version="\d+\.\d+\.\d+\.\d+"]]></Find>
          <ReplaceWith><![CDATA[ Version="$(Version).0"]]></ReplaceWith>
      </RegexTransform>
    </ItemGroup>
    <RegexTransform Items="@(RegexTransform)" />    
    <Message Text="Assm: Ver $(Version)" />
</Target>

The idea is to call GitVersion, either by calling GitVersion.exe earlier in the build process, or by using the GitVersion VSO Build Task in a step prior to the build step.

GitVersion can also update your AssemblyInfo files, if you'd like.

Finally, at the end of the build step, you’ll want to collect certain files for the output. In this case, it’s the .appxupload for the store. In VSO, I look for the contents in my app dir, MyApp\AppPackages\**\*.appxupload.

If you setup your build definition to build in Release mode, you should have a successful build with a .appxupload artifact available you can submit to the store. Remember, we’ve already associated this app with the store, and we’ve enabled building x86, x64, and arm as part of our initial run-through in Visual Studio.

The problem

For your safety, a CI build will by default only generate the .appxupload file if you’re in Release mode with .NET Native enabled. This is to help you catch compile-time errors that would delay your store submission.

That’s well-intentioned, but it can severely slow down your builds. On one project I’m working on, on that A2 VM, a “normal” debug build takes about 14 min while a Release build takes 81 minutes! That’s too long for CI.

Fortunately, there’s a few things we can do to speed things up if you’re willing to live a bit dangerously.

  1. Force MSBuild to create the .appxupload without actually running the .NET Native compilation – yes, it is possible!
    • In your build definition, pass the additional arguments to MSBuild: /p:UseDotNetNativeToolchain=false /p:BuildAppxUploadPackageForUap=true. This overrides two variables that control the use of .NET Native and packaging.
  2. If you have any UWP Unit Test projects, you can disable package generation for them if you’re not running those unit tests on the CI box. There is a g̶o̶o̶d̶  reason for this — it’s hard. Running UWP CI tests requires your test agent to be running as an interactive process, not a service. You need to configure your build box to auto-login on reboot and then startup the agent.

    In your test projects, add the following <PropertyGroup> to your csproj file:

<!-- Don't build an appx for this in TFS/command line msbuild -->
<PropertyGroup>
  <GenerateAppxPackageOnBuild Condition="'$(GenerateAppxPackageOnBuild)' == '' and '$(BuildingInsideVisualStudio)' != 'true'">false</GenerateAppxPackageOnBuild>
</PropertyGroup>

This works because the .appxupload doesn’t actually contain native code. It contains three app bundles (one per platform) with MSIL, that the store compiles to native code in the cloud. The local .NET Native step is only a “safety” check, as is running WACK. If you regularly test your code in Release mode locally, and have run WACK to ensure your code is ok, then there’s no need to run either on every build.

After making those two adjustments, I’m able to generate the .appxupload files on every build and the build takes the same 13 min as debug mode.

Surface Book or Surface Pro 4?

October 6, 2015 · Coding

This evening, at the Windows 10 Devices fan celebration in NYC, I got to use the Surface Book (and the other devices announced today) and talk to the product guys about the Surface Pro 4 and the Surface Book. One of my questions to them stemmed from a question at work regarding the split hinge on the Surface Book; I thought the answer was interesting, so here goes (read on for my comparison of the SP4 vs. the SB).

They said the split hinge was a deliberate design decision, stemming from the following goals:

  • To keep the base as thin as possible
  • To have a “perfect” keyboard. The travel on the keys is 1.6mm, which is greater than most laptop keyboards
  • When the lid is closed, they didn’t want the keys to scuff up the screen.
    • To address this, the keyboard is sometimes slightly recessed in the case – it is like that on the MacBook Pro I have. The problem is that's wasted space, and they wanted to make the thing thinner.

Also, when the lid is open, they had to get the balance exactly right so that when you push against the screen (it is a touch screen after all), it doesn’t tip over. Many/most other 2-in-1’s don’t have the balance quite right and are “tipsy”. Having tried the Surface Book, I can say it’s certainly not tipsy. The “dynamic fulcrum hinge” has some role in this too.

When it comes to a choice between a Surface Pro 4 and the Surface Book, I’d have to say that the differences are primarily around usage:

  • Surface Pro 4 is a tablet and can run its full battery charge without its keyboard
  • Surface Pro 4’s keyboard is better than the previous gen one, but for people that do a lot of typing (developers?), it may not be as ideal. In “lap mode”, the SP4 keyboard still has some “bounciness” as the cover overall could be stiffer.
  • Surface Book’s “clipboard” has three hours of battery life on its own. The remaining 9 hours are in the base (for a total of 12). That’s why they call it a clipboard and not a tablet, because the tablet usage is intended as a secondary/auxiliary mode, not the primary.
  • The Surface Book’s keyboard is really, really nice.
  • The 13.5” screen size feels bigger than it is due to its aspect ratio and the resolution. It also has a very narrow bezel, so the screen goes almost to the edge.

Both devices will have the same memory/storage capabilities, maxing out at 16 GB/1 TB. The 1 TB storage isn't available yet (it will be in a month or two) as they are finishing testing those components. They are using Samsung 3D V-NAND modules, so the more storage you get, the faster it actually is. The pen is really nice and has a great feel to it. Even for people with messy handwriting, the friction level on the screen is the right amount to have control and write something legibly.

Both machines are priced at about $2700 fully loaded (16GB/1TB). Which one to get really depends on your usage and needs; I have a feeling most developers would be happiest with the Surface Book while non-developers would probably like the Surface Pro 4 best.

Enabling source code debugging for your NuGet packages with GitLink

September 23, 2015 · Coding

Recently on Twitter, someone was complaining that their CI builds were failing due to SymbolSource.org either being down or rejecting their packages. Fortunately, there’s a better way than using SymbolSource if you’re using a public Git repo (like GitHub) to host your project — GitLink.

Symbols, SymbolSource and NuGet

Hopefully by now, most of you know that you need to create symbols (PDBs) for your release libraries in addition to your debug builds. Having symbols helps your users troubleshoot issues that may crop up when they're using your library. Without symbols, you need to rely on hacks, like using dotPeek as a Symbol Server. It's a hack because the generated source code usually doesn't match the original, and it certainly doesn't include any helpful comments (you do comment your code, right?)

So you've updated your project build properties to create symbols for release; now you need someplace to put them so your users can get them. Up until recently, the easiest way has been to publish them on SymbolSource. You'd include the PDB files in your NuGet NuSpec, and then run nuget pack MyLibrary.nuspec -symbols. NuGet then creates two packages, one with your library and one just with the symbols. If you then run nuget push MyLibrary.1.0.0.nupkg, and there's also a symbols package alongside, NuGet will push that to SymbolSource instead of NuGet.org. If you're lucky, things will just work. However, sometimes SymbolSource doesn't like your PDBs and your push will fail.

The issues

While SymbolSource is a great tool, there are some shortcomings.
* It requires manual configuration by the library consumer, who has to know to go into VS and add the SymbolSource URL to the symbol search path
* It slows down your debugging experience. VS will by default check every configured symbol server for matching PDBs. That leads many people to either disable symbol loading entirely or selectively load symbols. Even if you selectively load symbols, the load is still slow, as VS has no way to know which symbol server a PDB might be on and must check all of them.
* It doesn't enable source code debugging. PDBs can be indexed to map original source code metadata into them (the file locations, not contents). If you've source-indexed your PDBs and the user has source server support enabled, VS will automatically download the matching source code. This is great for OSS projects with their code on GitHub.

GitLink to the Rescue

GitLink provides us an elegant solution. When GitLink runs after your build step, it detects the current commit (assuming the sln is in a git repo clone), detects the provider (BitBucket and GitHub are currently supported), and indexes the PDBs to point to the exact source location online. Of course, there are options to specify commits, remote repo location URLs, etc. if you need to override the defaults.

After running GitLink, just include the PDB files in your nuspec/main nupkg alongside your dll files and you're done (a sketch follows). Upload that whole package to NuGet (and don't use the -symbols parameter with nuget pack). This also means that users don't need to configure a symbol server, as the source-indexed PDBs will be alongside the dll — the location VS will auto-load them from.
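
A sketch of what that looks like in the nuspec's files section, with MyLibrary standing in for your assembly name:

<files>
  <file src="bin\Release\MyLibrary.dll" target="lib\net45" />
  <file src="bin\Release\MyLibrary.pdb" target="lib\net45" />
</files>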

An example

Over at xUnit and xUnit for Devices, we've implemented GitLink as part of our builds. xUnit builds are set up to run msbuild on an “outer” .msbuild project with high-level tasks; we have a GitLink task that runs after our main build task.

As we want the build to be fully automated and not rely on exe’s external to the project, we “install” the GitLink NuGet package on build if necessary.

Here's the gist of our main CI target, which we invoke with msbuild xunit.msbuild /t:CI (abbreviated for clarity):

<PropertyGroup>
  <SolutionName Condition="'$(SolutionName)' == ''">xunit.vs2015.sln</SolutionName>
  <SolutionDir Condition="'$(SolutionDir)' == '' Or '$(SolutionDir)' == '*Undefined*'">$(MSBuildProjectDirectory)</SolutionDir>
  <NuGetExePath Condition="'$(NuGetExePath)' == ''">$(SolutionDir)\.nuget\nuget.exe</NuGetExePath>
</PropertyGroup>

<Target Name="CI" DependsOnTargets="Clean;PackageRestore;GitLink;Build;Packages" />

<Target Name="PackageRestore" DependsOnTargets="_DownloadNuGet">
  <Message Text="Restoring NuGet packages..." Importance="High" />
  <Exec Command="&quot;$(NuGetExePath)&quot; install gitlink -SolutionDir &quot;$(SolutionDir)&quot; -Verbosity quiet -ExcludeVersion -pre" Condition="!Exists('$(SolutionDir)\packages\gitlink\')" />
  <Exec Command="&quot;$(NuGetExePath)&quot; restore &quot;$(SolutionDir)\$(SolutionName)&quot; -NonInteractive -Source @(PackageSource) -Verbosity quiet" />
</Target>

<Target Name='GitLink'>
  <Exec Command='packages\gitlink\lib\net45\GitLink.exe $(MSBuildThisFileDirectory) -f $(SolutionName) -u https://github.com/xunit/xunit' IgnoreExitCode='true' />
</Target>

<Target Name='Packages'>
  <Exec Command='"$(NuGetExePath)" pack %(NuspecFiles.Identity) -NoPackageAnalysis -NonInteractive -Verbosity quiet' />
</Target>

There are a few things to note from the snippet:
* When installing GitLink, I use the -ExcludeVersion switch. This is so it’s easier to call later in the script w/o remembering to update a target path each time.
* I’m currently using -pre as well. There’s a number of bugs fixed since the last stable release.

The end result

If you use xUnit 2.0+ or xUnit for Devices and have source server support enabled in your VS debug settings, VS will let you step into xUnit code seamlessly.

If you do this for your library, your users will thank you.

UWP NuGet Package Dependencies

August 29, 2015 · Coding

[Updated: 9/15/15 on the NuGet package contents at the end]

In my last post, Targeting .NET Core, I mentioned that NuGet packages targeting .NET Core and using the dotnet TFM need to list their dependencies. What may not be immediately obvious, as this is new behavior for UWP projects, is that UWP packages need to list their BCL dependencies too, not just “regular” NuGet references.

The reason for this is that UWP projects also use .NET Core and may elect to use newer BCL package versions than the default. While the uap10.0 TFM does imply BCL + Windows Runtime, it doesn’t really say what version of the dependencies you get. Instead, that’s in your project.json file, which by default includes the Microsoft.NETCore.UniversalWindowsPlatform v5.0.0 “meta-package”, which pulls in most of the .NET Core libraries at a particular version. But what happens if newer BCL packages are published? Right now, the OSS BCL .NET Core packages are being worked on and they’re a higher version – System.Runtime is 4.0.21-beta*.

In Windows 8.1 and Windows 8, this wasn't an issue because those platforms each had a fixed set of BCL references. You'd know for sure what BCL versions you'd get on each of those. But now with UWP, that's no longer true, so you need to specify them.

Fortunately, you don’t have to figure out all of the dependencies by hand. Instead, you can use my handy NuSpec.ReferenceGenerator tool (NuGet|GitHub) to add those dependencies to your NuSpec file.

The ReadMe is fairly detailed, but for the majority of projects, if you have a NuSpec file whose filename matches your project name (like MyAwesomeLibrary.csproj with a MyAwesomeLibrary.nuspec sitting somewhere under the .sln dir), adding the reference should be all you need.

For a UWP Class Library package, you need the following:

  • A dependency group in your NuSpec with the uap10.0 TFM (sketched below)
  • In your Project Build options for Release mode, choose “generate library layout”
  • Copy the entire directory structure of the output to your package's \lib\uap10.0 dir
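
A sketch of that dependency group; the ids and versions here are illustrative, and a tool like NuSpec.ReferenceGenerator will compute the real list for you:

<dependencies>
  <group targetFramework="uap10.0">
    <dependency id="System.Runtime" version="4.0.20" />
    <dependency id="System.Collections" version="4.0.10" />
  </group>
</dependencies>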

Targeting .NET Core

July 29, 2015 · Coding

Problem

Since DNX was announced, library authors have been inundated with requests to support .NET Core and the CoreCLR. Up until now, the only real option was to use the DNX-based project.json build system with the Visual Studio xproj projects. Adding these project types into an existing project that already supports a wide-range of platform targets can be challenging. There are a few issues with the current approach:
– Not all project types can be built with project.json
– It’s been a moving target as DNX is rightfully still in beta.
– Without proper guidance, authors have been targeting dnxcore50 in their packages intended for .NET Core instead of dotnet
– To be fair, dotnet is a recent update that has been little publicized

Starting today though, there’s a better way. Just make sure to install the Windows developer tooling as it includes this new functionality.

Terminology

If we go back to the .NET Core presentation back in November, you may remember this diagram:

In terms of terminology, .NET Core should be your target; CoreCLR is just a runtime. Referring to the diagram, the dnxcore50 Target Framework Moniker refers to the box in the upper-right — it's the ASP.NET 5 app model: BCL + DNX-specific libraries. Similarly, uap10.0 is the Windows Universal app model: BCL + Windows Runtime.

Many (most?) libraries do not actually need the DNX or WinRT dependencies. All they really need are the BCL libraries. What then is the target there? The answer is dotnet. By using dotnet, you instead specify your dependencies in your nuget package and your package will then run on any supported runtime, including CoreCLR, .NET Native and .NET 4.6 (assuming you’re using the newest BCL packages.)

Existing Libraries

What has been lost in the commotion around DNX, CoreCLR and .NET Core is the fact that “Profile259”-compatible Portable Class Libraries, class libraries that target a minimum of .NET 4.5, Windows 8 and Windows Phone 8, can run on CoreCLR as-is. You do not need to create a new project or target newer contract/BCL references. All you need is to put your existing library into \lib\dotnet in your NuGet package, in addition to the \lib\portable-* directory it is in now, and list your dependencies in the package.
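
A sketch of the resulting package layout; the portable folder name shown is illustrative and should match the profile your library already targets:

MyLibrary.nupkg
    \lib\portable-net45+win8+wp8\MyLibrary.dll   (existing binary, for NuGet v2 clients)
    \lib\dotnet\MyLibrary.dll                    (copy of the same binary, for dotnet-aware clients)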

The only time you might need a new project is if you have platform-specific code. In that case, the new UWP tools for Windows 10 have a better option: “Modern PCLs”. Once you install the UWP tools, create a new Class Library (Portable) in your solution and make sure only .NET 4.6, Windows Universal 10 and ASP.NET 5 are checked. When you do that, you'll get a modern PCL that uses project.json and pulls in the newest .NET Core packages as references. You can then use linked files, shared projects and your existing techniques to build a class library that targets .NET Core. Then, put that in your \lib\dotnet directory and create the dependencies element for it. No magic needed. Using this technique, I was able to adapt several OSS libraries to support .NET Core in very little time.

NuGet Dependencies – the heart of dotnet

As I described in my previous post, the key to making dotnet work is specifying all of your dependencies. This can be a tedious and error-prone process. I've built a tool, NuSpec.ReferenceGenerator, that automates creation of the dependency element for the majority of cases. The tool works with both existing compatible PCL projects and the new “modern PCL” projects.

Just add the NuSpec.ReferenceGenerator NuGet to your package and build. I won’t go over all of the docs, but you can find those on the project site.

At build time, the tool will read the references your assembly requires, determine the source NuGet package and version, and create the <dependencies> element in the NuSpec.

Call To Action

  • If you maintain a library, review any areas where you are currently targeting dnxcore50 and update your NuGet package to put those bits in dotnet. If you are not using any Microsoft.Dnx references, and the majority of libraries do not, then there’s no reason to target dnxcore50 when dotnet reaches a far broader set of targets.
    • Bonus: by using the “Modern PCL” projects and/or reusing your existing PCL, your dependencies will be the stable versions, not pre-release. That means your package can be stable too and not wait until Q1 2016!
  • If you currently have a library that’s a “System.Runtime”-based PCL, one that’s at least portable-win8+net45+wp8, then simply add a copy of the binary to your NuGet package in the dotnet directory. Adding it to \lib\dotnet and leaving a copy in lib\portable-win8+net45+wp8 allows it to work with .NET Core and the existing NuGet v2 clients.
  • Ensure your NuGet package lists all of its dependencies in a <dependencies targetFramework="dotnet"> element (a sketch follows this list). Use the stable package versions, not the DNX pre-release versions. If you don't want to create and maintain this by hand, use my ReferenceGenerator.
  • Last, but most importantly, make sure your nuget.exe version is up-to-date by running nuget update -self. Version 2.8.6 or later is required to properly package dotnet.
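
To illustrate that dependencies element, here is a sketch; the ids and versions are illustrative, and ReferenceGenerator will compute the real ones:

<dependencies>
  <group targetFramework="dotnet">
    <dependency id="System.Runtime" version="4.0.20" />
    <dependency id="System.Linq" version="4.0.0" />
  </group>
</dependencies>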

Demystifying PCLs, .NET Core, DNX and UWP (Redux)

June 16, 2015 · Coding

[Disclaimer: Many of the things I talk about here may not work in the RC of Visual Studio 2015. The information is taken from Microsoft’s public repos on GitHub and from conversations with members of the .NET team. The information herein is accurate at the time of writing but as with everything pre-release, things may change!]

Intro

A few days ago, I posted an article trying to explain my current understanding of how the new .NET Core libraries fit into the existing ecosystem. Since then, I’ve had more conversations with a few people on the .NET Team (many thanks to David Kean and Eric St. John!) that clarify the meaning of the dotnet target framework and how the pieces all fit together. This blog will attempt to explain further.

TL;DR

dotnet is not a specific target framework—it means “I’m compatible with any target framework that my dependencies are compatible with.” Read on for more.

Let’s start at the very beginning (a very good place to start!)

To help explain where things are going, it helps to have some background for context. Before we had any such thing as Portable Class Libraries (PCLs), if we wanted to use a library on multiple frameworks, we had to compile it multiple times. The figure below illustrates the state of the world circa 2010.
[Figure: Before PCLs]

The only real strategy for code sharing was to use linked files and many #ifdefs, as there were wide differences in capabilities between the frameworks. A solution would contain multiple projects, one per target framework. Each project would contain platform-specific references and would generate a binary compatible only with its target platform. This situation was not scalable as future frameworks and platforms would only lead to even more file linking.

The birth of PCLs

In early 2011, Microsoft released the first version of Portable Class Libraries as a toolset for Visual Studio 2010. These tools allowed creation of single binary targeting the .NET Framework, Silverlight, Windows Phone 7 and Xbox 360. They accomplished this by finding the lowest common denominator of functionality shared among the target frameworks. The available functionality changed to match your selection:
[Figure: PCL target framework dialog]

From this early start, the tools grew over time. Visual Studio 2012 included support for PCLs without the need for an add-in. The list of target frameworks and versions increased; now you could choose .NET Framework 4 or 4.5. You could choose Silverlight 4 or Silverlight 5. Windows Phone gained options for 7.5, 8 and 8.1. We saw support added for additional platforms like Windows 8 and 8.1 Store applications. In 2013, Windows Phone App 8.1 made its first appearance. In early 2014 Xamarin added support for Portable Class Libraries, providing additional target frameworks for their iOS and Android platforms.

Making the sausage

They say that if you enjoy eating sausage, you should never see how it’s made. I personally don’t find ignorance to be bliss and strive to understand how things are made. The same could be said for PCLs—don’t look under the covers unless you’re prepared for what you may see! As one might imagine, there’s quite a bit going on to enable PCLs. In the current system, there are really two main components: contract assemblies and profiles.

Contract Assemblies

Contract assemblies are a special kind of assembly that contains types/metadata but no actual implementation. Think of this as a compile-time reference. A library can reference one or more contract assemblies and the compiler will use the type information in the file. At runtime, when a type is requested from the contract assembly, the loader sees either a TypeForwarder pointing to a concrete implementation or assembly metadata indicating redirection is allowed for the library. The indirection enables types to live in different assemblies in the implementation (think Silverlight vs .NET) but be referenced from a single dll. It also enables the runtime to substitute one type for another even if the assembly versions don’t match.

The best way to think of a contract assembly is like a promise that a specified surface area is present. Your library can reference that assembly and then it’ll run on any target framework that implements that contract. Not all target frameworks support all versions of a particular contract. When working with a least-common-denominator based system, like PCLs, you’ll see fewer types available when you check more/older target frameworks. What Microsoft has done is pre-generate all of the permutations of those checkboxes so that you have a contract assembly for each possible option.

Profiles

That leads us squarely into PCL profiles. These are the things like Profile259 or Profile78 that people most associate with PCLs. In order to support every permutation of target frameworks that you, as a library author, want to choose, Microsoft pre-computed over fifty profiles to date. The profiles are collections of contract assemblies that represent the intersections of the public surface area from the targets. What people really mean by saying Profile259 is that they’re targeting .NET 4.5, Windows 8, Windows Phone 8 Silverlight and Windows Phone 8.1. The number is just a shorthand for spelling out each target framework. It was never really the intent for the profiles to be what people talked about, it was always supposed to be about the target platforms.

What each profile represents, then, is a set of contract assemblies supported by a set of target frameworks. The profiles, in sum, represent every combination of possible contract assemblies. Taken one step further, what ultimately matters to a library isn’t the target framework; rather, what matters to a library are the contracts available to it through the selected set of target frameworks. The profile itself is just a transitive way to get that set of contracts.

Enter the NuGet

It's not possible to have a complete discussion about PCLs without mentioning NuGet. In parallel to the rise of PCLs, community support was growing around using NuGet (and its package format by extension) as the de facto way of distributing library components. One of NuGet's key features is the ability to support multiple target platform versions within a single package. NuGet accomplishes this by using Target Framework Monikers (TFMs) that represent each platform. For example, net means .NET Framework, wp is Windows Phone and netcore is Windows Store. NuGet adds a version number to the TFM so that we get the common usage: net45, wp8, netcore451, which translate to .NET 4.5, Windows Phone 8 and .NET Core 4.5.1 (Windows 8.1) respectively. PCLs are supported in NuGet by using the portable TFM combined with the set of supported TFMs that the library targets. Using our earlier example of PCL Profile259, that would be portable-net45+netcore45+wpa81+wp8 inside a NuGet package.

The breaking point

There are two breaking points in this system: 1) Library authors need to update their NuGet packages to specify compatible targets, and 2) Using pre-computed contracts for PCLs is not scalable. This summer, two new runtimes, CoreCLR and .NET Native, are being introduced; the desktop .NET Framework has a new 4.6 version coming out too. At the same time, a new application platform, the .NET Execution Environment (DNX), on which ASP.NET 5 is based, and a new version of the Windows “modern” platform, the Universal Windows Platform (UWP), are set to appear. It was time for a change. Adding support for UWP and DNX in combination with CoreCLR, desktop .NET and .NET Native would be untenable with pre-computed contracts. Further, with .NET Core becoming open source and moving to GitHub, .NET 4.6, CoreCLR and .NET Native would support an application-local Base Class Library (BCL). The surface area available to those newer platforms was poised to explode.

To make the issue concrete, let’s look at an example. Most people are likely familiar with the Newtonsoft.Json NuGet package for working with JSON data. The library, Json.NET, aims to support every .NET platform available. In addition to compiling the code many different times with #ifdefs to accommodate older platforms, as new platforms appear, the Json.NET author needs to update the NuGet package too. That means that as new platforms like UWP and DNX appear, despite targeting a set of contract libraries (remember, all libraries really reference contracts, not platforms), the author needs to keep updating packages to add each new platform to the supported platform list.

What we’re experiencing here is an impedance mismatch between what the library cares about and what NuGet supports. The mismatch highlights, as fundamentally broken, a model that puts the onus on each library author to keep up-to-date with the available platforms and contract-to-platform support matrix. Libraries that would otherwise work on a target platform may not be understood as compatible by NuGet. While it is true that NuGet has a set of heuristics to accommodate additional platforms, the heuristics are also not scalable as they’re hard-coded into each NuGet client version.

Fixing the impedance mismatch: dotnet to the rescue

Over the past year, as “One Microsoft” has taken hold, you started to see the NuGet and .NET CLR teams work much closer together. Based on community feedback, NuGet was chosen as the de facto mechanism to deliver future versions of .NET that can run as self-contained app-local packages. In order to support the ever-increasing complexity placed upon it, NuGet had to evolve. You can read more about NuGet’s evolution to 3.0 on the NuGet team blog in posts from April 2014-November 2014.

One of the most recent changes to NuGet, and the .NET ecosystem by extension, is support for the dotnet TFM. The meaning of dotnet wasn’t clear at first and as reflected in my earlier blog post, it seemed like it was the new target for the “new” portable .NET packages being published to NuGet and consumed by DNX and UWP. The reality isn’t quite like that but is far more interesting. Rather than dotnet representing a particular target like netcore45, dnxcore5 or net46, it really means “I’m compatible with any targets that my dependencies are, check those.” It gets NuGet out of the platform guessing game and instead walks the dependency graph.

Practically speaking, the most common set of dependencies for any package will be its contracts – the assemblies referenced at build time. Today, with the platform-TFMs, those contracts don’t need to be listed in the NuGet package as they’re implied by the TFM. With the dotnet-based TFM, NuGet packages will have to specify their dependencies, even system ones. You can see this today with the project.json file that DNX projects use. By explicitly listing the dependencies (which may be CLR contracts), the mismatch between target framework and supported contracts is removed. Instead, each contract package declares its own support by way of its implementation.
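
For instance, a DNX-style project.json spells out even the system packages as dependencies; the versions here are illustrative:

{
    "dependencies": {
        "System.Runtime": "4.0.20",
        "System.IO.FileSystem": "4.0.0"
    }
}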

The way this is done is beyond the scope of this post, but you can get a sense of it by looking at the layout of the System.IO.FileSystem package below.
[Figure: System.IO.FileSystem package layout]

In the package, you can see two assemblies in the ref folder, called design-time façades, one for .NET 4.6 and one for everything else (CoreCLR, .NET Native, etc). The surface area is identical but they function a bit differently. The façades are used at build time to enable portable assemblies, which were built against contracts (System.Runtime-based), to actually resolve those types against the desktop reference assemblies (mscorlib-based). This lets an mscorlib assembly pass its version of string, which lives in mscorlib, to an API in a PCL that takes a string from System.Runtime. The same façades are used at runtime as well. This is something that should usually be considered trivia, as most people need not concern themselves with the minutiae.

The package contains three implementations of the contract, one for dnxcore50, one for net46 and one for netcore50 (UWP). When I said earlier that the new .NET Core packages would only support the newer platforms, this is the how/what/why. One last thing to note in the above picture, you can see that System.IO.FileSystem itself declares many other dependencies. This is expected; with small, granular, libraries the end result is that you pull in only what you need, not the whole framework.

None of this is to say that dotnet explicitly means the newer platforms, though. Microsoft may release the existing contract assemblies, the ones currently in the Profile* directories, as NuGet packages. If they do that, then a library that “targets” dotnet could target .NET 4.5/Win8 as well. The key is that the version number of each dependency would be lower than the new ones. The new .NET Core libraries, and their contracts, would all have a higher version number than the existing contracts.

This drives home the point that what dotnet really means is “check my dependencies and I’ll run on any platform my dependencies do.”

The fact that the new .NET Core libraries use this mechanism is actually orthogonal to dotnet’s meaning. dotnet adds its value today with existing code and libraries by changing the question of “what platforms does my library support” to “what dependencies does my library require?”

Coming back to the earlier example of Json.NET, if it were to use dotnet, it would also declare the contracts, with its version, that it needs. It would not have to know or care about what platforms are currently supported by those contracts. In the future, if some new unicorn platform were to appear, so long as newer versions of the contracts were published that supported the unicorn platform, Json.NET would happily run there without any foreknowledge.

Contracts or Dependencies?

Throughout this discussion, I've used the terms contracts and dependencies. From the perspective of a library author or consumer, these terms are often used interchangeably, but there is a difference. Contracts are one type of dependency – they are specifically crafted reference assemblies. Contracts are useful if you need multiple implementations of a library for different platforms. Aside from the built-in system reference assemblies, the other place you see contracts is in libraries that use the “bait and switch” PCL technique. The vast majority of libraries can be implemented without any platform-specific references and are thus simply dependencies. If this sounds confusing, don't worry too much about it. This is an advanced technique that most packages don't need to consider; the only takeaway is that whether contract or “regular” library, they both appear as dependencies in a package.

Wrapping it all up

At first glance, it's easy to think “whoa, this is complicated!” Upon stepping back though, hopefully the initial complexity melts away with the newfound understanding that what's happening here is that a layer is being removed. The layer was the platform. Up until NuGet v3 we were trying to cram a round peg into a square hole. We'd gather up an intersection of target frameworks and call it a profile. We'd calculate the contract assemblies for those and the compiler would reference those, but they stayed firmly in the background. Visual Studio intentionally hides the references behind a single .NET entry in a PCL project's references. This led to the platform support list being encoded within the NuGet package structure, leaving package authors scrambling to update their packages should a new platform emerge. In many cases, the existing code was already compatible, but a package update was still required. NuGet v3 eliminates this problem by removing the platform layer and going “direct to the dependencies.” This is an opt-in approach for packages that use the new dotnet TFM. Packages can contain both dotnet and the existing TFMs; they are not mutually exclusive.

The new version of .NET Core is dependent on these dependency-driven, framework agnostic packages, but the existing PCL profiles could fit into the model too. That said, dotnet doesn’t mean .NET Core any more than it means any other platform. They’re different things.