
Create and pack reference assemblies (made easy)

July 9, 2018 · Coding

Last week I blogged about reference assemblies, and how to create them. Since then, I’ve incorporated everything into my MSBuild.Sdk.Extras package to make it much easier. Please read the previous post to get an idea of the scenarios.

Using the Extras, most of that is eliminated. Instead, what you need is the following:

  1. A project for your reference assemblies. This project specifies the TargetFrameworks you wish to produce. Note: this project no longer has any special naming or directory conventions. Place it anywhere and call it anything.
  2. A pointer (ReferenceAssemblyProjectReference) from your main project to the reference assembly project.
  3. Both projects need to be using the Extras. Add a global.json to specify the Extras version (must be 1.6.30-preview or later):
    {
     "msbuild-sdks": {
       "MSBuild.Sdk.Extras": "1.6.30-preview"
     }
    }
    

    And at the top of your project files, change Sdk="Microsoft.NET.Sdk" to Sdk="MSBuild.Sdk.Extras"

  4. In your reference assembly project, use a wildcard to include the source files you need, something like: <Compile Include="..\..\System.Interactive\**\*.cs" Exclude="..\..\System.Interactive\obj\**" />.
  5. In your main project, point to your reference assembly by adding an ItemGroup with a ReferenceAssemblyProjectReference item like this:

    <ItemGroup>
     <ReferenceAssemblyProjectReference Include="..\refs\System.Interactive.Ref\System.Interactive.Ref.csproj" />
    </ItemGroup>
    

    In this case, I am using System.Interactive.Ref as the project name so I can tell them apart in my editor.

  6. That’s it. Build/pack your main project normally and it’ll restore/build the reference assembly project automatically.
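
Putting those steps together, here’s a minimal sketch of the two project files, using the System.Interactive names from above (the TargetFrameworks shown are illustrative):

<!-- refs\System.Interactive.Ref\System.Interactive.Ref.csproj (reference assembly project) -->
<Project Sdk="MSBuild.Sdk.Extras">
  <PropertyGroup>
    <TargetFrameworks>netstandard2.0;netcoreapp2.0</TargetFrameworks>
  </PropertyGroup>
  <ItemGroup>
    <Compile Include="..\..\System.Interactive\**\*.cs" Exclude="..\..\System.Interactive\obj\**" />
  </ItemGroup>
</Project>

<!-- System.Interactive\System.Interactive.csproj (main project) -->
<Project Sdk="MSBuild.Sdk.Extras">
  <PropertyGroup>
    <TargetFrameworks>net45;netstandard2.0;netcoreapp2.0</TargetFrameworks>
  </PropertyGroup>
  <ItemGroup>
    <ReferenceAssemblyProjectReference Include="..\refs\System.Interactive.Ref\System.Interactive.Ref.csproj" />
  </ItemGroup>
</Project>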

Notes

  • The tooling will pass AssemblyName, AssemblyVersion, FileVersion, InformationalVersion, GenerateDocumentationFile, NeutralLanguage, and strong naming properties into the reference assembly based on the main project, so you don’t need to set them twice.
  • The REFERENCE_ASSEMBLY symbol is defined for reference assemblies, so you can use #if REFERENCE_ASSEMBLY checks in your code.
  • Please see System.Interactive as a working example.

Create and Pack Reference Assemblies

July 3, 2018 · Coding

Update July 9: Read the follow-up post for an easier way to implement.


Reference assemblies: what are they, and why do you need them? Reference assemblies are a special kind of assembly that’s passed to the compiler as a reference. They contain no implementation and are not valid for normal assembly loading (you’ll get an exception if you try outside of a reflection-only load context).

Why do you need a reference assembly?

There are two main reasons you’d use a reference assembly:

  1. Bait and switch assemblies. If your assembly can only have platform-specific implementations (think of a GPS implementation library), and you want portable code to reference it, you can define your common surface area in a reference assembly and provide implementations for each platform you support.

  2. Selectively altering the public surface area due to moving types between assemblies. I recently hit this with System.Interactive (Ix). Ix provides extension methods under the System.Linq namespace. Two of those methods, TakeLast and SkipLast, were added to .NET Core 2.0’s Enumerable type. This meant that if you referenced Ix in a .NET Core 2.0 project, you could not use either of those as an extension method. If you tried, you’d get an error:

    error CS0121: The call is ambiguous between the following methods or properties: 'System.Linq.EnumerableEx.SkipLast<TSource>(System.Collections.Generic.IEnumerable<TSource>, int)' and 'System.Linq.Enumerable.SkipLast<TSource>(System.Collections.Generic.IEnumerable<TSource>, int)'.
    

    The only way out of this is to explicitly call the method like EnumerableEx.SkipLast(...). Not a great experience. However, we cannot simply remove those overloads from the .NET Core version since:

    • It’s not in .NET Standard or .NET Framework
    • If you used TakeLast from a .NET Standard library and then ran on .NET Core, you’d get a MissingMethodException.

The method needs to be in the runtime version, but we need to hide it from the compiler. Fortunately, we can do this with a reference assembly. We can exclude the duplicate methods from the reference on platforms where it’s built-in, so those get resolved to the built-in Enumerable type, and for other platforms, they get the implementation from EnumerableEx.

Creating reference assemblies

I’m going to explore how I solved this for Ix, but the same concepts apply for the first scenario. I’m assuming you have a multi-targeted project containing your code. For Ix, it’s here.

It’s easiest to think of a reference assembly as a different project with the same name as your main project. I put mine in a refs directory, which enables some conventions that I’ll come back to shortly.

The key to these projects is that the directory/project name match, so it creates the same assembly identity. If you’re doing any custom versioning, be sure it applies to these as well.

There are a couple of things to note:

  • In the project file itself, we include all of the original source files:
    <ItemGroup>
      <Compile Include="..\..\System.Interactive\**\*.cs" Exclude="..\..\System.Interactive\obj\**" />
    </ItemGroup>
    
  • The TargetFrameworks should be for what you want as reference assemblies. These do not have to match the frameworks you have an implementation for. For scenario #1 above, you’ll likely only have a single netstandard2.0 target. For scenario #2 (Ix), given that the surface area has to be reduced on specific platforms, it has more targets.
  • There is a Directory.Build.props file that provides common properties and an extra set of targets these reference assembly projects need. (Ignore the bit with NETStandardMaximumVersion, that’s me cheating a bit for the future 😉)

    In that props, it defines REF_ASSM as an extra symbol and sets ProduceReferenceAssembly to true so the compiler generates a reference assembly (see the sketch after this list).

    The other key thing in there is a target we’ll need to gather the reference assemblies from the main project during packing.

    <Target Name="_GetReferenceAssemblies" DependsOnTargets="Build" Returns="@(ReferenceAssembliesOutput)">
      <ItemGroup>
        <ReferenceAssembliesOutput Include="@(IntermediateRefAssembly->'%(FullPath)')" TargetFramework="$(TargetFramework)" />
        <ReferenceAssembliesOutput Include="@(DocumentationProjectOutputGroupOutput->'%(FullPath)')" TargetFramework="$(TargetFramework)" />
      </ItemGroup>
    </Target>
    

    With these, you can use something like #if !(REF_ASSM && NETCOREAPP2_0) in your code to exclude certain methods from the reference assembly on specific platforms. Or, for the “bait and switch” scenario, you may choose to throw a NotImplementedException in some methods (don’t worry, the reference assembly strips out all implementation, but it still has to compile).
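
For reference, the property side of that Directory.Build.props boils down to something like this (a sketch; the real file in the Ix repo has more, and the _GetReferenceAssemblies target shown above lives alongside these properties):

<PropertyGroup>
  <DefineConstants>$(DefineConstants);REF_ASSM</DefineConstants>
  <ProduceReferenceAssembly>true</ProduceReferenceAssembly>
</PropertyGroup>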

You should be able to build these reference assembly projects, and in the output directory you’ll see a ref subdirectory (in \bin\$(Configuration)\$(TargetFramework)\ref). If you open the assembly in a decompiler, you should see an assembly-level attribute: [assembly: ReferenceAssembly]. If you inspect the methods, you’ll notice they’re all empty.

Packing the reference assembly

In order to use the reference assembly and have NuGet/MSBuild do their magic, it must be packaged correctly. This means the reference assembly has to go into the ref/TFM directory. The library continues to go into lib/TFM, as usual. The goal is to create a package with a structure similar to this:

NuGet folder structure

The contents of the ref folder may not exactly match the lib, and that’s okay. NuGet evaluates each independently for the intended purpose. For finding the assembly to pass as a reference to the compiler, it looks for the “best” target in ref. For runtime, it only looks in lib. That means it’s possible you’ll get a restore error if you try to use the package in an application without a supporting lib.
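
To make the layout concrete, if you were laying it out by hand in a .nuspec, the files section would look something like this (a sketch; the paths and TFMs are illustrative, and the targets below automate this instead). Note the ref-only netcoreapp2.0 entry, which is served at runtime by the netstandard2.0 lib:

<files>
  <file src="bin\Release\net45\System.Interactive.dll" target="lib\net45" />
  <file src="bin\Release\netstandard2.0\System.Interactive.dll" target="lib\netstandard2.0" />
  <file src="bin\Release\net45\ref\System.Interactive.dll" target="ref\net45" />
  <file src="bin\Release\netstandard2.0\ref\System.Interactive.dll" target="ref\netstandard2.0" />
  <file src="bin\Release\netcoreapp2.0\ref\System.Interactive.dll" target="ref\netcoreapp2.0" />
</files>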

Out-of-the-box, dotnet pack gives us the lib portion. Adding a Directory.Build.targets above your main libraries gives us a place to inject some code into the NuGet pack pipeline:

<Target Name="GetRefsForPackage" BeforeTargets="_GetPackageFiles" 
        Condition=" Exists('$(MSBuildThisFileDirectory)refs\$(MSBuildProjectName)\$(MSBuildProjectName).csproj') ">

  <MSBuild Projects="$(MSBuildThisFileDirectory)refs\$(MSBuildProjectName)\$(MSBuildProjectName).csproj" 
           Targets="_GetTargetFrameworksOutput">

    <Output TaskParameter="TargetOutputs" 
            ItemName="_RefTargetFrameworks" />
  </MSBuild>

  <MSBuild Projects="$(MSBuildThisFileDirectory)refs\$(MSBuildProjectName)\$(MSBuildProjectName).csproj" 
           Targets="_GetReferenceAssemblies" 
           Properties="TargetFramework=%(_RefTargetFrameworks.Identity)">

    <Output TaskParameter="TargetOutputs" 
            ItemName="_refAssms" />
  </MSBuild>

  <ItemGroup>
    <None Include="@(_refAssms)" PackagePath="ref/%(_refAssms.TargetFramework)" Pack="true" />
  </ItemGroup>
</Target>

This target gets called during the NuGet pack pipeline and calls into the reference assembly project using a convention: $(MSBuildThisFileDirectory)refs\$(MSBuildProjectName)\$(MSBuildProjectName).csproj. It looks for a matching project in a refs directory. If it finds one, it obtains that project’s TargetFrameworks and then gets the reference assembly for each one, calling the _GetReferenceAssemblies target that we defined in the Directory.Build.props in the refs directory (thus applying it to all reference assembly projects).

Building

This will all build and pack normally using dotnet pack, with one caveat: because there’s no ProjectReference between the main project and the reference assembly projects, you need to build the reference assembly projects first. You can do that with dotnet build. Then call dotnet pack on your regular project and it’ll put it all together.

OSS Build and Release with VSTS

May 15, 2018 · Coding

Over the past few weeks I have been moving the build system for the OSS projects I maintain over to VSTS. Up until recently I was using AppVeyor for builds, as they have provided a generous free offering for OSS for years. A huge thank you goes out to them for their past and ongoing support of OSS. So why move to VSTS? There are three reasons for me:

  1. Support for public projects. This is key since there’s no point in using their builds if users can’t see the results.
  2. Release Management. The existing build systems like AppVeyor, Jenkins, TeamCity, and Travis can all build a project. Sure, they have different strengths and weaknesses, and some offer free OSS builds as well, but none of them really has a Release Management story. That is, they can build artifacts… but then what? How do the bits get where you want them, like NuGet, MyGet, a Store, etc.? This is where release management fits in as a central part of CI/CD. More on this later.
  3. Windows, Linux, and Mac build host support in one system. It’s possible to run a single build on all three at the same time (fan out/in), like how VS Code does. No other host can do this easily today using a hosted build pool. I should note that using a hosted build pool is critical for security if you want to build pull requests from public forks. You don’t want to be running arbitrary code on a private build agent. Hosted agent VMs are destroyed after each use, making them far safer.

Life without Release Management

Many projects strive to achieve a continuous deployment pipeline without using Release Management (RM from now on). This is often achieved by using the build script or configuration to do some deployment steps under certain conditions, like building a certain branch such as master. In some ways, the GitFlow branching strategy encourages this, making it easy to decide that builds from the develop branch are pre-release and should go to a dev environment, while builds from master are production and should thus be deployed to a production environment. To me, this conflates the real purpose of branches, which should be isolation of code churn, with the deployment target. I believe any artifact should be able to be deployed to any environment; releasing is a different process from building and should have no bearing on which branch the artifact came from. For the vast majority of projects, I believe that GitHub Flow or Release Flow (video) is a better, simpler option.

Without RM, a pipeline to deploy a library might look something like this:

  1. Builds on the develop branch get deployed to MyGet by the build system. To me, it doesn’t much matter if it’s in the build script directly or if it’s build server configuration (like the deployment option in AppVeyor).
  2. To create a stable release, code is merged to master and then tagged. Often, tags are the mechanism that determines whether there should be a release — effectively, tagging a commit triggers a build, which gets released to NuGet.org.

In this model, it’s usually a different build that gets deployed to release than to dev. I believe that mixing build and release like this ultimately leads to less flexibility and more coupling. The source system has to know about the deployment targets. If you need to change the deployment target, or add another one, you have to commit to the source and rebuild.

Following the single responsibility principle, a build should produce artifacts, that’s it. Deployment is something else, don’t conflate the two concepts. Repeat after me: Build is just build. I think projects have tended to mix the two, in part because there wasn’t a good, free, RM tool. It was easy and pragmatic to do both from the build tool. That changes now with VSTS public projects.

CD Nirvana with Release Management

VSTS has a full featured RM tool (deep dive on docs here) that is part of the platform. It is explicitly designed around the concept of artifacts, environments, and releases. In short, a build that contains artifacts can trigger a release. A release defines one or more environments with specific steps that should execute for each. Environments can be chained after another one, enabling a deployment promotion flow. There are many ways to gate each environment, automated and manual. A configuration I use frequently is to have two environments: MyGet and NuGet (dev and prod, respectively). The NuGet environment has an approval step so that releases don’t automatically flow from dev to prod; rather, I can decide to release to production at any time. Any build is a potential release.

Release steps can do anything and there are many existing tasks built-in for common things (like NuGet push, Azure blob copy, Azure App Services, and Docker) and a rich marketplace for things that aren’t (like creating a GitHub release, tagging the commit, and uploading artifacts to the GitHub release). In addition, you can run any custom script.

I think it’s easier to show by example, and that’s what follows in the next sections.

Versioning

Having an automatic version baked into your build artifacts is a crucial element. I use Andrew Arnott‘s Nerdbank.GitVersioning package to handle that for me. I set the Major.Minor in a version.json file and it increments the Patch based on the Git commit height since the last minor change. Add a prerelease tag to the version, if desired. You can control where the git height goes if you don’t want it in the patch (like 1.2.0-build.{height}). The default is the patch, and I think it’s completely okay to have a release like 1.2.42 if there were 42 commits since the version bump. I believe too much time is wasted on “clean” versions; it’s just a number :). Nerdbank.GitVersioning can also set the build number in the agent, which is really handy for knowing which version was just built.
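
For illustration, a minimal version.json might look like this (a sketch; see the Nerdbank.GitVersioning docs for the full schema, including how to control placement of the git height):

{
  "version": "1.2-preview"
}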

Structuring your branches without overkill

There are many theories around how to structure your branches in Git. I tend to go with simplicity, aiming for a protected master branch with topic/feature branches for work. In my view, the sole reason for branches should be around code churn and isolation.

When it comes to delivery, there are two main schools of thought: releases and continuous. Releases are the traditional way of shipping software. A group of features is batched together and shipped out once someone decides “it’s ready.” Continuous Delivery (CD) takes the thought out of releases: every build gets deployed. Note that doesn’t mean every build gets deployed to all environments, but every build is treated as if it could be.

I bring this up because I choose different tagging/branching strategies based on whether I’m doing releases or full CD.

If you’re doing continuous delivery, I would suggest using a single master branch with a stable version in it. Every build triggers a release, at least to a CI environment. At some point (it could be every release, a set schedule, etc.), that build gets promoted to the production environment. The key here is that it’s a promotion process; builds are fixed and flow through the environments.

If you’re doing release-based delivery, I would suggest using master with a prerelease version set in it (like 1.2-preview). When you’re ready to stabilize your release, cut a rel/1.2 branch for it. In that branch, remove the prerelease tag and continue your stabilization process. Fixes should target master via a PR and then be cherry-picked to the release branch if applicable. The release branch never merges back to master in this model.

In my view, using rel/* is perfect for stabilization of a release, enabling master to proceed to the next release. I’ll come back to my earlier point about branches: they should be for isolation of code churn, not environments. A rel branch isn’t always necessary; I’d only create it if there is parallel development happening.

Examples

I have two examples that illustrate how I implement the strategy above.

First, a library author creating a package that gets deployed to two feeds: a CI feed and a stable feed. The tree uses a preview prerelease tag in master and branches underneath rel for a stable release build. My example shows a .NET library with MyGet and NuGet, but the concepts apply to anything.

Second, I have an application that does continuous deployment to an automatically updating CI feed and controlled releases to the Microsoft Store, Chocolatey, NuGet, and GitHub. All releases move forward in master, and will hotfix under rel only if necessary.

Basic Library

Build

For this first scenario, I’ll talk about Rx.NET. It has a build definition defined in YAML with these essential parts (non-relevant parts omitted for brevity):

trigger:
- master
- rel/*

queue: Hosted VS2017

variables: 
  BuildConfiguration: Release
  BuildPlatform: Any CPU

steps:
- task: BatchScript@1
  inputs:
    filename: "C:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\Enterprise\\Common7\\Tools\\VsDevCmd.bat"
    arguments: -no_logo
    modifyEnvironment: true
  displayName: Setup Environment Variables

- task: PowerShell@1
  inputs:
    scriptName: 'Rx.NET/Source/build-new.ps1'
    workingFolder: 'Rx.NET/Source'
  env:
    VSTS_ACCESS_TOKEN: $(System.AccessToken)
  displayName: Build

- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: 'Rx.NET/Source/artifacts'
    ArtifactName: artifacts
    publishLocation: Container
  condition: always()

I’m not going to dive too deep into the YAML itself; instead I’ll call your attention to the documentation and examples. As of now, there may be more up-to-date docs in their GitHub repo.

In this case, I have a build script (build-new.ps1) that calls msbuild and does all of the work. In order to ensure the right things are in the path, I call out to VsDevCmd.bat first. xUnit, as of 2.4.0 beta 2, has direct support for publishing test results to VSTS if you supply the VSTS_ACCESS_TOKEN variable. Other frameworks are supported by the VSTest task. After running the main build script, I use a task to publish the binaries (NuGet packages) that were generated by the build.

Another approach is to use the tasks for all of this directly, similar to this. We all have preferences between using scripts like PowerShell, Cake, PSake, etc, and the tasks. Doesn’t matter what you pick, use what works for you.

Release

The previous section was about build. The end result is a versioned set of artifacts that can be used as input to a release process. Rx.NET has a release definition here:

Release Management Pipeline

One tip on release naming that’s easily overlooked: it can be customized. I like to put the build number in it so I can associate a release with a version, and I concat it with the instance number (since it’s possible to have multiple releases for a particular version). In the definition options, I use the string Release v$(Build.BuildNumber).$(rev:r). That uses the build number from the primary artifact as the name.

The release defines two environments, MyGet and NuGet, with an auto release trigger for builds on the master or rel/* branches. RM lets you put branch filters at any point, so you can enforce that releases only come from a specified branch, if desired. In this case, I tell it to create a release after builds from those branches. Then, in the MyGet environment, I’ve configured it to deploy to that environment automatically upon release creation. That gets me Build -> Release -> MyGet in a CD pipeline. I do want to control releases to NuGet in two ways: 1) I want to ensure they are in MyGet, and 2) I want to manually approve them. I don’t want every build to go to NuGet. I have configured the NuGet environment to do just that, as well as only allowing the latest release to be deployed (I’m not looking to deploy older releases after-the-fact).

The MyGet environment has one step: a NuGet push to a configured endpoint. The NuGet environment has two steps: create a GitHub release (which will tag the commit for me), and a NuGet push. Releases don’t have to be complicated to benefit from using an RM flow. My release process is simple: when it’s time for a release, I take the selected build and push the “approve” button on the NuGet environment. There are many other ways to gate releases to environments, and you can do almost anything by calling out to an Azure Function as a gate.

It’s a bit hard to see how the release pipelines are configured on the site, so here are some screenshots showing the configuration:

Deployment trigger
MyGet environment
NuGet environment
NuGet environment triggers (approvers and queue)

The actual release process to NuGet goes like this:

  • If I want to release a prerelease package, I can just press the approve button. It’ll do the rest.
  • If I want to release a stable package, I create a branch called rel/4.0 (for example) and make one edit to the version.json to remove the prerelease tag. That branch will never merge back to master. I can do as much stabilization in that branch as needed, and when I’m ready, I can approve that release to the NuGet environment. If there are hotfix releases I need to make, I will always make the changes to master (via a PR), then cherry-pick to the rel branch. This ensures that the next release always contains all of the fixes.

A Desktop Application

NuGet Package Explorer (NPE) is a WPF desktop application that is released to the Microsoft Store, Chocolatey, and GitHub as a zip. It also has a CI feed that auto-updates by using AppInstaller. NPE is delivered via a full CD process. There aren’t any prerelease versions; every build is a potential release and goes through an environment promotion pipeline.

Build

As a Desktop Bridge application, it contains a manifest file that must be updated with a version. Likewise, the Chocolatey package must be versioned as well. While there may be better options, I’m currently using a PowerShell script at build time to replace a fixed version with one generated from Nerdbank.GitVersioning. I also update a build badge for use as a deployment artifact later.

# version    
nuget install NerdBank.GitVersioning -SolutionDir $(Build.SourcesDirectory) -Verbosity quiet -ExcludeVersion

$vers = & $(Build.SourcesDirectory)\packages\nerdbank.gitversioning\tools\Get-Version.ps1
$ver = $vers.SimpleVersion

# Update appxmanifests. These must be done before build.
$doc = Get-Content ".\PackageExplorer.Package\package.appxmanifest"    
$doc | % { $_.Replace("3.25.0", "$ver") } | Set-Content ".\PackageExplorer.Package\package.appxmanifest"

$doc = Get-Content ".\PackageExplorer.Package.Nightly\package.appxmanifest"    
$doc | % { $_.Replace("3.25.0", "$ver") } | Set-Content ".\PackageExplorer.Package.Nightly\package.appxmanifest"

$doc = Get-Content ".\Build\PackageExplorer.Package.Nightly.appinstaller"    
$doc | % { $_.Replace("3.25.0", "$ver") } | Set-Content "$(Build.ArtifactStagingDirectory)\Nightly\PackageExplorer.Package.Nightly.appinstaller"

# Build PackageExplorer
msbuild .\PackageExplorer\NuGetPackageExplorer.csproj /m /p:Configuration=$(BuildConfiguration) /bl:$(Build.ArtifactStagingDirectory)\Logs\Build-PackageExplorer.binlog
msbuild .\PackageExplorer.Package.Nightly\PackageExplorer.Package.Nightly.wapproj /m /p:Configuration=$(BuildConfiguration) /p:AppxPackageDir="$(Build.ArtifactStagingDirectory)\Nightly\" /bl:$(Build.ArtifactStagingDirectory)\Logs\Build-NightlyPackage.binlog
msbuild .\PackageExplorer.Package\PackageExplorer.Package.wapproj /m /p:Configuration=$(BuildConfiguration) /p:AppxPackageDir="$(Build.ArtifactStagingDirectory)\Store\" /p:UapAppxPackageBuildMode=StoreUpload /bl:$(Build.ArtifactStagingDirectory)\Logs\Build-Package.binlog

# Update versions
$doc = Get-Content ".\Build\ci_badge.svg"    
$doc | % { $_.Replace("ver_number", "$ver.0") } | Set-Content "$(Build.ArtifactStagingDirectory)\Nightly\version_badge.svg"

$doc = Get-Content ".\Build\store_badge.svg"    
$doc | % { $_.Replace("ver_number", "$ver.0") } | Set-Content "$(Build.ArtifactStagingDirectory)\Store\version_badge.svg"

# Choco and NuGet 
# Get choco

$nugetVer = $vers.NuGetPackageVersion

nuget install chocolatey -SolutionDir $(Build.SourcesDirectory) -Verbosity quiet -ExcludeVersion 
$choco = "$(Build.SourcesDirectory)\packages\chocolatey\tools\chocolateyInstall\choco.exe"

mkdir $(Build.ArtifactStagingDirectory)\Nightly\Choco

& $choco pack .\PackageExplorer\NuGetPackageExplorer.nuspec --version $nugetVer --OutputDirectory $(Build.ArtifactStagingDirectory)\Nightly\Choco
msbuild /t:pack .\Types\Types.csproj /p:Configuration=$(BuildConfiguration) /p:PackageOutputPath=$(Build.ArtifactStagingDirectory)\Nightly\NuGet

You can find the full build definition here.

Release

The release definition for NPE is more complicated than the previous example because it contains more environments: CI, Prod - Store, Prod - Chocolatey, Prod - NuGet, and Prod - GitHub. Most of the time releases go out to all production environments, but if there’s a fix that’s applicable to a specific environment, it only goes out to that one. All fixes go to CI first.

Release Management Pipeline for NPE

The triggers for the release and environments are the same as the previous example, so I won’t repeat the pictures. The steps vary per environment, performing the steps needed to take the artifacts and copy to Azure Blob, upload to the Microsoft Store, push to NuGet or Chocolatey, or create a GitHub release with artifacts, as the case may be.

Conclusion

For me, Release Management is a huge differentiator and fits into my way of thinking very well. I like the separation of responsibilities between build and release that it provides. Now that public projects are available in VSTS, contributors to the projects can get build feedback and the community can check in on the deployment status.

I’d love to hear feedback or suggestions and getting me on Twitter is usually the fastest way.

Multi-targeting the world: a single project to rule them all

January 4, 2017 · Coding

Starting with Visual Studio 2017, you can now use a single project to build platform-specific libraries for all project types. This post will explore why you might want to do this, how to do it, and workarounds for some point-in-time issues with the tooling.

Intro

Since the beginning of .NET Core, the project.json format has enabled multi-targeting, that is, compiling to multiple target frameworks in parallel and creating an output for each. With ASP.NET Core, it’s common to target both net45 and netcoreapp1.0 so you can deploy the site to either the desktop framework, which runs on Windows, or to the CoreCLR, which runs cross-platform. Multi-targeting is nothing more than compiling the same code multiple times, once per target platform. Each target can specify its own dependencies and #if conditions, so you can easily tailor the code to the specific platform.

Another example may have a library target netstandard1.0, netstandard1.3, and net45 to enable different levels of functionality based on the available surface area.

While it was also possible to target UWP, Win8, or profile-based PCL’s using project.json, doing so required hacks like private copies of all reference assemblies, WinMD files, and more. Beyond that, some things didn’t work correctly, as some platforms require additional targets to generate additional outputs, like .pri files on UWP for resource lookup. So while technically possible, full multi-targeting was brittle and required you to stay on a very narrow path, avoiding things like resources or GUI elements that require the full tool-chain to process.

Enter MSBuild

With the move to MSBuild as part of the .NET Core Tooling direction change, the picture gets much better, so much so that with VS 2017 RC2, you can correctly multi-target all platform types, including UWP, profile-based PCL’s, and Xamarin iOS/Android. Not only that, but by conditionally including/excluding directories based on globs, you can reduce the need for ifdef‘s in many cases.

As part of being open sourced and enabled to run cross-platform, the build targets and tasks required to actually do the build were combined into an SDK. This went along with a drastic simplification of the csproj file to a minimal footprint, one that will get even smaller, like this:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp1.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.NETCore.App" Version="1.0.1" />
  </ItemGroup>
</Project>

Microsoft’s blog details all of the improvements in this area. For lack of a better term, I’ll call projects based on these new tools “SDK style.” The easiest way to identify these “SDK style” projects is by looking for the Sdk attribute in the top Project element.

Multi-targeting vs. .NET Standard Libraries vs. PCL’s

Before we go further, let’s answer this question that many people have asked — why would you want to multi-target vs just use a single portable library, whether that’s .NET Standard or an older profile-based PCL?

There are several answers to that question — first, if your code can all fit within a single .NET Standard-based library, then there’s no reason to multi-target. If you’re using a legacy profile-based PCL, at the very least consider moving up to the equivalent .NET Standard version. Don’t make more work for yourself. The decision to multi-target falls out of a need to use functionality that doesn’t exist within a .NET Standard version or if you need to target an earlier platform that doesn’t support the .NET Standard version you need. A common example is that many libraries still need to support .NET 4.5. Despite a significant amount of functionality available in .NET Standard 1.3, that .NET Standard version only supports .NET 4.6+. Chances are though that the code would work “just fine” on .NET 4.5, so it’s easy to multi-target to both net45 and netstandard1.3.

The other main reason why you’d need to multi-target is to use platform-specific code within your library. For example, on iOS you might want to use SecKeyChain for saved credentials, on Android use its Context to access shared services like preferences, and on Windows its Credential Manager. You might have a common method called GetCredential that other code uses to get the data. Today you might use dependency injection or reflection to access a “.Platform” library with a specific implementation that your common code uses. Instead, you can choose to multi-target and access the platform code directly.

How to multi-target

Let me start by saying that the methods here are based on the new “SDK-style” projects that VS 2017 provides. They orchestrate using the existing project types that are installed by Visual Studio. As such, the build itself won’t work on a box without the other tools installed (so you’re building on a Windows box, much like you probably are today). Some of these may work on a Mac with Visual Studio for Mac but I have not tested that in any way. When you install Visual Studio 2017, make sure to install all of the tools for the project types you need (Xamarin, UWP, etc) and also the .NET Core Tooling.

There’s no UI in VS for adding additional target frameworks, but I have some samples that show what to do.

First, create a new .NET Core Class Library project. If you don’t see the following option, make sure to install the .NET Core workload in the VS Installer.

New .NET Core Class Library
.NET Core workload

Right-click the project and select “Edit project file…”. This is new in VS 2017 – the ability to edit the project file while it’s open and have changes instantly reflected.

In the editor, after noticing how much less boilerplate there is now, look for the TargetFramework property that looks like this: <TargetFramework>netstandard1.3</TargetFramework>. Change it to <TargetFrameworks>netstandard1.3;net45</TargetFrameworks> to target .NET 4.5 and .NET Standard 1.3. You can add as many targets as you want to that semicolon-separated list. It’s subtle, but note the difference in property names between TargetFramework and the plural TargetFrameworks. It’s easy to miss.
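
After the change, the whole project file looks something like this:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFrameworks>netstandard1.3;net45</TargetFrameworks>
  </PropertyGroup>
</Project>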

For some frameworks, like .NET 4.5, that’s all you need to do. However, targeting .NET Standard and .NET 4.x is far from “the world.” We can do better! You would think it should be as easy as adding additional TFM’s like uap10.0, xamarin.ios10 or MonoAndroid70 to the list, and hopefully by the time the tools RTM it will be, but for now we need to add extra properties to the project file to tell MSBuild what to do with those.

Fortunately, and here’s the real secret, the “SDK-style” build system has a LanguageTargets property that you can specify per TFM to import the targets for that project type instead of the vanilla Microsoft.CSharp.targets import. That means we can use the “Windows Xaml”, Android, iOS, or any other platform tool-chain we need.
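
The property groups look roughly like this (a sketch; the exact targets paths depend on the tooling you have installed, and the samples linked below have the authoritative versions):

<PropertyGroup Condition="'$(TargetFramework)' == 'uap10.0'">
  <LanguageTargets>$(MSBuildExtensionsPath)\Microsoft\WindowsXaml\v$(VisualStudioVersion)\Microsoft.Windows.UI.Xaml.CSharp.targets</LanguageTargets>
</PropertyGroup>
<PropertyGroup Condition="'$(TargetFramework)' == 'monoandroid70'">
  <LanguageTargets>$(MSBuildExtensionsPath)\Xamarin\Android\Xamarin.Android.CSharp.targets</LanguageTargets>
</PropertyGroup>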

Xamarin Example

In the example here, I have a class library that multi-targets to net45, uap10.0, netstandard1.3, Xamarin.iOS10 and MonoAndroid70. In this contrived library, I have a Greeter class that’s calling a Hello() method that needs platform-specific code. I’m using a pattern where I have a directory for each TFM, and code in a directory only gets included for that TFM, so no #if blocks are needed. For Android, Resources are supported if you need them. While the example doesn’t currently use them, you could use PList‘s, xib‘s or Story Boards on iOS, Page‘s on UWP, or any other “native” file type supported by the platform.
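
One way to wire up that per-TFM directory convention is with conditional item groups, roughly like this (a sketch; the Platforms directory name is my own illustration, and the linked example shows the actual layout):

<ItemGroup>
  <!-- Keep platform-specific files out of the default compile globs -->
  <Compile Remove="Platforms\**\*.cs" />
</ItemGroup>
<ItemGroup Condition="'$(TargetFramework)' == 'uap10.0'">
  <Compile Include="Platforms\uap10.0\**\*.cs" />
</ItemGroup>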

Win81/WP8/PCL/Wpa81/Xamarin/Net45 Example

As a more realistic example, one of my libraries, Zeroconf, an mDNS discovery library, targets “the world.” It currently has concrete implementations for wp8, Wpa81, Win8, portable-Wpa81+Win81, uap10.0, net45, and netstandard1.3 (which supports Xamarin and CoreCLR). In addition to the concrete implementations, it provides a netstandard1.0 façade to support being used in portable libraries. The different concrete implementations are required due to differences between the various Windows networking stacks. For now, the uap10.0 version cannot use the netstandard1.3 version until NetworkInformation is fully supported by the platform, so it continues to use the WinRT variant. You can see the platform-specific code in the platforms directory and how it’s conditionally included by the csproj in the ItemGroups.

The property groups at the top contain the LanguageTargets and properties needed. For portable-Wpa81+Win81, two extra items are required, as that special PCL profile also supports WinRT. The ItemGroup here has two TargetPlatform items to pull in the correct .winmd references.

Building

You can build the libraries either in VS 2017 or from the command line. If you use the command line, you’ll want to run the following from a VS 2017 Developer Command Prompt: msbuild /t:restore followed by msbuild /t:build. If you want to create a NuGet package, you can run msbuild /t:pack. It’s important to note that you must currently use msbuild, the desktop version in the VS 2017 path, to build these, and not dotnet build. The reason is that while dotnet build calls MSBuild, it currently uses a CoreCLR version even though the desktop version is present in your VS installation. The engineering team is aware of this, and in the future dotnet build will be smart enough to call the desktop version of MSBuild when present. The “regular” targets files we’re using to support the platform-specific features are designed for desktop MSBuild; they do not yet have support for CoreCLR tasks. Bottom line, as of the current release: if your targets use build tasks, then you need to provide both CoreCLR and desktop versions of the task library in order to support both “regular” MSBuild and dotnet build.

Common gotchas

There are several bugs in the tool-chain currently that are in the process of being fixed:

  • Some project-to-project (p2p) references aren’t resolving correctly. Whereas they should resolve to the “best” match, they resolve to the first TFM in the list.
  • Another bug is preventing a “legacy” csproj from doing a p2p reference with a “Portable Library can only reference other portable library” error.
  • Files that are conditionally included won’t show up in the Solution Explorer. As a workaround, include all files with None as the first item group (see example).
  • For iOS (and possibly Android), you need to set DebugType to full, as the Xamarin ConvertPdb2Mdb task doesn’t yet support the new portable PDB format generated by this tool-chain.
  • Win8, Win81, and uap10.0 aren’t correctly understood by the NuGet targets today. As a workaround, you need to include the NugetTargetMoniker property set to the full TFM as shown here (see the sketch after this list). Similarly, for legacy PCL targets, it requires Version=v0.0 in the NugetTargetMoniker here. These should hopefully be fixed by GA.
  • Windows assemblies that use resources need a .pri file alongside them. They’re currently missing from the generated NuGet package. The workaround is to use your own .nuspec for now until the bug is fixed.
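
For the NugetTargetMoniker workaround above, the sketch looks like this (the uap10.0 moniker is the full TFM; for a legacy PCL, substitute your own profile number):

<PropertyGroup Condition="'$(TargetFramework)' == 'uap10.0'">
  <NugetTargetMoniker>UAP,Version=v10.0</NugetTargetMoniker>
</PropertyGroup>
<!-- For a legacy PCL target, something like:
     <NugetTargetMoniker>.NETPortable,Version=v0.0,Profile=Profile32</NugetTargetMoniker> -->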

Into the weeds, how it all works

This is by no means an official explanation, it’s what I’ve found from exploring the SDK build targets. Some of the terminology and concepts may change over time.

The “SDK style” projects consist of a set of targets/tasks that are pre-installed with MSBuild (and the CLI tools). You can see them in the following directory: C:\Program Files (x86)\Microsoft Visual Studio\2017\<sku>\MSBuild\Sdks where <sku> is Community, Professional, or Enterprise, depending on what you installed. The two SDK’s you’re likely to use directly are Microsoft.NET.Sdk and Microsoft.NET.Sdk.Web.

The Sdk attribute causes an Sdk.props and Sdk.targets within the specified SDK’s \Sdk directory to be imported before and after the project file. The Microsoft.NET.Sdk SDK’s targets define an “outer” and an “inner” build. The “outer-loop” is what your project file directly defines, including several TFM’s in the TargetFrameworks property. If you only have a single TargetFramework property defined, then there’s only an “inner-loop”.

For an “outer-loop” build, the SDK targets import props/targets in a buildCrossTargeting directory (soon to be renamed to buildMultiTargeting). Those get auto-included before and after the main project file (props before, targets after). The “outer-loop” targets will eventually loop through each of the TargetFrameworks, calling msbuild again in an “inner-loop” with TargetFramework set to one TFM. This “inner-loop” build is what we currently have in today’s “normal” project types. The “inner-loop” build provides an extension point for providing your language-specific targets (the Import that was at the bottom of your old csproj before) in place of the “vanilla” one it’ll include by default. By providing a LanguageTargets property for the “inner-loop,” conditioned by TFM, we can use the “original” targets that invoke the full tool-chain for the target platform. See here, here and here for UWP, iOS, and Android, respectively.

Within each conditionally defined property group, we can set properties that are specific to a particular “inner-loop.” These correspond to the properties in your existing platform-specific project file and are used by the platform-specific targets specified.

One thing you give up currently is any UI in VS for configuring these properties. Perhaps they’ll return sometime in the future. For now, one thing I’ve found helpful is to maintain a few “dummy” projects where I can edit some settings to see the values and then put them into my multi-targeting csproj.

Looking forward

As of today (January 4, 2017), the tooling is in a fairly rough state. The .NET Core tooling is rightfully in an “alpha” state. The MSBuild SDK is under active development and things will change before GA. There are a number of issues in the tooling that can make it hard to use today, but I expect those to be fixed soon. Most of the bugs I’ve found are slated to be fixed in the RC3 time-frame, and I’d expect things to be better with that release.

As to whether or not to take the plunge today: I’d suggest that if you have a tolerance for figuring this out and reporting the issues you’ll encounter, then go for it. If you have a complex project today that already multi-targets a different way (most likely by using multiple “head” projects and shared code project types), I would recommend trying this out in a branch to see how far you get. I’ll be happy to help, just give me a shout. The more the community bangs on this stuff up front, the more issues can be addressed prior to GA.

Acknowledgments

Many thanks to Brad Wilson, Joe Morris, and Daniel Plaisted for reviewing this post and providing feedback.

Continuous Integration for UWP projects – Making Builds Faster

December 3, 2015 · Coding

Are you developing a UWP app? Are you doing continuous integration? Do you want to improve your CI build times while still generating the .appxupload required for store submission? If so, read on.

Prerequisites

You’ll need VS 2015 with the UWP 1.1 tools installed. The UWP 1.1 tooling has some important fixes for creating app bundles and app upload files for command line/CI builds.

You’ll also need to register your app on the Windows Dev Center and associate your project with it. Follow the docs for linking your project to the Store from within VS first.

If you’re using VSO, you may need to set up your own VM to run a vNext build agent. I’m not sure VSO’s hosted agents have all the latest tools as of today. I run my builds in an A2 VM on Azure; it’s not the fastest build server, but it’s good enough.

Building on a Server

Now that you have a solution with one or more projects that create an appx (UWP) app, you can start setting up your build scripts. One problem you’ll need to solve is updating your .appxmanifest with an incrementing version each time. I’ve solved this using the fantastic GitVersion tool. There’s a number of different ways to use it, but on VSO it sets environment variables as part of a build step that I use to update the manifest on build.

I use a .proj msbuild file with a set of targets the CI server calls, but you can use your favorite build scripting tool.

My code looks like this:

<Target Name="UpdateVersion">
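    <!-- Assumptions: the GITVERSION_* environment variables are set by a GitVersion
         step that runs before this target, and RegexTransform is a custom task
         defined elsewhere in this .proj (it is not a built-in MSBuild task). -->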
    <PropertyGroup>
      <Version>$(GITVERSION_MAJOR).$(GITVERSION_MINOR).$(GITVERSION_BUILDMETADATA)</Version>
    </PropertyGroup>    
    <ItemGroup>
      <RegexTransform Include="$(SolutionDir)\**\*.appxmanifest">
          <Find><![CDATA[ Version="\d+\.\d+\.\d+\.\d+"]]></Find>
          <ReplaceWith><![CDATA[ Version="$(Version).0"]]></ReplaceWith>
      </RegexTransform>
    </ItemGroup>
    <RegexTransform Items="@(RegexTransform)" />    
    <Message Text="Assm: Ver $(Version)" />
</Target>

The idea is to call GitVersion, either by calling GitVersion.exe earlier in the build process, or by using the GitVersion VSO Build Task in a step prior to the build step.

GitVersion can also update your AssemblyInfo files, if you’d like.

Finally, at the end of the build step, you’ll want to collect certain files for the output. In this case, it’s the .appxupload for the store. In VSO, I look for the contents in my app dir, MyApp\AppPackages\**\*.appxupload.

If you setup your build definition to build in Release mode, you should have a successful build with a .appxupload artifact available you can submit to the store. Remember, we’ve already associated this app with the store, and we’ve enabled building x86, x64, and arm as part of our initial run-through in Visual Studio.

The problem

For your safety, a CI build will by default only generate the .appxupload file if you’re in Release mode with .NET Native enabled. This is to help you catch compile-time errors that would delay your store submission.

That’s well-intentioned, but it can severely slow down your builds. On one project I’m working on, on that A2 VM, a “normal” debug build takes about 14 min while a Release build takes 81 minutes! That’s too long for CI.

Fortunately, there are a few things we can do to speed things up if you’re willing to live a bit dangerously.

  1. Force MSBuild to create the .appxupload without actually running the .NET Native compilation – yes, it is possible!
    • In your build definition, pass the additional arguments to MSBuild: /p:UseDotNetNativeToolchain=false /p:BuildAppxUploadPackageForUap=true. This overrides two variables that control the use of .NET Native and packaging.
  2. If you have any UWP Unit Test projects, you can disable package generation for them if you’re not running those unit tests on the CI box. There is a g̶o̶o̶d̶ reason for this — it’s hard. Running UWP CI tests requires your test agent to run as an interactive process, not a service. You need to configure your build box to auto-login on reboot and then start up the agent.

    In your test projects, add the following <PropertyGroup> to your csproj file:

<!-- Don't build an appx for this in TFS/command line msbuild -->
<PropertyGroup>
  <GenerateAppxPackageOnBuild Condition="'$(GenerateAppxPackageOnBuild)' == '' and '$(BuildingInsideVisualStudio)' != 'true'">false</GenerateAppxPackageOnBuild>
</PropertyGroup>

This works because the .appxupload doesn’t actually contain native code. It contains three app bundles (one per platform) with MSIL, which the Store compiles to native code in the cloud. The local .NET Native step is only a “safety” check, as is running WACK. If you regularly test your code in Release mode locally, and have run WACK to ensure your code is okay, then there’s no need to run either on every build.

After making those two adjustments, I’m able to generate the .appxupload files on every build and the build takes the same 13 min as debug mode.

Enabling source code debugging for your NuGet packages with GitLink

September 23, 2015 · Coding

Recently on Twitter, someone was complaining that their CI builds were failing due to SymbolSource.org either being down or rejecting their packages. Fortunately, there’s a better way than using SymbolSource if you’re using a public Git repo (like GitHub) to host your project — GitLink.

Symbols, SymbolSource and NuGet

Hopefully by now, most of you know that you need to create symbols (PDB’s) for your release libraries in addition to your debug builds. Having symbols helps your users troubleshoot issues that may crop up when they’re using your library. Without symbols, you need to rely on hacks, like using dotPeek as a Symbol Server. It’s a hack because the generated source code usually doesn’t match the original, and it certainly doesn’t include any helpful comments (you do comment your code, right?)

So you’ve updated your project build properties to create symbols for release; now you need someplace to put them so your users can get them. Up until recently, the easiest way has been to publish them on SymbolSource. You’d include the pdb files in your NuGet NuSpec and then run nuget pack MyLibrary.nuspec -symbols. NuGet then creates two packages: one with your library and one just with the symbols. If you then run nuget push MyLibrary.1.0.0.nupkg and a symbols package sits alongside it, NuGet will push the symbols package to SymbolSource rather than NuGet.org. If you’re lucky, things will just work. However, sometimes SymbolSource doesn’t like your PDB’s and your push will fail.

The issues

While SymbolSource is a great tool, there are some shortcomings:

  • It requires manual configuration by the library consumer. They have to know to go to VS and add the SymbolSource URL to the symbol search path.
  • It slows down your debugging experience. VS will by default check every configured symbol server for matching PDB’s. That leads many people to either disable symbol loading entirely or selectively load symbols. Even if you selectively load symbols, the load is still slow, as VS has no way to know which symbol server a PDB might be on and must check all of them.
  • It doesn’t enable source code debugging. PDB’s can be indexed to map original source code metadata into them (the file location, not contents). If you’ve source-indexed your PDB’s and the user has source server support enabled, VS will automatically download the matching source code. This is great for OSS projects with their code on GitHub.

GitLink to the Rescue

GitLink provides us an elegant solution. When GitLink is run after your build step, it detects the current commit (assuming the sln is in a git repo clone), detects the provider (BitBucket and GitHub are currently supported) and indexes the PDB’s to point to the exact source location online. Of course, there are options to specify commits, remote repo location URLs, etc if you need to override the defaults.

After running GitLink, just include the PDB files in your nuspec/main nupkg alongside your dll files and you’re done. Upload that whole package to NuGet (and don’t use the -symbols parameter with nuget pack). This also means that users don’t need to configure a symbol server as the source-indexed PDB’s will be alongside the dll — the location VS will auto-load them from.

An example

Over at xUnit and xUnit for Devices, we’ve implemented GitLink as part of our builds. xUnit builds are set up to run msbuild on an “outer” .msbuild project with high-level tasks; we have a GitLink target that runs after our main build target.

As we want the build to be fully automated and not rely on exe’s external to the project, we “install” the GitLink NuGet package on build if necessary.

Here’s the gist of our main CI target, which we invoke with msbuild xunit.msbuild /t:CI (abbreviated for clarity):

<PropertyGroup>
  <SolutionName Condition="'$(SolutionName)' == ''">xunit.vs2015.sln</SolutionName>
  <SolutionDir Condition="'$(SolutionDir)' == '' Or '$(SolutionDir)' == '*Undefined*'">$(MSBuildProjectDirectory)</SolutionDir>
  <NuGetExePath Condition="'$(NuGetExePath)' == ''">$(SolutionDir)\.nuget\nuget.exe</NuGetExePath>
</PropertyGroup>

<Target Name="CI" DependsOnTargets="Clean;PackageRestore;GitLink;Build;Packages" />

<Target Name="PackageRestore" DependsOnTargets="_DownloadNuGet">
  <Message Text="Restoring NuGet packages..." Importance="High" />
  <Exec Command="&quot;$(NuGetExePath)&quot; install gitlink -SolutionDir &quot;$(SolutionDir)&quot; -Verbosity quiet -ExcludeVersion -pre" Condition="!Exists('$(SolutionDir)\packages\gitlink\')" />
  <Exec Command="&quot;$(NuGetExePath)&quot; restore &quot;$(SolutionDir)\$(SolutionName)&quot; -NonInteractive -Source @(PackageSource) -Verbosity quiet" />
</Target>

<Target Name='GitLink'>
  <Exec Command='packages\gitlink\lib\net45\GitLink.exe $(MSBuildThisFileDirectory) -f $(SolutionName) -u https://github.com/xunit/xunit' IgnoreExitCode='true' />
</Target>

<Target Name='Packages'>
  <Exec Command='"$(NuGetExePath)" pack %(NuspecFiles.Identity) -NoPackageAnalysis -NonInteractive -Verbosity quiet' />
</Target>

There are a few things to note from the snippet:

  • When installing GitLink, I use the -ExcludeVersion switch. This makes it easier to call later in the script without having to update a versioned path each time.
  • I’m currently using -pre as well. There are a number of bugs fixed since the last stable release.

The end result

If you use xUnit 2.0+ or xUnit for Devices and have source server support enabled in your VS debug settings, VS will let you step into xUnit code seamlessly.

If you do this for your library, your users will thank you.