OSS Build and Release with VSTS

May 15, 2018 Coding 1 comment

Over the past few weeks I have been moving the build system for the OSS projects I maintain over to VSTS. Until recently I was using AppVeyor for builds, as they have provided a generous free offering for OSS for years. A huge thank you goes out to them for their past and ongoing support for OSS. So why move to VSTS? There are three reasons for me:

  1. Support for public projects. This is key since there’s no point in using their builds if users can’t see the results.
  2. Release Management. The existing build systems like AppVeyor, Jenkins, TeamCity, and Travis can all build a project. Sure, they have different strengths and weaknesses, and some offer free OSS builds as well, but none of them really has a Release Management story. That is, they can build artifacts… but then what? How do the bits get where you want them, like NuGet, MyGet, a Store, etc.? This is where release management fits in as a central part of CI/CD. More on this later.
  3. Windows, Linux, and Mac build host support in one system. It’s possible to run a single build on all three at the same time (fan out/in), like how VS Code does. No other host can do this easily today using a hosted build pool. I should note that using a hosted build pool is critical for security if you want to build pull requests from public forks. You don’t want to be running arbitrary code on a private build agent. Hosted agent VMs are destroyed after each use, making them far safer.

Life without Release Management

Many projects strive to achieve a continuous deployment pipeline without using Release Management (RM from now on). This is often achieved by using the build script or configuration to do some deployment steps under certain conditions, like when building a particular branch such as master. In some ways, the GitFlow branching strategy encourages this, making it easy to decide that builds from the develop branch are pre-release and should go to a dev environment, while builds from master are production and should thus be deployed to a production environment. To me, this conflates the real purpose of branches, which should be isolation of code churn, with deployment targets. I believe any artifact should be able to be deployed to any environment; releasing is a different process from building and should have no bearing on which branch the artifact came from. For the vast majority of projects, I believe that a GitHub Flow or Release Flow (video) is a better, simpler option.

Without RM, a pipeline to deploy a library might look something like this:

  1. Builds on the develop branch get deployed to MyGet by the build system. To me, it doesn’t much matter if it’s in the build script directly or if it’s build server configuration (like the deployment option in AppVeyor).
  2. To create a stable release, code is merged to master and then tagged. Often, tags are the mechanism that determines if there should be a release — effectively, tag a commit, that triggers a build which gets released to NuGet.org.

In this model, it’s usually a different build that gets deployed to production than to dev. I believe that mixing build and release like this ultimately leads to less flexibility and more coupling. The source system has to know about the deployment targets. If you need to change the deployment target, or add another one, you have to commit to the source and rebuild.

Following the single responsibility principle, a build should produce artifacts; that’s it. Deployment is something else; don’t conflate the two concepts. Repeat after me: build is just build. I think projects have tended to mix the two in part because there wasn’t a good, free RM tool. It was easy and pragmatic to do both from the build tool. That changes now with VSTS public projects.

CD Nirvana with Release Management

VSTS has a full-featured RM tool (deep dive in the docs here) that is part of the platform. It is explicitly designed around the concepts of artifacts, environments, and releases. In short, a build that contains artifacts can trigger a release. A release defines one or more environments with specific steps that should execute for each. Environments can be chained one after another, enabling a deployment promotion flow. There are many ways to gate each environment, automated and manual. A configuration I use frequently is to have two environments: MyGet and NuGet (dev and prod, respectively). The NuGet environment has an approval step so that releases don’t automatically flow from dev to prod; rather, I can decide to release to production at any time. Any build is a potential release.

Release steps can do anything and there are many existing tasks built-in for common things (like NuGet push, Azure blob copy, Azure App Services, and Docker) and a rich marketplace for things that aren’t (like creating a GitHub release, tagging the commit, and uploading artifacts to the GitHub release). In addition, you can run any custom script.

I think it’s easier to show by example, and that’s what follows in the next sections.

Versioning

Having an automatic version baked into your build artifacts is a crucial element. I use Andrew Arnott‘s Nerdbank.GitVersioning package to handle that for me. I set the Major.Minor in a version.json file and it increments the Patch based on the Git commit height since the last minor change. Add a prerelease tag to the version, if desired. You can control where the git height goes if you don’t want it in the patch (like 1.2.0-build.{height}). The default is the patch, and I think it’s completely okay to have a release like 1.2.42 if there were 42 commits since the version bump. I believe too much time is wasted on “clean” versions; it’s just a number :). Nerdbank.GitVersioning can also set the build number in the agent, which makes it easy to see which version was just built.
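
For reference, a minimal version.json driving this looks roughly like the following (the publicReleaseRefSpec values are illustrative for the branching setup described below, not copied from a specific project):

{
  "version": "1.2-preview",
  "publicReleaseRefSpec": [
    "^refs/heads/master$",
    "^refs/heads/rel/\\d+\\.\\d+$"
  ]
}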

Structuring your branches without overkill

There are many theories around how to structure your branches in Git. I tend to go with simplicity, aiming for a protected master branch with topic/feature branches for work. In my view, the sole reason for branches should be around code churn and isolation.

When it comes to delivery, there are two main schools of thought: releases and continuous. Releases are the traditional way of shipping software. A group of features is batched together and shipped out once someone decides “it’s ready.” Continuous Delivery (CD) takes the thought out of releases: every build gets deployed. Note that this doesn’t mean every build gets deployed to all environments, but every build is treated as if it could be.

I bring this up because I choose different tagging/branching strategies based on whether I’m doing releases or full CD.

If you’re doing continuous delivery, I would suggest using a single master branch with a stable version in it. Every build triggers a release, at least to a CI environment. At some point (it could be every release, on a set schedule, etc.), a build gets promoted to the production environment. The key here is that it’s a promotion process; builds are fixed and flow through the environments.

If you’re doing release-based delivery, I would suggest using master with a prerelease version set in it (like 1.2-preview). When you’re ready to stabilize your release, cut a rel/1.2 branch for it. In that branch, remove the prerelease tag and continue your stabilization process. Fixes should target master via a PR and then be cherry-picked to the release branch if applicable. The release branch never merges back to master in this model.
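
In git terms, that flow is roughly this (version numbers and the commit SHA are illustrative):

# cut a release branch for stabilization
git checkout -b rel/1.2 master

# edit version.json to remove the prerelease tag, then
git commit -am "Remove prerelease tag for 1.2"
git push origin rel/1.2

# fixes land in master first (via PR), then get cherry-picked
git checkout rel/1.2
git cherry-pick <sha-of-fix-in-master>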

In my view, using rel/* is perfect for stabilization of a release, enabling master to proceed to the next release. I’ll come back to my earlier point about branches: they should be for isolation of code churn, not environments. A rel branch isn’t always necessary; I’d only create it if there is parallel development happening.

Examples

I have two examples that illustrate how I implement the strategy above.

First, a library author creating a package that gets deployed to two feeds: a CI feed and a stable feed. The tree uses a preview prerelease tag in master and branches underneath rel for a stable release build. My example shows a .NET library with MyGet and NuGet, but the concepts apply to anything.

Second, I have an application that does continuous deployment to an automatically updating CI feed and controlled releases to the Microsoft Store, Chocolatey, NuGet, and GitHub. All releases move forward in master, with hotfixes under rel only if necessary.

Basic Library

Build

For this first scenario, I’ll talk about Rx.NET. It has a build definition defined in YAML that has these essential parts (non-relevant parts omitted for brevity):

trigger:
- master
- rel/*

queue: Hosted VS2017

variables: 
  BuildConfiguration: Release
  BuildPlatform: Any CPU

steps:
- task: BatchScript@1
  inputs:
    filename: "C:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\Enterprise\\Common7\\Tools\\VsDevCmd.bat"
    arguments: -no_logo
    modifyEnvironment: true
  displayName: Setup Environment Variables

- task: PowerShell@1
  inputs:
    scriptName: 'Rx.NET/Source/build-new.ps1'
    workingFolder: 'Rx.NET/Source'
  env:
    VSTS_ACCESS_TOKEN: $(System.AccessToken)
  displayName: Build

- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: 'Rx.NET/Source/artifacts'
    ArtifactName: artifacts
    publishLocation: Container
  condition: always()

I’m not going to dive too deep into the YAML itself; instead I’ll call your attention to the documentation and examples. As of now, there may be more up-to-date docs in their GitHub repo.

In this case, I have a build script (build-new.ps1) that calls msbuild and does all of the work. In order to ensure the right things are in the path, I call out to VsDevCmd.bat first. xUnit, as of 2.4.0 beta 2, has direct support for publishing test results to VSTS if you supply the VSTS_ACCESS_TOKEN variable. Other frameworks are supported by the VSTest task. After running the main build script, I use a task to publish the binaries (NuGet packages) that were generated by the build.

Another approach is to use the tasks for all of this directly, similar to this. We all have preferences between using scripts like PowerShell, Cake, PSake, etc., and the tasks. It doesn’t matter what you pick; use what works for you.
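
For example, a task-based variant of the build above might look roughly like this (the solution path and task versions are illustrative, not taken from the real Rx.NET definition):

steps:
- task: NuGetCommand@2
  inputs:
    command: restore
    restoreSolution: 'Rx.NET/Source/System.Reactive.sln'

- task: VSBuild@1
  inputs:
    solution: 'Rx.NET/Source/System.Reactive.sln'
    platform: '$(BuildPlatform)'
    configuration: '$(BuildConfiguration)'

- task: VSTest@2
  inputs:
    testAssemblyVer2: '**\*.Tests.dll'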

Release

The previous section was about build. The end result is a versioned set of artifacts that can be used as input to a release process. Rx.NET has a release definition here:

Release Management Pipeline

One tip on release naming that’s easily overlooked: it can be customized. I like to put the build number in it so I can associate a release with a version, and I concatenate it with the instance number (since it’s possible to have multiple releases for a particular version). In the definition options, I use the string Release v$(Build.BuildNumber).$(rev:r). That uses the build number from the primary artifact as the name.

The release defines two environments, MyGet and NuGet, with an automatic release trigger for builds on the master or rel/* branches. RM lets you put branch filters at any point, so you can enforce that releases only come from a specified branch, if desired. In this case, I tell it to create a release after builds from those branches. Then, in the MyGet environment, I’ve configured it to deploy to that environment automatically upon release creation. That gets me Build -> Release -> MyGet in a CD pipeline. I do want to control releases to NuGet in two ways: 1) I want to ensure they are in MyGet, and 2) I want to manually approve them. I don’t want every build to go to NuGet. I have configured the NuGet environment to do just that, as well as only allowing the latest release to be deployed (I’m not looking to deploy older releases after-the-fact).

The MyGet environment has one step: a NuGet push to a configured endpoint. The NuGet environment has two steps: create a GitHub release (which will tag the commit for me), and a NuGet push. Releases don’t have to be complicated to benefit from an RM flow. My release process is simple: when it’s time for a release, I take the selected build and push the “approve” button on the NuGet environment. There are many other ways to gate releases to environments, and you can do almost anything by calling out to an Azure Function as a gate.

It’s a bit hard to see how the release pipelines are configured on the site, so here are some screenshots showing the configuration:

Deployment Trigger:
Deployment Trigger

MyGet Environment:
MyGet Environment

NuGet Environment:
NuGet Environment

Environment Triggers for NuGet
Approvers
Queue

The actual release process to NuGet goes like this:

  • If I want to release a prerelease package, I can just press the approve button. It’ll do the rest.
  • If I want to release a stable package, I create a branch called rel/4.0 (for example) and make one edit to the version.json to remove the prerelease tag. That branch will never merge back to master. I can do as much stabilization in that branch as needed, and when I’m ready, I can approve that release to the NuGet environment. If there are hotfix releases I need to make, I will always make the changes to master (via a PR), then cherry-pick to the rel branch. This ensures that the next release always contains all of the fixes.

A Desktop Application

NuGet Package Explorer (NPE) is a WPF desktop application that is released to the Microsoft Store, Chocolatey, and GitHub as a zip. It also has a CI feed that auto-updates by using AppInstaller. NPE is delivered via a full CD process. There aren’t any prerelease versions; every build is a potential release and goes through an environment promotion pipeline.

Build

As a Desktop Bridge application, it contains a manifest file that must be updated with a version. Likewise, the Chocolatey package must be versioned as well. While there may be better options, I’m currently using a PowerShell script at build time to replace a fixed version with one generated from Nerdbank.GitVersioning. I also update a build badge for use as a deployment artifact later.

# version    
nuget install NerdBank.GitVersioning -SolutionDir $(Build.SourcesDirectory) -Verbosity quiet -ExcludeVersion

$vers = & $(Build.SourcesDirectory)\packages\nerdbank.gitversioning\tools\Get-Version.ps1
$ver = $vers.SimpleVersion

# Update appxmanifests. These must be done before build.
$doc = Get-Content ".\PackageExplorer.Package\package.appxmanifest"    
$doc | % { $_.Replace("3.25.0", "$ver") } | Set-Content ".\PackageExplorer.Package\package.appxmanifest"

$doc = Get-Content ".\PackageExplorer.Package.Nightly\package.appxmanifest"    
$doc | % { $_.Replace("3.25.0", "$ver") } | Set-Content ".\PackageExplorer.Package.Nightly\package.appxmanifest"

$doc = Get-Content ".\Build\PackageExplorer.Package.Nightly.appinstaller"    
$doc | % { $_.Replace("3.25.0", "$ver") } | Set-Content "$(Build.ArtifactStagingDirectory)\Nightly\PackageExplorer.Package.Nightly.appinstaller"

# Build PackageExplorer
msbuild .\PackageExplorer\NuGetPackageExplorer.csproj /m /p:Configuration=$(BuildConfiguration) /bl:$(Build.ArtifactStagingDirectory)\Logs\Build-PackageExplorer.binlog
msbuild .\PackageExplorer.Package.Nightly\PackageExplorer.Package.Nightly.wapproj /m /p:Configuration=$(BuildConfiguration) /p:AppxPackageDir="$(Build.ArtifactStagingDirectory)\Nightly\" /bl:$(Build.ArtifactStagingDirectory)\Logs\Build-NightlyPackage.binlog
msbuild .\PackageExplorer.Package\PackageExplorer.Package.wapproj /m /p:Configuration=$(BuildConfiguration) /p:AppxPackageDir="$(Build.ArtifactStagingDirectory)\Store\" /p:UapAppxPackageBuildMode=StoreUpload /bl:$(Build.ArtifactStagingDirectory)\Logs\Build-Package.binlog

# Update build badges
$doc = Get-Content ".\Build\ci_badge.svg"    
$doc | % { $_.Replace("ver_number", "$ver.0") } | Set-Content "$(Build.ArtifactStagingDirectory)\Nightly\version_badge.svg"

$doc = Get-Content ".\Build\store_badge.svg"    
$doc | % { $_.Replace("ver_number", "$ver.0") } | Set-Content "$(Build.ArtifactStagingDirectory)\Store\version_badge.svg"

# Choco and NuGet 
# Get choco

$nugetVer = $vers.NuGetPackageVersion

nuget install chocolatey -SolutionDir $(Build.SourcesDirectory) -Verbosity quiet -ExcludeVersion 
$choco = "$(Build.SourcesDirectory)\packages\chocolatey\tools\chocolateyInstall\choco.exe"

mkdir $(Build.ArtifactStagingDirectory)\Nightly\Choco

& $choco pack .\PackageExplorer\NuGetPackageExplorer.nuspec --version $nugetVer --OutputDirectory $(Build.ArtifactStagingDirectory)\Nightly\Choco
msbuild /t:pack .\Types\Types.csproj /p:Configuration=$(BuildConfiguration) /p:PackageOutputPath=$(Build.ArtifactStagingDirectory)\Nightly\NuGet

You can find the full build definition here.

Release

The release definition for NPE is more complicated than the previous example because it contains more environments: CI, Prod - Store, Prod - Chocolatey, Prod - NuGet, and Prod - GitHub. Most of the time releases go out to all production environments, but if there’s a fix that’s applicable to a specific environment, it only goes out to that one. All fixes go to CI first.

Release Management Pipeline for NPE

The triggers for the release and environments are the same as in the previous example, so I won’t repeat the pictures. The steps vary per environment, doing what’s needed to take the artifacts and copy them to Azure Blob storage, upload to the Microsoft Store, push to NuGet or Chocolatey, or create a GitHub release with artifacts, as the case may be.

Conclusion

For me, Release Management is a huge differentiator and fits into my way of thinking very well. I like the separation of responsibilities between build and release that it provides. Now that public projects are available in VSTS, contributors to the projects can get build feedback and the community can check in on the deployment status.

I’d love to hear feedback or suggestions; reaching me on Twitter is usually the fastest way.

Microsoft Regional Director

April 10, 2018 Coding 2 comments

I am thrilled to announce that I received, and accepted, an invitation to join the Microsoft Regional Director program. I’m humbled and honored to be among the ranks of people who I’ve looked up to for most of my professional career. A very huge Thank You to those who nominated and supported me for this program.

If you’re not familiar with what a Regional Director is, the website explains it pretty well:

The Regional Director Program provides Microsoft leaders with the customer insights and real-world voices it needs to continue empowering developers and IT professionals with the world’s most innovative and impactful tools, services, and solutions.

Established in 1993, the program consists of 150 of the world’s top technology visionaries chosen specifically for their proven cross-platform expertise, community leadership, and commitment to business results. You will typically find Regional Directors keynoting at top industry events, leading community groups and local initiatives, running technology-focused companies, or consulting on and implementing the latest breakthrough within a multinational corporation.

It is coming up on four years since I was first awarded Windows Developer MVP in July 2014, and two years since Microsoft awarded me a second category of Visual Studio & Development Technologies. The journey has been incredible, getting to meet so many amazing people along the way.

I am excited to continue the journey as both an MVP and RD!

Registration-free COM with Azure App Services

February 25, 2018 Coding 1 comment

TL;DR: If you ever got this error on Azure App Services, read on: The application has failed to start because its side-by-side configuration is incorrect. Please see the application event log or use the command-line sxstrace.exe tool for more detail.

Background

This is admittedly an esoteric topic, but useful to know in case you happen to hit it. Azure App Services are wonderful; they make it really easy to host a website or API in Azure, with the platform handling the “hard parts” of HTTPS certificate management, scaling, load balancing, etc. A few months ago, while working on my code signing service, I hit a strange issue when running on Azure that didn’t reproduce locally. MakeAppx.exe, a tool that expands and creates AppX packages, was failing. The code signing service bundles a few utility executables that are run for certain operations. In this case, I needed to extract Appx/AppxBundle files, modify the contents, and repack them.

When I started investigating, I went to the Kudu console and tried executing makeappx.exe; I got the following error: The application has failed to start because its side-by-side configuration is incorrect. Please see the application event log or use the command-line sxstrace.exe tool for more detail. No combination of parameters made a difference; it wasn’t even getting that far.

The cause

After looking at potential differences between that application and every other application that worked, I noticed that makeappx.exe uses an embedded manifest that declares registration-free COM dependencies. This is because makeappx.exe includes local copies of some files that may be newer than the ones included in the operating system. If I replaced the built-in manifest with a default one, I could launch the application, so that was definitely the issue. The caveat was two-fold: first, I was modifying a Windows SDK binary, which I didn’t want to do; second, I’d be tied to a particular version of that tool matching whatever App Services was running on.

Investigating

Given that I was now in uncharted territory, I reached out to the App Service team and they graciously helped. In particular, I need to thank David Ebbo and Petr Podhorsky, without whom I’d still be stuck.

What the investigation found is that the problem comes down to the way App Service maps in the “D” drive (the default one your site runs from). As Petr explains:

  • Registration-free COM side-by-side is handled by a different process on the box, CSRSS, which runs under a different account and because the path is transferred between processes unchanged, it tries to look at d:\home path, which is a special path and makes sense only for your site process, but nothing else on the box (if there are multiple sites running on the box, they all have their own d:\home and don’t see each other’s content). So it does not find your manifest.
  • Even if the previous problem was solved, the CSRSS (client server runtime subsystem) process does not have access to the site content anyway.

The solution

Fortunately, there is a workaround: use the HOME_EXPANDED environment variable. That points to the real path of the “C drive” equivalent location, something like C:\DWASFiles\Sites\#1mysite1\home. Don’t worry about the exact location; just use the variable.

Invoking makeappx.exe from the HOME_EXPANDED location works because CSRSS sees the files that the manifest points to and it loads properly. You can see how I applied the workaround here.
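
In script form, the workaround amounts to something like this (the relative tool path and file names are hypothetical; use wherever your app bundles the exe):

# Resolve the tool via HOME_EXPANDED so CSRSS can find the manifest's dependencies
$toolPath = Join-Path $env:HOME_EXPANDED "site\wwwroot\tools\makeappx.exe"

$packagePath = "D:\local\Temp\input.appx"   # example input
$outputDir   = "D:\local\Temp\unpacked"     # example output

# Launch the tool from the expanded path instead of d:\home
& $toolPath unpack /p $packagePath /d $outputDir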

Conclusion

While it’s likely rare to have a program using registration-free COM invoked on App Services, it’s not entirely impossible. You might have a web job or function that needs to invoke some program, or you might need to invoke programs for some website functionality like I do. If you do hit this, I hope this workaround unblocks you too.

Continuous Deployment of Cloud Services with VSTS

October 18, 2017 Coding 4 comments

In my last blog post, I showed how you can use ASP.NET Core with an Azure Cloud Service Web Role. The next step is to enable CI/CD for it, since you really shouldn’t be using “Publish” within Visual Studio for deployment.

As part of this, I wanted to configure the Cloud Service settings per environment in VSTS and not have any configuration checked-in to source control. Cloud Services’ configuration mechanism makes this a bit challenging due to the way it stores configuration, but with a few extra steps, it’s possible to make it work.

What you’ll need

To follow along, you’ll need the following:

  • Cloud Service: the code can live in GitHub, VSTS, or many other locations. VSTS can build from any of them.
  • Azure Key Vault: we’ll use Azure Key Vault to store the secrets. Creating a Key Vault is easy, and the standard tier will work.
  • VSTS: this guide uses Visual Studio Team Services, so you’ll need an account there. Those are free for up to five users and any number of users with MSDN licenses.

What we’re going to do

The gist here is that we’ll create a build definition that publishes the output of the Cloud Service project as an artifact. Then, we’ll create a release management process that takes the output of the build and deploys it to the cloud service in Azure. To handle the configuration, we’ll tokenize the checked-in configuration, then use a release management task to read configuration values stored in Key Vault and replace the matching tokenized values before the Azure deployment.

Moving the configuration into Key Vault

Create a new Key Vault to hold your configuration. You should have one Key Vault per environment that you intend to release to, since the secret names will directly translate to variables within VSTS. For each setting you need, create a secret with a name like CustomSetting-Setting1 or CustomSetting-Setting2 and set their values. Next, in your ServiceConfiguration.Cloud.cscfg, set the values to be __CustomSetting-Setting1__ and __CustomSetting-Setting2__. The __ is the token start/end, and the value identifies which VSTS variable should be used to replace it.
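
For illustration, the tokenized settings section of the .cscfg ends up looking something like this (the role name is hypothetical, and the setting names follow the double-underscore convention from the companion ASP.NET Core post; adjust to your own names):

<Role name="TheWebRole">
  <ConfigurationSettings>
    <Setting name="CustomSetting__Setting1" value="__CustomSetting-Setting1__" />
    <Setting name="CustomSetting__Setting2" value="__CustomSetting-Setting2__" />
  </ConfigurationSettings>
</Role>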

One tip: if you have Password Encryption certificates or SSL endpoints configured, the .cscfg will have the certificates’ SHA-1 thumbprints encoded in it. If you want to configure these per environment, replace them with token values. The configuration checker will enforce that each looks like a thumbprint, so use values like:

  • ABCDEF01234567ABCDEF01234567ABCDEF012345
  • BACDEF01234567ABCDEF01234567ABCDEF012345

Those sentinel values will be replaced with tokens during the build process and those tokens can be replaced with variable values.

We’ll use these in the build task later on.

The build definition

  1. Start with a new Empty build definition.
  2. On the process tab, choose the Hosted VS2017 Agent queue and give your build definition a name.
  3. Select Get Sources and point to your repository. This could be VSTS, GitHub or virtually any other location.
  4. Add the tasks we’ll need: Visual Studio Build (three times), Publish Build Artifacts (once). It should look something like this:
  5. For the first Visual Studio Build task, set the following values:
    • Display name: Restore solution
    • Solution: AspNetCoreCloudService.sln
    • Visual Studio Version: Visual Studio 2017
    • MSBuild Arguments: /t:restore
    • Platform: $(BuildPlatform)
    • Configuration: $(BuildConfiguration)
  6. For the second Visual Studio Build task, use the following values:

    • Display name: Build solution
    • Solution: AspNetCoreCloudService.sln
    • Visual Studio Version: Visual Studio 2017
    • MSBuild Arguments: (leave empty)
    • Platform: $(BuildPlatform)
    • Configuration: $(BuildConfiguration)
  7. And the third Visual Studio Build task should be set as:

    • Display name: Publish Cloud Service
    • Solution: TheCloudService\TheCloudService.ccproj
    • Visual Studio Version: Visual Studio 2017
    • MSBuild Arguments: /t:Publish /p:OutputPath=$(Build.ArtifactStagingDirectory)\
    • Platform: $(BuildPlatform)
    • Configuration: $(BuildConfiguration)
  8. If you are using sentinel certificate values, add a PowerShell Task. Configure the PowerShell task by selecting “Inline Script”, expanding Advanced, and setting the working folder to the publish directory (like $(Build.ArtifactStagingDirectory)\app.publish), then use the following script:

    $file = "ServiceConfiguration.Cloud.cscfg"
    # Read file
    $content = Get-Content -Path $file
    # substitute values
    $content = $content.Replace("ABCDEF01234567ABCDEF01234567ABCDEF012345", "__SslCertificateSha1__")
    $content = $content.Replace("BACDEF01234567ABCDEF01234567ABCDEF012345", "__PasswordEncryption__")
    # Save
    [System.IO.File]::WriteAllText($file, $content)
    

    This replaces the fake SHA-1 thumbprints with tokens that release management will use. Be sure to define variables in release management that match the names you use.

  9. Finally, set the Publish Artifact step to:

    • Display name: Publish Artifact: Cloud Service
    • Path to Publish: $(Build.ArtifactStagingDirectory)\app.publish
    • Artifact Name: TheCloudService
    • Artifact Type: Server
  10. Go to the Variables tab and add two variables:

    • BuildConfiguration: Release
    • BuildPlatform: Any CPU
  11. Hit Save & Queue to save the definition and start a new build. It should complete successfully. If you go to the build artifacts folder, you should see TheCloudService with the .cspkg file in it.

Deploying the build to Azure

This release process depends on one external extension that handles the tokenization, the Release Management Utility Tasks. Install it from the marketplace into your VSTS account before starting this section.

  1. In VSTS, switch to the Releases tab and create a new release definition using the “Azure Cloud Service Deployment” template.
  2. Give the environment a name, like “Cloud Service – Prod”.
  3. Click the “Add artifact” box and select your build definition. Should look something like this:

    If you want continuous deployment, click the “lightning bolt” icon and enable the CD trigger.
  4. Click on the Tasks tab and specify an Azure subscription, storage account, service name and location. If you need to link your existing Azure subscription, click the “Manage” link. If you need a new storage account to hold the deployment artifacts, you can create that in the portal as well, just make sure to create a “Classic” storage account.
  5. Go to the Variables tab and select “Variable groups”, then “Manage variable groups.” Add a new variable group, give it a name like “AspNetCloudService Production Configuration”, select your subscription (click Manage to link one), and select the Key Vault we created earlier to hold the config. Press the Authorize button if prompted.

    Finally, click Add to select which secrets from Key Vault should be added to this variable group.

    It’s important to note that it does not copy the values at this point. The secrets’ values are always read on use, so they’re always current. Save the variable group and return to the Release Management definition. At this point, you can select “Link variable group” and link the one we just created.
  6. Add a Tokenize with XPath/Regular Expressions task before the Azure Deployment task.
  7. In the Tokenizer task, browse to the ServiceConfiguration.Cloud.cscfg file, something like $(System.DefaultWorkingDirectory)/AspNetCoreCloudService-CI/TheCloudService/ServiceConfiguration.Cloud.cscfg depending on what you call your artifacts.
  8. Ensure that the Azure Deployment task is last, and you should be all set.
  9. Create a new release and it should deploy successfully. If you view your cloud service configuration on Azure Portal, you should see the real values, not the __Tokenized__ values.

Summary

That’s it: you now have an ASP.NET Core Cloud Service deployed to Azure with CI/CD through VSTS. If you want to add additional environments, simply add an additional key vault and linked variable group for each environment, clone the existing environment configuration in the Release Management editor, and set the appropriate environmental values. Variable groups are defined at the release definition level, so for multiple environments you can use a suffix in your variable names and then update the PowerShell script in the build definition to append that per environment (__MyVariable-Prod__), etc.

Using ASP.NET Core with Azure Cloud Services

October 16, 2017 Coding 5 comments

Overview

Cloud Services may be the old-timer of Azure’s offerings, but there are still some cases where it is useful. For example, today, it is the only available PaaS way to run a Windows Server 2016 workload in Azure. Sure, you can run a Windows Container with Azure Container Services, but that’s not really PaaS to me. You still have to be fully aware of Kubernetes, DC/OS, or Swarm, and, as with any container, you are responsible for patching the underlying OS image with security updates.

In developing my Code Signing Service, I stumbled upon a hard dependency on Server 2016. The API I needed to Authenticode sign a file using Azure Key Vault’s signing methods only exists in that version of Windows. That meant that using Azure App Services was out, as it uses Server 2012 (based on the version numbers from its command line). That left Cloud Service Web Roles as the sole remaining option if I wanted PaaS. I could have also used a B-Series VM, which is perfect for this type of workload, but I really don’t want to maintain a VM.

If you have tried to use ASP.NET Core with a Cloud Service Web Role, you’ll probably have come away disappointed, as Visual Studio doesn’t let you do this… until now. Never one to accept no for an answer, I found a way to make this work, and with a few workarounds, you can too.

The solution presented here handles deployment of an MVC & API application, along with config settings and deployment of the ASP.NET Core Windows Hosting Module. VS Cloud Service tooling still works for making changes to config and publishing to cloud services (though please use CI/CD in VSTS!).

Many thanks to Scott Hunter‘s team, Jaques Eloff and Catherine Wang in particular, for figuring out a workaround for some gotchas when installing the Windows Hosting Module.

Pieces to the puzzle

You can see the sample solution here, and it may be helpful to clone and follow along in VS.

There are a few pieces to making this work:

  1. TheWebsite: the ASP.NET Core MVC site. Nothing significantly special here, just an ordinary site.
  2. TheCloudService: the Cloud Service project. Contains the configuration files and service definition.
  3. TheWebRole: an ASP.NET 4.6 project that contains the Web Role startup scripts and “references” the TheWebsite site. This is where the tricks are.

At a high level, the Cloud Service “sees” TheWebRole as the configured website. The cloud service doesn’t know anything about ASP.NET Core. The trick is to get the ASP.NET Core site published and running “in” an ASP.NET site.

Doing this yourself

The Projects

In a new solution, create a new ASP.NET Core 2 project. It doesn’t really matter which template you use. For the descriptions here, I’ll call it TheWebsite. Build and run the site; it should debug and run normally in IIS Express.

Next, create a new Cloud Service (File -> Add -> New Project -> Cloud -> Azure Cloud Service). I’ll call the cloud service TheCloudService, and on the next dialog, add a single ASP.NET Web Role. I called mine TheWebRole.

Finally, on the ASP.NET Template selection, choose “Empty” and continue.

Right now, we have an ASP.NET Core website and an Azure Cloud Service with a single ASP.NET 4.6 Web Role. Next up is to clear out almost everything from TheWebRole, since it won’t actually contain any ASP.NET code. Delete the packages.config and Web.config files.

Save the project, then select “Unload” from the project’s context menu. Right-click again and select “Edit TheWebRole.csproj”. We need to delete the packages brought in by NuGet along with the imported props and targets. There are three areas to delete, as noted in the screenshots: the props at the top, all Reference elements with a HintPath pointing to ..\packages\, and the Target at the bottom.



At this point, your project file should look similar to this here. You can also view the complete diff.

Magic

Now comes the special sauce — we need a way to have TheWebRole build TheWebsite and include TheWebsite‘s publish output as Content. Doing this ensures that TheCloudService Package contains the correct folder layout. Add the following snippet to the bottom of TheWebRole‘s project file to call Publish on our website before the main build step.

<Target Name="BeforeBuild">
  <MSBuild Projects="..\TheWebsite\TheWebsite.csproj" Targets="Publish" Properties="Configuration=$(Configuration)" />
</Target>

Then, add the following ItemGroup to include TheWebsite‘s publish output as Content in the TheWebRole project:

<ItemGroup>
  <Content Include="..\TheWebsite\bin\$(Configuration)\netcoreapp2.0\publish\**\*.*" Link="%(RecursiveDir)%(Filename)%(Extension)" />
</ItemGroup>

Save the csproj file, then right-click TheWebRole and click Reload. You can test that the cloud service package is created correctly by right-clicking TheCloudService and selecting Package. After choosing a build configuration and hitting “Package,” the project should build and the output directory will pop up.

The .cspkg is really a zip file, so extract it and you’ll see the guts of cloud service packages. Look for the .cssx file and extract that (again, just a zip file).

Inside there, open the approot folder and that is the root of your website. If the previous steps were done correctly, you should see something like the following

You should see TheWebsite.dll, TheWebsite.PrecompiledViews.dll, wwwroot, and the rest of your files from TheWebsite.

Congratulations, you’ve now created a cloud service that packages up and deploys an ASP.NET Core website! This alone won’t let the site run though since the Cloud Service images don’t include the Windows Hosting Module.

Installing .NET Core 2 onto the Web Role

Installing additional components onto a Web Role typically involves a startup script, and .NET Core 2 is no different. There is one complication though: the installer downloads files into the TEMP folder, and Cloud Services has a 100MB hard limit on that folder. We need to specify an alternate folder to use as TEMP with a higher quota (this is what Jaques and Catherine figured out).

In TheCloudService, expand Roles, right-click TheWebRole, and hit Properties. Go to Local Storage and add a new location called CustomTempPath with a 500MB limit (or whatever else your app might need).

Next, we need the startup script. Go to TheWebRole, add a new folder called Startup, and add the following files to it. Ensure that the Build Action is set to Content and that Copy to Output Directory is set to Copy if newer. Finally, we need to configure the cloud service to invoke our startup task. Open the ServiceDefinition.csdef file and add the following XML in the WebRole node to define the startup task:

<Startup>
  <Task commandLine="Startup\startup.cmd" executionContext="elevated" taskType="simple">
    <Environment>
      <Variable name="IsEmulated">
        <RoleInstanceValue xpath="/RoleEnvironment/Deployment/@emulated" />
      </Variable>
    </Environment>
  </Task>
</Startup>
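
The startup.cmd itself isn’t reproduced here, but its shape is roughly the following. This sketch assumes you also map the CustomTempPath local resource to an environment variable via a RoleInstanceValue xpath (like IsEmulated above), and the installer script name is a placeholder; the real files are in the sample repo:

REM Skip the install when running in the local emulator
IF "%IsEmulated%"=="true" GOTO :EOF

REM Point TEMP/TMP at the larger local storage resource so the
REM .NET Core installer doesn't hit the 100MB TEMP quota
SET TEMP=%CustomTempPath%
SET TMP=%CustomTempPath%

REM Install the ASP.NET Core Windows Hosting Module, logging to the temp folder
PowerShell -ExecutionPolicy Unrestricted -File "%~dp0install-dotnet.ps1" >> "%TEMP%\startup.log" 2>&1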

Now we finally have a cloud service that can be deployed, install .NET Core, and run the website. The first time you publish, it will take a few minutes for the role instance to become available since it has to install the hosting module and restart IIS.

Note: I leave creating a cloud service instance in the Azure Portal as an exercise for the reader.

Configuration

There are many ways of getting configuration into an ASP.NET Core application. If you know you’ll only be running in Cloud Services, you may consider taking a direct dependency on the Cloud Services libraries and using the RoleEnvironment types to populate your configuration. Alternatively, you could write a configuration provider that funnels the RoleEnvironment configuration into the ASP.NET Core configuration system.

In my original case, I didn’t want my ASP.NET Core website to have any awareness of Cloud Services, so I came up with another way—in the startup script, I copy the values from the RoleEnvironment into environment variables that the default configuration settings pick up. The key to making this transparent is knowing that a double underscore, __, translates into a : separator when read from an environment variable. This means you can define a setting like CustomSetting__Setting1, and then you can access it with Configuration["CustomSetting:Setting1"], or similar mechanisms.

To bridge this gap, we can add this to the startup script (complete script):

$keys = @(
  "CustomSetting__Setting1",
  "CustomSetting__Setting2"
)

foreach($key in $keys){
  [Environment]::SetEnvironmentVariable($key, [Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment]::GetConfigurationSettingValue($key), "Machine")
}

This copies the settings from the Cloud Service Role Environment into environment variables on the host, and from there, the default ASP.NET Core configuration adds them into configuration.

Considerations

  • Session affinity: if you need session affinity for session state, you’ll need to configure that.
  • Data Protection API: unlike Azure App Services, Cloud Services doesn’t have any default synchronization for the keys. You’ll need a solution for this. If anyone comes up with a reusable solution, I’ll happily mention it here. More info on configuring DPAPI is here.
  • Local debugging: due to the way local debugging of cloud services works (it directly uses TheWebRole as a startup project in IIS Express), directly debugging the cloud service does not work with the current patterns. Instead, you can set TheWebsite as a startup project and debug that normally. The underlying issue is that TheWebRole includes TheWebsite as Content and does not copy the published files to TheWebRole‘s directory. It may be possible to achieve this, though you’d likely want additional .gitignore rules to prevent those files from being committed. In my case, I did not want my service to have any direct dependency on Cloud Services, so this wasn’t an issue—I simply needed a Server 2016 web host.

CI / CD with VSTS

It is possible to automate build/deploy of these cloud service web role projects using VSTS. My next blog post will show how to set that up.

Update October 18: The post is live

Use all TFM’s with SDK-style projects in Visual Studio for Mac

August 29, 2017 Coding 4 comments

TL;DR

You can now use SDK-style projects, with all supported TFM’s, in Visual Studio for Mac. See getting started for details.

Issue

While Visual Studio for Mac supports the SDK-style projects, there have been a couple of issues blocking use of TFM’s other than net, netstandard, and netcoreapp.

  1. Those TFM’s are hard-coded and an SDK-style project containing any other target frameworks is blocked.
  2. Xamarin on the Mac has a multi-valued MSBuildExtensionsPath property. That means it can search for targets in different locations. Unfortunately, that logic only applies to the <Import /> element, so if you set properties, as is required to use LanguageTargets, it won’t work. Fortunately, after some brainstorming with Ankit Jain and Mikayla Hutchinson, we found a solution.

Getting Started

You’ll need a few things:

  1. Latest stable channel of Visual Studio for Mac
  2. .NET Core 2 SDK (even if you’re not targeting .NET Standard 2 or .NET Core, the SDK style projects use these targets). Download here.
  3. Matt Ward‘s Extension to VSfM that removes TFM checks on SDK-style projects. Binary | Source. Install by going to Visual Studio -> Extensions... -> Install from file...

Then, create a new SDK-style project and use the latest version of the MSBuild.Sdk.Extras package, at least version 1.1.0-beta.69:

<PackageReference Include="MSBuild.Sdk.Extras" Version="1.1.0-beta.69" PrivateAssets="all" />

At the end of the project file, just before the closing tag, you’ll also need the following, as per the MSBuild SDK Extras readme:

<Import Project="$(MSBuildSDKExtrasTargets)" Condition="Exists('$(MSBuildSDKExtrasTargets)')" />

Here’s a complete example of using the SDK-style projects with an iOS class library:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>xamarinios10</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="MSBuild.Sdk.Extras" Version="1.1.0-beta.69" PrivateAssets="all" />
  </ItemGroup>

  <Import Project="$(MSBuildSDKExtrasTargets)" Condition="Exists('$(MSBuildSDKExtrasTargets)')" />
</Project>

Building these projects

These projects will build in the IDE (VSfM, VS, etc.) or from the command line. If you use the command line, you must use msbuild, not dotnet build. Keep in mind that with msbuild, you must explicitly call restore first, so your build steps will look like this:

msbuild /t:restore
msbuild /p:Configuration=Release

Notes

For the beta, since it’s a SemVer2 package, you must be using the NuGet v3 feed. If your VSfM prefs have https://www.nuget.org/api/v2/, you need to update that to be https://api.nuget.org/v3/index.json.

Support

If you run into issues, please file a bug on the MSBuild SDK Extras project site: https://github.com/onovotny/MSBuildSdkExtras/issues and reach me @onovotny.

Announcing Reactive Extensions for .NET 4.0 Preview 1

May 27, 2017 Coding 1 comment

I am happy to announce that the first preview of Rx.NET 4.0 is now available. This release addresses a number of issues and contains several enhancements.

The biggest enhancement is consolidating the existing packages into one main NuGet package, System.Reactive. This will prevent most of the pain around binding redirects that was caused by #205. If you are using Rx 4.0 and need to use libraries built against Rx 3.x, then you need to also install the compatibility package, System.Reactive.Compatibility. That package contains facades with type forwarders to the new assembly so types are unified correctly. You only need this compatibility package if you are consuming a library built against 3.x. You do not need it otherwise.

If you’re interested in the background behind the version numbers, I suggest reading the thread as it contains the gory details. While the idea was technically sound, it did mean that binding redirects were required for all .NET Framework uses. We heard the feedback loud and clear that this was really painful and took steps to fix it in 4.0.

The fix was to consolidate the previous set of packages into a single System.Reactive package. With the single package, binding redirects are no longer required and the platforms will get the correct Rx package version.

Please try it out and let us know if you encounter any issues at our repo. The full release notes are there too.

Using Xamarin Forms with .NET Standard – VS 2017 Edition

April 23, 2017 Coding 38 comments

I have previously blogged about using .NET Standard with Xamarin Forms. Since then, the tooling has changed significantly with Visual Studio 2017 and Visual Studio for Mac. This post will show you what you need to use Xamarin.Forms with a .NET Standard class library.

Why use a .NET Standard class library instead of a PCL? There are many good reasons, but the two biggest ones are:

  • Much bigger surface area. PCLs were the least-common-denominator intersection of supported platforms. The end result is that while the binary worked on many platforms, there was a much more limited set of APIs available. .NET Standard 1.4 is the version that supports UWP, Xamarin Android, Xamarin iOS, and Xamarin.Mac.
  • “SDK style” project file goodness. Legacy PCLs use the old csproj format, which has tons of gunk in it. While it is possible to use the new project style to generate legacy PCLs (if you use my MSBuild.Sdk.Extras package), it’s time to move past those. If you target .NET Standard 1.0-1.2, some PCL profiles can install your library. See the full table for the list.

Prerequisites

Using .NET Standard requires you to use PackageReference to eliminate the pain of “lots of packages” as well as properly handle transitive dependencies. While you may be able to use .NET Standard without PackageReference, I wouldn’t recommend it.

You’ll need to use one of the following tools:

Getting Started

As of now, the project templates for creating a new Xamarin Forms project start with an older-style packages.config template, so whether you create a new project or have an existing project, the steps will be pretty much the same.

Step 1: Convert your projects to use PackageReference. The NuGet blog has details on using PackageReference with all project types. Unfortunately there’s no current migration tool, so it’s probably easiest to uninstall your existing packages, make sure the packages.config file is gone, and then reinstall the packages after setting the VS options to PackageReference. You can also do it by hand (which is what I did for my projects).

Step 2: As part of this, you can remove dependencies from your “head” projects that are already referenced by the projects you reference. This should simplify things dramatically for most projects. In the future, when you want to update to the next Xamarin Forms version, you can update it in one place, not 3-4 places. It also means you only need the main Xamarin.Forms package, not each of the packages it pulls in.

For now, you’ll need to add the <RestoreProjectStyle>PackageReference</RestoreProjectStyle> property near the top of your iOS and Android csproj files. That tells NuGet restore to use the PackageReference mode even if you don’t have any direct packages (this is important for transitive restore). If you have any PackageReference elements in your iOS or Android csproj, then you don’t need this. For UWP, you already should have a PackageReference to the UWP meta-package (Microsoft.NETCore.UniversalWindowsPlatform version 5.3.2).
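
That is, something like this near the top of the iOS and Android csproj files:

<PropertyGroup>
  <RestoreProjectStyle>PackageReference</RestoreProjectStyle>
</PropertyGroup>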

If you hit any issues with binaries not showing up in your bin directories (for your Android and iOS “head” projects), make sure that you have set CopyNuGetImplementations to true in your csproj.

At this point, your project should be compiling and working, but not yet using netstandard1.x anywhere.

Step 3: Move your PCL library to .NET Standard. This is the hard part today, as there’s no tooling to automatically do this correctly. Be warned: DO NOT use the conversion option in the PCL properties page. It is broken and will create a project.json-based library targeting dotnet. I hope this option is removed in a future VS update! Instead, go to File -> New Project -> .NET Standard -> Class Library and create a new class library. If this is a new project, I’d simply delete the existing PCL and just use a new one. If it’s an existing project, you’ll want to migrate. The new format is far simpler, and moving the PCL by hand is usually pretty easy. What I’ve usually done is this:

  1. Close the solution in VS
  2. Take the existing csproj and make a copy of it somewhere else. I’ll keep this other copy open in Notepad.
  3. Copy/paste the contents of the new project you created and replace the contents of your existing project. Most of what you had in the old project isn’t really needed anymore. What you’ll likely need are settings like any signing or assembly names that don’t match the folder name/conventions. If you have ResX files with design-time generated code, you’ll need to add the following. Likewise, for Xamarin Forms pages, you’ll need this.
  4. Decide which .NET Standard version to target, probably 1.4, based on the table. Here’s a cheat sheet:
    • If you only want to support iOS and Android, you can use .NET Standard 1.6. In practicality though, most features are currently available at .NET Standard 1.3 and up.
    • If you want to support iOS, Android, and UWP, then .NET Standard 1.4 is the highest you can use.
    • If you want to support Windows Phone App 8.1 and Windows 8.1, then .NET Standard 1.2 is your target.
    • If you’re still supporting Windows 8, .NET Standard 1.1 is for you.
    • Finally, if you need to support Windows Phone 8 Silverlight, then .NET Standard 1.0 is your only option.

Once you determine the netstandard version you want, in your csproj, set the TargetFramework to it — netstandard1.4, etc.

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netstandard1.4</TargetFramework>
    <PackageTargetFallback>portable-net45+win8+wpa81+wp8</PackageTargetFallback>
    <DebugType>full</DebugType>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Xamarin.Forms" Version="2.3.4.231" />
  </ItemGroup>

  <ItemGroup>
    <!-- https://bugzilla.xamarin.com/show_bug.cgi?id=55591 -->
    <None Remove="**\*.xaml" />

    <Compile Update="**\*.xaml.cs" DependentUpon="%(Filename)" />
    <EmbeddedResource Include="**\*.xaml" SubType="Designer" Generator="MSBuild:UpdateDesignTimeXaml" />
  </ItemGroup>

</Project>

Note the addition of the PackageTargetFallback property. This is required to tell NuGet that specified TFM is compatible here because the Xamarin.Forms package has not yet been updated to use netstandard directly. Also note that DebugType set to full is required for the Xamarin tool-chain currently as they don’t yet support the new portable PDBs that are created by default.

At this point, when you reload the project, it should restore the packages and build correctly. You may need to do a full clean/rebuild.

Seeing it in action

I created a sample solution showing this all working over on GitHub. It’s a good idea to clone, build, and run it to ensure your environment and tooling is up to date. If you get stuck converting your own projects, I’d recommend referring back to that repo to find the differences.

Building on command line

You will need to use MSBuild.exe to build this, either on Windows with a VS 2017 command prompt or a Mac with Visual Studio for Mac. You cannot use dotnet build for these projects types. dotnet build only supports .NET Standard, .NET Core and .NET Framework project types. It is not able to build the Xamarin projects and the custom tasks in Xamarin Forms have not yet been updated to support .NET Core.

To build, you’ll need two steps:

  1. msbuild /t:restore MySolution.sln
  2. msbuild /t:build /p:Configuration=Release MySolution.sln

You can also restore/build the .csproj files individually if you’d prefer.

As always, feel free to tweet me @onovotny as well.

Multi-targeting the world: a single project to rule them all

January 4, 2017 Coding 9 comments

Starting with Visual Studio 2017, you can now use a single project to build platform-specific libraries for all project types. This blog will explore why you might want to do this, how to do it and workarounds for some point-in-time issues with the tooling.

Intro

Since the beginning of .NET Core, the project.json format has enabled multi-targeting, that is compiling to multiple target frameworks in parallel and creating an output for each. With ASP.NET Core, it’s common to target both net45 and netcoreapp1.0 so you can deploy the site to either the desktop framework, which runs on Windows, or to the CoreCLR, which runs cross-platform. Multi-targeting is nothing more than compiling the same code multiple times, once per target platform. Each target can specify its own dependencies and ifdef‘s, so you can easily tailor the code to the specific platform.

Another example may have a library target netstandard1.0, netstandard1.3, and net45 to enable different levels of functionality based on the available surface area.

While it was also possible to target UWP, Win8, or profile-based PCLs using project.json, doing so required hacks like private copies of all reference assemblies, WinMD files, and more. Beyond that, some things didn’t work correctly, as some platforms require additional targets to generate additional outputs, like .pri files on UWP for resource lookup. So while technically possible, full multi-targeting was brittle and required you to stay on a very narrow path, avoiding things like resources or GUI elements that require the full tool-chain to process.

Enter MSBuild

With the move to MSBuild as part of the .NET Core Tooling direction change, the picture gets much better, so much so that with VS 2017 RC2, you can correctly multi-target all platform types, including UWP, profile-based PCL’s, and Xamarin iOS/Android. Not only that, but by conditionally including/excluding directories based on globs, you can reduce the need for ifdef‘s in many cases.

As part of being open sourced and enabled to run cross-platform, the build targets and tasks required to actually do the build were combined into an SDK. This went along with a drastic simplification of the csproj file to a minimal footprint that will get even smaller, like this:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp1.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.NETCore.App" Version="1.0.1" />
  </ItemGroup>
</Project>

Microsoft’s blog details all of the improvements in this area. For lack of a better term, I’ll call projects based on these new tools “SDK style.” The easiest way to identify these “SDK style” projects is to look for the Sdk attribute in the top-level Project element.

Multi-targeting vs. .NET Standard Libraries vs. PCL’s

Before we go further, let’s answer this question that many people have asked — why would you want to multi-target vs just use a single portable library, whether that’s .NET Standard or an older profile-based PCL?

There are several answers to that question — first, if your code can all fit within a single .NET Standard-based library, then there’s no reason to multi-target. If you’re using a legacy profile-based PCL, at the very least consider moving up to the equivalent .NET Standard version. Don’t make more work for yourself. The decision to multi-target falls out of a need to use functionality that doesn’t exist within a .NET Standard version or if you need to target an earlier platform that doesn’t support the .NET Standard version you need. A common example is that many libraries still need to support .NET 4.5. Despite a significant amount of functionality available in .NET Standard 1.3, that .NET Standard version only supports .NET 4.6+. Chances are though that the code would work “just fine” on .NET 4.5, so it’s easy to multi-target to both net45 and netstandard1.3.

The other main reason why you’d need to multi-target is to use platform-specific code within your library. For example, on iOS you might want to use SecKeyChain for saved credentials, on Android use its Context to access shared services like preferences, and on Windows its Credential Manager. You might have a common method called GetCredential that other code uses to get the data. Today you might use dependency injection or reflection to access a “.Platform” library with a specific implementation that your common code uses. Instead, you can choose to multi-target and access the platform code directly.

How to multi-target

Let me start by saying that the methods here are based on the new “SDK-style” projects that VS 2017 provides. They orchestrate the build using the existing project types that Visual Studio installs. As such, the build itself won’t work on a box without the other tools installed (so you’re building on a Windows box, much like you probably are today). Some of this may work on a Mac with Visual Studio for Mac, but I have not tested that in any way. When you install Visual Studio 2017, make sure to install all of the tools for the project types you need (Xamarin, UWP, etc.) and also the .NET Core Tooling.

There’s no UI in VS for adding additional target frameworks, but I have some samples that show what to do.

First, create a new .NET Core Class Library project. If you don’t see the following option, make sure to install the .NET Core workload in the VS Installer.

[Screenshots: the “.NET Core Class Library” project template, and the .NET Core workload in the VS Installer]

Right-click the project and select “Edit project file…”. This is new in VS 2017 – the ability to edit the project file while it’s open and have changes instantly reflected.

In the editor, after noticing how much less boilerplate there is now, look for the TargetFramework property, which looks like this: <TargetFramework>netstandard1.3</TargetFramework>. Change that to <TargetFrameworks>netstandard1.3;net45</TargetFrameworks> to target both .NET Standard 1.3 and .NET 4.5. You can add however many targets you want to that semi-colon delimited list. It’s subtle, but note the difference between the TargetFramework and TargetFrameworks property names, with the plural. It’s easy to miss.
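After the edit, a minimal multi-targeting project file looks like this:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFrameworks>netstandard1.3;net45</TargetFrameworks>
  </PropertyGroup>
</Project>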

For some frameworks, like .NET 4.5, that’s all you need to do. However, targeting .NET Standard and .NET 4.x is far from “the world.” We can do better! You would think it should be as easy as adding additional TFM’s like uap10.0, xamarin.ios10 or MonoAndroid70 to the list, and hopefully by the time the tools RTM it will be, but for now we need to add extra properties to the project file to tell MSBuild what to do with those.

Fortunately, and here’s the real secret, the “SDK-style” build system has a LanguageTargets property that you can specify per TFM to import the targets for that project type instead of the vanilla Microsoft.CSharp.targets import. That means we can use the “Windows Xaml”, Android, iOS, or any other platform tool-chain we need.
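As a sketch, that means adding a conditioned property group per TFM. The targets paths below are the ones the regular project types import on my machine today; they may well move before the tools RTM:

<PropertyGroup Condition=" '$(TargetFramework)' == 'uap10.0' ">
  <LanguageTargets>$(MSBuildExtensionsPath)\Microsoft\WindowsXaml\v$(VisualStudioVersion)\Microsoft.Windows.UI.Xaml.CSharp.targets</LanguageTargets>
</PropertyGroup>
<PropertyGroup Condition=" '$(TargetFramework)' == 'Xamarin.iOS10' ">
  <LanguageTargets>$(MSBuildExtensionsPath)\Xamarin\iOS\Xamarin.iOS.CSharp.targets</LanguageTargets>
</PropertyGroup>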

Xamarin Example

In the example here, I have a class library that multi-targets to net45, uap10.0, netstandard1.3, Xamarin.iOS10 and MonoAndroid70. In this contrived library, I have a Greeter class that’s calling a Hello() method that needs platform specific code. I’m using a pattern where I have a directory for each TFM where code in there only gets included there, so no ifdef‘s are needed. For Android, Resources are supported if you need them. While the example doesn’t currently use them, you could use PList‘s, xib‘s or Story Boards on iOS, Page‘s on UWP, or any other “native” file type supported by the platform.
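The directory-per-TFM pattern boils down to removing the platform directories from the default globs and conditionally adding the right one back. The directory names here are just the convention I use:

<ItemGroup>
  <!-- Keep the platform files visible in Solution Explorer (see the gotchas below) -->
  <None Include="Platforms\**\*.cs" />
  <Compile Remove="Platforms\**\*.cs" />
</ItemGroup>

<ItemGroup Condition=" '$(TargetFramework)' == 'Xamarin.iOS10' ">
  <Compile Include="Platforms\ios\**\*.cs" />
</ItemGroup>

<ItemGroup Condition=" '$(TargetFramework)' == 'MonoAndroid70' ">
  <Compile Include="Platforms\android\**\*.cs" />
</ItemGroup>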

Win81/WP8/PCL/Wpa81/Xamarin/Net45 Example

As a more realistic example, one of my libraries, Zeroconf, an mDNS discovery library, targets “the world.” It currently has concrete implementations for wp8, Wpa81, Win8, portable-Wpa81+Win81, uap10.0, net45, and netstandard1.3 (which supports Xamarin and CoreCLR). In addition to the concrete implementations, it provides a netstandard1.0 façade to support being used from portable libraries. The different concrete implementations are required due to the differences between the various Windows networking stacks. For now, the uap10.0 version cannot use the netstandard1.3 implementation until NetworkInformation is fully supported by the platform, so it continues to use the WinRT variant. You can see the platform-specific code in the platforms directory and how it’s conditionally included by the csproj in the ItemGroups.

The property groups at the top contain the LanguageTargets and other properties needed. For portable-Wpa81+Win81, two extra items are required because that PCL profile also supports WinRT: the ItemGroup there has two TargetPlatform items to pull in the correct .winmd references.
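In sketch form, that section of the csproj looks something like this:

<ItemGroup Condition=" '$(TargetFramework)' == 'portable-Wpa81+Win81' ">
  <TargetPlatform Include="Windows, Version=8.1" />
  <TargetPlatform Include="WindowsPhoneApp, Version=8.1" />
</ItemGroup>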

Building

You can build the libraries either in VS 2017 or on the command line. If you use the command line, you’ll want to run the following from a VS 2017 Developer Command Prompt: msbuild /t:restore followed by msbuild /t:build. If you want to create a NuGet package, run msbuild /t:pack. It’s important to note that you must currently use msbuild, the desktop version on the VS 2017 path, to build these, and not dotnet build. The reason is that while dotnet build calls MSBuild, it currently uses a CoreCLR version even though the desktop version is present in your VS installation. The engineering team is aware of this, and in the future dotnet build will be smart enough to call the desktop version of MSBuild when present. The “regular” targets files we’re using to support the platform-specific features are designed for desktop MSBuild; they do not yet support CoreCLR tasks. Bottom line, as of the current release: if your targets use build tasks, then you need to provide both CoreCLR and desktop versions of the task library in order to support both “regular” MSBuild and dotnet build.
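In sequence, the full set of commands is:

msbuild /t:restore
msbuild /t:build /p:Configuration=Release
msbuild /t:pack /p:Configuration=Release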

Common gotcha’s

There are several bugs in the tool-chain currently that are in the process of being fixed:

  • Some project-to-project (p2p) references aren’t resolving correctly: they should resolve to the “best” matching TFM, but they currently resolve to the first TFM in the list.
  • Another bug is preventing a “legacy” csproj from doing a p2p reference with a “Portable Library can only reference other portable library” error.
  • Files that are conditionally included won’t show up in the Solution Explorer. As a workaround, include all files with None as the first item group (see example).
  • For iOS (and possibly Android), you need to set DebugType to full, as the Xamarin ConvertPdb2Mdb task doesn’t yet support the new portable PDB format generated by this tool-chain.
  • Win8, Win81, and uap10.0 aren’t correctly understood by the NuGet targets today. As a workaround, you need to include the NugetTargetMoniker property set to the full TFM as shown here. Similarly, for legacy PCL targets, it requires Version=v0.0 in the NugetTargetMoniker here. These should hopefully be fixed by GA.
  • Windows assemblies that use resources need a .pri file alongside them. They’re currently missing from the generated NuGet. Workaround is to use your own .NuSpec for now until the bug is fixed.

Into the weeds, how it all works

This is by no means an official explanation, it’s what I’ve found from exploring the SDK build targets. Some of the terminology and concepts may change over time.

The “SDK style” projects consist of a set of targets/tasks that are pre-installed with MSBuild (and the CLI tools). You can see them in the following directory: C:\Program Files (x86)\Microsoft Visual Studio\2017\<sku>\MSBuild\Sdks where <sku> is Community, Professional, or Enterprise, depending on what you installed. The two SDK’s you’re likely to use directly are Microsoft.NET.Sdk and Microsoft.NET.Sdk.Web.

The Sdk attribute causes an Sdk.props and Sdk.targets within the specified SDK’s \Sdk directory to be imported before and after the project file. The Microsoft.NET.Sdk SDK’s targets define an “outer” and an “inner” build. The “outer-loop” is what your project file directly defines, including several TFM’s in the TargetFrameworks property. If you only have a single TargetFramework property defined, then there’s only an “inner-loop”.
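Conceptually, the Sdk attribute is shorthand for a pair of explicit imports wrapped around your project content, something like this:

<Project>
  <Import Project="Sdk.props" Sdk="Microsoft.NET.Sdk" />

  <!-- your properties and items here -->

  <Import Project="Sdk.targets" Sdk="Microsoft.NET.Sdk" />
</Project>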

For an “outer-loop” build, the SDK targets import props/targets from a buildCrossTargeting directory (soon to be renamed to buildMultiTargeting). Those get auto-included before and after the main project file (props before, targets after). The “outer-loop” targets eventually loop through each of the TargetFrameworks, calling msbuild again in an “inner-loop” with TargetFramework set to a single TFM. This “inner-loop” build is what we currently have in today’s “normal” project types. The “inner-loop” build provides an extension point for supplying your language-specific targets (the Import that was at the bottom of your old csproj) in place of the “vanilla” one it includes by default. By providing a LanguageTargets property for the “inner-loop,” conditioned by TFM, we can use the “original” targets that invoke the full tool-chain for the target platform. See here, here, and here for UWP, iOS, and Android, respectively.

Within each conditionally defined property group, we can set properties that are specific to a particular “inner-loop.” These correspond to the properties in your existing platform-specific project file and are used by the platform-specific targets specified.
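For example, the uap10.0 “inner-loop” needs the same platform version properties a regular UWP class library sets; the version numbers here are just the SDKs I happen to have installed:

<PropertyGroup Condition=" '$(TargetFramework)' == 'uap10.0' ">
  <TargetPlatformIdentifier>UAP</TargetPlatformIdentifier>
  <TargetPlatformVersion>10.0.14393.0</TargetPlatformVersion>
  <TargetPlatformMinVersion>10.0.10240.0</TargetPlatformMinVersion>
</PropertyGroup>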

One thing you give up currently is any UI in VS for configuring these properties. Perhaps it will return sometime in the future. For now, one thing I’ve found helpful is to maintain a few “dummy” projects where I can edit settings in the UI, see the resulting values, and then copy them into my multi-targeting csproj.

Looking forward

As of today (January 4, 2017), the tooling is in a fairly rough state. The .NET Core tooling is rightfully in an “alpha” state. The MSBuild SDK is under active development and things will change before GA. There are a number of issues in the tooling that can make it hard to use today, but I expect those to be fixed soon. Most of the bugs I’ve found are slated to be fixed in the RC3 time-frame, and I’d expect things to be better with that release.

As to whether or not to take the plunge today: I’d suggest that if you have a tolerance for figuring this out and reporting the issues you’ll encounter, then go for it. If you have a complex project today that already multi-targets a different way (most likely by using multiple “head” projects and shared-code project types), I would recommend trying this out in a branch to see how far you get. I’ll be happy to help, just give me a shout. The more the community bangs on this stuff up front, the more issues can be addressed prior to GA.

Acknowledgments

Many thanks to Brad Wilson, Joe Morris, and Daniel Plaisted for reviewing this post and providing feedback.

Authenticode Signing Service and Client

September 12, 2016 Coding 1 comment

Authenticode Signing Service and Client

Last night I published a new project on GitHub to make it easier to integrate Authenticode signing into a CI process by providing a secured API for submitting artifacts to be signed by a code signing cert held on the server. It uses Azure AD with two application entries for security:

  1. One registration for the service itself
  2. One registration to represent each code signing client you want to allow

Azure AD was chosen as it makes it easy to restrict access to a single application/user in a secure way. Azure App Services also provide a secure location to store certificates, so the combination works well.

The service currently supports either individual files, or a zip archive that contains supported files to sign (works well for NuGet packages). The service code is easy to extend if additional filters or functionality is required.

Supported File Types

  • .msi, .msp, .msm, .cab, .dll, .exe, .sys, .vxd, and any PE file (via SignTool)
  • .ps1 and .psm1 via Set-AuthenticodeSignature

Deployment

You will need an Azure AD tenant. These are free if you don’t already have one. In the “old” Azure Portal, you’ll need to create two application entries: one for the server and one for your client.

Azure AD Configuration

Server

Create a new application entry for a web/api application. Use whatever you want for the sign-on URI and App ID Uri (but remember what you use for the App ID Uri as you’ll need it later). On the application properties, edit the manifest to add an application role.

In the appRoles element, add something like the following:

{
  "allowedMemberTypes": [
    "Application"
  ],
  "displayName": "Code Sign App",
  "id": "<insert guid here>",
  "isEnabled": true,
  "description": "Application that can sign code",
  "value": "application_access"
}

After updating the manifest, you’ll likely want to edit the application configuration to enable “user assignment.” This means that only assigned users and applications can get an access token to/for this service. Otherwise, anyone who can authenticate in your directory can call the service.

Client

Create a new application entry to represent your client application. The client will use the “client credentials” flow to log in to Azure AD and access the service as itself. For the application type, also choose “web/api” and use anything you want for the App ID URI and sign-on URL.

Under application access, click “Add application” and browse for your service (you might need to hit the circled check to show all). Choose your service app and select the application permission.

Finally, create a new client secret and save the value for later (along with the client id of your app).

Server Configuration

Create a new App Service on Azure (I used a B1 for this as it’s not high-load). Build/deploy the service however you see fit. I used VSTS connected to this GitHub repo along with a Release Management build to auto-deploy to my site.

In the Azure App Service, in the certificates area, upload your code signing certificate and take note of the thumbprint id. In the Azure App Service, go to the settings section and add the following setting entries:

  • CertificateInfo:Thumbprint: thumbprint of your cert (the cert to sign with)
  • CertificateInfo:TimeStampUrl: url of the timestamp server
  • WEBSITE_LOAD_CERTIFICATES: thumbprint of your cert. This exposes the cert’s private key to your app in the user store
  • Authentication:AzureAd:Audience: App ID URI of your service from its application entry
  • Authentication:AzureAd:ClientId: client id of your service app from its application entry
  • Authentication:AzureAd:TenantId: Azure AD tenant ID, either the guid or a name like mydirectory.onmicrosoft.com

Enable “always on” if you’d like, disable PHP, then save the changes. Your service should now be configured.

Client Configuration

The client is distributed via NuGet and uses both a json config file and command line parameters. Common settings, like the client id and service url are stored in a config file, while per-file parameters and the client secret are passed in on the command line.

You’ll need to create an appsettings.json similar to the following:

{
  "SignClient": {
    "AzureAd": {
      "AADInstance": "https://login.microsoftonline.com/",
      "ClientId": "<client id of your client app entry>",
      "TenantId": "<guid or domain name>"
    },
    "Service": {
      "Url": "https://<your-service>.azurewebsites.net/",
      "ResourceId": "<app id uri of your service>"
    }
  }
}
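To sanity-check the configuration, you can invoke the client directly. This is an illustrative single-file call; the paths, names, and URL are placeholders:

dotnet .\packages\SignClient\tools\SignClient.dll file -c appsettings.json -i MyApp.exe -s %SignClientSecret% -n "MyApp" -d "MyApp" -u "https://example.com/myapp"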

Then, somewhere in your build, you’ll need to call the client tool. I use AppVeyor and have the following in my yml:

environment:
  SignClientSecret:
    secure: <the encrypted client secret using the appveyor secret encryption tool>

install: 
  - cmd: appveyor DownloadFile https://dist.nuget.org/win-x86-commandline/v3.5.0-rc1/NuGet.exe
  - cmd: nuget install SignClient -Version 0.5.0-beta3 -SolutionDir %APPVEYOR_BUILD_FOLDER% -Verbosity quiet -ExcludeVersion -pre

build: 
 ...

after_build:
  - cmd: nuget pack nuget\Zeroconf.nuspec -version "%GitVersion_NuGetVersion%-bld%GitVersion_BuildMetaDataPadded%" -prop "target=%CONFIGURATION%" -NoPackageAnalysis
  - ps: '.\SignClient\SignPackage.ps1'
  - cmd: appveyor PushArtifact "Zeroconf.%GitVersion_NuGetVersion%-bld%GitVersion_BuildMetaDataPadded%.nupkg"  

SignPackage.ps1 looks like this:

$currentDirectory = split-path $MyInvocation.MyCommand.Definition

# See if we have the ClientSecret available
if([string]::IsNullOrEmpty($env:SignClientSecret)){
    Write-Host "Client Secret not found, not signing packages"
    return;
}

# Setup Variables we need to pass into the sign client tool

$appSettings = "$currentDirectory\appsettings.json"

$appPath = "$currentDirectory\..\packages\SignClient\tools\SignClient.dll"

$nupkgs = Get-ChildItem $currentDirectory\..\*.nupkg | Select-Object -ExpandProperty FullName

foreach ($nupkg in $nupkgs){
    Write-Host "Submitting $nupkg for signing"

    dotnet $appPath 'zip' -c $appSettings -i $nupkg -s $env:SignClientSecret -n 'Zeroconf' -d 'Zeroconf' -u 'https://github.com/onovotny/zeroconf' 

    Write-Host "Finished signing $nupkg"
}

Write-Host "Sign-package complete"

The parameters to the signing client are as follows. There are two modes, file for a single file and zip for a zip-type archive:

usage: SignClient <command> [<args>]

    file    Single file
    zip     Zip-type file (NuGet, etc)

File mode:

usage: SignClient file [-c <arg>] [-i <arg>] [-o <arg>] [-h <arg>]
                  [-s <arg>] [-n <arg>] [-d <arg>] [-u <arg>]

    -c, --config <arg>            Full path to config json file
    -i, --input <arg>             Full path to input file
    -o, --output <arg>            Full path to output file. May be same
                                  as input to overwrite. Defaults to
                                  input file if omitted
    -h, --hashmode <arg>          Hash mode: either dual or Sha256.
                                  Default is dual, to sign with both
                                  Sha-1 and Sha-256 for files that
                                  support it. For files that don't
                                  support dual, Sha-256 is used
    -s, --secret <arg>            Client Secret
    -n, --name <arg>              Name of project for tracking
    -d, --description <arg>       Description
    -u, --descriptionUrl <arg>    Description Url

Zip-type archive mode, including NuGet:

usage: SignClient zip [-c <arg>] [-i <arg>] [-o <arg>] [-h <arg>]
                  [-f <arg>] [-s <arg>] [-n <arg>] [-d <arg>] [-u <arg>]

    -c, --config <arg>            Full path to config json file
    -i, --input <arg>             Full path to input file
    -o, --output <arg>            Full path to output file. May be same
                                  as input to overwrite
    -h, --hashmode <arg>          Hash mode: either dual or Sha256.
                                  Default is dual, to sign with both
                                  Sha-1 and Sha-256 for files that
                                  support it. For files that don't
                                  support dual, Sha-256 is used
    -f, --filter <arg>            Full path to file containing paths of
                                  files to sign within an archive
    -s, --secret <arg>            Client Secret
    -n, --name <arg>              Name of project for tracking
    -d, --description <arg>       Description
    -u, --descriptionUrl <arg>    Description Url

Contributing

I’m very much open to any collaboration and contributions to this tool to enable additional scenarios. Pull requests are welcome, though please open an issue to discuss first. Security reviews are also much appreciated!