Authenticode Signing Service and Client

September 12, 2016 Uncategorized 1 comment


Last night I published a new project on GitHub to make it easier to integrate Authenticode signing into a CI process by providing a secured API for submitting artifacts to be signed by a code signing cert held on the server. It uses Azure AD with two application entries for security:

  1. One registration for the service itself
  2. One registration to represent each code signing client you want to allow

Azure AD was chosen as it makes it easy to restrict access to a single application/user in a secure way. Azure App Services also provide a secure location to store certificates, so the combination works well.

The service currently supports either individual files, or a zip archive that contains supported files to sign (works well for NuGet packages). The service code is easy to extend if additional filters or functionality is required.

Supported File Types

  • .msi, .msp, .msm, .cab, .dll, .exe, .sys, .vxd, and any PE file (via SignTool)
  • .ps1 and .psm1 via Set-AuthenticodeSignature

Deployment

You will need an Azure AD tenant. These are free if you don’t already have one. In the “old” Azure Portal, you’ll need to create two application entries: one for the server and one for your client.

Azure AD Configuration

Server

Create a new application entry for a web/api application. Use whatever you want for the sign-on URI and App ID Uri (but remember what you use for the App ID Uri as you’ll need it later). On the application properties, edit the manifest to add an application role.

In the appRoles element, add something like the following:

{
  "allowedMemberTypes": [
    "Application"
  ],
  "displayName": "Code Sign App",
  "id": "<insert guid here>",
  "isEnabled": true,
  "description": "Application that can sign code",
  "value": "application_access"
}

After updating the manifest, you’ll likely want to edit the application configuration to enable “user assignment.” This means that only assigned users and applications can get an access token to/for this service. Otherwise, anyone who can authenticate in your directory can call the service.

Client

Create a new application entry to represent your client application. The client will use the “client credentials” flow to log in to Azure AD and access the service as itself. For the application type, also choose “web/api” and use anything you want for the app id and sign-in URL.

Under application access, click “Add application” and browse for your service (you might need to hit the circled check to show all). Choose your service app and select the application permission.



Finally, create a new client secret and save the value for later (along with the client id of your app).
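For reference, the “client credentials” exchange the client performs boils down to a single ADAL call. The following is an illustrative C# sketch rather than the actual SignClient code; the authority, resource, client id and secret values are placeholders that the real tool reads from its config file and command line:

using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using Microsoft.IdentityModel.Clients.ActiveDirectory; // ADAL

static class SigningServiceToken
{
    // Placeholder values for illustration only.
    const string Authority = "https://login.microsoftonline.com/<tenant>";
    const string Resource = "<App ID URI of the service app entry>";

    public static async Task<HttpClient> CreateAuthenticatedClientAsync(string clientId, string clientSecret)
    {
        // Client credentials flow: the client authenticates as itself (no user)
        // and requests a token scoped to the service's App ID URI.
        var context = new AuthenticationContext(Authority);
        var credential = new ClientCredential(clientId, clientSecret);
        var result = await context.AcquireTokenAsync(Resource, credential);

        // The bearer token is then sent with each request to the signing service.
        var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", result.AccessToken);
        return http;
    }
}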

Server Configuration

Create a new App Service on Azure (I used a B1 for this as it’s not high-load). Build/deploy the service however you see fit. I used VSTS connected to this GitHub repo along with a Release Management build to auto-deploy to my site.

In the Azure App Service, in the certificates area, upload your code signing certificate and take note of the thumbprint id. In the Azure App Service, go to the settings section and add the following setting entries:

Name | Value | Notes
CertificateInfo:Thumbprint | thumbprint of your cert | Thumbprint of the cert to sign with
CertificateInfo:TimeStampUrl | URL of timestamp server |
WEBSITE_LOAD_CERTIFICATES | thumbprint of your cert | This exposes the cert’s private key to your app in the user store
Authentication:AzureAd:Audience | App ID URI of your service | From the application entry
Authentication:AzureAd:ClientId | client id of your service app | From the application entry
Authentication:AzureAd:TenantId | Azure AD tenant ID | Either the guid or the name, like mydirectory.onmicrosoft.com

Enable “always on” if you’d like, disable PHP, then save changes. Your service should now be configured.
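As an aside on WEBSITE_LOAD_CERTIFICATES: with that setting in place, App Service loads the uploaded certificate (and its private key) into the current user's certificate store, where code can find it by thumbprint. Here is a minimal sketch using the standard .NET certificate store APIs, not necessarily the exact code the service uses:

using System.Security.Cryptography.X509Certificates;

static class CertificateLoader
{
    // Finds the signing certificate that App Service loaded into CurrentUser\My
    // because WEBSITE_LOAD_CERTIFICATES listed its thumbprint.
    public static X509Certificate2 FindByThumbprint(string thumbprint)
    {
        using (var store = new X509Store(StoreName.My, StoreLocation.CurrentUser))
        {
            store.Open(OpenFlags.ReadOnly);
            var matches = store.Certificates.Find(
                X509FindType.FindByThumbprint, thumbprint, validOnly: false);
            return matches.Count > 0 ? matches[0] : null;
        }
    }
}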

Client Configuration

The client is distributed via NuGet and uses both a JSON config file and command line parameters. Common settings, like the client id and service URL, are stored in the config file, while per-file parameters and the client secret are passed on the command line.

You’ll need to create an appsettings.json similar to the following:

{
  "SignClient": {
    "AzureAd": {
      "AADInstance": "https://login.microsoftonline.com/",
      "ClientId": "<client id of your client app entry>",
      "TenantId": "<guid or domain name>"
    },
    "Service": {
      "Url": "https://<your-service>.azurewebsites.net/",
      "ResourceId": "<app id uri of your service>"
    }
  }
}

Then, somewhere in your build, you’ll need to call the client tool. I use AppVeyor and have the following in my yml:

environment:
  SignClientSecret:
    secure: <the encrypted client secret using the appveyor secret encryption tool>

install: 
  - cmd: appveyor DownloadFile https://dist.nuget.org/win-x86-commandline/v3.5.0-rc1/NuGet.exe
  - cmd: nuget install SignClient -Version 0.5.0-beta3 -SolutionDir %APPVEYOR_BUILD_FOLDER% -Verbosity quiet -ExcludeVersion -pre

build: 
 ...

after_build:
  - cmd: nuget pack nuget\Zeroconf.nuspec -version "%GitVersion_NuGetVersion%-bld%GitVersion_BuildMetaDataPadded%" -prop "target=%CONFIGURATION%" -NoPackageAnalysis
  - ps: '.\SignClient\SignPackage.ps1'
  - cmd: appveyor PushArtifact "Zeroconf.%GitVersion_NuGetVersion%-bld%GitVersion_BuildMetaDataPadded%.nupkg"  

SignPackage.ps1 looks like this:

$currentDirectory = split-path $MyInvocation.MyCommand.Definition

# See if we have the ClientSecret available
if([string]::IsNullOrEmpty($env:SignClientSecret)){
    Write-Host "Client Secret not found, not signing packages"
    return;
}

# Setup Variables we need to pass into the sign client tool

$appSettings = "$currentDirectory\appsettings.json"

$appPath = "$currentDirectory\..\packages\SignClient\tools\SignClient.dll"

$nupkgs = ls $currentDirectory\..\*.nupkg | Select -ExpandProperty FullName

foreach ($nupkg in $nupkgs){
    Write-Host "Submitting $nupkg for signing"

    dotnet $appPath 'zip' -c $appSettings -i $nupkg -s $env:SignClientSecret -n 'Zeroconf' -d 'Zeroconf' -u 'https://github.com/onovotny/zeroconf' 

    Write-Host "Finished signing $nupkg"
}

Write-Host "Sign-package complete"

The parameters to the signing client are as follows. There are two modes, file for a single file and zip for a zip-type archive:

usage: SignClient <command> [<args>]

    file    Single file
    zip     Zip-type file (NuGet, etc)

File mode:

usage: SignClient file [-c <arg>] [-i <arg>] [-o <arg>] [-h <arg>]
                  [-s <arg>] [-n <arg>] [-d <arg>] [-u <arg>]

    -c, --config <arg>            Full path to config json file
    -i, --input <arg>             Full path to input file
    -o, --output <arg>            Full path to output file. May be same
                                  as input to overwrite. Defaults to
                                  input file if omitted
    -h, --hashmode <arg>          Hash mode: either dual or Sha256.
                                  Default is dual, to sign with both
                                  Sha-1 and Sha-256 for files that
                                  support it. For files that don't
                                  support dual, Sha-256 is used
    -s, --secret <arg>            Client Secret
    -n, --name <arg>              Name of project for tracking
    -d, --description <arg>       Description
    -u, --descriptionUrl <arg>    Description Url

Zip-type archive mode, including NuGet:

usage: SignClient zip [-c <arg>] [-i <arg>] [-o <arg>] [-h <arg>]
                  [-f <arg>] [-s <arg>] [-n <arg>] [-d <arg>] [-u <arg>]

    -c, --config <arg>            Full path to config json file
    -i, --input <arg>             Full path to input file
    -o, --output <arg>            Full path to output file. May be same
                                  as input to overwrite
    -h, --hashmode <arg>          Hash mode: either dual or Sha256.
                                  Default is dual, to sign with both
                                  Sha-1 and Sha-256 for files that
                                  support it. For files that don't
                                  support dual, Sha-256 is used
    -f, --filter <arg>            Full path to file containing paths of
                                  files to sign within an archive
    -s, --secret <arg>            Client Secret
    -n, --name <arg>              Name of project for tracking
    -d, --description <arg>       Description
    -u, --descriptionUrl <arg>    Description Url

Contributing

I’m very much open to any collaboration and contributions to this tool to enable additional scenarios. Pull requests are welcome, though please open an issue to discuss first. Security reviews are also much appreciated!

Connecting SharePoint to Azure AD B2C

September 8, 2016 Uncategorized 3 comments


Overview

This post will describe how to use Azure AD B2C as an authentication mechanism for SharePoint on-prem/IaaS sites. It assumes a working knowledge of identity and authentication protocols, WS-Federation (WsFed) and OpenID Connect (OIDC). If you need a refresher on those, there are some great resources out there, including Vittorio Bertocci’s awesome book.

Background

Azure AD B2C is a hyper-scalable, standards-based authentication and user storage mechanism typically aimed at consumer or customer scenarios. It is a separate product from “regular” Azure AD. Whereas “regular” Azure AD is normally meant to house identities for a single organization, B2C is designed to host identities of external users. In my opinion, it’s the best alternative to writing your own authentication mechanism (which no one should ever do!).

For one client, we had a scenario where we needed to enable external users to access specific site collections within SharePoint. Azure AD wasn’t a good fit, even with the B2B functionality, as we needed to collect additional information during user sign-up. Out-of-the-box, B2C doesn’t yet support WsFed or SAML 1.1 and SharePoint doesn’t support OpenID Connect. This leaves us needing a tool that can bridge B2C to SharePoint by acting as an OIDC relying party (RP) to B2C and a WsFed Identity Provider (IdP) to SharePoint.

The Solution

Fortunately, the identity gurus Dominick Baier and Brock Allen created just such a tool with IdentityServer 3. From the docs:

IdentityServer is a framework and a hostable component that allows implementing single sign-on and access control for modern web applications and APIs using protocols like OpenID Connect and OAuth2.

IdentityServer has plugins to support additional functionality, like acting as a WsFed IdP. This means we can use IdentityServer as a bridge from OIDC to WsFed. We’ll register an application in B2C for IdentityServer and then create an entry in IdentityServer for SharePoint.

Here’s a diagram of the pieces:
Diagram

While you can use IdentityServer to act as an IdP to multiple clients, in the model we used, we considered IdentityServer “part of SharePoint.” That is, SharePoint is the only client, and in B2C, the application entry visible is called “SharePoint.” I mention this because B2C allows applications to choose different policies/flows for sign up/sign in, password reset, and more. In our solution, we’ve configured IdentityServer to use a particular set of policies that meet SharePoint’s needs; it may not meet the needs of other applications.

Diving deep

As mentioned, IdentityServer isn’t so much a “drop in product,” but rather, it’s a framework that needs customization. The rest of this post will look at how we customized and configured B2C, IdentityServer and SharePoint to enable the end-to-end flow.

B2C

Let’s start with B2C. As far as B2C is concerned, we register a new web application and create a couple of policies: sign-up/sign-in and password reset. When you register the application, enter the redirect URIs you’ll need for IdentityServer (localhost and/or your real URL). You don’t need a client secret for these flows.

IdentityServer

Follow the IdentityServer getting started guide to create a blank ASP.NET 4.6 MVC site and install/configure IdentityServer. Running this on .NET Core isn’t supported yet, as .NET Core doesn’t yet have the XML cryptography libraries needed for WsFed (that support will come as part of .NET Standard 2.0). After installing the IdentityServer3 NuGet package, you’ll need to install the WsFed plugin NuGet package.

The key here is that we don’t need a local user store, as IdentityServer won’t be acting as the user database. We just need to configure B2C as an identity provider and the WsFed plugin to act as an IdP. IdentityServer won’t maintain any state and is simply a pass-through, validating JWTs and issuing SAML tokens.

Below, I’ll explain some of the core snippets; the full set of files are available here.

Inventory

There are several areas of IdentityServer that need to either be configured or have custom code added:

  • Identity Provider: B2C via the standard OIDC OWIN middleware
  • WS-Federation plugin: the IdentityServer plugin for WsFed
  • Relying parties: an entry or two for your WsFed/SAML client (SharePoint or a test app configured with WsFed auth)
  • User Service: the IdentityServer component for mapping external auth to users

Identity Provider

We need to configure B2C as an OIDC middleware to IdentityServer. Due to the way B2C works, we need some additional code to handle the different policies — it’s not enough to configure a single OIDC endpoint. For normal flows, it’ll default to the policy specified in “SignInPolicyId”. Where it gets tricky is in handling password reset.

First, let’s look at the normal “happy path” flow, where a user either signs up or signs in. Here’s what the flow looks like:
Sign in

In B2C, password reset is a separate policy and thus requires a specific call to the /authorize endpoint specifying the password reset policy to use. If you use the “combined sign up/sign in” policy, which is recommended as it’s the most styleable, it provides a link button for “password reset”. What that link does, however, is return a specific error code to the app that started the sign up flow; it’s up to the app to start the password reset flow. Then, once the password reset flow is complete, despite the user appearing to be authenticated (as defined by having had a signed JWT returned), B2C’s SSO mechanisms won’t consider the user signed in. You’ll notice this if you try to use a profile edit flow, or any other flow where SSO should have signed the user in without additional prompting. The guidance from the B2C team here is that after the password reset flow completes, an app should immediately trigger the sign in flow again. This makes sense: the user started the password reset from the sign in screen, so once the password is reset, they should resume there to actually sign in.

Sign in with password reset

Implementing this all with IdentityServer requires a little bit of extra code. Unfortunately, with IdentityServer, we cannot simply add individual OIDC middleware instances for each endpoint as we would in a normal web app because IdentityServer will see them as different providers and present an identity provider selection screen. To avoid this, we are only configuring a single identity provider and passing the policy as an authentication parameter. The B2C samples provide a PolicyConfigurationManager class that can retrieve and cache the OIDC metadata for each of the policies (sign-up/sign-in and password reset).

Here’s an example from Startup.Auth.B2C.cs:

ConfigurationManager = new PolicyConfigurationManager(
    string.Format(CultureInfo.InvariantCulture, B2CAadInstance, B2CTenant, "/v2.0", OIDCMetadataSuffix), 
    new[] {SignUpPolicyId, ResetPasswordPolicyId}),

The main work in getting IdentityServer to handle the B2C flows is in handling the OpenID Connect events RedirectToIdentityProvider, AuthenticationFailed, and SecurityTokenValidated. By handling these three, we can bounce between the flows.

In the Startup.Auth.B2C.cs file, the OnRedirectToIdentityProvider event handler looks for the policy authentication parameter and ensures the correct /authorize endpoint is used. As IdentityServer handles the initial auth call, we cannot specify a policy parameter, so we assume it’s a sign-in. IdentityServer tracks some state for the sign in request, and we’ll need access to it in case the user needs to do a password reset later, so we store it in a short-lived, encrypted, session cookie.

Once the B2C flow comes back, we need to handle both the failed and validated events. If it failed, we look for the specific error codes and take appropriate action. If it succeeded, we check whether it came from a password reset and then bounce back to the sign in to complete the journey.
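To make that concrete, here is roughly the shape of the wiring, slotting into the same OpenIdConnectAuthenticationOptions setup as the ConfigurationManager snippet above. This is a simplified sketch based on the B2C OWIN samples of the time; the GetConfigurationByPolicyAsync method name, the "Policy" context key, and the AADB2C90118 error-code check are assumptions drawn from those samples, so refer to Startup.Auth.B2C.cs in the linked source for the real implementation:

Notifications = new OpenIdConnectAuthenticationNotifications
{
    RedirectToIdentityProvider = async notification =>
    {
        // Default to the sign-up/sign-in policy unless an explicit policy
        // was passed along as an authentication parameter.
        var policy = notification.OwinContext.Get<string>("Policy") ?? SignUpPolicyId;

        // Ask the PolicyConfigurationManager for that policy's metadata and
        // point this request at the policy-specific /authorize endpoint.
        var mgr = (PolicyConfigurationManager)notification.Options.ConfigurationManager;
        var config = await mgr.GetConfigurationByPolicyAsync(CancellationToken.None, policy);
        notification.ProtocolMessage.IssuerAddress = config.AuthorizationEndpoint;
    },

    AuthenticationFailed = notification =>
    {
        // AADB2C90118 means the user clicked "forgot password" on the combined
        // sign-up/sign-in page; start the password reset policy instead.
        var description = notification.ProtocolMessage?.ErrorDescription;
        if (description != null && description.Contains("AADB2C90118"))
        {
            notification.HandleResponse();
            // ...redirect into the password reset flow here...
        }
        return Task.FromResult(0);
    },

    SecurityTokenValidated = notification =>
    {
        // If the token came back from the password reset policy, bounce the user
        // straight back into sign-in so B2C's SSO state is properly established.
        return Task.FromResult(0);
    }
},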

WS-Federation plugin

Configuring IdentityServer to act as a WS-Federation IdP is fairly simple: install the plugin package and provide the plugin configuration in Startup.cs. As an aside, don’t forget to either provide your own certificate or alter the logic to pull the cert from somewhere else!

The main WsFed configuration is a list of Relying Parties, seen in RelyingParties.cs. I’ve hard-coded it, but you can generate this data however you see fit.

Relying parties

Within the Relying Party configuration, you can specify the required WsFed parameters, including Realm, ReplyUrl and PostLogoutRedirectUris. The final thing you need is a map of OIDC claims to SAML claim types returned.
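For illustration, here is the shape of a single relying party entry with SharePoint-style SAML 1.1 claim types. This is sketched from the IdentityServer3.WsFederation samples rather than copied from the real RelyingParties.cs; the realm and URLs are placeholders, and the namespace and property names should be double-checked against the plugin version you install:

using System.Collections.Generic;
using System.Security.Claims;
using IdentityServer3.WsFederation.Models; // namespace per the WsFed plugin samples

static class RelyingParties
{
    public static IEnumerable<RelyingParty> Get()
    {
        return new List<RelyingParty>
        {
            new RelyingParty
            {
                Name = "SharePoint",
                Enabled = true,
                Realm = "urn:sharepoint:portal",                     // must match SharePoint's realm
                ReplyUrl = "https://sharepoint.example.com/_trust/", // SharePoint's trust endpoint
                TokenType = "urn:oasis:names:tc:SAML:1.0:assertion", // SAML 1.1 tokens for SharePoint
                PostLogoutRedirectUris = new List<string> { "https://sharepoint.example.com/" },

                // Map the OIDC claims coming out of the user service to the SAML
                // claim types SharePoint expects.
                ClaimMappings = new Dictionary<string, string>
                {
                    { "email", ClaimTypes.Email },
                    { "given_name", ClaimTypes.GivenName },
                    { "family_name", ClaimTypes.Surname },
                    { "role", ClaimTypes.Role }
                }
            }
        };
    }
}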

User Service

The User Service is what IdentityServer uses to match external claims to internal identities. For our use, we don’t have any internal identities and we simply pass the claims through as you can see in AadUserService.cs. The main thing we do is to extract a few specific claims and tell IdentityServer to use those for name, subject, issuer and authentication method.
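To show the idea, here is a stripped-down sketch of such a user service (not the actual AadUserService.cs, which is in the linked files); the claim type names checked here are assumptions about what the B2C policy returns:

using System.Linq;
using System.Threading.Tasks;
using IdentityServer3.Core.Models;
using IdentityServer3.Core.Services.Default;

public class AadUserService : UserServiceBase
{
    public override Task AuthenticateExternalAsync(ExternalAuthenticationContext context)
    {
        var claims = context.ExternalIdentity.Claims.ToList();

        // Pick the claims IdentityServer should treat as the subject and display name.
        var subject = claims.FirstOrDefault(c => c.Type == "sub")?.Value
                      ?? context.ExternalIdentity.ProviderId;
        var name = claims.FirstOrDefault(c => c.Type == "name")?.Value ?? subject;

        // No local user store: pass the external claims straight through, stamping
        // the issuer (identity provider) so consumers know where they came from.
        context.AuthenticateResult = new AuthenticateResult(
            subject,
            name,
            claims,
            identityProvider: context.ExternalIdentity.Provider);

        return Task.FromResult(0);
    }
}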

WsFed Client (or SharePoint)

Adding a WsFed client should be fairly easy at this point. Configure the realm and reply URLs as required and point to the metadata address. For IdentityServer, this is https://localhost:44352/wsfed/metadata by default (or whatever your hostname is).

ASP.NET 4.6

I find it useful to have a basic ASP.NET MVC site I can use for testing that authenticates and prints out the claims; it helps isolate me from difficult SharePoint issues.

With ASP.NET MVC 4.6, add the Microsoft.Owin.Security.WsFederation NuGet package and use this in your Startup class where realm is the configured realm and adfsMetadata is the IdentityServer metadata endpoint:

public void ConfigureAuth(IAppBuilder app)
{
    app.SetDefaultSignInAsAuthenticationType(CookieAuthenticationDefaults.AuthenticationType);

    app.UseCookieAuthentication(new CookieAuthenticationOptions());

    app.UseWsFederationAuthentication(
        new WsFederationAuthenticationOptions
        {
            Wtrealm = realm,
            MetadataAddress = adfsMetadata
        });
}

SharePoint

I will readily confess that I am not a SharePoint expert. I’ll happily leave that to other’s like Bob German, a colleague and SharePoint MVP. From Bob:

The Microsoft documentation is fine, but is oriented toward a connection with Active Directory via AD FS, so it includes claims attributes such as the SID value, which won’t exist in this scenario. The only real claims SharePoint needs are email address, first name, and last name. Any role claims passed in are available for setting permissions in SharePoint. Follow the relevant portions of the documentation, but only map the claims that make sense.

For example,

$emailClaimMap = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress" -IncomingClaimTypeDisplayName "EmailAddress" -SameAsIncoming
$firstNameClaimMap = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname" -IncomingClaimTypeDisplayName "FirstName" -SameAsIncoming
$lastNameClaimMap = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname" -IncomingClaimTypeDisplayName "LastName" -SameAsIncoming
$roleClaimMap = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.microsoft.com/ws/2008/06/identity/claims/role" -IncomingClaimTypeDisplayName "Role" -SameAsIncoming

New-SPTrustedIdentityTokenIssuer -Name <somename> -Description <somedescription> -realm <realmname> -ImportTrustCertificate <token signing cert> -ClaimsMappings $emailClaimMap,$roleClaimMap,$firstNameClaimMap,$lastNameClaimMap -IdentifierClaim $emailClaimMap.InputClaimType

You can pass in additional claims attributes and SharePoint’s STS will pass them along to you, but you can only access them server-side via Thread.CurrentPrincipal; for example,

IClaimsPrincipal claimsPrincipal = Thread.CurrentPrincipal as IClaimsPrincipal;
if (claimsPrincipal != null)
{
    IClaimsIdentity claimsIdentity = (IClaimsIdentity)claimsPrincipal.Identity;
    foreach (Claim c in claimsIdentity.Claims)
    {
        // Do something
    }
}

With this scenario, you can assign permissions based on an individual user using the email claim, or based on a role using the role claim. However, SharePoint’s people picker isn’t especially helpful in this case. Since it has no way to look up and resolve the claims attribute value, it will let users type anything they want. Type something and then hover over the people picker; you’ll see a list of claims. Select the Role claim to grant permission based on a role, or the Email claim to grant permission to an individual user based on their email address.

SharePoint does not use WsFed metadata, so you need to provide the signing certificate’s public key directly and specify the WsFed sign-in URL. For the scenario here, that’s https://localhost:44352/wsfed.

Conclusion

While not without its challenges, it is possible to use B2C with a system that only knows WsFed. One thing I have not yet done is implement a profile edit flow; I need to give more thought to how that would work and interact with the rest. I’m open to ideas if you have them, and I’ll blog a follow-up once that’s done.

Using Xamarin Forms with .NET Standard

July 9, 2016 Uncategorized 7 comments


With the release of .NET Core and the .NET Standard Library last week, many people want to know how they can use packages targeting netstandard1.x with their Xamarin projects. It is possible today if you use Visual Studio; for Xamarin Studio users, support is coming soon.

Prerequisites

Using .NET Standard pretty much requires you to use project.json to eliminate the pain of “lots of packages” as well as properly handle transitive dependencies. While you may be able to use .NET Standard without project.json, I wouldn’t recommend it.

You’ll need to use the following tools:

Getting Started

As of now, the project templates for creating a new Xamarin Forms project start with an older-style packages.config template, so whether you create a new project or have an existing project, the steps will be pretty much the same.

Step 1: Convert your projects to project.json following the steps in my previous blog post.

Step 2: As part of this, you can remove dependencies from your “head” projects that are already pulled in by the projects they reference. This should simplify things dramatically for most projects. In the future, when you want to update to the next Xamarin Forms version, you can update it in one place, not 3-4 places. It also means you only need the main Xamarin.Forms package, not each of the packages it pulls in.

If you hit any issues with binaries not showing up in your bin directories (for your Android and iOS “head” projects), make sure that you have set CopyNuGetImplementations to true in your csproj as per the steps in the post.

At this point, your project should be compiling and working, but not yet using netstandard1.x anywhere.

Step 3: In your Portable Class Library projects, find the highest .NET Standard version you need/want to support.

Here’s a cheat sheet:

  • If you only want to support iOS and Android, you can use .NET Standard 1.6. In practicality though, most features are currently available at .NET Standard 1.3 and up.
  • If you want to support iOS, Android and UWP, then .NET Standard 1.4 is the highest you can use.
  • If you want to support Windows Phone App 8.1 and Windows 8.1, then .NET Standard 1.2 is your target.
  • If you’re still supporting Windows 8, .NET Standard 1.1 is for you.
  • Finally, if you need to support Windows Phone 8 Silverlight, then .NET Standard 1.0 is your only option.

Once you determine the netstandard version you want, in your PCL’s project.json, change what you might have had:

{
    "dependencies": {
        "Xamarin.Forms": "2.3.0.107"        
    },
    "frameworks": {        
        ".NETPortable,Version=v4.5,Profile=Profile111": { }
    },
    "supports": { }
}

to

{
    "dependencies": {
        "NETStandard.Library": "1.6.0",
        "Xamarin.Forms": "2.3.0.107"        
    },
    "frameworks": {        
        "netstandard1.4": {
            "imports": [ "portable-net45+wpa81+wp8+win8" ]
         }
    },
    "supports": { }
}

Note the addition of the imports section. This is required to tell NuGet that the specified TFM is compatible here because the Xamarin.Forms package has not yet been updated to use netstandard directly.

Then, edit the csproj to set the TargetFrameworkVersion element to v5.0 and remove any value from the TargetFrameworkProfile element.

At this point, when you reload the project, it should restore the packages and build correctly. You may need to do a full clean/rebuild.

Seeing it in action

I created a sample solution showing this all working over on GitHub. It’s a good idea to clone, build and run it to ensure your environment and tooling is up-to-date. If you get stuck converting your own projects, I’d recommend referring back to that repo to find the difference.

As always, feel free to tweet me @onovotny as well.

Portable- is dead, long live NetStandard

June 23, 2016 Uncategorized 8 comments


With the RC of NuGet 2.12 for VS 2012/2013, and imminent release of .NET Core on Monday the 27th, it’s time to bid farewell to our beloved/cursed PCL profiles. So long, you won’t be missed! Oh, and dotnet, please don’t let the door hit you on the way out either.

In its place we join the new world of the .NET Platform Standard and its new moniker netstandard.

When dotnet was released last July, there was a lot of confusion around what it is and how it worked. Working with the NuGet and CoreFX teams, I tried to explain it in a few of my previous posts. Despite good intentions, dotnet was a constant frustration to many library authors due to its design and limited support. dotnet only worked with NuGet v3, which meant that packages would need to ship a dotnet version and a version in a PCL directory like portable-net45+win8+wp8+wpa81 to support VS 2012/2013.

It was hard to fault anyone for wondering, “why bother?” The other downfall of dotnet, and likely the main one, was that dotnet fell into a mess trying to work with different compatibility levels of libraries. What if you wanted to have a version that worked with newer packages than were supported by Windows 8? What if you wanted to have multiple versions which “light up” based on platform capabilities? How many of you who installed a dotnet-based package, with its dependencies listed, saw a System.Runtime 4.0.10 entry and wondered why you were getting errors trying to update it? After all, NuGet showed an update to 4.0.20, why wouldn’t that work? The reality was that you had to release packages with multiple versions that were incompatible with some platforms because there was no way to put all of it into one package.

Enter .NET Platform Standard

netstandard fixes the shortcomings of dotnet by being versioned. As of today, there’s 1.0 through 1.6, with the corresponding TFMs netstandard1.0 through netstandard1.6. The idea is that each .NET Platform Standard version supports a given set of platforms, and when authoring a library, you’d ideally target the lowest one that has the features you need and runs on your desired target platform versions.

With NuGet 2.12, netstandard is also supported in VS 2012 and 2013, so that there’s no further need to include a portable-* version and a netstandard version in the same package. Finally!

What does this all mean

The short answer is that if you currently have a Profile 259 PCL today, you can change your NuGet package to put it in a netstandard1.0 directory and add the appropriate dependency group. The full list of profile -> netstandard mappings is here. If you support .NET 4.0 or Silverlight 5 in your PCL (basically a PCL “older” than 259), then you can continue to put that in your NuGet alongside the netstandard1.0+ version and things will continue to work. In addition, a platform-specific TFM (like net45) will always “win” over netstandard if compatible.

Detour: Dependencies for NetStandard

Like dotnet, netstandard requires listing package dependencies. For the most part, it’s easier than with dotnet as there is a meta-package, NETStandard.Library 1.6.0 (the RC2 version is 1.5.0-rc2-24027), that has most BCL dependencies included. This package is one that you probably have in your project.json today. Put the NETStandard.Library 1.6.0 dependency in a netstandard1.0 dependency group, along with any other top-level dependencies you have:

<dependencies>
  <group targetFramework="netstandard1.0">
    <dependency id="NETStandard.Library" version="1.6.0" />
    <dependency id="System.Linq.Queryable" version="4.0.1" />
  </group>
</dependencies>

Next steps

There’s about to be a lot to do over the coming weeks:

  • Get the .NET Core 1.0 RTM with the Preview 2 tooling
  • Download VS 2015 Update 3 when available
  • Grab NuGet 2.12 for VS 2012/2013

Start updating your packages to support .NET Core if you haven’t already. If you’ve been waiting for .NET Core RTM, that time has finally come. You can either use a csproj-based portable library targeting netstandard or use xproj with project.json. As an aside, my personal opinion is that xproj‘s main advantage over csproj today is cross-compiling. If you need to compile for multiple targets, that’s currently the best option; otherwise, you’ll get a better experience using the csproj approach.

Converting existing PCL projects

VS 2015 Update 3 has a new feature that makes it easy to convert a PCL to netstandard. If your library has any NuGet dependencies installed, you need to first follow the steps to switch packages.config to project.json. Once you do that, you can select Target .NET Platform Standard from the project properties and it’ll convert it over:
Project Properties

Icing on the cake

Xamarin supports all versions of netstandard as well. This means that if you were cross-compiling because PCL 259 was too limiting, try taking another look at netstandard1.3+. There’s a lot more surface area there and it may mean you can eliminate a few target platforms.

Bonus Round

If you want to use xproj for its cross-compiling features and also need to reference that library from a project type that doesn’t support csproj -> xproj references today (most of them, including UWP don’t work super-well), I’ve written up an explanation of how you can do this on StackOverflow.

Project.json all the things

February 8, 2016 Uncategorized 29 comments


One of the lesser-known features of Visual Studio 2015 is that it is possible to use project.json with any project type, not just “modern PCLs,” UWP projects, or xproj projects. Read on to learn why you want to switch and how you can update your existing solution.

Background

Since the beginning of NuGet, installed packages were tracked in a file named packages.config placed alongside the project file. The package installation process goes something like this:

  1. Determine the full list of packages to install, walking the tree of all dependent packages
  2. Download all of those packages to a \packages directory alongside your solution file
  3. Update your project file with the correct libraries from the package (looking at \lib\TFM)
    • If the package contains a build directory, add any appropriate props or targets files found
  4. Create or update a packages.config file alongside the project that lists each package along with the current target framework

Terms

  • TFM – Target Framework Moniker. The name that represents a specific Platform (platforms being .NET Framework 4.6, MonoTouch, UWP, etc.)
  • Short Moniker – a short way of referring to a TFM in a NuGet file (e.g., net46). Full list is here.
  • Full Moniker – a longer way of specifying the TFM (e.g., .NETPortable,Version=v4.5,Profile=Profile111). Easiest way to determine this is to compile and let the NuGet error message tell you what to add (see below).

Limitations

The above steps are roughly the same for NuGet up to and including the 2.x series. While it works for basic projects, larger, more complex projects quickly ran into issues. I do not consider the raw number of packages that a project has to be an issue by itself – that is merely showing oodles of reuse and componentization of packages into small functional units. What does become an issue are the UI and the time it takes to update everything.

As mentioned, because NuGet modifies the project file with the relative location of the references, every time you update, it has to edit the project file. This is slow and can lead to merge conflicts across branches.

Furthermore, the system is unable to pivot on different compile-time needs. With many projects needing to provide some native support, NuGet v2.0 had no way of providing different dependencies based on build configuration.

One more issue surfaces with the use of “bait and switch” PCLs. Some packages provide a PCL for reference purpose (the bait), and then also provide platform-specific implementations that have the same external surface area (the switch). This enables libraries to take advantage of platform specific functionality that’s not available in a portable class library alone. The catch with these packages is that to function correctly in a multi-project solution containing a PCL and an application, the application must also add a NuGet reference to all of the packages its PCL libraries use to ensure that the platform-specific version winds up in the output directory. If you forget, you’ll likely get a runtime error due to an incomplete reference assembly being used.

NuGet v3 and Project.json to the rescue

NuGet 3.x introduces a number of new features aimed at addressing the above limitations:

  • Project files are no longer modified to contain the library location. Instead, an MSBuild task and target gets auto-included by the build system. This task creates references and content-file items at build time enabling the meta-data values to be calculated and not baked into a project file.
    • Per-platform files can exist by using the runtimes directories. See the native light-up section in the docs for the details.
  • Packages are now stored in a per-user cache instead of alongside the solution. This means that common packages do not have to be re-downloaded since they’ll already be present on your machine. Very handy for those packages you use in many different solutions. The MSBuild task enables this as the location is no longer baked into the project file.
  • Reference assemblies are now more formalized with a new ref top-level directory. This would be the “bait” assembly, one that could target a wide range of frameworks via either a portable- or dotnet or netstandard TFM. The implementation library would then reside in \lib\TFM. The version in the ref directory would be used as the compile-time reference while the version in the lib directory is placed in the output location.
  • Transitive references. This is a biggie. Now only the top-level packages you require are listed. The full chain of packages is still downloaded (to the shared per-user cache), but it’s hidden in the tooling and doesn’t get in your way. You can continue to focus on the packages you care about. This also works with project-to-project references. If I have a bait-and-switch package reference in my portable project, and I have an application that references that portable library, the full package list will be evaluated for output in the application and the per-architecture, per-platform assemblies will get put in the output directories. You no longer have to reference each package again in the application.

It is important to note that these features only work when a project is using the new project.json format of package management. Having NuGet v3 alone isn’t enough. The good news is that we can use project.json in any project type with a few manual steps.

Using project.json in your current solution

You can use project.json in your current solution. There are a couple of small caveats here:

  1. Only Visual Studio 2015 with Update 1 currently supports project.json. Xamarin Studio does not yet support it but it is planned. That said, Xamarin projects in Visual Studio do support project.json.
    • If you’re using TFS Team Build, you need TFS 2015 Update 1 on the build agent in addition to VS 2015 Update 1.
  2. Some packages that rely on content files being placed into the project may not work correctly. project.json has a different mechanism for this, so the package would need to be updated. The workaround would be to manually copy the content into your project file.
  3. All projects in your solution would need to be updated for the transitive references to resolve correctly. That’s to say that an application using NuGet v2/packages.config won’t pull in the correct transitive references of a portable project reference that’s using project.json.

With that out of the way, let’s get started. If you’d like to skip this and see some examples, look at projects that have already been converted over, such as Zeroconf and xUnit for Devices (both referenced again later in this post). These are all libraries that have a combination of reference assemblies, platform-specific implementations, test applications and unit tests, so the spectrum of scenarios should be covered there; they have everything you need in them.

One last note before diving deep: make sure your .gitignore file contains the following entries:

  • *.lock.json
  • *.nuget.props
  • *.nuget.targets

These files should not generally be checked in. In particular, the .nuget.props/targets files will contain a per-user path to the NuGet cache. These files are created by calling NuGet restore on your solution file.

Diving deep

As you start, have the following blank project.json handy as you’ll need it later:

{
    "dependencies": {        
    },
    "frameworks": {        
        "net452": { }
    },
    "runtimes": {
        "win": { }
    } 
}

This represents an empty project.json for a project targeting .NET 4.5.2. I’m using the short moniker here, but you can also use the full one. The framework string is the part you’re most likely to get wrong. Fortunately, when you get it wrong and try to build, you’ll get what’s probably the most helpful error message of all time:

Your project is not referencing the “.NETPortable,Version=v4.5,Profile=Profile111” framework. Add a reference to “.NETPortable,Version=v4.5,Profile=Profile111” in the “frameworks” section of your project.json, and then re-run NuGet restore.

The error literally tells you how to fix it. Awesome! The fix is to put .NETPortable,Version=v4.5,Profile=Profile111 in your frameworks section to wind up with something like:

{
    "dependencies": {        
    },
    "frameworks": {        
        ".NETPortable,Version=v4.5,Profile=Profile111": { }
    },
    "supports": { }
}

The eagle-eyed reader will notice that the first example had a runtimes section with win in it. This is required for desktop .NET Framework projects and for projects where CopyNuGetImplementations is set to true, like your application (we’ll come back to that in a bit), but is not required for other library project types. If you have the runtimes section, there’s rarely, if ever, a reason to also have the supports section.

The easiest way to think about this:

  • For library projects, use supports and not runtimes
  • For your application project, (.exe, .apk, .appx, .ipa, website) use runtimes and not supports
  • If it’s a desktop .NET Framework project, use runtimes for both class libraries and your application
  • If it’s a unit test library executing in-place and you need references copied to its output directory, use runtimes and not supports

Now, take note of the packages and versions you already have installed. You might want to copy/paste your packages.config file into a temporary editor window.

The next step is to remove all of your existing packages from your project. There are two ways to do this: via the NuGet package manager console or by hand.

Using the NuGet Package Manager Console

Pull up the NuGet Package Manager Console and ensure the drop-down is set to the project you’re working on. Then uninstall each package in the project with the following command:
Uninstall-Package <package name> -Force -RemoveDependencies
Repeat this for each package until they’re all gone.

By Hand

Delete your packages.config file, save the project file then right-click the project and choose “Unload project”. Now right-click the project and select Edit. We need to clean up a few things in the project file.

  • At the top of the project file, remove any .props files that were added by NuGet (look for the ones pointing to a \packages directory).
  • Find any <Reference> element where the HintPath points to a NuGet package library. Remove all of them.
  • At the bottom of the file, remove any .targets files that NuGet added. Also remove any NuGet targets or Tasks that NuGet added (might be a target that starts with the following line <Target Name="EnsureNuGetPackageBuildImports" BeforeTargets="PrepareForBuild">).
  • If you have any packages that contain Roslyn Analyzers, make sure to remove any analyzer items that come from them.

Save your changes, right click the project in the solution explorer and reload the project.

Adding the project.json

In your project, add a new blank project.json file using one of the templates above. Ensure that the Build Action is set to None (it should be the default). Once the file is present, you might need to unload and reload the project for NuGet to recognize it: save your project, then right-click it in Solution Explorer to unload and reload.

Now you can either use the Manage NuGet Packages UI to re-add your packages or add them to the project.json by hand. Remember, you don’t necessarily have to re-add every package, only the top-level ones. For example, if you use Reactive Extensions, you only need Rx-Main, not the four other packages that it pulls in.

Build your project. If there are any errors related to NuGet, the error messages should guide you to the answer. Your project should build.

What you’ll notice for projects other than desktop .NET executables or UWP appx’s is that the output directory will no longer contain every referenced library. This saves disk space and makes the build faster by eliminating extra file copying. If you want the files to be in the output directory, like for unit test libraries that need to execute in-place, or for an application itself, there are two extra steps to take:

  1. Unload the project once more and edit it to add the following to the first <PropertyGroup> at the top of the project file: <CopyNuGetImplementations>true</CopyNuGetImplementations>. This tells NuGet to copy all required implementation files to the output directory.
  2. Save and reload the project file. You’ll next need to add that runtimes section from above. The exact contents will depend on your project type. Rather than list them all out here, please see the Zeroconf or xUnit for Devices repos for full examples.
    • For an AnyCPU Desktop .NET project win is sufficient
    • For Windows Store projects, you’ll need more

Once you repeat this for all of your projects, you’ll hopefully still have a working build(!) but now one where the projects are using the rich NuGet v3 capabilities. If you have a CI build system, you need to ensure that you’re using the latest nuget.exe to call restore on your solution prior to build. My preference is to always download the latest stable version from the dist link here: https://dist.nuget.org/win-x86-commandline/latest/nuget.exe.

Edge Cases

There may be some edge cases you hit when it comes to the transitive references. If you need to prevent any of the automatic project-to-project propagation of dependencies, the NuGet Docs can help.

In some rare cases, if you start getting compile errors due to missing System references, you may be hitting this bug, currently scheduled to be fixed in the upcoming 3.4 release. This happens if a NuGet package contains a <frameworkAssembly /> dependency that contains a System.* assembly. The workaround for now is to add <IncludeFrameworkReferencesFromNuGet>false</IncludeFrameworkReferencesFromNuGet> to your project file.

What this doesn’t do

There is often confusion between the use of project.json and its relation to the DNX/CLI project tooling that enables cross-compilation to different sets of targets. Visual Studio 2015 uses a new project type (.xproj) as a wrapper for these. This post isn’t about enabling an existing .csproj or .vbproj project type (the ones most people have been using on “regular” projects) to start cross-compiling. Converting an existing project to use .xproj is a topic for another day, and not all project types are supported by .xproj.

What this does do is enable the NuGet v3 features to be used by the existing project types today. If you have a .NET 4.6 desktop project, this will not change that. Likewise if your project is using the Xamarin Android 6 SDK, this won’t alter that either. It’ll simply make package management easier.

Acknowledgments

I would like to thank Andrew Arnott for his persistence in figuring out how to make this all work. He explained it to me as he was figuring it out and then recently helped to review this post. Thanks Andrew! A shout out is also due to Scott Dorman and Jason Malinowski for their valuable feedback reviewing this post.

Syntax highlighting on WordPress with Prism and Markdown

February 5, 2016 Uncategorized 2 comments

Using the JetPack plugin, WordPress supports using Markdown natively in the editor. This makes it much easier to write posts, but one feature has been a bit wonky — syntax highlighting.

Out of the box, none of the syntax highlighting plug-ins met my needs. Prism.js is a popular highlighter, and while there are a few plugins to support it, the languages they pre-selected didn’t have what I wanted. They also didn’t seem to be regularly updated.

Fortunately, it wasn’t hard to create a child theme that contained a custom-downloaded version of the Prism JavaScript and CSS. I won’t walk through that part as it’s well documented elsewhere (see the previous link). What I did with the base child theme was to place my configured prism.js and prism.css files into a sub-directory and register them to be loaded by WordPress.

That almost worked. The trick is that Prism expects the syntax highlighting to be in tags matching <code class="language-foo"> where foo is whatever syntax rules it should apply. The trouble is that, by default, JetPack’s Markdown -> HTML processor emits only the bare language name, like <code class="json">. Seeing that, I hacked together a little fix-up script to inject prior to the Prism code executing:

jQuery(function() {
    $ = jQuery;
    $("code").each(function() {
       var className = $(this).attr('class');
       if(className && !(className.lastIndexOf('language-', 0) === 0)) {
            // No language, prepend
            $(this).attr('class', 'language-' + className);
        }
    });  
});

Caveat: I am not a “real” JavaScript programmer; you might have a better way to do this!

With it all assembled and uploaded to WordPress, I can now use the normal syntax and things get highlighted as expected.

I’ve zipped up my child theme that only contains these changes here.

You’re welcome to use that as a starting point; you’ll probably need to rename the child theme and specify the correct parent. You can also merge it with your current child theme.

As the Prism JavaScript and CSS is highly customizable, you may wish to generate your own from their site and use those in place.

Announcing Humanizer 2.0

January 30, 2016 Uncategorized 1 comment


Earlier today we finalized and published the next major release of Humanizer. This version includes many fixes and new features, many of them coming directly from the community. A huge thank you to all those who have contributed!

You can find the latest Humanizer on NuGet, and the website contains the latest documentation. The release notes contain the full details of the changes.

I wanted to call out a few things though:

  • The Humanizer package now supports selecting locales to install. This was done by using a little-known feature of NuGet called satellite packages. The main Humanizer package is now a meta-package that pulls in all language packages plus the core library; this is the existing behavior of Humanizer today.
    • To install English only, you may elect to install Humanizer.Core directly
    • To install a specific language or set of languages, you can specify Humanizer.Core.<locale> where <locale> represents a supported language package.
  • There is currently a known issue with DNX with satellite packages. It might affect CLI too; track that one here.
  • For best results, using project.json/NuGet v3 is highly recommended over packages.config/NuGet v2. The key difference is that all of the child packages are transitively included instead of directly referenced in your packages.config file. project.json is supported in any project type, not just .NET Core or UWP projects.

Finally, I wanted to thank Mehdi Khalili for trusting me with the stewardship of the project. Mehdi did a fantastic job building Humanizer up and getting the community involved to contribute back. I also would like to thank Alexander I. Zaytsev and Max Malook for their efforts in coordinating the community contributions and guiding the project forward.

Xamarin MVP

January 25, 2016 Uncategorized No comments


Xamarin just announced their newest round of MVP awards, and I am very honored to have received one. I have always thought that C# and .NET are the best, most productive way to create applications for iOS and Android, and Xamarin’s tools make this possible.

Hope to see you at Xamarin Evolve later this April!

Continuous Integration for UWP projects – Making Builds Faster

December 3, 2015 Uncategorized 2 comments


Are you developing a UWP app? Are you doing continuous integration? Do you want to improve your CI build times while still generating the .appxupload required for store submission? If so, read on.

Prerequisites

You’ll need VS 2015 with the UWP 1.1 tools installed. The UWP 1.1 tooling has some important fixes for creating app bundles and app upload files for command line/CI builds.

You’ll also need to register your app in the Windows Dev Center and associate your project with it. Follow the docs for linking your project to the store from within VS first.

If you’re using VSO, you may need to set up your own VM to run a vNext build agent. I’m not sure VSO’s hosted agents have all the latest tools as of today. I run my builds in an A2 VM on Azure; it’s not the fastest build server but it’s good enough.

Building on a Server

Now that you have a solution with one or more projects that create an appx (UWP) app, you can start setting up your build scripts. One problem you’ll need to solve is updating your .appxmanifest with an incrementing version each time. I’ve solved this using the fantastic GitVersion tool. There are a number of different ways to use it, but on VSO it sets environment variables as part of a build step, which I use to update the manifest on build.

I use a .proj msbuild file with a set of targets the CI server calls, but you can use your favorite build scripting tool.

My code looks like this:

<Target Name="UpdateVersion">
    <PropertyGroup>
      <Version>$(GITVERSION_MAJOR).$(GITVERSION_MINOR).$(GITVERSION_BUILDMETADATA)</Version>
    </PropertyGroup>    
    <ItemGroup>
      <RegexTransform Include="$(SolutionDir)\**\*.appxmanifest">
          <Find><![CDATA[ Version="\d+\.\d+\.\d+\.\d+"]]></Find>
          <ReplaceWith><![CDATA[ Version="$(Version).0"]]></ReplaceWith>
      </RegexTransform>
    </ItemGroup>
    <RegexTransform Items="@(RegexTransform)" />    
    <Message Text="Assm: Ver $(Version)" />
</Target>

The idea is to call GitVersion, either by calling GitVersion.exe earlier in the build process, or by using the GitVersion VSO Build Task in a step prior to the build step.

GitVersion can also update your AssemblyInfo files too, if you’d like.

Finally, at the end of the build step, you’ll want to collect certain files for the output. In this case, it’s the .appxupload for the store. In VSO, I look for the contents in my app dir, MyApp\AppPackages\**\*.appxupload.

If you setup your build definition to build in Release mode, you should have a successful build with a .appxupload artifact available you can submit to the store. Remember, we’ve already associated this app with the store, and we’ve enabled building x86, x64, and arm as part of our initial run-through in Visual Studio.

The problem

For your safety, a CI build will by default only generate the .appxupload file if you’re in Release mode with .NET Native enabled. This is to help you catch compile-time errors that would delay your store submission.

That’s well-intentioned, but it can severely slow down your builds. On one project I’m working on, on that A2 VM, a “normal” debug build takes about 14 min while a Release build takes 81 minutes! That’s too long for CI.

Fortunately, there’s a few things we can do to speed things up if you’re willing to live a bit dangerously.

  1. Force MSBuild to create the .appxupload without actually running the .NET Native toolchain (yes, it is possible!)
    • In your build definition, pass the additional arguments to MSBuild: /p:UseDotNetNativeToolchain=false /p:BuildAppxUploadPackageForUap=true. This overrides two variables that control the use of .NET Native and packaging.
  2. If you have any UWP Unit Test projects, you can disable package generation for them if you’re not running those unit tests on the CI box. There is a good reason for this: it’s hard. Running UWP CI tests requires your test agent to be running as an interactive process, not a service. You need to configure your build box to auto-login on reboot and then start up the agent.

    In your test projects, add the following <PropertyGroup> to your csproj file:

<!-- Don't build an appx for this in TFS/command line msbuild -->
<PropertyGroup>
  <GenerateAppxPackageOnBuild Condition="'$(GenerateAppxPackageOnBuild)' == '' and '$(BuildingInsideVisualStudio)' != 'true'">false</GenerateAppxPackageOnBuild>
</PropertyGroup>

This works because the .appxupload doesn’t actually contain native code. It contains three app bundles (one per platform) with MSIL, that the store compiles to native code in the cloud. The local .NET Native step is only a “safety” check, as is running WACK. If you regularly test your code in Release mode locally, and have run WACK to ensure your code is ok, then there’s no need to run either on every build.

After making those two adjustments, I’m able to generate the .appxupload files on every build and the build takes the same 13 min as debug mode.

Surface Book or Surface Pro 4?

October 6, 2015 Uncategorized 3 comments


This evening, at the Windows 10 Devices fan celebration in NYC, I got to use the Surface Book (and the other devices announced today) and talk to the product guys about the Surface Pro 4 and the Surface Book. One of my questions to them stemmed from a question at work regarding the split hinge on the Surface Book; I thought the answer was interesting, so here goes (read on further for my comparison of the SP4 vs the SB).

They said the split hinge was a deliberate design decision that stems from the following goals:

  • To keep the base as thin as possible
  • To have a “perfect” keyboard. The travel on the keys is 1.6mm, which is greater than most laptop keyboards
  • When the lid is closed, they didn’t want the keys to scuff up the screen.
    • To address this, sometimes the keyboard is slightly recessed in the case – it is like that on the MacBook Pro I have. The problem is that’s wasted space, and they wanted to make the thing thinner.

Also, when the lid is open, they had to get the balance exactly right so that when you push against the screen (it is a touch screen after all), it doesn’t tip over. Many/most other 2-in-1’s don’t have the balance quite right and are “tipsy”. Having tried the Surface Book, I can say it’s certainly not tipsy. The “dynamic fulcrum hinge” has some role in this too.

When it comes to a choice between a Surface Pro 4 and the Surface Book, I’d have to say that the differences are primarily around usage:

  • Surface Pro 4 is a tablet and can go its full battery charge without its keyboard
  • Surface Pro 4’s keyboard is better than the previous gen one, but for people that do a lot of typing (developers?), it may not be as ideal. In “lap mode”, the SP4 keyboard still has some “bounciness” as the cover overall could be stiffer.
  • Surface Book’s “clipboard” has three hours of battery life on its own. The remaining 9 hours are in the base (for a total of 12). That’s why they call it a clipboard and not a tablet, because the tablet usage is intended as a secondary/auxiliary mode, not the primary.
  • The Surface Book’s keyboard is really, really nice.
  • The 13.5” screen size feels bigger than it is due to its aspect ratio and the resolution. It also has a very narrow bezel, so the screen goes almost to the edge.

Both devices will have the same memory/storage capabilities, maxing out at 16 GB/1TB. The 1TB storage isn’t available yet (it will be in a month or two) as they are finishing testing those components. They are using Samsung 3D V-NAND modules, so the more storage, the faster it actually is. The pen is really nice and has a great feel to it. Even for people with messy handwriting, the friction level on the screen is the right amount to have control and write something legibly.

Both machines are priced at about $2700 fully loaded (16GB/1TB). Which one to get really depends on your usage and needs; I have a feeling most developers would be happiest with the Surface Book while non-developers would probably like the Surface Pro 4 best.