Continuous Deployment of Cloud Services with VSTS

October 18, 2017

In my last blog post, I showed how you can use ASP.NET Core with an Azure Cloud Service Web Role. The next step is to enable CI/CD for it, since you really shouldn’t be using “Publish” within Visual Studio for deployment.

As part of this, I wanted to configure the Cloud Service settings per environment in VSTS and not have any configuration checked-in to source control. Cloud Services’ configuration mechanism makes this a bit challenging due to the way it stores configuration, but with a few extra steps, it’s possible to make it work.

What you’ll need

To follow along, you’ll need the following:

  • Cloud Service: the code can live in GitHub, VSTS, or many other locations. VSTS can build from any of them.
  • Azure Key Vault: we'll use Azure Key Vault to store the secrets. Creating a Key Vault is easy, and the standard tier will work.
  • VSTS: this guide uses Visual Studio Team Services, so you'll need an account there. Accounts are free for up to five users, plus any number of users with MSDN licenses.

What we’re going to do

The gist here is that we’ll create a build definition that publishes the output of the Cloud Service project as an artifact. Then, we’ll create a release management process that takes the output of the build and deploys it to the cloud service in Azure. To handle the configuration, we’ll tokenize the checked-in configuration, then use a release management task to read configuration values stored in Key Vault and replace the matching tokenized values before the Azure deployment.

Moving the configuration into Key Vault

Create a new Key Vault to hold your configuration. You should have one Key Vault per environment that you intend to release to, since the secret names translate directly to variables within VSTS. For each setting you need, create a secret with a name like CustomSetting-Setting1 or CustomSetting-Setting2 and set its value. Next, in your ServiceConfiguration.Cloud.cscfg, set the values to __CustomSetting-Setting1__ and __CustomSetting-Setting2__. The __ marks the token start/end, and the value between identifies which VSTS variable should be used to replace it.
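
With the tokens in place, the settings section of ServiceConfiguration.Cloud.cscfg looks something like this (a minimal sketch; the role name is illustrative):

<Role name="TheWebRole">
  <ConfigurationSettings>
    <!-- the __ pairs mark a token; the name between them is the VSTS variable used for substitution -->
    <Setting name="CustomSetting__Setting1" value="__CustomSetting-Setting1__" />
    <Setting name="CustomSetting__Setting2" value="__CustomSetting-Setting2__" />
  </ConfigurationSettings>
</Role>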

One tip: if you have Password Encryption certificates or SSL endpoints configured, the .cscfg will have the certificates' SHA-1 thumbprints encoded in it. If you want to configure these per environment, replace them with sentinel values. The configuration checker enforces that each value looks like a thumbprint, so use values like:

  • ABCDEF01234567ABCDEF01234567ABCDEF012345
  • BACDEF01234567ABCDEF01234567ABCDEF012345

Those sentinel values will be replaced with tokens during the build process and those tokens can be replaced with variable values.
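
In the .cscfg, those sentinel values live on the certificate entries, something like the following (a sketch; the certificate names vary by project):

<Certificates>
  <!-- sentinel thumbprints that the build step below swaps for tokens -->
  <Certificate name="SslCertificate" thumbprint="ABCDEF01234567ABCDEF01234567ABCDEF012345" thumbprintAlgorithm="sha1" />
  <Certificate name="PasswordEncryption" thumbprint="BACDEF01234567ABCDEF01234567ABCDEF012345" thumbprintAlgorithm="sha1" />
</Certificates>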

We’ll use these in the build task later on.

The build definition

  1. Start with a new Empty build definition.
  2. On the process tab, choose the Hosted VS2017 Agent queue and give your build definition a name.
  3. Select Get Sources and point to your repository. This could be VSTS, GitHub or virtually any other location.
  4. Add the tasks we'll need: Visual Studio Build (three times) and Publish Build Artifacts (once).
  5. For the first Visual Studio Build task, set the following values:
    • Display name: Restore solution
    • Solution: AspNetCoreCloudService.sln
    • Visual Studio Version: Visual Studio 2017
    • MSBuild Arguments: /t:restore
    • Platform: $(BuildPlatform)
    • Configuration: $(BuildConfiguration)
  6. For the second Visual Studio Build task, use the following values:

    • Display name: Build solution
    • Solution: AspNetCoreCloudService.sln
    • Visual Studio Version: Visual Studio 2017
    • MSBuild Arguments: (none)
    • Platform: $(BuildPlatform)
    • Configuration: $(BuildConfiguration)
  7. And the third Visual Studio Build task should be set as:

    • Display name: Publish Cloud Service
    • Solution: TheCloudService\TheCloudService.ccproj
    • Visual Studio Version: Visual Studio 2017
    • MSBuild Arguments: /t:Publish /p:OutputPath=$(Build.ArtifactStagingDirectory)\
    • Platform: $(BuildPlatform)
    • Configuration: $(BuildConfiguration)
  8. If you are using sentinel certificate values, add a PowerShell task. Configure it by selecting "Inline Script", then expand Advanced, set the working folder to the publish directory (like $(Build.ArtifactStagingDirectory)\app.publish), and use the following script:

    $file = "ServiceConfiguration.Cloud.cscfg"
    # Read file
    $content = Get-Content -Path $file
    # substitute values
    $content = $content.Replace("ABCDEF01234567ABCDEF01234567ABCDEF012345", "__SslCertificateSha1__")
    $content = $content.Replace("BACDEF01234567ABCDEF01234567ABCDEF012345", "__PasswordEncryption__")
    # Save
    [System.IO.File]::WriteAllText($file, $content)
    

    This replaces the fake SHA-1 thumbprints with tokens that release management will use. Be sure to define variables in release management that match the names you use.

  9. Finally, set the Publish Artifact step to:

    • Display name: Publish Artifact: Cloud Service
    • Path to Publish: $(Build.ArtifactStagingDirectory)\app.publish
    • Artifact Name: TheCloudService
    • Artifact Type: Server
  10. Go to the Variables tab and add two variables:

    • BuildConfiguration: Release
    • BuildPlatform: Any CPU
  11. Hit Save & Queue to save the definition and start a new build. It should complete successfully. If you go to the build artifacts folder, you should see TheCloudService with the .cspkg file in it.

Deploying the build to Azure

This release process depends on one external extension that handles the tokenization, the Release Management Utility Tasks. Install it from the marketplace into your VSTS account before starting this section.

  1. In VSTS, switch to the Releases tab and create a new release definition using the “Azure Cloud Service Deployment” template.
  2. Give the environment a name, like “Cloud Service – Prod”.
  3. Click the "Add artifact" box and select your build definition.

    If you want continuous deployment, click the “lightning bolt” icon and enable the CD trigger.
  4. Click on the Tasks tab and specify an Azure subscription, storage account, service name and location. If you need to link your existing Azure subscription, click the "Manage" link. If you need a new storage account to hold the deployment artifacts, you can create one in the portal as well; just make sure to create a "Classic" storage account.
  5. Go to the Variables tab and select “Variable groups”, then “Manage variable groups.” Add a new variable group, give it a name like “AspNetCloudService Production Configuration”, select your subscription (click Manage to link one), and select the Key Vault we created earlier to hold the config. Press the Authorize button if prompted.

    Finally, click Add to select which secrets from Key Vault should be added to this variable group.

    It's important to note that it does not copy the values at this point. The secrets' values are always read on use, so they're always current. Save the variable group and return to the Release Management definition. At this point, you can select "Link variable group" and link the one we just created.
  6. Add a Tokenize with XPath/Regular Expressions task before the Azure Deployment task.
  7. In the Tokenizer task, browse to the ServiceConfiguration.Cloud.cscfg file, something like $(System.DefaultWorkingDirectory)/AspNetCoreCloudService-CI/TheCloudService/ServiceConfiguration.Cloud.cscfg depending on what you call your artifacts.
  8. Ensure that the Azure Deployment task is last, and you should be all set.
  9. Create a new release, and it should deploy successfully. If you view your cloud service configuration in the Azure portal, you should see the real values, not the __Tokenized__ ones.

Summary

That's it: you now have an ASP.NET Core Cloud Service deployed to Azure with CI/CD through VSTS. If you want to add additional environments, simply add a key vault and linked variable group for each environment, clone the existing environment configuration in the Release Management editor, and set the appropriate per-environment values. Variable groups are linked at the release definition level, so for multiple environments you can use a suffix in your variable names and update the PowerShell script in step 8 of the build definition to append it per environment (__MyVariable-Prod__), etc.

Using ASP.NET Core with Azure Cloud Services

October 16, 2017

Overview

Cloud Services may be the old-timer of Azure’s offerings, but there are still some cases where it is useful. For example, today, it is the only available PaaS way to run a Windows Server 2016 workload in Azure. Sure, you can run a Windows Container with Azure Container Services, but that’s not really PaaS to me. You still have to be fully aware of Kubernetes, DC/OS, or Swarm, and, as with any container, you are responsible for patching the underlying OS image with security updates.

In developing my Code Signing Service, I stumbled upon a hard dependency on Server 2016: the API I needed to Authenticode-sign a file using Azure Key Vault's signing methods only exists in that version of Windows. That meant that Azure App Services was out, as it uses Server 2012 (based on the version numbers from its command line). That left Cloud Service Web Roles as the sole remaining option if I wanted PaaS. I could have also used a B-Series VM, which is perfect for this type of workload, but I really didn't want to maintain a VM.

If you have tried to use ASP.NET Core with a Cloud Service Web Role, you'll probably have come away disappointed, as Visual Studio doesn't let you do this… until now. Never one to accept no for an answer, I found a way to make this work, and with a few workarounds, you can too.

The solution presented here handles deployment of an MVC & API application, along with config settings and installation of the ASP.NET Core Windows Hosting Module. The VS Cloud Service tooling still works for changing config and publishing to cloud services (though please use CI/CD in VSTS!).

Many thanks to Scott Hunter's team, Jaques Eloff and Catherine Wang in particular, for figuring out a workaround to some gotchas when installing the Windows Hosting Module.

Pieces to the puzzle

You can see the sample solution here, and it may be helpful to clone and follow along in VS.

There are a few pieces to making this work:

  1. TheWebsite: the ASP.NET Core MVC site. Nothing significantly special here, just an ordinary site.
  2. TheCloudService: the Cloud Service project. It contains the configuration files and service definition.
  3. TheWebRole: an ASP.NET 4.6 project that contains the Web Role startup scripts and "references" TheWebsite. This is where the tricks are.

At a high level, the Cloud Service “sees” TheWebRole as the configured website. The cloud service doesn’t know anything about ASP.NET Core. The trick is to get the ASP.NET Core site published and running “in” an ASP.NET site.

Doing this yourself

The Projects

In a new solution, create a new ASP.NET Core 2 project. It doesn't really matter which template you use; for the descriptions here, I'll call it TheWebsite. Build and run the site; it should debug and run normally in IIS Express.

Next, create a new Cloud Service (File -> Add -> New Project -> Cloud -> Azure Cloud Service). I'll call the cloud service TheCloudService, and in the next dialog, add a single ASP.NET Web Role. I called mine TheWebRole.

Finally, on the ASP.NET Template selection, choose “Empty” and continue.

Right now, we have an ASP.NET Core website and an Azure Cloud Service with a single ASP.NET 4.6 Web Role. Next up is to clear out almost everything from TheWebRole, since it won't actually contain any ASP.NET code. Delete the packages.config and Web.config files.

Save the project, then select "Unload" from the project's context menu. Right-click again and select "Edit TheWebRole.csproj". We need to delete the packages brought in by NuGet, along with the imported props and targets. There are three areas to delete: the props import at the top, all Reference elements with a HintPath pointing to ..\packages\, and the Target at the bottom.



At this point, your project file should look similar to the one in the sample solution. You can also view the complete diff there.

Magic

Now comes the special sauce — we need a way to have TheWebRole build TheWebsite and include TheWebsite‘s publish output as Content. Doing this ensures that TheCloudService Package contains the correct folder layout. Add the following snippet to the bottom of TheWebRole‘s project file to call Publish on our website before the main build step.

<Target Name="BeforeBuild">
  <MSBuild Projects="..\TheWebsite\TheWebsite.csproj" Targets="Publish" Properties="Configuration=$(Configuration)" />
</Target>

Then, add the following ItemGroup to include TheWebsite‘s publish output as Content in the TheWebRole project:

<ItemGroup>
  <Content Include="..\TheWebsite\bin\$(Configuration)\netcoreapp2.0\publish\**\*.*" Link="%(RecursiveDir)%(Filename)%(Extension)" />
</ItemGroup>

Save the csproj file, then right-click TheWebRole and click Reload. You can test that the cloud service package is created correctly by right-clicking TheCloudService and selecting Package. After choosing a build configuration and hitting "Package," the project should build and the output directory will pop up.

The .cspkg is really a zip file, so extract it and you'll see the guts of a cloud service package. Look for the .cssx file and extract that too (again, it's just a zip file).

Inside that, open the approot folder; it is the root of your website. If the previous steps were done correctly, you should see something like the following.

You should see TheWebsite.dll, TheWebsite.PrecompiledViews.dll, wwwroot, and the rest of your files from TheWebsite.

Congratulations, you’ve now created a cloud service that packages up and deploys an ASP.NET Core website! This alone won’t let the site run though since the Cloud Service images don’t include the Windows Hosting Module.

Installing .NET Core 2 onto the Web Role

Installing additional components onto a Web Role typically involves a startup script, and .NET Core 2 is no different. There is one complication though: the installer downloads files into the TEMP folder, and Cloud Services has a 100MB hard limit on that folder. We need to specify an alternate folder to use as TEMP with a higher quota (this is what Jaques and Catherine figured out).

In TheCloudService, expand Roles, right-click TheWebRole and open its properties. Go to Local Storage and add a new location called CustomTempPath with a 500MB limit (or whatever your app might need).
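
Behind the scenes, this adds a LocalStorage entry to the WebRole node in ServiceDefinition.csdef, something like:

<LocalResources>
  <!-- larger scratch space for the installer instead of the 100MB TEMP quota -->
  <LocalStorage name="CustomTempPath" sizeInMB="500" cleanOnRoleRecycle="false" />
</LocalResources>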

Next, we need the startup script. Go to TheWebRole, add a new folder called Startup, and add the startup files to it (a sketch of their shape follows the XML below). Ensure that the Build Action is set to Content and that Copy to Output Directory is set to Copy if newer. Finally, we need to configure the cloud service to invoke the startup task. Open the ServiceDefinition.csdef file and add the following XML in the WebRole node to define it:

<Startup>
  <Task commandLine="Startup\startup.cmd" executionContext="elevated" taskType="simple">
    <Environment>
      <Variable name="IsEmulated">
        <RoleInstanceValue xpath="/RoleEnvironment/Deployment/@emulated" />
      </Variable>
    </Environment>
  </Task>
</Startup>
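
The actual startup files live in the sample repo; as a rough sketch of their shape (the installer URL below is a placeholder, not a real link), the PowerShell side does something like this:

# startup.ps1 - a sketch, not the exact file from the sample
# Skip the install when running locally in the emulator
if ($env:IsEmulated -eq "true") { return }

# Point TEMP at the CustomTempPath local resource so the installer has room
$temp = [Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment]::GetLocalResource("CustomTempPath").RootPath
$env:TEMP = $temp
$env:TMP = $temp

# Download and silently install the .NET Core Windows Server Hosting bundle
# (placeholder URL - use the current link from the .NET download page)
$installer = Join-Path $temp "DotNetCore.WindowsHosting.exe"
Invoke-WebRequest "https://example.invalid/dotnet-hosting-bundle.exe" -OutFile $installer
Start-Process $installer -ArgumentList "/quiet", "/install" -Wait

# Restart IIS so the ASP.NET Core hosting module is loaded
iisreset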

Now we finally have a cloud service that can be deployed, installs .NET Core, and runs the website. The first time you publish, it will take a few minutes for the role instance to become available, since it has to install the hosting module and restart IIS.

Note: I leave creating a cloud service instance in the Azure Portal as an exercise to the reader

Configuration

There are many ways of getting configuration into an ASP.NET Core application. If you know you'll only be running in Cloud Services, you may consider taking a direct dependency on the Cloud Services libraries and using the RoleEnvironment types to populate your configuration. Alternatively, you could write a configuration provider that funnels the RoleEnvironment configuration into the ASP.NET Core configuration system.

In my original case, I didn't want my ASP.NET Core website to have any awareness of Cloud Services, so I came up with another way: in the startup script, I copy the values from the RoleEnvironment into environment variables that the default configuration settings pick up. The key to making this transparent is knowing that a double underscore (__) translates into a colon (:) when the setting is read from an environment variable. This means you can define a setting like CustomSetting__Setting1 and then access it with Configuration["CustomSetting:Setting1"], or similar mechanisms.

To bridge this gap, we can add this to the startup script (complete script):

# Settings to copy from the cloud service configuration
$keys = @(
  "CustomSetting__Setting1",
  "CustomSetting__Setting2"
)

# Copy each setting from the RoleEnvironment into a machine-level environment variable
foreach($key in $keys){
  [Environment]::SetEnvironmentVariable($key, [Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment]::GetConfigurationSettingValue($key), "Machine")
}

This copies the settings from the Cloud Service Role Environment into environment variables on the host, and from there, the default ASP.NET Core configuration adds them into configuration.

Considerations

  • Session affinity: if you need session affinity for session state, you'll need to configure that.
  • Data Protection API: unlike Azure App Services, Cloud Services doesn't have any default synchronization of the keys. You'll need a solution for this; if anyone comes up with a reusable one, I'll happily mention it here. More info on configuring the Data Protection API is here.
  • Local debugging: due to the way local debugging of cloud services works (it directly uses TheWebRole as a startup project in IIS Express), directly debugging the cloud service does not work with the current patterns. Instead, you can set TheWebsite as the startup project and debug that normally. The underlying issue is that TheWebRole includes TheWebsite as Content and does not copy the published files to TheWebRole's directory. It may be possible to achieve this, though you'd likely want additional .gitignore rules to prevent those files from being committed. In my case, I did not want my service to have any direct dependency on Cloud Services, so this wasn't an issue; I simply needed a Server 2016 web host.

CI / CD with VSTS

It is possible to automate build/deploy of these cloud service web role projects using VSTS. My next blog post will show how to set that up.

Update October 18: The post is live

Use all TFM’s with SDK-style projects in Visual Studio for Mac

August 29, 2017

TL;DR

You can now use SDK-style projects, with all supported TFM’s, in Visual Studio for Mac. See getting started for details.

Issue

While Visual Studio for Mac supports the SDK-style projects, there have been a couple of issues blocking use of TFM’s other than net, netstandard, and netcoreapp.

  1. Those TFM’s are hard-coded and an SDK-style project containing any other target frameworks is blocked.
  2. Xamarin on the Mac has a multi-valued MSBuildExtensionsPath property, meaning it can search for targets in multiple locations. Unfortunately, that logic is limited to the <Import /> element, so if you need to set properties, as is required to use LanguageTargets, it won't work. Fortunately, after some brainstorming with Ankit Jain and Mikayla Hutchinson, we found a solution.

Getting Started

You’ll need a few things:

  1. Latest stable channel of Visual Studio for Mac
  2. .NET Core 2 SDK (even if you’re not targeting .NET Standard 2 or .NET Core, the SDK style projects use these targets). Download here.
  3. Matt Ward‘s Extension to VSfM that removes TFM checks on SDK-style projects. Binary | Source. Install by going to Visual Studio -> Extensions... -> Install from file...

Then, create a new SDK-style project and use the latest version of the MSBuild.Sdk.Extras package, at least version 1.1.0-beta.69:

<PackageReference Include="MSBuild.Sdk.Extras" Version="1.1.0-beta.69" PrivateAssets="all" />

At the end of the project file, just before the closing tag, you’ll also need the following, as per the MSBuild SDK Extras readme:

<Import Project="$(MSBuildSDKExtrasTargets)" Condition="Exists('$(MSBuildSDKExtrasTargets)')" />

Here’s a complete example of using the SDK-style projects with an iOS class library:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>xamarinios10</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="MSBuild.Sdk.Extras" Version="1.1.0-beta.69" PrivateAssets="all" />
  </ItemGroup>

  <Import Project="$(MSBuildSDKExtrasTargets)" Condition="Exists('$(MSBuildSDKExtrasTargets)')" />
</Project>

Building these projects

These projects will build in the IDE (VSfM, VS, etc.) or on the command line. If you use the command line, you must use msbuild, not dotnet build. Keep in mind that with msbuild you must explicitly call restore first, so your build steps will look like this:

msbuild /t:restore
msbuild /p:Configuration=Release

Notes

For the beta, since it’s a SemVer2 package, you must be using the NuGet v3 feed. If your VSfM prefs have https://www.nuget.org/api/v2/, you need to update that to be https://api.nuget.org/v3/index.json.

Support

If you run into issues, please file a bug on the MSBuild SDK Extras project site: https://github.com/onovotny/MSBuildSdkExtras/issues and reach me @onovotny.

Announcing Reactive Extensions for .NET 4.0 Preview 1

May 27, 2017

I am happy to announce that the first preview of Rx.NET 4.0 is now available. This release addresses a number of issues and contains several enhancements.

The biggest enhancement is consolidating the existing packages into one main NuGet package, System.Reactive. This will prevent most of the pain around binding redirects that was caused by #205. If you are using Rx 4.0 and need to use libraries built against Rx 3.x, then you need to also install the compatibility package, System.Reactive.Compatibility. That package contains facades with type forwarders to the new assembly so types are unified correctly. You only need this compatibility package if you are consuming a library built against 3.x; you do not need it otherwise.
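
For example, a .NET Framework project consuming Rx 4.0 alongside a library built against 3.x might reference both packages (the version numbers here are illustrative):

<ItemGroup>
  <!-- the consolidated Rx package -->
  <PackageReference Include="System.Reactive" Version="4.0.0-preview.1" />
  <!-- only needed when consuming a library built against Rx 3.x -->
  <PackageReference Include="System.Reactive.Compatibility" Version="4.0.0-preview.1" />
</ItemGroup>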

If you’re interested in the background behind the version numbers, I suggest reading the thread as it contains the gory details. While the idea was technically sound, it did mean that binding redirects were required for all .NET Framework uses. We heard the feedback loud and clear that this was really painful and took steps to fix it in 4.0.

The fix was to consolidate the previous set of packages into a single System.Reactive package. With the single package, binding redirects are no longer required and the platforms will get the correct Rx package version.

Please try it out and let us know if you encounter any issues at our repo. The full release notes are there too.

Using Xamarin Forms with .NET Standard – VS 2017 Edition

April 23, 2017

I have previously blogged about using .NET Standard with Xamarin Forms. Since then, the tooling has changed significantly with Visual Studio 2017 and Visual Studio for Mac. This post will show you what you need to use Xamarin.Forms with a .NET Standard class library.

Why use a .NET Standard class library instead of a PCL? There are many good reasons, but the two biggest ones are:

  • Much bigger surface area: PCL's were the least-common-denominator intersection of the supported platforms. The end result was that while the binary worked on many platforms, there was a much more limited set of APIs available. .NET Standard 1.4 is the version that supports UWP, Xamarin Android, Xamarin iOS, and Xamarin.Mac.
  • "SDK style" project file goodness: legacy PCL's use the old csproj format, which has tons of gunk in it. While it is possible to use the new project style to generate legacy PCLs (if you use my MSBuild.Sdk.Extras package), it's time to move past those. If you target .NET Standard 1.0-1.2, some PCL profiles can install your library. See the full table for the list.

Prerequisites

Using .NET Standard requires you to use PackageReference to eliminate the pain of “lots of packages” as well as properly handle transitive dependencies. While you may be able to use .NET Standard without PackageReference, I wouldn’t recommend it.

You’ll need to use one of the following tools:

Getting Started

As of now, the project templates for creating a new Xamarin Forms project start with an older-style packages.config template, so whether you create a new project or have an existing project, the steps will be pretty much the same.

Step 1: Convert your projects to use PackageReference. The NuGet blog has details on using PackageReference with all project types. Unfortunately, there's no migration tool yet, so it's probably easiest to uninstall your existing packages, make sure the packages.config file is gone, and then reinstall the packages after setting the VS options to PackageReference. You can also do it by hand (which is what I did for my projects).

Step 2: As part of this, you can remove dependencies from your "head" projects that are already referenced by the other projects you reference. This should simplify things dramatically for most projects. In the future, when you want to update to the next Xamarin Forms version, you can update it in one place, not 3-4 places. It also means you only need the main Xamarin.Forms package, not each of the packages it pulls in.

For now, you’ll need to add the <RestoreProjectStyle>PackageReference</RestoreProjectStyle> property near the top of your iOS and Android csproj files. That tells NuGet restore to use the PackageReference mode even if you don’t have any direct packages (this is important for transitive restore). If you have any PackageReference elements in your iOS or Android csproj, then you don’t need this. For UWP, you already should have a PackageReference to the UWP meta-package (Microsoft.NETCore.UniversalWindowsPlatform version 5.3.2).

If you hit any issues with binaries not showing up in your bin directories (for your Android and iOS “head” projects), make sure that you have set CopyNuGetImplementations to true in your csproj.
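
Putting those two together, a sketch of the properties the top of an Android or iOS head project csproj might gain:

<PropertyGroup>
  <!-- force PackageReference-style restore even with no direct package references -->
  <RestoreProjectStyle>PackageReference</RestoreProjectStyle>
  <!-- copy NuGet-provided binaries into the output directory -->
  <CopyNuGetImplementations>true</CopyNuGetImplementations>
</PropertyGroup>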

At this point, your project should be compiling and working, but not yet using netstandard1.x anywhere.

Step 3: Move your PCL library to .NET Standard. This is the hard part today, as there's no tooling to do it automatically. Be warned: DO NOT use the "Target .NET Platform Standard" option in the PCL project properties. It is broken and will create a project.json-based library targeting dotnet. I hope this option is removed in a future VS update! Instead, go to File -> New Project -> .NET Standard -> Class Library and create a new class library. If this is a new project, I'd simply delete the existing PCL and just use the new one. If it's an existing project, you'll want to migrate. The new format is far simpler, and moving the PCL by hand is usually pretty easy. What I've usually done is this:

  1. Close the solution in VS
  2. Take the existing csproj and make a copy of it somewhere else. I’ll keep this other copy open in Notepad.
  3. Copy/paste the contents of the new project you created and replace the contents of your existing project. Most of what you had in the old project isn't really needed anymore. What you'll likely need are settings like signing or assembly names that don't match the folder name/conventions. If you have ResX files with design-time generated code, you'll need to add the appropriate ItemGroup entries (see the sample repo). Likewise, for Xamarin Forms pages, you'll need the XAML ItemGroup shown in the example below.
  4. Decide which .NET Standard version to target, probably 1.4, based on the table. Here’s a cheat sheet:
    • If you only want to support iOS and Android, you can use .NET Standard 1.6. In practice, though, most features are currently available at .NET Standard 1.3 and up.
    • If you want to support iOS, Android and UWP, then NET Standard 1.4 is the highest you can use.
    • If you want to support Windows Phone App 8.1 and Windows 8.1, then NET Standard 1.2 is your target.
    • If you’re still supporting Windows 8, .NET Standard 1.1 is for you.
    • Finally, if you need to support Windows Phone 8 Silverlight, then .NET Standard 1.0 is your only option.

Once you determine the netstandard version you want, in your csproj, set the TargetFramework to it — netstandard1.4, etc.

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netstandard1.4</TargetFramework>
    <PackageTargetFallback>portable-net45+win8+wpa81+wp8</PackageTargetFallback>
    <DebugType>full</DebugType>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Xamarin.Forms" Version="2.3.4.231" />
  </ItemGroup>

  <ItemGroup>
    <!-- https://bugzilla.xamarin.com/show_bug.cgi?id=55591 -->
    <None Remove="**\*.xaml" />

    <Compile Update="**\*.xaml.cs" DependentUpon="%(Filename)" />
    <EmbeddedResource Include="**\*.xaml" SubType="Designer" Generator="MSBuild:UpdateDesignTimeXaml" />
  </ItemGroup>

</Project>

Note the addition of the PackageTargetFallback property. This is required to tell NuGet that the specified TFM is compatible because the Xamarin.Forms package has not yet been updated to use netstandard directly. Also note that setting DebugType to full is currently required for the Xamarin tool-chain, as it doesn't yet support the portable PDBs that are created by default.

At this point, when you reload the project, it should restore the packages and build correctly. You may need to do a full clean/rebuild.

Seeing it in action

I created a sample solution showing this all working over on GitHub. It's a good idea to clone, build and run it to ensure your environment and tooling are up-to-date. If you get stuck converting your own projects, I'd recommend referring back to that repo to find the differences.

Building on command line

You will need to use MSBuild.exe to build this, either on Windows with a VS 2017 command prompt or a Mac with Visual Studio for Mac. You cannot use dotnet build for these projects types. dotnet build only supports .NET Standard, .NET Core and .NET Framework project types. It is not able to build the Xamarin projects and the custom tasks in Xamarin Forms have not yet been updated to support .NET Core.

To build, you’ll need two steps:

  1. msbuild /t:restore MySolution.sln
  2. msbuild /t:build /p:Configuration=Release MySolution.sln

You can also restore/build the .csproj files individually if you’d prefer.

As always, feel free to tweet me @onovotny as well.

Multi-targeting the world: a single project to rule them all

January 4, 2017

Starting with Visual Studio 2017, you can now use a single project to build platform-specific libraries for all project types. This blog will explore why you might want to do this, how to do it and workarounds for some point-in-time issues with the tooling.

Intro

Since the beginning of .NET Core, the project.json format has enabled multi-targeting, that is compiling to multiple target frameworks in parallel and creating an output for each. With ASP.NET Core, it’s common to target both net45 and netcoreapp1.0 so you can deploy the site to either the desktop framework, which runs on Windows, or to the CoreCLR, which runs cross-platform. Multi-targeting is nothing more than compiling the same code multiple times, once per target platform. Each target can specify its own dependencies and ifdef‘s, so you can easily tailor the code to the specific platform.

Another example: a library might target netstandard1.0, netstandard1.3, and net45 to enable different levels of functionality based on the available surface area.

While it was also possible to target UWP, Win8, or profile-based PCL's using project.json, doing so required hacks like private copies of all reference assemblies, WinMD files and more. Beyond that, some things didn't work correctly, as some platforms require additional targets to generate additional outputs, like .pri files on UWP for resource lookup. So while technically possible, full multi-targeting was brittle and required you to stay in a very narrow path, avoiding things like resources or GUI elements that require the full tool-chain to process.

Enter MSBuild

With the move to MSBuild as part of the .NET Core Tooling direction change, the picture gets much better, so much so that with VS 2017 RC2, you can correctly multi-target all platform types, including UWP, profile-based PCL’s, and Xamarin iOS/Android. Not only that, but by conditionally including/excluding directories based on globs, you can reduce the need for ifdef‘s in many cases.

As part of being open sourced and enabled to run cross-platform, the build targets and tasks required to actually do the build were combined into an SDK. This went along with drastic simplification of the csproj file to have a minimal footprint, that will get even smaller, like this:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp1.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.NETCore.App" Version="1.0.1" />
  </ItemGroup>
</Project>

Microsoft's blog details all of the improvements in this area. For lack of a better term, I'll call projects based on these new tools "SDK style." The easiest way to identify these "SDK style" projects is by looking for the Sdk attribute in the top Project element.

Multi-targeting vs. .NET Standard Libraries vs. PCL’s

Before we go further, let’s answer this question that many people have asked — why would you want to multi-target vs just use a single portable library, whether that’s .NET Standard or an older profile-based PCL?

There are several answers to that question — first, if your code can all fit within a single .NET Standard-based library, then there’s no reason to multi-target. If you’re using a legacy profile-based PCL, at the very least consider moving up to the equivalent .NET Standard version. Don’t make more work for yourself. The decision to multi-target falls out of a need to use functionality that doesn’t exist within a .NET Standard version or if you need to target an earlier platform that doesn’t support the .NET Standard version you need. A common example is that many libraries still need to support .NET 4.5. Despite a significant amount of functionality available in .NET Standard 1.3, that .NET Standard version only supports .NET 4.6+. Chances are though that the code would work “just fine” on .NET 4.5, so it’s easy to multi-target to both net45 and netstandard1.3.

The other main reason why you’d need to multi-target is to use platform-specific code within your library. For example, on iOS you might want to use SecKeyChain for saved credentials, on Android use its Context to access shared services like preferences, and on Windows its Credential Manager. You might have a common method called GetCredential that other code uses to get the data. Today you might use dependency injection or reflection to access a “.Platform” library with a specific implementation that your common code uses. Instead, you can choose to multi-target and access the platform code directly.

How to multi-target

Let me start by saying that the methods here are based on the new “SDK-style” projects that VS 2017 provides. They orchestrate using the existing project types that are installed by Visual Studio. As such, the build itself won’t work on a box without the other tools installed (so you’re building on a Windows box, much like you probably are today). Some of these may work on a Mac with Visual Studio for Mac but I have not tested that in any way. When you install Visual Studio 2017, make sure to install all of the tools for the project types you need (Xamarin, UWP, etc) and also the .NET Core Tooling.

There’s no UI in VS for adding additional target frameworks, but I have some samples that show what to do.

First, create a new .NET Core Class Library project. If you don’t see the following option, make sure to install the .NET Core workload in the VS Installer.

New .NET Core Class Library
.NET Core workload

Right-click the project and select “Edit project file…”. This is new in VS 2017 – the ability to edit the project file while it’s open and have changes instantly reflected.

In the editor, after noticing how much less boilerplate there is now, look for the TargetFramework property, which looks like this: <TargetFramework>netstandard1.3</TargetFramework>. Change that to <TargetFrameworks>netstandard1.3;net45</TargetFrameworks> to target .NET 4.5 and .NET Standard 1.3. You can add however many targets you want to that semicolon-separated list. It's subtle, but note the difference between the property names TargetFramework and the plural TargetFrameworks. It's easy to miss.
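
After the change, the property group looks something like this:

<PropertyGroup>
  <!-- note the plural element name -->
  <TargetFrameworks>netstandard1.3;net45</TargetFrameworks>
</PropertyGroup>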

For some frameworks, like .NET 4.5, that’s all you need to do. However, targeting .NET Standard and .NET 4.x is far from “the world.” We can do better! You would think it should be as easy as adding additional TFM’s like uap10.0, xamarin.ios10 or MonoAndroid70 to the list, and hopefully by the time the tools RTM it will be, but for now we need to add extra properties to the project file to tell MSBuild what to do with those.

Fortunately, and here’s the real secret, the “SDK-style” build system has a LanguageTargets property that you can specify per TFM to import the targets for that project type instead of the vanilla Microsoft.CSharp.targets import. That means we can use the “Windows Xaml”, Android, iOS, or any other platform tool-chain we need.
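
For example, a conditioned property group along these lines pulls in the UWP tool-chain for the uap10.0 target; the path is the usual location of the Windows XAML targets, but treat the details as illustrative rather than exact:

<PropertyGroup Condition="'$(TargetFramework)' == 'uap10.0'">
  <!-- use the platform's own targets instead of the vanilla Microsoft.CSharp.targets -->
  <LanguageTargets>$(MSBuildExtensionsPath)\Microsoft\WindowsXaml\v$(VisualStudioVersion)\Microsoft.Windows.UI.Xaml.CSharp.targets</LanguageTargets>
  <TargetPlatformVersion>10.0.14393.0</TargetPlatformVersion>
  <TargetPlatformMinVersion>10.0.10240.0</TargetPlatformMinVersion>
</PropertyGroup>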

Xamarin Example

In the example here, I have a class library that multi-targets net45, uap10.0, netstandard1.3, Xamarin.iOS10 and MonoAndroid70. In this contrived library, I have a Greeter class calling a Hello() method that needs platform-specific code. I'm using a pattern with a directory for each TFM whose code gets included only for that target, so no ifdef's are needed, as sketched below. For Android, Resources are supported if you need them. While the example doesn't currently use them, you could use PList's, xib's or Story Boards on iOS, Page's on UWP, or any other "native" file type supported by the platform.
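
A sketch of that include pattern (the folder layout is illustrative, and the condition strings must match the TFMs in your TargetFrameworks list):

<!-- keep the platform folders out of the default compile glob -->
<ItemGroup>
  <Compile Remove="Platforms\**\*.cs" />
</ItemGroup>

<!-- add each folder back only for its own TFM -->
<ItemGroup Condition="'$(TargetFramework)' == 'MonoAndroid70'">
  <Compile Include="Platforms\android\**\*.cs" />
</ItemGroup>
<ItemGroup Condition="'$(TargetFramework)' == 'Xamarin.iOS10'">
  <Compile Include="Platforms\ios\**\*.cs" />
</ItemGroup>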

Win81/WP8/PCL/Wpa81/Xamarin/Net45 Example

As a more realistic example, one of my libraries, Zeroconf, an mDNS discovery library, targets "the world." It currently has concrete implementations for wp8, Wpa81, Win8, portable-Wpa81+Win81, uap10.0, net45, and netstandard1.3 (which supports Xamarin and CoreCLR). In addition to the concrete implementations, it provides a netstandard1.0 façade to support being used in portable libraries. The different concrete implementations are required due to differences between the various Windows networking stacks. For now, the uap10.0 version cannot use the netstandard1.3 version until NetworkInformation is fully supported by the platform, so it continues to use the WinRT variant. You can see the platform-specific code in the platforms directory and how it's conditionally included by the csproj in the ItemGroups.

The property groups at the top contain the LanguageTargets and properties needed. For portable-Wpa81+Win81, two extra items are required, as that special PCL profile also supports WinRT. The ItemGroup there has two TargetPlatform elements to pull in the correct .winmd references.

Building

You can build the libraries either in VS 2017 or the command-line. If you use the command line, you’ll want to run the following from a VS 2017 Developer Command Prompt: msbuild /t:restore followed by msbuild /t:build. If you want to create a NuGet package, you can run msbuild /t:pack. It’s important to note that you must currently use msbuild, the desktop version in the VS 2017 path, to build these and not dotnet build. The reason is that while dotnet build calls MSBuild, it’s currently using a CoreCLR version even though the desktop version is present in your VS installation. The engineering team is aware of this and in the future, dotnet build will be smart enough to call the desktop version of msbuild when present. The “regular” targets file we’re using to support the platform-specific features are designed for Desktop MSBuild. They do not yet have support for CoreCLR tasks. Bottom line, as of the current release: if your targets use build tasks, then you need to provide both CoreCLR and Desktop versions of the library in order to support both “regular” MSBuild and dotnet build.

Common gotcha’s

There are several bugs in the tool-chain currently that are in the process of being fixed:

  • Some Project-to-project (p2p) references aren’t resolving correctly. Whereas they should resolve to the “best” match, they are resolving to the first TFM in the list.
  • Another bug is preventing a “legacy” csproj from doing a p2p reference with a “Portable Library can only reference other portable library” error.
  • Files that are conditionally included won’t show up in the Solution Explorer. As a workaround, include all files with None as the first item group (see example).
  • For iOS (and possibly Android), you need to set DebugType to full, as the Xamarin ConvertPdb2Mdb task doesn't yet support the new portable PDB format generated by this tool-chain (see the sketch after this list).
  • Win8, Win81, and uap10.0 aren’t correctly understood by the NuGet targets today. As a workaround, you need to include the NugetTargetMoniker property set to the full TFM as shown here. Similarly, for legacy PCL targets, it requires Version=v0.0 in the NugetTargetMoniker here. These should hopefully be fixed by GA.
  • Windows assemblies that use resources need a .pri file alongside them. They’re currently missing from the generated NuGet. Workaround is to use your own .NuSpec for now until the bug is fixed.
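
For instance, the DebugType workaround from the list above is just a conditioned property group (a sketch):

<!-- Xamarin's ConvertPdb2Mdb can't read portable PDBs yet -->
<PropertyGroup Condition="'$(TargetFramework)' == 'Xamarin.iOS10'">
  <DebugType>full</DebugType>
</PropertyGroup>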

Into the weeds, how it all works

This is by no means an official explanation, it’s what I’ve found from exploring the SDK build targets. Some of the terminology and concepts may change over time.

The “SDK style” projects consist of a set of targets/tasks that are pre-installed with MSBuild (and the CLI tools). You can see them in the following directory: C:\Program Files (x86)\Microsoft Visual Studio\2017\<sku>\MSBuild\Sdks where <sku> is Community, Professional, or Enterprise, depending on what you installed. The two SDK’s you’re likely to use directly are Microsoft.NET.Sdk and Microsoft.NET.Sdk.Web.

The Sdk attribute causes an Sdk.props and Sdk.targets within the specified SDK’s \Sdk directory to be imported before and after the project file. The Microsoft.NET.Sdk SDK’s targets defines an “outer” and “inner” build. The “outer-loop” is what your project file directly defines, including several TFM’s in the TargetFrameworks property. If you only have a single build with a TargetFramework property defined, then there’s only an “inner-loop”.

For an “outer-loop” build, the SDK targets imports props/targets in a buildCrossTargeting directory (soon to be renamed to buildMultiTargeting). Those get auto-included before and after the main project file (props before, targets after.) The “outer-loop” targets will eventually loop through each of the TargetFrameworks calling msbuild again in an “inner-loop” with TargetFramework set to one TFM. This “inner-loop” build is what we currently have in today’s “normal” project types. The “inner-loop” build provides an extension point for providing your language-specific targets (the Import that was at the bottom of your old csproj before) in place of the “vanilla” one it’ll include by default. By providing a LanguageTargets property for the “inner-loop,” conditioned by TFM, we can use the “original” targets that invoke the full tool-chain for the target platform. See here, here and here for UWP, iOS, and Android, respectively.

Within each conditionally defined property group, we can set properties that are specific to a particular “inner-loop.” These correspond to the properties in your existing platform-specific project file and are used by the platform-specific targets specified.

One thing you currently give up is any UI in VS for configuring these properties. Perhaps it will return sometime in the future. For now, one thing I've found helpful is to maintain a few "dummy" projects where I can edit settings in the UI, see the resulting values, and then put them into my multi-targeting csproj.

Looking forward

As of today (January 4, 2017), the tooling is in a fairly rough state. The .NET Core tooling is rightfully in an “alpha” state. The MSBuild SDK is under active development and things will change before GA. There are a number of issues in the tooling that can make it hard to use today, but I expect those to be fixed soon. Most of the bugs I’ve found are slated to be fixed in the RC3 time-frame, and I’d expect things to be better with that release.

As to whether-or-not to take the plunge today: I’d suggest that if you have a tolerance for figuring this out and reporting issues you’ll encounter, then go for it. If you have a complex project today that already multi-targets a different way (most likely by using multiple “head” projects and shared code project types), I would recommend trying this out in a branch to see how far you get. I’ll be happy to help, just give me a shout. The more the community bangs on this stuff up front, the more issues can be addressed prior to GA.

Acknowledgments

Many thanks to Brad Wilson, Joe Morris, and Daniel Plaisted for reviewing this post and providing feedback.

Authenticode Signing Service and Client

September 12, 2016

Last night I published a new project on GitHub to make it easier to integrate Authenticode signing into a CI process by providing a secured API for submitting artifacts to be signed by a code signing cert held on the server. It uses Azure AD with two application entries for security:

  1. One registration for the service itself
  2. One registration to represent each code signing client you want to allow

Azure AD was chosen as it makes it easy to restrict access to a single application/user in a secure way. Azure App Services also provide a secure location to store certificates, so the combination works well.

The service currently supports either individual files, or a zip archive that contains supported files to sign (works well for NuGet packages). The service code is easy to extend if additional filters or functionality is required.

Supported File Types

  • .msi, .msp, .msm, .cab, .dll, .exe, .sys, .vxd, and any PE file (via SignTool)
  • .ps1 and .psm1 via Set-AuthenticodeSignature (see the example after this list)
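
For the PowerShell file types, signing uses the built-in Set-AuthenticodeSignature cmdlet; as a minimal standalone example of what that cmdlet does (the thumbprint and timestamp URL here are placeholders):

# Sign a script with a cert from the current user's store (sketch)
$cert = Get-ChildItem Cert:\CurrentUser\My\0123456789ABCDEF0123456789ABCDEF01234567
Set-AuthenticodeSignature -FilePath .\script.ps1 -Certificate $cert -TimestampServer "http://timestamp.example" -HashAlgorithm SHA256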

Deployment

You will need an Azure AD tenant. These are free if you don't already have one. In the "old" Azure portal, you'll need to create two application entries: one for the server and one for your client.

Azure AD Configuration

Server

Create a new application entry for a web/api application. Use whatever you want for the sign-on URI and App ID URI (but remember what you use for the App ID URI, as you'll need it later). On the application properties, edit the manifest to add an application role.

In the appRoles element, add something like the following:

{
  "allowedMemberTypes": [
    "Application"
  ],
  "displayName": "Code Sign App",
  "id": "<insert guid here>",
  "isEnabled": true,
  "description": "Application that can sign code",
  "value": "application_access"
}

After updating the manifest, you’ll likely want to edit the application configuration to enable “user assignment.” This means that only assigned users and applications can get an access token to/for this service. Otherwise, anyone who can authenticate in your directory can call the service.

Client

Create a new application entry to represent your client application. The client will use the "client credentials" flow to log in to Azure AD and access the service as itself. For the application type, also choose "web/api" and use anything you want for the app id and sign-in URL.

Under application access, click “Add application” and browse for your service (you might need to hit the circled check to show all). Choose your service app and select the application permission.



Finally, create a new client secret and save the value for later (along with the client id of your app).

Server Configuration

Create a new App Service on Azure (I used a B1 for this as it’s not high-load). Build/deploy the service however you see fit. I used VSTS connected to this GitHub repo along with a Release Management build to auto-deploy to my site.

In the Azure App Service, in the certificates area, upload your code signing certificate and take note of its thumbprint. Then go to the settings section and add the following entries:

  • CertificateInfo:Thumbprint: the thumbprint of the cert to sign with
  • CertificateInfo:TimeStampUrl: the URL of the timestamp server
  • WEBSITE_LOAD_CERTIFICATES: the thumbprint of your cert; this exposes the cert's private key to your app in the user store
  • Authentication:AzureAd:Audience: the App ID URI of your service from its application entry
  • Authentication:AzureAd:ClientId: the client id of your service app from its application entry
  • Authentication:AzureAd:TenantId: the Azure AD tenant id, either the guid or a name like mydirectory.onmicrosoft.com

Enable "always on" if you'd like, disable PHP, then save the changes. Your service should now be configured.

Client Configuration

The client is distributed via NuGet and uses both a json config file and command line parameters. Common settings, like the client id and service url are stored in a config file, while per-file parameters and the client secret are passed in on the command line.

You’ll need to create an appsettings.json similar to the following:

{
  "SignClient": {
    "AzureAd": {
      "AADInstance": "https://login.microsoftonline.com/",
      "ClientId": "<client id of your client app entry>",
      "TenantId": "<guid or domain name>"
    },
    "Service": {
      "Url": "https://<your-service>.azurewebsites.net/",
      "ResourceId": "<app id uri of your service>"
    }
  }
}

Then, somewhere in your build, you’ll need to call the client tool. I use AppVeyor and have the following in my yml:

environment:
  SignClientSecret:
    secure: <the encrypted client secret using the appveyor secret encryption tool>

install: 
  - cmd: appveyor DownloadFile https://dist.nuget.org/win-x86-commandline/v3.5.0-rc1/NuGet.exe
  - cmd: nuget install SignClient -Version 0.5.0-beta3 -SolutionDir %APPVEYOR_BUILD_FOLDER% -Verbosity quiet -ExcludeVersion -pre

build: 
 ...

after_build:
  - cmd: nuget pack nuget\Zeroconf.nuspec -version "%GitVersion_NuGetVersion%-bld%GitVersion_BuildMetaDataPadded%" -prop "target=%CONFIGURATION%" -NoPackageAnalysis
  - ps: '.\SignClient\SignPackage.ps1'
  - cmd: appveyor PushArtifact "Zeroconf.%GitVersion_NuGetVersion%-bld%GitVersion_BuildMetaDataPadded%.nupkg"  

SignPackage.ps1 looks like this:

$currentDirectory = split-path $MyInvocation.MyCommand.Definition

# See if we have the ClientSecret available
if([string]::IsNullOrEmpty($env:SignClientSecret)){
    Write-Host "Client Secret not found, not signing packages"
    return;
}

# Setup Variables we need to pass into the sign client tool

$appSettings = "$currentDirectory\appsettings.json"

$appPath = "$currentDirectory\..\packages\SignClient\tools\SignClient.dll"

$nupkgs = ls $currentDirectory\..\*.nupkg | Select -ExpandProperty FullName

foreach ($nupkg in $nupkgs){
    Write-Host "Submitting $nupkg for signing"

    dotnet $appPath 'zip' -c $appSettings -i $nupkg -s $env:SignClientSecret -n 'Zeroconf' -d 'Zeroconf' -u 'https://github.com/onovotny/zeroconf' 

    Write-Host "Finished signing $nupkg"
}

Write-Host "Sign-package complete"

The parameters to the signing client are as follows. There are two modes: file for a single file, and zip for a zip-type archive:

usage: SignClient <command> [<args>]

    file    Single file
    zip     Zip-type file (NuGet, etc)

File mode:

usage: SignClient file [-c <arg>] [-i <arg>] [-o <arg>] [-h <arg>]
                  [-s <arg>] [-n <arg>] [-d <arg>] [-u <arg>]

    -c, --config <arg>            Full path to config json file
    -i, --input <arg>             Full path to input file
    -o, --output <arg>            Full path to output file. May be same
                                  as input to overwrite. Defaults to
                                  input file if omitted
    -h, --hashmode <arg>          Hash mode: either dual or Sha256.
                                  Default is dual, to sign with both
                                  Sha-1 and Sha-256 for files that
                                  support it. For files that don't
                                  support dual, Sha-256 is used
    -s, --secret <arg>            Client Secret
    -n, --name <arg>              Name of project for tracking
    -d, --description <arg>       Description
    -u, --descriptionUrl <arg>    Description Url

Zip-type archive mode, including NuGet:

usage: SignClient zip [-c <arg>] [-i <arg>] [-o <arg>] [-h <arg>]
                  [-f <arg>] [-s <arg>] [-n <arg>] [-d <arg>] [-u <arg>]

    -c, --config <arg>            Full path to config json file
    -i, --input <arg>             Full path to input file
    -o, --output <arg>            Full path to output file. May be same
                                  as input to overwrite
    -h, --hashmode <arg>          Hash mode: either dual or Sha256.
                                  Default is dual, to sign with both
                                  Sha-1 and Sha-256 for files that
                                  support it. For files that don't
                                  support dual, Sha-256 is used
    -f, --filter <arg>            Full path to file containing paths of
                                  files to sign within an archive
    -s, --secret <arg>            Client Secret
    -n, --name <arg>              Name of project for tracking
    -d, --description <arg>       Description
    -u, --descriptionUrl <arg>    Description Url

Contributing

I’m very much open to any collaboration and contributions to this tool to enable additional scenarios. Pull requests are welcome, though please open an issue to discuss first. Security reviews are also much appreciated!

Connecting SharePoint to Azure AD B2C

September 8, 2016

Overview

This post will describe how to use Azure AD B2C as an authentication mechanism for SharePoint on-prem/IaaS sites. It assumes a working knowledge of identity and authentication protocols, WS-Federation (WsFed) and OpenID Connect (OIDC). If you need a refresher on those, there are some great resources out there, including Vittorio Bertocci’s awesome book.

Background

Azure AD B2C is a hyper-scalable standards-based authentication and user storage mechanism typically aimed at consumer or customer scenarios. It is a separate product from “regular” Azure AD. Whereas “regular” Azure AD is normally meant to house identities for a single organization, B2C is designed to host identities of external users. In my opinion, it’s the best alternative to writing your own authentication mechanism (which no one should ever do!)

For one client, we had a scenario where we needed to enable external users to access specific site collections within SharePoint. Azure AD wasn’t a good fit, even with the B2B functionality, as we needed to collect additional information during user sign-up. Out-of-the-box, B2C doesn’t yet support WsFed or SAML 1.1 and SharePoint doesn’t support OpenID Connect. This leaves us needing a tool that can bridge B2C to SharePoint by acting as an OIDC relying party (RP) to B2C and a WsFed Identity Provider (IdP) to SharePoint.

The Solution

Fortunately, the identity gurus Dominick Baier and Brock Allen created just such a tool with IdentityServer 3. From the docs:

IdentityServer is a framework and a hostable component that allows implementing single sign-on and access control for modern web applications and APIs using protocols like OpenID Connect and OAuth2.

IdentityServer has plugins to support additional functionality, like acting as a WsFed IdP. This means we can use IdentityServer as a bridge from OIDC to WsFed. We’ll register an application in B2C for IdentityServer and then create an entry in IdentityServer for SharePoint.

Here’s a diagram of the pieces:
Diagram

While you can use IdentityServer to act as an IdP to multiple clients, in the model we used, we considered IdentityServer as "part of SharePoint." That is, SharePoint is the only client and in B2C, the application entry visible is called "SharePoint." I mention this because B2C allows applications to choose different policies/flows for sign up/sign in, password reset, and more. In our solution, we've configured IdentityServer to use a particular set of policies that meet SharePoint's needs; it may not meet the needs of other applications.

Diving deep

As mentioned, IdentityServer isn't so much a "drop-in product"; rather, it's a framework that needs customization. The rest of this post will look at how we customized and configured B2C, IdentityServer and SharePoint to enable the end-to-end flow.

B2C

Let's start with B2C. As far as B2C is concerned, we register a new web application and create a couple of policies: sign-up/sign-in and password reset. When you register the application, enter the redirect URIs you'll need for IdentityServer (localhost and/or your real URL). You don't need a client secret for these flows.

IdentityServer

Follow the IdentityServer getting started guide to create a blank ASP.NET 4.6 MVC site and install/configure IdentityServer. ASP.NET Core is not yet supported on CoreCLR as .NET Core doesn’t yet have the XML cryptography libraries needed for WsFed (that support will come as part of .NET Standard 2.0.) After installing the IdentityServer3 NuGet, you’ll need to install the WsFed plugin NuGet.

The key here is that we don't need a local user store, as IdentityServer won't be acting as the user database. We just need to configure B2C as an identity provider and the WsFed plugin to act as an IdP. IdentityServer won't maintain any state and is simply a pass-through, validating JWTs and issuing SAML tokens.

Below, I’ll explain some of the core snippets; the full set of files are available here.

Inventory

There are several areas of IdentityServer that need to either be configured or have custom code added:

  • Identity Provider: B2C via the standard OIDC OWIN middleware
  • WS-Federation plugin: the IdentityServer plugin for WsFed
  • Relying parties: an entry or two for your WsFed/SAML client (SharePoint or a test app configured with WsFed auth)
  • User Service: the IdentityServer component for mapping external auth to users

Identity Provider

We need to configure B2C as an OIDC middleware to IdentityServer. Due to the way B2C works, we need some additional code to handle the different policies — it’s not enough to configure a single OIDC endpoint. For normal flows, it’ll default to the policy specified in “SignInPolicyId”. Where it gets tricky is in handling password reset.

First, let's look at the normal "happy path" flow, where a user either signs up or signs in. Here's what the flow looks like:
Sign in

In B2C, password reset is a separate policy and thus requires a specific call to the /authorize endpoint specifying the password reset policy to use. The "combined sign up/sign in" policy, which is recommended as it's the most styleable, provides a link for "password reset". Clicking that link, however, returns a specific error code to the app that started the sign-up flow; it's up to the app to start the password reset flow. Then, once the password reset flow is complete, despite appearing to be authenticated (as defined by having had a signed JWT returned), B2C's SSO mechanisms won't consider the user signed in. You'll notice this if you try to use a profile edit flow or any other flow where SSO should have signed in the user without additional prompting. The guidance from the B2C team here is that after the password reset flow completes, an app should immediately trigger the sign-in flow again. This makes sense: the user started the password reset from the sign-in screen, so once the password is reset, they should resume there to actually sign in.

Sign in with password reset

Implementing this all with IdentityServer requires a little bit of extra code. Unfortunately, with IdentityServer, we cannot simply add individual OIDC middleware instances for each endpoint as we would in a normal web app because IdentityServer will see them as different providers and present an identity provider selection screen. To avoid this, we are only configuring a single identity provider and passing the policy as an authentication parameter. The B2C samples provide a PolicyConfigurationManager class that can retrieve and cache the OIDC metadata for each of the policies (sign-up/sign-in and password reset).

Here’s an example from Startup.Auth.B2C.cs:

ConfigurationManager = new PolicyConfigurationManager(
    string.Format(CultureInfo.InvariantCulture, B2CAadInstance, B2CTenant, "/v2.0", OIDCMetadataSuffix),
    new[] { SignUpPolicyId, ResetPasswordPolicyId }),

The main work in getting IdentityServer to handle the B2C flows is in handling the OpenID Connect events RedirectToIdentityProvider, AuthenticationFailed, and SecurityTokenValidated. By handling these three, we can bounce between the flows.

In the Startup.Auth.B2C.cs file, the OnRedirectToIdentityProvider event handler looks for the policy authentication parameter and ensures the correct /authorize endpoint is used. As IdentityServer handles the initial auth call, we cannot specify a policy parameter, so we assume it's a sign-in. IdentityServer tracks some state for the sign-in request, and we'll need access to it in case the user needs to do a password reset later, so we store it in a short-lived, encrypted session cookie.
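A minimal sketch of that handler, assuming the PolicyConfigurationManager from the B2C samples and a hypothetical "policy" authentication parameter name (the actual parameter and cookie names in the full source may differ):

RedirectToIdentityProvider = async notification =>
{
    // The desired policy arrives as an authentication parameter on the
    // challenge. IdentityServer's own initial call carries none, so we
    // assume the combined sign-up/sign-in policy.
    string policy = null;
    var challenge = notification.OwinContext.Authentication.AuthenticationResponseChallenge;
    if (challenge != null)
    {
        challenge.Properties.Dictionary.TryGetValue("policy", out policy);
    }
    policy = policy ?? SignUpPolicyId;

    // Swap in the /authorize endpoint from that policy's metadata.
    var mgr = (PolicyConfigurationManager)notification.Options.ConfigurationManager;
    var config = await mgr.GetConfigurationByPolicyAsync(CancellationToken.None, policy);
    notification.ProtocolMessage.IssuerAddress = config.AuthorizationEndpoint;
},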

Once the B2C flow comes back, we need to handle both the failed and validated events. If failed, we look for the specific error codes and take appropriate action. If success, we check if it’s from a password reset and then bounce back to the sign in to complete the journey.
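As an illustration, the failure handler can watch for B2C's password reset error code; this is a sketch, and the redirect target below is a hypothetical route, not necessarily the sample's actual one:

AuthenticationFailed = notification =>
{
    // AADB2C90118 is the code B2C returns when the user clicks the
    // password reset link on the combined sign-up/sign-in page.
    var description = notification.ProtocolMessage.ErrorDescription;
    if (description != null && description.Contains("AADB2C90118"))
    {
        notification.HandleResponse();
        // Re-issue the challenge with the password reset policy
        // as the authentication parameter.
        notification.Response.Redirect("/login/resetpassword");
    }
    return Task.FromResult(0);
},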

WS-Federation plugin

Configuring IdentityServer to act as a WS-Federation IdP is fairly simple: install the plugin package and provide the plugin configuration in Startup.cs. As an aside, don't forget to either provide your own certificate or alter the logic to pull the cert from somewhere else!

The main WsFed configuration is a list of Relying Parties, seen in RelyingParties.cs. I’ve hard-coded it, but you can generate this data however you see fit.
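As a rough sketch, the Startup.cs wiring follows the pattern from the IdentityServer3.WsFederation samples; verify these type names against the package version you install:

private void ConfigureWsFederation(IAppBuilder pluginApp, IdentityServerOptions options)
{
    // Wrap the core factory so the plugin can resolve IdentityServer's services
    var factory = new WsFederationServiceFactory(options.Factory);

    // Register the hard-coded relying party list from RelyingParties.cs
    factory.UseInMemoryRelyingParties(RelyingParties.Get());

    pluginApp.UseWsFederationPlugin(new WsFederationPluginOptions
    {
        IdentityServerOptions = options,
        Factory = factory
    });
}

This method is then assigned to the PluginConfiguration property on IdentityServerOptions so the plugin is loaded at startup.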

Relying parties

Within the Relying Party configuration, you can specify the required WsFed parameters, including Realm, ReplyUrl and PostLogoutRedirectUris. The final thing you need is a map of OIDC claims to SAML claim types returned.
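A minimal, hypothetical entry might look like the following; the realm, URLs, and claim names are placeholders, and ClaimMappings maps the OIDC claims IdentityServer receives to the SAML claim types SharePoint expects:

using System.Collections.Generic;
using System.Security.Claims;
// RelyingParty and TokenTypes come from the IdentityServer3 WsFederation plugin package

public static class RelyingParties
{
    public static IEnumerable<RelyingParty> Get()
    {
        return new[]
        {
            new RelyingParty
            {
                Realm = "urn:sharepoint:portal",             // must match SharePoint's realm
                Enabled = true,
                ReplyUrl = "https://sharepoint.example.com/_trust/",
                TokenType = TokenTypes.Saml11TokenProfile11, // SharePoint wants SAML 1.1
                ClaimMappings = new Dictionary<string, string>
                {
                    { "email", ClaimTypes.Email },
                    { "given_name", ClaimTypes.GivenName },
                    { "family_name", ClaimTypes.Surname },
                    { "role", ClaimTypes.Role }
                }
            }
        };
    }
}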

User Service

The User Service is what IdentityServer uses to match external claims to internal identities. For our use, we don’t have any internal identities and we simply pass the claims through as you can see in AadUserService.cs. The main thing we do is to extract a few specific claims and tell IdentityServer to use those for name, subject, issuer and authentication method.
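Here's a rough sketch of that pass-through, modeled on IdentityServer3's UserServiceBase; the exact claim types extracted in the real AadUserService.cs may differ:

using System.Linq;
using System.Threading.Tasks;
using IdentityServer3.Core.Models;
using IdentityServer3.Core.Services.Default;

public class AadUserService : UserServiceBase
{
    public override Task AuthenticateExternalAsync(ExternalAuthenticationContext context)
    {
        var external = context.ExternalIdentity;
        var claims = external.Claims.ToList();

        // No local user store: use the external subject and display name as-is
        var name = claims.FirstOrDefault(c => c.Type == "name")?.Value ?? external.ProviderId;

        // Tell IdentityServer to issue tokens for this external identity,
        // passing every external claim straight through
        context.AuthenticateResult = new AuthenticateResult(
            external.ProviderId,
            name,
            claims,
            identityProvider: external.Provider);

        return Task.FromResult(0);
    }
}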

WsFed Client (or SharePoint)

Adding a WsFed client should be fairly easy at this point. Configure the realm and reply URLs as required and point to the metadata address. For IdentityServer, this is https://localhost:44352/wsfed/metadata by default (or whatever your hostname is.)

ASP.NET 4.6

I find it useful to have a basic ASP.NET MVC site I can use for testing that authenticates and prints out the claims; it helps isolate me from difficult SharePoint issues.

With ASP.NET MVC 4.6, add the Microsoft.Owin.Security.WsFederation NuGet package and use this in your Startup class where realm is the configured realm and adfsMetadata is the IdentityServer metadata endpoint:

public void ConfigureAuth(IAppBuilder app)
{
    app.SetDefaultSignInAsAuthenticationType(CookieAuthenticationDefaults.AuthenticationType);

    app.UseCookieAuthentication(new CookieAuthenticationOptions());

    app.UseWsFederationAuthentication(
        new WsFederationAuthenticationOptions
        {
            Wtrealm = realm,
            MetadataAddress = adfsMetadata
        });
}

SharePoint

I will readily confess that I am not a SharePoint expert. I'll happily leave that to others like Bob German, a colleague and SharePoint MVP. From Bob:

The Microsoft documentation is fine, but is oriented toward a connection with Active Directory via AD FS, so it includes claims attributes such as the SID value, which won’t exist in this scenario. The only real claims SharePoint needs are email address, first name, and last name. Any role claims passed in are available for setting permissions in SharePoint. Follow the relevant portions of the documentation, but only map the claims that make sense.

For example,

$emailClaimMap = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress" -IncomingClaimTypeDisplayName "EmailAddress" -SameAsIncoming
$firstNameClaimMap = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname" -IncomingClaimTypeDisplayName "FirstName" -SameAsIncoming
$lastNameClaimMap = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname" -IncomingClaimTypeDisplayName "LastName" -SameAsIncoming
$roleClaimMap = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.microsoft.com/ws/2008/06/identity/claims/role" -IncomingClaimTypeDisplayName "Role" -SameAsIncoming

New-SPTrustedIdentityTokenIssuer -Name <somename> -Description <somedescription> -realm <realmname> -ImportTrustCertificate <token signing cert> -ClaimsMappings $emailClaimMap,$roleClaimMap,$firstNameClaimMap,$lastNameClaimMap -IdentifierClaim $emailClaimMap.InputClaimType

You can pass in additional claims attributes and SharePoint’s STS will pass them along to you, but you can only access them server-side via Thread.CurrentPrincipal; for example,

IClaimsPrincipal claimsPrincipal = Thread.CurrentPrincipal as IClaimsPrincipal;
if (claimsPrincipal != null)
{
    IClaimsIdentity claimsIdentity = (IClaimsIdentity)claimsPrincipal.Identity;
    foreach (Claim c in claimsIdentity.Claims)
    {
        // Do something
    }
}

With this scenario, you can assign permissions based on an individual user using the email claim, or based on a role using the role claim. However, SharePoint's people picker isn't especially helpful in this case. Since it has no way to look up and resolve the claims attribute value, it will let users type anything they want. Type something and then hover over the people picker; you'll see a list of claims. Select the Role claim to grant permission based on a role, or the Email claim to grant permission to an individual user based on their email address.

SharePoint does not use WsFed metadata, so you need to provide the signing certificate's public key directly and specify the WsFed sign-in URL. For the scenario here, that's https://localhost:44352/wsfed.

Conclusion

While not without its challenges, it is possible to use B2C with a system that only knows WsFed. One thing I have not yet done is implement a profile edit flow; I need to give more thought to how that would work and interact with the other flows. I'm open to ideas if you have them, and I'll blog a follow-up once that's done.

Using Xamarin Forms with .NET Standard

July 9, 2016 Coding 10 comments , , , ,

Using Xamarin Forms with .NET Standard

With the release of .NET Core and the .NET Standard Library last week, many people want to know how they can use packages targeting netstandard1.x with their Xamarin projects. It is possible today if you use Visual Studio; for Xamarin Studio users, support is coming soon.

Prerequisites

Using .NET Standard pretty much requires you to use project.json to eliminate the pain of “lots of packages” as well as properly handle transitive dependencies. While you may be able to use .NET Standard without project.json, I wouldn’t recommend it.

You'll need to use the latest tooling: Visual Studio 2015 with Update 3 and the current Xamarin for Visual Studio release.

Getting Started

As of now, the project templates for creating a new Xamarin Forms project start with an older-style packages.config template, so whether you create a new project or have an existing project, the steps will be pretty much the same.

Step 1: Convert your projects to project.json following the steps in my previous blog post.

Step 2: As part of this, you can remove dependencies from your "head" projects when they're already referenced by the library projects you reference. This should simplify things dramatically for most projects. In the future, when you want to update to the next Xamarin Forms version, you can update it in one place, not 3-4 places. It also means you only need the main Xamarin.Forms package, not each of the packages it pulls in.

If you hit any issues with binaries not showing up in your bin directories (for your Android and iOS “head” projects), make sure that you have set CopyNuGetImplementations to true in your csproj as per the steps in the post.

At this point, your project should be compiling and working, but not yet using netstandard1.x anywhere.

Step 3: In your Portable Class Library projects, find the highest .NET Standard version you need/want to support.

Here’s a cheat sheet:

  • If you only want to support iOS and Android, you can use .NET Standard 1.6. In practice though, most features are currently available at .NET Standard 1.3 and up.
  • If you want to support iOS, Android and UWP, then .NET Standard 1.4 is the highest you can use.
  • If you want to support Windows Phone App 8.1 and Windows 8.1, then .NET Standard 1.2 is your target.
  • If you’re still supporting Windows 8, .NET Standard 1.1 is for you.
  • Finally, if you need to support Windows Phone 8 Silverlight, then .NET Standard 1.0 is your only option.

Once you determine the netstandard version you want, in your PCL’s project.json, change what you might have had:

{
    "dependencies": {
        "Xamarin.Forms": "2.3.0.107"        
    },
    "frameworks": {        
        ".NETPortable,Version=v4.5,Profile=Profile111": { }
    },
    "supports": { }
}

to

{
    "dependencies": {
        "NETStandard.Library": "1.6.0",
        "Xamarin.Forms": "2.3.0.107"        
    },
    "frameworks": {        
        "netstandard1.4": {
            "imports": [ "portable-net45+wpa81+wp8+win8" ]
         }
    },
    "supports": { }
}

Note the addition of the imports section. This is required to tell NuGet that the specified TFM is compatible here, because the Xamarin.Forms package has not yet been updated to use netstandard directly.

Then, edit the csproj to set the TargetFrameworkVersion element to v5.0 and remove any value from the TargetFrameworkProfile element.

At this point, when you reload the project, it should restore the packages and build correctly. You may need to do a full clean/rebuild.

Seeing it in action

I created a sample solution showing this all working over on GitHub. It's a good idea to clone, build and run it to ensure your environment and tooling are up-to-date. If you get stuck converting your own projects, I'd recommend referring back to that repo to find the differences.

As always, feel free to tweet me @onovotny as well.

Portable- is dead, long live NetStandard

June 23, 2016 Coding 10 comments

Portable- is dead, long live NetStandard

With the RC of NuGet 2.12 for VS 2012/2013, and imminent release of .NET Core on Monday the 27th, it’s time to bid farewell to our beloved/cursed PCL profiles. So long, you won’t be missed! Oh, and dotnet, please don’t let the door hit you on the way out either.

In its place we join the new world of the .NET Platform Standard and its new moniker, netstandard.

When dotnet was released last July, there was a lot of confusion around what it is and how it worked. Working with the NuGet and CoreFX teams, I tried to explain it in a few of my previous posts. Despite good intentions, dotnet was a constant frustration to many library authors due to its design and limited support. dotnet only worked with NuGet v3, which meant that packages would need to ship a dotnet version and a version in a PCL directory like portable-net45+win8+wp8+wpa81 to support VS 2012/2013.

It was hard to fault anyone for wondering, “why bother?” The other downfall of dotnet, and likely the main one, was that dotnet fell into a mess trying to work with different compatibility levels of libraries. What if you wanted to have a version that worked with newer packages than were supported by Windows 8? What if you wanted to have multiple versions which “light up” based on platform capabilities? How many of you who installed a dotnet-based package, with its dependencies listed, saw a System.Runtime 4.0.10 entry and wondered why you were getting errors trying to update it? After all, NuGet showed an update to 4.0.20, why wouldn’t that work? The reality was that you had to release packages with multiple versions that were incompatible with some platforms because there was no way to put all of it into one package.

Enter .NET Platform Standard

netstandard fixes the shortcomings of dotnet by being versioned. As of today, there's 1.0 through 1.6, with the corresponding TFMs netstandard1.0 through netstandard1.6. Now, the idea is that each .NET Platform Standard version supports a given set of platforms, and when authoring a library, you'd ideally want to target the lowest one that has the features you need and runs on your desired target platform versions.

With NuGet 2.12, netstandard is also supported in VS 2012 and 2013, so that there’s no further need to include a portable-* version and a netstandard version in the same package. Finally!

What does this all mean

The short answer is that if you currently have a Profile 259 PCL today, you can change your NuGet package to put it in a netstandard1.0 directory and add the appropriate dependency group. The full list of profile -> netstandard mappings is here. If you support .NET 4.0 or Silverlight 5 in your PCL (basically a PCL "older" than 259), then you can continue to put that in your NuGet alongside the netstandard1.0+ version and things will continue to work. In addition, a platform-specific TFM (like net45) will always "win" over netstandard if compatible.

Detour: Dependencies for NetStandard

Like dotnet, netstandard requires listing package dependencies. For the most part, it’s easier than with dotnet as there is a meta-package, NETStandard.Library 1.6.0 (the RC2 version is 1.5.0-rc2-24027), that has most BCL dependencies included. This package is one that you probably have in your project.json today. Put the NETStandard.Library 1.6.0 dependency in a netstandard1.0 dependency group, along with any other top-level dependencies you have:

<dependencies>
  <group targetFramework="netstandard1.0">
    <dependency id="NETStandard.Library" version="1.6.0" />
    <dependency id="System.Linq.Queryable" version="4.0.1" />
  </group>
</dependencies>

Next steps

There’s about to be a lot to do over the coming weeks:

  • Get the .NET Core 1.0 RTM with the Preview 2 tooling
  • Download VS 2015 Update 3 when available
  • Grab NuGet 2.12 for VS 2012/2013

Start updating your packages to support .NET Core if you haven't already. If you've been waiting for .NET Core RTM, that time has finally come. You can either use a csproj-based portable library targeting netstandard or use xproj with project.json. As an aside, my personal opinion is that xproj's main advantage over csproj today is cross-compiling. If you need to compile for multiple targets, that's currently the best option; otherwise, you'll get a better experience using the csproj approach.

Converting existing PCL projects

VS 2015 Update 3 has a new feature that makes it easy to convert a PCL to netstandard. If your library has any NuGet dependencies installed, you first need to follow the steps to switch from packages.config to project.json. Once you've done that, you can select Target .NET Platform Standard from the project properties and it'll convert it over:
Project Properties

Icing on the cake

Xamarin supports all versions of netstandard as well. This means that if you were cross-compiling because PCL 259 was too limiting, try taking another look at netstandard1.3+. There’s a lot more surface area there and it may mean you can eliminate a few target platforms.

Bonus Round

If you want to use xproj for its cross-compiling features and also need to reference that library from a project type that doesn't support csproj -> xproj references today (most of them, including UWP, don't work well), I've written up an explanation of how you can do this on StackOverflow.