Scott Hanselman

Real Browser Integration Testing with Selenium Standalone, Chrome, and ASP.NET Core 2.1

May 23, '18 | Posted in ASP.NET | DotNetCore | Open Source

Buckle up kids, this is nuts and I'm probably doing it wrong. ;) It's 2am and I wrote this fast - I'll come back tomorrow and fix the spelling.

I want to have lots of tests to make sure my new podcast site is working well. As mentioned before, I've been updating the site to ASP.NET Core 2.1.

Here are some posts if you want to catch up.

I've been doing my testing with XUnit and I want to test in layers.

Basic Unit Testing

Simply create a Razor Page's Model in memory and call OnGet or WhateverMethod. At this point you are NOT going over HTTP - there is no web server.

public IndexModel pageModel;

public IndexPageTests()
{
    var testShowDb = new TestShowDatabase();
    pageModel = new IndexModel(testShowDb);
}

[Fact]
public async Task MainPageTest()
{
    // FAKE HTTP GET "/"
    IActionResult result = await pageModel.OnGetAsync(null, null);

    Assert.NotNull(result);
    Assert.True(pageModel.OnHomePage); //we are on the home page, because "/"
    Assert.Equal(16, pageModel.Shows.Count()); //home page has 16 shows showing
    Assert.Equal(620, pageModel.LastShow.ShowNumber); //last test show is #620
}
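By the way, TestShowDatabase isn't shown here. It's just a fake IShowDatabase that hands back canned Show objects so the page model never touches the network - something like this minimal hypothetical sketch (the real fake returns richer data, enough that the assertions above hold):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

//Hypothetical minimal fake - not the real test double from the repo.
public class TestShowDatabase : IShowDatabase
{
    public Task<List<Show>> GetShows()
    {
        //620 fake shows, so the "last test show is #620" assertion above works out.
        var shows = Enumerable.Range(1, 620)
            .Select(i => new Show { ShowNumber = i, PublishedAt = DateTime.UtcNow.AddDays(i - 620) })
            .ToList();
        return Task.FromResult(shows);
    }
}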

Moving out a layer...

In-Memory Testing with both Client and Server using WebApplicationFactory

Here we are starting up the app and calling it with a client, but the "HTTP" of it all is happening in memory/in process. There are no open ports, there's no localhost:5000. We can still test HTTP semantics though.

public class TestingFunctionalTests : IClassFixture<WebApplicationFactory<Startup>>
{
    public HttpClient Client { get; }
    public WebApplicationFactory<Startup> Server { get; }

    public TestingFunctionalTests(WebApplicationFactory<Startup> server)
    {
        Client = server.CreateClient();
        Server = server;
    }

    [Fact]
    public async Task GetHomePage()
    {
        // Arrange & Act
        var response = await Client.GetAsync("/");

        // Assert
        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
    }
    ...
}

Testing with a real Browser and real HTTP using Selenium Standalone and Chrome

THIS is where it gets interesting with ASP.NET Core 2.1, as we are going to fire up both the complete web app, talking to the real back end (although it could talk to a local test DB if you want), and a real headless version of Chrome being managed by Selenium Standalone and talked to via WebDriver. It sounds complex, but it's actually awesome and super useful.

First I add the Selenium.Support and Selenium.WebDriver NuGet packages to my Test project:

dotnet add package Selenium.Support
dotnet add package Selenium.WebDriver

Make sure you have Node and npm, then you can get Selenium Standalone like this:

npm install -g selenium-standalone@latest
selenium-standalone install

Selenium, to be clear, puts your browser on a puppet's strings. Even Chrome knows it's being controlled and shows a "Chrome is being controlled by automated test software" banner! It's using the (soon to be official, but clearly de facto standard) WebDriver protocol. Imagine if your browser had a localhost REST protocol where you could interrogate it and click stuff! I've been using Selenium for over 11 years. You can even test actual Windows apps (not in the browser) with WinAppDriver/Appium, but that's for another post.
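In fact, that localhost REST protocol is exactly what Selenium Standalone exposes. You'd never do this by hand - the WebDriver client library does it for you - but here's a hypothetical sketch of poking the hub directly with HttpClient, assuming Selenium Standalone is listening on its default port 4444:

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class RawWebDriverSketch
{
    static async Task Main()
    {
        var http = new HttpClient { BaseAddress = new Uri("http://localhost:4444/wd/hub/") };

        // Ask the hub to start a Chrome session (JSON Wire Protocol).
        var response = await http.PostAsync("session",
            new StringContent("{\"desiredCapabilities\":{\"browserName\":\"chrome\"}}",
                Encoding.UTF8, "application/json"));

        // The response JSON includes a sessionId; with that you can POST to
        // session/{id}/url to navigate, session/{id}/element to find elements, etc.
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}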

Now for this part, bear with me, because the ServerFactory class I'm about to make is doing two things. It's setting up my ASP.NET Core 2.1 app and actually running it, so it's listening on https://localhost:5001. It's assuming a few things that I'll point out. It also (perhaps questionably) launches Selenium Standalone from within its constructor. Questionable, to be clear, and there are other ways to do this, but this is VERY simple.

If it offends you, remember that you do need to start Selenium Standalone with "selenium-standalone start" - you could do that OUTSIDE your test in a script.

Perhaps do the startup/teardown work in a PowerShell or shell script. Start it up, save the process id, then stop it when you're done. Note I'm also checking code coverage here with Coverlet, but that's not related to Selenium - I could just "dotnet test."

#!/usr/local/bin/powershell
$SeleniumProcess = Start-Process "selenium-standalone" -ArgumentList "start" -PassThru
dotnet test /p:CollectCoverage=true /p:CoverletOutputFormat=lcov /p:CoverletOutput=./lcov .\hanselminutes.core.tests
Stop-Process -Id $SeleniumProcess.Id

Here my SeleniumServerFactory is getting my Browser and Server ready.

SIDEBAR NOTE: I want to point out that this is NOT perfect and it's literally the simplest thing possible to get things working. It's my belief, though, that there are some problems here and that I shouldn't have to fake out the "new TestServer" in CreateServer there. While the new WebApplicationFactory is great for in-memory unit testing, it should be just as easy to fire up your app and use a real port for things like Selenium testing. Here I'm building and starting the IWebHostBuilder myself (!) and then making a fake TestServer only to satisfy the CreateServer method, which I think should not have a concrete class return type. For testing, ideally I could easily get either an "InMemoryWebApplicationFactory" and a "PortUsingWebApplicationFactory" (naming is hard). Hopefully this is somewhat clear and something that can be easily adjusted for ASP.NET Core 2.1.x.

My app is configured to listen on both http://localhost:5000 and https://localhost:5001, so you'll note where I'm getting that last value (in an attempt to avoid hard-coding it). We also make sure to stop both Server and Browser in Dispose() at the bottom.
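(Where do those two URLs come from? In my case they're the ASP.NET Core 2.1 defaults. If you'd rather pin them explicitly, here's a minimal sketch of doing that in Program.cs:)

using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>()
            // Pin the listening addresses rather than relying on
            // launchSettings.json or the Kestrel defaults.
            .UseUrls("http://localhost:5000", "https://localhost:5001")
            .Build()
            .Run();
}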

public class SeleniumServerFactory<TStartup> : WebApplicationFactory<TStartup> where TStartup : class
{
    public string RootUri { get; set; } //Save this for use by tests

    Process _process;
    IWebHost _host;

    public SeleniumServerFactory()
    {
        ClientOptions.BaseAddress = new Uri("https://localhost"); //will follow redirects by default

        _process = new Process()
        {
            StartInfo = new ProcessStartInfo
            {
                FileName = "selenium-standalone",
                Arguments = "start",
                UseShellExecute = true
            }
        };
        _process.Start();
    }

    protected override TestServer CreateServer(IWebHostBuilder builder)
    {
        //Real TCP port
        _host = builder.Build();
        _host.Start();
        RootUri = _host.ServerFeatures.Get<IServerAddressesFeature>().Addresses.LastOrDefault(); //Last is https://localhost:5001!

        //Fake Server we won't use...this is lame. Should be cleaner, or a utility class.
        return new TestServer(new WebHostBuilder().UseStartup<TStartup>());
    }

    protected override void Dispose(bool disposing)
    {
        base.Dispose(disposing);
        if (disposing)
        {
            _host.Dispose();
            _process.CloseMainWindow(); //Be sure to stop Selenium Standalone
        }
    }
}

But what does a complete series of tests look like? I have a Server, a Browser, and a (theoretically optional) HttpClient. Focus on the Browser and Server.

At the point when a single test starts, my site is up (the Server) and an invisible headless Chrome (the Browser) is being puppeted with local calls via WebDriver. All this is hidden from you - if you want. You can certainly watch Chrome (or other browsers) get automated, but what's nice about Selenium Standalone with hidden/headless Browser testing is that my unit tests now also include these complete Integration Tests and can run as part of my Continuous Integration Build.

Again, layers. I test classes, then move out and test HTTP Request/Response interactions, and finally the site is up and I'm making sure I can navigate and that data is loading. I'm automating the "smoke tests" that I used to do myself! And I can make as many of these as I'd like now that the scaffolding work is done.

public class SeleniumTests : IClassFixture<SeleniumServerFactory<Startup>>, IDisposable
{
    public SeleniumServerFactory<Startup> Server { get; }
    public IWebDriver Browser { get; }
    public HttpClient Client { get; }
    public ILogs Logs { get; }

    public SeleniumTests(SeleniumServerFactory<Startup> server)
    {
        Server = server;
        Client = server.CreateClient(); //weird side effecty thing here. This call shouldn't be required for setup, but it is.

        var opts = new ChromeOptions();
        opts.AddArgument("--headless"); //Optional, comment this out if you want to SEE the browser window
        opts.SetLoggingPreference(OpenQA.Selenium.LogType.Browser, LogLevel.All);

        var driver = new RemoteWebDriver(opts);
        Browser = driver;
        Logs = new RemoteLogs(driver); //TODO: Still not bringing the logs over yet
    }

    [Fact]
    public void LoadTheMainPageAndCheckTitle()
    {
        Browser.Navigate().GoToUrl(Server.RootUri);
        Assert.StartsWith("Hanselminutes Technology Podcast - Fresh Air and Fresh Perspectives for Developers", Browser.Title);
    }

    [Fact]
    public void ThereIsAnH1()
    {
        Browser.Navigate().GoToUrl(Server.RootUri);

        var headerSelector = By.TagName("h1");
        Assert.Equal("HANSELMINUTES PODCAST\r\nby Scott Hanselman", Browser.FindElement(headerSelector).Text);
    }

    [Fact]
    public void KevinScottTestThenGoHome()
    {
        Browser.Navigate().GoToUrl(Server.RootUri + "/631/how-do-you-become-a-cto-with-microsofts-cto-kevin-scott");

        var headerSelector = By.TagName("h1");
        var link = Browser.FindElement(headerSelector);
        link.Click();
        Assert.Equal(Browser.Url.TrimEnd('/'), Server.RootUri); //WTF
    }

    public void Dispose()
    {
        Browser.Dispose();
    }
}

Here's a build, unit test/selenium test with code coverage actually running. I started running it from PowerShell. The black window in the back is Selenium Standalone doing its thing (again, could be hidden).

[Image: Two consoles, one with PowerShell running XUnit and one running Selenium]

If I comment out the "--headless" line, I'll see this as Chrome is automated. Cool.

[Image: Chrome loading my site and being automated]

Of course, I can also run these in the .NET Core Test Explorer in either Visual Studio Code or Visual Studio.


Great fun. What are your thoughts?



Installing PowerShell Core on a Raspberry Pi (powered by .NET Core)

May 18, '18 | Posted in Linux | Open Source | PowerShell

Earlier this week I set up .NET Core and Docker on a Raspberry Pi and found that I could run my podcast website quite easily on a Pi. Check that post out, as there's a lot going on. I can test within a Linux container and output the test results to the host and then open them in VS. I also explored a reasonably complex Dockerfile that is both multiarch and multistage. I can reliably build and test my website either inside a container or on the bare metal of Windows or Linux. Very fun.

As primarily a Windows developer, I have lots of batch/cmd files like "test.bat" or "dockerbuild.bat." They start as little throwaway bits of automation, but as the project grows they inevitably get more complex.

I'm not interested in "selling" anyone PowerShell. If you like bash, use bash, it's lovely, as are shell scripts. PowerShell is object-oriented in its pipeline, moving lists of real objects as standard output. They are different and most importantly, they can live together. Just like you might call Python scripts from bash, you can call PowerShell scripts from bash, or vice versa. Another tool in our toolkits.

PS /home/pi> Get-Process | Where-Object WorkingSet -gt 10MB

 NPM(K)    PM(M)      WS(M)     CPU(s)     Id  SI ProcessName
 ------    -----      -----     ------     --  -- -----------
      0     0.00      10.92     890.87    917 917 docker-containe
      0     0.00      35.64   1,140.29    449 449 dockerd
      0     0.00      10.36       0.88   1272 037 light-locker
      0     0.00      20.46     608.04   1245 037 lxpanel
      0     0.00      69.06      32.30   3777 749 pwsh
      0     0.00      31.60     107.74    647 647 Xorg
      0     0.00      10.60       0.77   1279 037 zenity
      0     0.00      10.52       0.77   1280 037 zenity

Bash and shell scripts are SUPER powerful. It's a whole world. But they're text based (or JSON for some newer things), so you're often thinking about text more.

pi@raspberrypidotnet:~ $ ps aux | sort -rn -k 5,6 | head -n6
root 449 0.5 3.8 956240 36500 ? Ssl May17 19:00 /usr/bin/dockerd -H fd://
root 917 0.4 1.1 910492 11180 ? Ssl May17 14:51 docker-containerd --config /var/run/docker/containerd/containerd.toml
root 647 0.0 3.4 155608 32360 tty7 Ssl+ May17 1:47 /usr/lib/xorg/Xorg :0 -seat seat0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch
pi 1245 0.2 2.2 153132 20952 ? Sl May17 10:08 lxpanel --profile LXDE-pi
pi 1272 0.0 1.1 145928 10612 ? Sl May17 0:00 light-locker
pi 1279 0.0 1.1 145020 10856 ? Sl May17 0:00 zenity --warning --no-wrap --text

You can take it as far as you like. For some it's intuitive power, for others, it's baroque.

pi@raspberrypidotnet:~ $ ps -eo size,pid,user,command --sort -size | awk '{ hr=$1/1024 ; printf("%13.2f Mb ",hr) } { for ( x=4 ; x<=NF ; x++ ) { printf("%s ",$x) } print "" }'
0.00 Mb COMMAND
161.14 Mb /usr/bin/dockerd -H fd://
124.20 Mb docker-containerd --config /var/run/docker/containerd/containerd.toml
78.23 Mb lxpanel --profile LXDE-pi
66.31 Mb /usr/lib/xorg/Xorg :0 -seat seat0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch
61.66 Mb light-locker

Point is, there's choice. Here's a nice article about PowerShell from the perspective of a Linux user. Can I install PowerShell on my Raspberry Pi (or any Linux machine) and use the same scripts in both places? YES.

For many years PowerShell was a Windows-only thing that was part of the closed Windows ecosystem. In fact, here's video of me nearly 12 years ago (I was working in banking) talking to Jeffrey Snover about PowerShell. Today, PowerShell is open source up at https://github.com/PowerShell with lots of docs and scripts, also open source. PowerShell is supported on Windows, Mac, and a half-dozen Linuxes. Sound familiar? That's because it's powered (ahem) by open source cross platform .NET Core. You can get PowerShell Core 6.0 here on any platform.

Don't want to install it? Start it up in Docker in seconds with

docker run -it microsoft/powershell

Sweet. How about Raspbian on my ARMv7-based Raspberry Pi? I was running Raspbian Jessie, and PowerShell is supported on Raspbian Stretch (newer), so I upgraded from Jessie to Stretch (and tidied up and updated the firmware while I was at it) with:

$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get dist-upgrade
$ sudo sed -i 's/jessie/stretch/g' /etc/apt/sources.list
$ sudo sed -i 's/jessie/stretch/g' /etc/apt/sources.list.d/raspi.list
$ sudo apt-get update && sudo apt-get upgrade -y
$ sudo apt-get dist-upgrade -y
$ sudo rpi-update

Cool. Now I'm on Raspbian Stretch on my Raspberry Pi 3. Let's install PowerShell! These are just the most basic Getting Started instructions. Check out GitHub for advanced and detailed info if you have issues with prerequisites or paths.

NOTE: Here I'm getting PowerShell Core 6.0.2. Be sure to check the releases page for newer releases if you're reading this in the future. I've also used 6.1.0 (in preview) with success. The next 6.1 preview will upgrade to .NET Core 2.1. If you're just evaluating, get the latest preview as it'll have the most recent bug fixes.

$ sudo apt-get install libunwind8
$ wget https://github.com/PowerShell/PowerShell/releases/download/v6.0.2/powershell-6.0.2-linux-arm32.tar.gz
$ mkdir ~/powershell
$ tar -xvf ./powershell-6.0.2-linux-arm32.tar.gz -C ~/powershell
$ sudo ln -s ~/powershell/pwsh /usr/bin/pwsh
$ sudo ln -s ~/powershell/pwsh /usr/local/bin/powershell
$ powershell

Lovely.

GOTCHA: Because I upgraded from Jessie to Stretch, I ran into a bug where libssl1.0.0 gets loaded over libssl1.0.2. This is a complex native issue with the interaction between PowerShell and .NET Core 2.0 that's being fixed. Only upgraded machines like mine will hit it, and it's easily fixed with sudo apt-get remove libssl1.0.0

Now this means my PowerShell build scripts can work on both Windows and Linux. This is a deeply trivial example (just one line) but note the "shebang" at the top that lets Linux know what a *.ps1 file is for. That means I can keep using bash/zsh/fish on Raspbian, but still "build.ps1" or "test.ps1" on any platform.

#!/usr/local/bin/powershell
dotnet watch --project .\hanselminutes.core.tests test /p:CollectCoverage=true /p:CoverletOutputFormat=lcov /p:CoverletOutput=./lcov

Here are a few totally random but lovely PowerShell examples:

PS /home/pi> Get-Date | Select-Object -Property * | ConvertTo-Json
{
  "DisplayHint": 2,
  "DateTime": "Sunday, May 20, 2018 5:55:35 AM",
  "Date": "2018-05-20T00:00:00+00:00",
  "Day": 20,
  "DayOfWeek": 0,
  "DayOfYear": 140,
  "Hour": 5,
  "Kind": 2,
  "Millisecond": 502,
  "Minute": 55,
  "Month": 5,
  "Second": 35,
  "Ticks": 636623925355021162,
  "TimeOfDay": {
    "Ticks": 213355021162,
    "Days": 0,
    "Hours": 5,
    "Milliseconds": 502,
    "Minutes": 55,
    "Seconds": 35,
    "TotalDays": 0.24693868190046295,
    "TotalHours": 5.9265283656111105,
    "TotalMilliseconds": 21335502.1162,
    "TotalMinutes": 355.59170193666665,
    "TotalSeconds": 21335.502116199998
  },
  "Year": 2018
}

You can take PowerShell objects to and from Objects, Hashtables, JSON, etc.

PS /home/pi> $hash = @{ Number = 1; Shape = "Square"; Color = "Blue"}
PS /home/pi> $hash

Name                           Value
----                           -----
Shape                          Square
Color                          Blue
Number                         1


PS /home/pi> $hash | ConvertTo-Json
{
  "Shape": "Square",
  "Color": "Blue",
  "Number": 1
}

Here's a nice one from MCPMag:

PS /home/pi> $URI = "https://query.yahooapis.com/v1/public/yql?q=select * from weather.forecast where woeid in (select woeid from geo.places(1) where text='{0}, {1}')&format=json&env=store://datatables.org/alltableswithkeys" -f 'Omaha','NE'
PS /home/pi> $Data = Invoke-RestMethod -Uri $URI
PS /home/pi> $Data.query.results.channel.item.forecast|Format-Table

code date        day high low text
---- ----        --- ---- --- ----
  39 20 May 2018 Sun   62  56 Scattered Showers
  30 21 May 2018 Mon   78  53 Partly Cloudy
  30 22 May 2018 Tue   88  61 Partly Cloudy
   4 23 May 2018 Wed   89  67 Thunderstorms
   4 24 May 2018 Thu   91  68 Thunderstorms
   4 25 May 2018 Fri   92  69 Thunderstorms
  34 26 May 2018 Sat   89  68 Mostly Sunny
  34 27 May 2018 Sun   85  65 Mostly Sunny
  30 28 May 2018 Mon   85  63 Partly Cloudy
  47 29 May 2018 Tue   82  63 Scattered Thunderstorms

Or a one-liner if you want to be obnoxious.

PS /home/pi> (Invoke-RestMethod -Uri "https://query.yahooapis.com/v1/public/yql?q=select * from weather.forecast where woeid in (select woeid from geo.places(1) where text='Omaha, NE')&format=json&env=store://datatables.org/alltableswithkeys").query.results.channel.item.forecast | Format-Table

Example: This won't work on Linux as it's using Windows-specific APIs, but if you've got PowerShell on your Windows machine, try out this one-liner for a cool demo:

iex (New-Object Net.WebClient).DownloadString("http://bit.ly/e0Mw9w")

Thoughts?



Using LazyCache for clean and simple .NET Core in-memory caching

May 12, '18 | Posted in ASP.NET | DotNetCore | Open Source

I'm continuing to use .NET Core 2.1 to power my Podcast Site, and I've done a series of posts on some of the experiments I've been doing. I also upgraded to the .NET Core 2.1 RC that came out this week. Here are some posts if you want to catch up.

Having a blast, if I may say so.

I've been trying a number of ways to cache locally. I have an expensive call to a backend (7-8 seconds or more, without deserialization) so I want to cache it locally for a few hours until it expires. I have a way that works very well using a SemaphoreSlim. There are some issues to be aware of, but it has been rock solid. However, in the comments of the last caching post a number of people suggested I use "LazyCache."
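For context, my SemaphoreSlim approach looks roughly like this - a condensed sketch of the pattern, not my exact code - where the semaphore ensures only one caller rebuilds the cache on a miss:

using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

public class CachedShowDatabase
{
    private static readonly SemaphoreSlim semaphore = new SemaphoreSlim(1);
    private readonly IMemoryCache _cache;
    private readonly SimpleCastClient _client;

    public CachedShowDatabase(IMemoryCache cache, SimpleCastClient client)
    {
        _cache = cache;
        _client = client;
    }

    public async Task<List<Show>> GetShows()
    {
        if (_cache.TryGetValue("shows", out List<Show> shows)) return shows;

        await semaphore.WaitAsync(); //only one caller gets to rebuild the cache
        try
        {
            if (!_cache.TryGetValue("shows", out shows)) //re-check after the wait
            {
                shows = await _client.GetShows();
                _cache.Set("shows", shows, TimeSpan.FromHours(8));
            }
        }
        finally
        {
            semaphore.Release();
        }
        return shows;
    }
}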

Alastair from the LazyCache team said this in the comments:

LazyCache wraps your "build stuff I want to cache" func in a Lazy<> or an AsyncLazy<> before passing it into MemoryCache to ensure the delegate only gets executed once as you retrieve it from the cache. It also allows you to swap between sync and async for the same cached thing. It is just a very thin wrapper around MemoryCache to save you the hassle of doing the locking yourself. A netstandard 2 version is in pre-release.
Since you asked the implementation is in CachingService.cs#L119 and proof it works is in CachingServiceTests.cs#L343

Nice! Sounds like it's worth trying out. Most importantly, it'll allow me to "refactor via subtraction."
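The core trick is tiny: cache a Lazy<Task<T>> instead of the T itself. Concurrent callers all get the same Lazy back, and only the first one to touch .Value actually runs the factory. A simplified conceptual sketch of the idea (not LazyCache's actual code):

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public static class SingleFlightCache
{
    private static readonly ConcurrentDictionary<string, Lazy<Task<string>>> _cache =
        new ConcurrentDictionary<string, Lazy<Task<string>>>();

    public static Task<string> GetOrAdd(string key, Func<Task<string>> factory)
    {
        // Under a race GetOrAdd may construct more than one Lazy, but only the
        // winning Lazy's .Value is ever evaluated, so the factory runs once.
        return _cache.GetOrAdd(key, _ => new Lazy<Task<string>>(factory)).Value;
    }
}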

I want to have my "GetShows()" method go off and call the backend "database" which is a REST API over HTTP living at SimpleCast.com. That backend call is expensive and doesn't change often. I publish new shows every Thursday, so ideally SimpleCast would have a standard WebHook and I'd cache the result forever until they called me back. For now I will just cache it for 8 hours - a long but mostly arbitrary number. Really want that WebHook as that's the correct model, IMHO.

LazyCache is added in ConfigureServices in Startup.cs:

services.AddLazyCache();

Kind of anticlimactic. ;)

Then I just make a method that knows how to populate my cache. That's just a "Func" that returns a Task of List of Shows as you can see below. Then I call IAppCache's "GetOrAddAsync" from LazyCache that either GETS the List of Shows out of the Cache OR it calls my Func, does the actual work, then returns the results. The results are cached for 8 hours. Compare this to my previous code and it's a lot cleaner.

public class ShowDatabase : IShowDatabase
{
    private readonly IAppCache _cache;
    private readonly ILogger _logger;
    private SimpleCastClient _client;

    public ShowDatabase(IAppCache appCache,
            ILogger<ShowDatabase> logger,
            SimpleCastClient client)
    {
        _client = client;
        _logger = logger;
        _cache = appCache;
    }

    public async Task<List<Show>> GetShows()
    {    
        Func<Task<List<Show>>> showObjectFactory = () => PopulateShowsCache();
        var retVal = await _cache.GetOrAddAsync("shows", showObjectFactory, DateTimeOffset.Now.AddHours(8));
        return retVal;
    }
 
    private async Task<List<Show>> PopulateShowsCache()
    {
        List<Show> shows = await _client.GetShows();
        _logger.LogInformation($"Loaded {shows.Count} shows");
        return shows.Where(c => c.PublishedAt < DateTime.UtcNow).ToList();
    }
}

It's always important to point out there's a dozen or more ways to do this. I'm not selling a prescription here or The One True Way, but rather exploring the options and edges and examining the trade-offs.

  • As mentioned before, me using "shows" as a magic string for the key here makes no guarantees that another co-worker isn't also using "shows" as the key.
    • Solution? Depends. I could have a function-specific unique key but that only ensures this function is fast twice. If someone else is calling the backend themselves I'm losing the benefits of a centralized (albeit process-local - not distributed like Redis) cache.
  • I'm also caching the full list and then doing a where/filter every time.
    • A little sloppiness on my part, but also because I'm still feeling this area out. Do I want to cache the whole thing and then let the callers filter? Or do I want to have GetShows() and GetActiveShows()? Dunno yet. But worth pointing out.
  • There are layers to caching. Do I cache the HttpResponse but not the deserialization? Here I'm caching the List<Show>, complete. I like caching List<T> because a caller can query it, although I'm sending back just active shows (see above).
    • Another perspective is to use the <cache> TagHelper in Razor and cache Razor's resulting rendered HTML (see the sketch after this list). There is value in caching the object graph, but I need to think about perhaps caching both List<T> AND the rendered HTML.
    • I'll explore this next.
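For completeness, here's roughly what that <cache> TagHelper route could look like - a hypothetical Razor sketch (assuming a Title property on Show and the shows hanging off the page Model), caching the rendered markup for the same 8 hours:

<cache expires-after="@TimeSpan.FromHours(8)">
    @foreach (var show in Model.Shows)
    {
        <li>@show.Title</li>
    }
</cache>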

I'm enjoying myself though. ;)

Go explore LazyCache! I'm using beta2, but there are a number of releases going back years and it's quite stable so far.

Lazy cache is a simple in-memory caching service. It has a developer friendly generics based API, and provides a thread safe cache implementation that guarantees to only execute your cachable delegates once (it's lazy!). Under the hood it leverages ObjectCache and Lazy to provide performance and reliability in heavy load scenarios.

For ASP.NET Core it's quick to experiment with LazyCache and get it set up. Give it a try, and share your favorite caching techniques in the comments.

Tai Chi photo by Luisen Rodrigo used under Creative Commons Attribution 2.0 Generic (CC BY 2.0), thanks!



Adding Resilience and Transient Fault handling to your .NET Core HttpClient with Polly

April 24, '18 | Posted in ASP.NET | DotNetCore | Open Source

Last week, while upgrading my podcast site to ASP.NET Core 2.1 and .NET Core 2.1, I moved my HttpClient instances over to be created by the new HttpClientFactory. Now I have a single central place where my HttpClient objects are created and managed, and I can set policies as I like on each named client.

It really can't be overstated how useful a resilience framework for .NET Core like Polly is.

Take some code like this that calls a backend REST API:

public class SimpleCastClient
{
    private HttpClient _client;
    private ILogger<SimpleCastClient> _logger;
    private readonly string _apiKey;

    public SimpleCastClient(HttpClient client, ILogger<SimpleCastClient> logger, IConfiguration config)
    {
        _client = client;
        _client.BaseAddress = new Uri("https://api.simplecast.com");
        _logger = logger;
        _apiKey = config["SimpleCastAPIKey"];
    }

    public async Task<List<Show>> GetShows()
    {
        var episodesUrl = new Uri($"/v1/podcasts/shownum/episodes.json?api_key={_apiKey}", UriKind.Relative);
        var res = await _client.GetAsync(episodesUrl);
        return await res.Content.ReadAsAsync<List<Show>>();
    }
}

Now consider what it takes to add things like

  • Retry n times - maybe it's a network blip
  • Circuit-breaker - Try a few times but stop so you don't overload the system.
  • Timeout - Try, but give up after n seconds/minutes
  • Cache - You asked before!
    • I'm going to do a separate blog post on this because I wrote a WHOLE caching system and I may be able to "refactor via subtraction."

If I want features like Retry and Timeout, I could end up littering my code. OR, I could put it in a base class and build a series of HttpClient utilities. However, I don't think I should have to do those things because while they are behaviors, they are really cross-cutting policies. I'd like a central way to manage HttpClient policy!

Enter Polly. Polly is an OSS library with a lovely Microsoft.Extensions.Http.Polly package that you can use to combine the goodness of Polly with ASP.NET Core 2.1.

As Dylan from the Polly Project says:

HttpClientFactory in ASPNET Core 2.1 provides a way to pre-configure instances of HttpClient which apply Polly policies to every outgoing call.

I just went into my Startup.cs and changed this

services.AddHttpClient<SimpleCastClient>();

to this (after adding "using Polly;" as a namespace)

services.AddHttpClient<SimpleCastClient>()
    .AddTransientHttpErrorPolicy(policyBuilder => policyBuilder.RetryAsync(2));

and now I've got Retries. Change it to this:

services.AddHttpClient<SimpleCastClient>()
    .AddTransientHttpErrorPolicy(policyBuilder => policyBuilder.CircuitBreakerAsync(
        handledEventsAllowedBeforeBreaking: 2,
        durationOfBreak: TimeSpan.FromMinutes(1)
    ));

And now I've got CircuitBreaker where it backs off for a minute if it's broken (hit a handled fault) twice!

I like AddTransientHttpErrorPolicy because it automatically handles HTTP 5xxs and HTTP 408s, as well as the occasional System.Net.Http.HttpRequestException. I can have as many named or typed HttpClients as I like, and they can have all kinds of specific policies with VERY sophisticated behaviors. If those behaviors aren't actual Business Logic (tm), then why not get them out of your code?

Go read up on Polly at https://github.com/App-vNext/Polly and check out the extensive samples at https://github.com/App-vNext/Polly-Samples/tree/master/PollyTestClient/Samples.

Even though it works great with ASP.NET Core 2.1 (best, IMHO) you can use Polly with .NET 4, .NET 4.5, or anything that's compliant with .NET Standard 1.1.

Gotchas

A few things to remember. If you are POSTing to an endpoint and applying retries, you want that operation to be idempotent.

"From a RESTful service standpoint, for an operation (or service call) to be idempotent, clients can make that same call repeatedly while producing the same result."

But everyone's API is different. What would happen if you applied a Polly Retry Policy to an HttpClient and it POSTed twice? Is that backend behavior compatible with your policies? Know what behavior you expect and plan for it. You may want to have a GET policy and a POST one and use different HttpClients. Just be conscious.
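One way to do that without juggling two clients, sketched here assuming the Microsoft.Extensions.Http.Polly package: the request-based AddPolicyHandler overload picks a policy per outgoing request.

using System.Net.Http;
using Microsoft.Extensions.DependencyInjection;
using Polly;
using Polly.Extensions.Http;

IAsyncPolicy<HttpResponseMessage> retryPolicy =
    HttpPolicyExtensions.HandleTransientHttpError().RetryAsync(2);
IAsyncPolicy<HttpResponseMessage> noOpPolicy =
    Policy.NoOpAsync<HttpResponseMessage>();

services.AddHttpClient<SimpleCastClient>()
    // Only GETs get retried; POSTs and friends pass through untouched.
    .AddPolicyHandler(request =>
        request.Method == HttpMethod.Get ? retryPolicy : noOpPolicy);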

Next, think about Timeouts. HttpClient has a Timeout which is "all tries overall timeout," while a TimeoutPolicy inside a Retry is "timeout per try." Again, be aware.
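To make that concrete, a sketch with both kinds of timeout on one client. Handlers added first sit outermost, so the retry added first wraps the Polly timeout added second, making that one per try:

using System;
using System.Net.Http;
using Microsoft.Extensions.DependencyInjection;
using Polly;
using Polly.Extensions.Http;
using Polly.Timeout;

services.AddHttpClient<SimpleCastClient>(client =>
    {
        client.Timeout = TimeSpan.FromSeconds(30); //all tries, overall
    })
    // Outer: retry transient HTTP faults and per-try timeouts.
    .AddPolicyHandler(HttpPolicyExtensions.HandleTransientHttpError()
        .Or<TimeoutRejectedException>()
        .RetryAsync(2))
    // Inner: each individual try gets 5 seconds.
    .AddPolicyHandler(Policy.TimeoutAsync<HttpResponseMessage>(TimeSpan.FromSeconds(5)));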

Thanks to Dylan Reisenberger for his help on this post, along with Joel Hulen! Also read more about HttpClientFactory on Steve Gordon's blog and learn more about HttpClientFactory and Polly on the Polly project site.



How to setup Signed Git Commits with a YubiKey NEO and GPG and Keybase on Windows

April 19, '18 | Posted in Open Source

This week in obscure blog titles, I bring you the nightmare that is setting up Signed Git Commits with a YubiKey NEO and GPG and Keybase on Windows. This is one of those "it's good for you" things like diet and exercise and setting up 2 Factor Authentication. I just want to be able to sign my code commits to GitHub so I might avoid people impersonating my Git Commits (happens more than you'd think and has happened recently). However, I was also hoping to make it more secure by using a YubiKey 4 or YubiKey NEO security key. They're happy to tell you that it supports a BUNCH of stuff that you have never heard of like Yubico OTP, OATH-TOTP, OATH-HOTP, FIDO U2F, OpenPGP, and Challenge-Response. I am most concerned with it acting like a Smart Card that holds a PGP (Pretty Good Privacy) key, since the YubiKey can look like a "PIV (Personal Identity Verification) Smart Card."

NOTE: I am not a security expert. Let me know if something here is wrong (be nice) and I'll update it. Note also that there are a LOT of guides out there. Some are complete and encyclopedic, some include recommendations and details that are "too much," but this one was my experience. This isn't The Bible On The Topic, but rather what happened with me, what I ran into, and how I got past it. Until this is Super Easy (TM) on Windows, there are gonna be guides like this.

As with all things security, there is a balance between Capital-S Secure with offline air-gapped what-nots, and Ease Of Use with tools like Keybase. It depends on your tolerance, patience, technical ability, and if you trust any online services. I like Keybase and trust them so I'm starting there with a Private Key. You can feel free to get/generate your key from wherever makes you happy and secure.

[Image: Welcome to Keybase.io]

I use Windows and I like it, so if you want to use a Mac or Linux this blog post likely isn't for you. I love and support you and your choice though. ;)

Make sure you have a private PGP key that has your Git Commit Email Address associated with it

I downloaded and installed (and optionally donated to) a copy of Gpg4Win here.

Take your private key - either the one you got from Keybase or one you generated locally - and make sure that your UID (your email address that you use on GitHub) is a part of it. Here you can see mine is not, yet. That could be the main email or might be an alias or "uid" that you'll add.

[Image: Certs in Kleopatra]

If not - as in my case since I'm using a key from keybase - you'll need to add a new uid to your private key. You will know you got it right when you run this command and see your email address inside it.

> gpg --list-secret-keys --keyid-format LONG

------------------------------------------------
sec# rsa4096/MAINKEY 2015-02-09 [SCEA]

uid [ultimate] keybase.io/shanselman <shanselman@keybase.io>

You can run "adduid" in the gpg command line or you can add it in the Kleopatra GUI.
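The command-line route looks roughly like this (a sketch with prompts abbreviated):

> gpg --edit-key shanselman@keybase.io
gpg> adduid
Real name: Scott Hanselman
Email address: scott@hanselman.com
Comment:
Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
gpg> save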


List them again and you'll see the added uid.

> gpg --list-secret-keys --keyid-format LONG

------------------------------------------------
sec# rsa4096/MAINKEY 2015-02-09 [SCEA]
uid [ultimate] keybase.io/shanselman <shanselman@keybase.io>
uid [ unknown] Scott Hanselman <scott@hanselman.com>

When you make changes like this, you can export your public key and update it in Keybase.io (again, if you're using Keybase).


Plug in your YubiKey

When you plug your YubiKey in (assuming it's newer than 2015) it should get auto-detected and show up like this: "Yubikey NEO OTP+U2F+CCID." You want it to show up as this kind of "combo" or composite device. If it's older or not in this combo mode, you may need to download the YubiKey NEO Manager and switch modes.

[Image: Setting up a YubiKey on Windows]

Test that your YubiKey can be seen as a Smart Card

Go to the command line and run this to confirm that your YubiKey can be seen as a smart card by the GPG command line:

> gpg --card-status
Reader ...........: Yubico Yubikey NEO OTP U2F CCID 0
Version ..........: 2.0
....

IMPORTANT: Sometimes Windows machines and Corporate Laptops have multiple smart card readers, especially if they have Windows Hello installed like my SurfaceBook2! If you hit this, you'll want to create a text file at %appdata%\gnupg\scdaemon.conf and include a reader-port that points to your YubiKey. Mine is a NEO, yours might be a 4, etc, so be aware. You may need to reboot or at least restart/kill the GPG services/background apps for it to notice you made a change.
If you want to know what string should go in that file, go to Device Manager, then View | Show Hidden Devices and look under Software Devices. THAT is the string you want. Put this in scdaemon.conf:

reader-port "Yubico Yubikey NEO OTP+U2F+CCID 0"


Yubikey NEO can hold keys up to 2048 bits and the Yubikey 4 can hold up to 4096 bits - that's MOAR bits! However, you might find yourself with a 4096 bit key that is too big for the Yubikey NEO. Lots of folks believe this is a limitation of the NEO that sucks and is unacceptable. Since I'm using Keybase and starting with a 4096 bit key, one solution is to make separate 2048 bit subkeys for Authentication and Signing, etc.

From the command line, edit your keys then "addkey"

> gpg --edit-key scott@hanselman.com

You'll make a 2048 bit Signing key and you'll want to decide if it ever expires. If it never does, also make a revocation certificate so you can revoke it at some future point.

gpg> addkey
Please select what kind of key you want:
(3) DSA (sign only)
(4) RSA (sign only)
(5) Elgamal (encrypt only)
(6) RSA (encrypt only)
Your selection? 4
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048)
Requested keysize is 2048 bits
Please specify how long the key should be valid.
0 = key does not expire
<n> = key expires in n days
<n>w = key expires in n weeks
<n>m = key expires in n months
<n>y = key expires in n years
Key is valid for? (0)
Key does not expire at all

Save your changes, and then export the keys. You can do that with Kleopatra or with the command line:

> gpg --export-secret-keys --armor KEYID
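And about that revocation certificate for a subkey that never expires: gpg can generate one for the key, which you should stash somewhere safe (for an individual subkey you'd instead use "revkey" from inside gpg --edit-key):

> gpg --output revocation-cert.asc --gen-revoke MAINKEY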

Here's a GUI view. I have my main 4096 bit key and some 2048 bit subkeys for Signing or Encryption, etc. Make as many as you like.


LEVEL SET - It will be the public version of the 2048 bit Signing Key that we'll tell GitHub about and we'll put the private part on the YubiKey, acting as a Smart Card.

Move the signing subkey over to the YubiKey

Now I'm going to take my keychain here, select the signing one (note the ASTERISK after I type "key 1"), then "keytocard" to move/store it in the YubiKey's Smart Card Signature slot. I'm using my email as a way to get to my key, but if your email is used in multiple keys you'll want to use the unique Key ID/Signature. BACK UP YOUR KEYS.

> gpg --edit-key scott@hanselman.com

gpg (GnuPG) 2.2.6; Copyright (C) 2018 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

sec rsa4096/MAINKEY
created: 2015-02-09 expires: never usage: SCEA
trust: ultimate validity: ultimate
ssb rsa2048/THEKEYIDFORTHE2048BITSIGNINGKEY
created: 2015-02-09 expires: 2023-02-07 usage: S
card-no: 0006
ssb rsa2048/KEY2
created: 2015-02-09 expires: 2023-02-07 usage: E
[ultimate] (1). keybase.io/shanselman <shanselman@keybase.io>
[ultimate] (2) Scott Hanselman <scott@hanselman.com>
gpg> toggle
gpg> key 1

sec rsa4096/MAINKEY
created: 2015-02-09 expires: never usage: SCEA
trust: ultimate validity: ultimate
ssb* rsa2048/THEKEYIDFORTHE2048BITSIGNINGKEY
created: 2015-02-09 expires: 2023-02-07 usage: S
card-no: 0006
ssb rsa2048/KEY2
created: 2015-02-09 expires: 2023-02-07 usage: E
[ultimate] (1). keybase.io/shanselman <shanselman@keybase.io>
[ultimate] (2) Scott Hanselman <scott@hanselman.com>

gpg> keytocard
Please select where to store the key:
(1) Signature key
(3) Authentication key
Your selection? 1
gpg> save

If you're storing things on your Smart Card, it should have a PIN to protect it. Also, make sure you have a backup of your primary key (if you like) because keytocard is a destructive action.

Have you set up PIN numbers for your Smart Card?

There's a PIN and an Admin PIN. The Admin PIN is the longer one. The default admin PIN is usually ‘12345678’ and the default PIN is usually ‘123456’. You'll want to set these up with either the Kleopatra GUI "Tools | Manage Smart Cards" or the gpg command line:

>gpg --card-edit
gpg/card> admin
Admin commands are allowed
gpg/card> passwd
*FOLLOW THE PROMPTS TO SET PINS, BOTH ADMIN AND STANDARD*

Tell Git about your Signing Key Globally

Be sure to tell Git on your machine some important configuration info like your signing key, but also WHERE the gpg.exe is. This is important because git ships its own older local copy of gpg.exe and you installed a newer one!

git config --global gpg.program "c:\Program Files (x86)\GnuPG\bin\gpg.exe"
git config --global commit.gpgsign true
git config --global user.signingkey THEKEYIDFORTHE2048BITSIGNINGKEY

If you don't want to set ALL commits to signed, you can skip the commit.gpgsign=true and just include -S as you commit your code:

git commit -S -m "your commit message"

Test that you can sign things

If you are running Kleopatra (the noob Windows GUI), then when you run gpg --card-status you'll notice the cert will turn boldface and get marked as certified.

The goal here is for you to make sure GPG for Windows knows that there's a private key on the smart card, and associates a signing Key ID with that private key so when Git wants to sign a commit, you'll get a Smart Card PIN Prompt.

Advanced: You can make SubKeys for individual things so that they might also be later revoked without torching your main private key. Using the Kleopatra tool from GPG for Windows you can explore the keys and get their IDs. You'll use those Subkey IDs in your git config to refer to your signingkey.

At this point things should look kinda like this in the Kleopatra GUI:

[Image: Multiple PGP Sub keys]

Make sure to prove you can sign something by making a text file and signing it. If you get a Smart Card prompt (assuming a YubiKey) and a larger .gpg file appears, you're cool.

> gpg --sign .\quicktest.txt
> dir quic*

Mode                LastWriteTime         Length Name
----                -------------         ------ ----
-a----        4/18/2018   3:29 PM              9 quicktest.txt
-a----        4/18/2018   3:38 PM            360 quicktest.txt.gpg

Now, go up into GitHub to https://github.com/settings/keys at the bottom. Remember that's GPG Keys, not SSH Keys. Make a new one and paste in your public signing key or subkey.

Note the KeyID (or the SubKey ID) and remember that one of them (either the signing one or the primary one) should be the ID you used when you set up user.signingkey in git above.

[Image: GPG Keys in GitHub]

The most important thing is that:

  • the email address associated with the GPG Key
  • is the same as the email address GitHub has verified for you
  • is the same as the email in the Git Commit
    • git config --global user.email "email@example.com"

If not, double check your email addresses and make sure they are the same everywhere.

Try a signed commit

If pressing enter pops a PIN Dialog then you're getting somewhere!

[Image: Please unlock the card]

Commit and push and go over to GitHub and see if your commit is Verified or Unverified. Unverified means that the commit was signed but either had an email GitHub had never seen OR that you forgot to tell GitHub about your signing public key.

[Image: Signed Verified Git Commits]

Yay!

Setting up to a second (or third) machine

Once you've told Git about your signing key and you've got your signing key stored in your YubiKey, you'll likely want to set up on another machine.

  • Install GPG for Windows
    • gpg --card-status
    • Import your public key. If I'm setting up signing on another machine, I can import my PUBLIC certificates like this or graphically in Kleopatra.
      >gpg --import "keybase public key.asc"
      gpg: key *KEYID*: "keybase.io/shanselman <shanselman@keybase.io>" not changed
      gpg: Total number processed: 1
      gpg: unchanged: 1

      You may also want to run gpg --expert --edit-key *KEYID* and type "trust" to certify your key as someone (yourself) that you trust.

  • Install Git (I assume you did this) and configure GPG
    • git config --global gpg.program "c:\Program Files (x86)\GnuPG\bin\gpg.exe"
    • git config --global commit.gpgsign true
    • git config --global user.signingkey THEKEYIDFORTHE2048BITSIGNINGKEY
  • Sign something with "gpg --sign" to test
  • Do a test commit.

Finally, feel superior for 8 minutes, then realize you're really just lucky because you just followed the blog post of someone who ALSO has no clue, then go help a co-worker because this is TOO HARD.



Disclaimer: The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.