Scott Hanselman

Building, Running, and Testing .NET Core and ASP.NET Core 2.1 in Docker on a Raspberry Pi (ARM32)

May 16, '18 - Posted in Docker | DotNetCore

I love me some Raspberry Pi. They are great little learning machines and are super fun for kids to play with. Even if those kids are adults who build a 6-node Kubernetes Raspberry Pi cluster.

Open source .NET Core runs basically everywhere - Windows, Mac, and a dozen Linuxes. However, there is an SDK (that compiles and builds) and a Runtime (that does the actual running of your app). In the past, the .NET Core SDK (to be clear, the ability to "dotnet build") wasn't supported on ARMv7/ARMv8 chips like those in the Raspberry Pi. Now it is.

.NET Core is now supported on Linux ARM32 distros, like Raspbian and Ubuntu!

Note: .NET Core 2.1 is supported on Raspberry Pi 2+. It isn’t supported on the Pi Zero or other devices that use an ARMv6 chip. .NET Core requires ARMv7 or ARMv8 chips, like the ARM Cortex-A53. Folks on the Azure IoT Edge team use the .NET Core Bionic ARM32 Docker images to support developers writing C# with Edge devices.

There are two ways to run .NET Core on a Raspberry Pi.

One, use Docker. This is literally the fastest and easiest way to get .NET Core up and running on a Pi. It sounds crazy, but Raspberry Pis are brilliant little Docker-capable systems. You can do it in minutes, truly. You can install Docker quickly on a Raspberry Pi with just:

curl -sSL https://get.docker.com | sh
sudo usermod -aG docker pi

After installing Docker you'll want to log out and back in so the group change takes effect. Then you might want to try a quick sample to make sure .NET Core runs! You can explore the available Docker tags at https://hub.docker.com/r/microsoft/dotnet/tags/ and you can read about the .NET Core Docker samples at https://github.com/dotnet/dotnet-docker/tree/master/samples/dotnetapp
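
Before pulling .NET images, a quick sanity check that Docker itself is happy doesn't hurt (hello-world is a tiny multi-arch test image, so it runs fine on ARM):

docker --version
docker run --rm hello-world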

Now I can just docker run and then pass in "dotnet --info" to find out about dotnet on my Pi.

pi@raspberrypi:~ $ docker run --rm -it microsoft/dotnet:2.1-sdk dotnet --info
.NET Core SDK (reflecting any global.json):
Version: 2.1.300-rc1-008673
Commit: f5e3ddbe73

Runtime Environment:
OS Name: debian
OS Version: 9
OS Platform: Linux
RID: debian.9-x86
Base Path: /usr/share/dotnet/sdk/2.1.300-rc1-008673/

Host (useful for support):
Version: 2.1.0-rc1
Commit: eb9bc92051

.NET Core SDKs installed:
2.1.300-rc1-008673 [/usr/share/dotnet/sdk]

.NET Core runtimes installed:
Microsoft.NETCore.App 2.1.0-rc1 [/usr/share/dotnet/shared/Microsoft.NETCore.App]

To install additional .NET Core runtimes or SDKs:
https://aka.ms/dotnet-download

This is super cool. There I'm on the Raspberry Pi (RPi) and I just ask for the dotnet:2.1-sdk tag. Because these are "multi-arch" images, Docker does the right thing and it just works. If you want to use .NET Core on ARM32 with Docker, you can use any of the following tags.

Note: The first three tags are multi-arch; bionic is Ubuntu 18.04 and stretch is Debian 9. So I'm using 2.1-sdk and it's working on my RPi, but I can be specific if I'd prefer - see the example after the list.

  • 2.1-sdk
  • 2.1-runtime
  • 2.1-aspnetcore-runtime
  • 2.1-sdk-stretch-arm32v7
  • 2.1-runtime-stretch-slim-arm32v7
  • 2.1-aspnetcore-runtime-stretch-slim-arm32v7
  • 2.1-sdk-bionic-arm32v7
  • 2.1-runtime-bionic-arm32v7
  • 2.1-aspnetcore-runtime-bionic-arm32v7
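
For example, to pin to an explicit ARM32 Debian image rather than relying on multi-arch resolution, any tag from the list works verbatim:

docker run --rm -it microsoft/dotnet:2.1-sdk-stretch-arm32v7 dotnet --info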

Try one in minutes like this:

docker run --rm microsoft/dotnet-samples:dotnetapp

Here it is downloading the images...

Docker on a Raspberry Pi

In previous versions of .NET Core's Docker images, trying to run an x64 image on ARM would fail with:

standard_init_linux.go:190: exec user process caused "exec format error"

Different processors! But with multi-arch (per the announcement from Kendra at Microsoft: https://github.com/dotnet/announcements/issues/14) it just works with 2.1:

Docker has a multi-arch feature that microsoft/dotnet-nightly recently started utilizing. The plan is to port this to the official microsoft/dotnet repo shortly. The multi-arch feature allows a single tag to be used across multiple machine configurations. Without this feature each architecture/OS/platform requires a unique tag. For example, the microsoft/dotnet:1.0-runtime tag is based on Debian and microsoft/dotnet:1.0-runtime-nanoserver is based on Nano Server. With multi-arch there will be one common microsoft/dotnet:1.0-runtime tag. If you pull that tag from a Linux container environment you will get the Debian based image whereas if you pull that tag from a Windows container environment you will get the Nano Server based image. This helps provide tag uniformity across Docker environments thus eliminating confusion.
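
If you're curious what's behind a multi-arch tag, you can list the per-architecture entries in its manifest list. A sketch (docker manifest was an experimental CLI feature at the time and may need enabling):

docker manifest inspect microsoft/dotnet:2.1-sdk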

Given the examples above, I can:

  • Run a preconfigured app within a Docker image like:
    • docker run --rm microsoft/dotnet-samples:dotnetapp
  • Run dotnet commands within the SDK image like:
    • docker run --rm -it microsoft/dotnet:2.1-sdk dotnet --info
  • Run an interactive terminal within the SDK image like:
    • docker run --rm -it microsoft/dotnet:2.1-sdk

As a quick example, here I'll jump into a container and new up a quick console app and run it, just to prove I can. This work will be thrown away when I exit the container.

pi@raspberrypi:~ $ docker run --rm -it microsoft/dotnet:2.1-sdk
root@063f3c50c88a:/# ls
bin boot dev etc home lib media mnt opt proc root run sbin srv sys tmp usr var
root@063f3c50c88a:/# cd ~
root@063f3c50c88a:~# mkdir mytest
root@063f3c50c88a:~# cd mytest/
root@063f3c50c88a:~/mytest# dotnet new console
The template "Console Application" was created successfully.

Processing post-creation actions...
Running 'dotnet restore' on /root/mytest/mytest.csproj...
Restoring packages for /root/mytest/mytest.csproj...
Installing Microsoft.NETCore.DotNetAppHost 2.1.0-rc1.
Installing Microsoft.NETCore.DotNetHostResolver 2.1.0-rc1.
Installing NETStandard.Library 2.0.3.
Installing Microsoft.NETCore.DotNetHostPolicy 2.1.0-rc1.
Installing Microsoft.NETCore.App 2.1.0-rc1.
Installing Microsoft.NETCore.Platforms 2.1.0-rc1.
Installing Microsoft.NETCore.Targets 2.1.0-rc1.
Generating MSBuild file /root/mytest/obj/mytest.csproj.nuget.g.props.
Generating MSBuild file /root/mytest/obj/mytest.csproj.nuget.g.targets.
Restore completed in 15.8 sec for /root/mytest/mytest.csproj.

Restore succeeded.
root@063f3c50c88a:~/mytest# dotnet run
Hello World!
root@063f3c50c88a:~/mytest# dotnet exec bin/Debug/netcoreapp2.1/mytest.dll
Hello World!

If you try it yourself, you'll note that "dotnet run" isn't very fast. That's because it does a restore, build, and run, and compilation isn't super quick on these tiny devices. You'll want to do as little work as possible. Rather than "dotnet run" every time, I'll do a "dotnet build" once and then a "dotnet exec," which is very fast - like the sketch below.
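
In practice, inside the container that looks something like this (paths assume the mytest app from above):

dotnet build                                   # compile once - slow on a Pi
dotnet exec bin/Debug/netcoreapp2.1/mytest.dll # run the compiled output - fast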

If you're going to do Docker and .NET Core, I can't stress enough how useful the resources are over at https://github.com/dotnet/dotnet-docker.

  • Building .NET Core Apps with Docker
  • Develop .NET Core Apps in a Container
    • Develop .NET Core Applications - This sample shows how to develop, build and test .NET Core applications with Docker without the need to install the .NET Core SDK.
    • Develop ASP.NET Core Applications - This sample shows how to develop and test ASP.NET Core applications with Docker without the need to install the .NET Core SDK.
  • Optimizing Container Size
  • ARM32 / Raspberry Pi

I found the samples to be super useful...be sure to dig into the Dockerfiles themselves, as they'll give you a ton of insight into how to structure your own files. Being able to do multi-stage Dockerfiles is crucial when building on a small device like an RPi. You want to do as little work as possible and let Docker cache as many layers as possible with its internal "smarts." If you're not thoughtful about this, you'll end up wasting 10x the time building image layers on every build.

Dockerizing a real ASP.NET Core Site with tests!

Can I take my podcast site and Dockerize it and build/test/run it on a Raspberry Pi? YES.

FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /app

# copy csproj and restore as distinct layers
COPY *.sln .
COPY hanselminutes.core/*.csproj ./hanselminutes.core/
COPY hanselminutes.core.tests/*.csproj ./hanselminutes.core.tests/
RUN dotnet restore

# copy everything else and build app
COPY . .
WORKDIR /app/hanselminutes.core
RUN dotnet build


FROM build AS testrunner
WORKDIR /app/hanselminutes.core.tests
ENTRYPOINT ["dotnet", "test", "--logger:trx"]


FROM build AS test
WORKDIR /app/hanselminutes.core.tests
RUN dotnet test


FROM build AS publish
WORKDIR /app/hanselminutes.core
RUN dotnet publish -c Release -o out


FROM microsoft/dotnet:2.1-aspnetcore-runtime AS runtime
WORKDIR /app
COPY --from=publish /app/hanselminutes.core/out ./
ENTRYPOINT ["dotnet", "hanselminutes.core.dll"]

Love it. Now I can "docker build ." on my Raspberry Pi. It will restore, test, and build. If the tests fail, the Docker build will fail.

See how there's an extra stage up there called "testrunner," followed by "test"? That testrunner stage is a no-op by default. It sets an ENTRYPOINT, but because the build continues past that stage, the ENTRYPOINT is never used...yet. (An ENTRYPOINT only takes effect in the image you actually run.) It's there so I can build "up to" that stage with --target if I want to.

I can just build and run like this:

docker build -t podcast .
docker run --rm -it -p 8000:80 podcast
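
Once it's running, a quick check from another terminal on the Pi confirms the site is answering (port 80 in the container is mapped to 8000 on the host):

curl -I http://localhost:8000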

NOTE/GOTCHA: Note that the "runtime" image is microsoft/dotnet:2.1-aspnetcore-runtime, not microsoft/dotnet:2.1-runtime. The aspnetcore one pre-includes the binaries I need for running an ASP.NET app, so I can just include a single reference to "<PackageReference Include="Microsoft.AspNetCore.App" Version="2.1.0-rc1-final" />" in my csproj. If I didn't use the aspnetcore-runtime base image, I'd need to manually pull in all the ASP.NET Core packages I want. Using the base image might make the resulting image files larger, but it's a balance between convenience and size. It's up to you: manually include just the packages you need, or pull in the "Microsoft.AspNetCore.App" meta-package for convenience. My resulting "podcast" image ended up 205 megs, so not too bad, but of course if I wanted I could trim it in a number of ways.

Or, if I JUST want test results from Docker, I can do this! That means I can run the tests in the Docker container, mount a volume between the Linux container and a (theoretical) Windows host, and then open the resulting .trx file in Visual Studio!

docker build --pull --target testrunner -t podcast:test .
docker run --rm -v D:\github\hanselminutes-core\TestResults:/app/hanselminutes.core.tests/TestResults podcast:test
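
That volume path is a Windows one; if you're running the tests on the Pi or any other Linux host, the same trick works with a Linux-style path, something like:

docker run --rm -v "$PWD/TestResults:/app/hanselminutes.core.tests/TestResults" podcast:test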

Check it out! These are the test results from the tests that ran within the Linux Container:

XUnit Tests from within a Docker Container on Linux viewed within Visual Studio on Windows

Here's the result. I've now got my podcast website running in Docker on an ARM32 Raspberry Pi 3 with just an hour's work (writing the Dockerfile)!

It's my podcast site running under Docker on .NET Core 2.1 on a Raspberry Pi

Second - did you make it this far down? - you can just install the .NET Core 2.1 SDK "on the metal." No Docker; just get the tar.gz and set it up. Looking at the RPi ARM32v7 Dockerfile, I can install it on the metal like this. Note I'm getting the .NET Core SDK *and* the ASP.NET Core shared runtime. In the final release build you will just get the SDK and it'll include everything, including ASP.NET Core.

$ sudo apt-get -y update
$ sudo apt-get -y install libunwind8 gettext
$ wget https://dotnetcli.blob.core.windows.net/dotnet/Sdk/2.1.300-rc1-008673/dotnet-sdk-2.1.300-rc1-008673-linux-arm.tar.gz
$ wget https://dotnetcli.blob.core.windows.net/dotnet/aspnetcore/Runtime/2.1.0-rc1-final/aspnetcore-runtime-2.1.0-rc1-final-linux-arm.tar.gz
$ sudo mkdir /opt/dotnet
$ sudo tar -xvf dotnet-sdk-2.1.300-rc1-008673-linux-arm.tar.gz -C /opt/dotnet/
$ sudo tar -xvf aspnetcore-runtime-2.1.0-rc1-final-linux-arm.tar.gz -C /opt/dotnet/
$ sudo ln -s /opt/dotnet/dotnet /usr/local/bin
$ dotnet --info
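
With that working, a quick smoke test proves the on-the-metal install (assuming the symlink above put dotnet on your PATH):

$ dotnet new console -o hello
$ cd hello
$ dotnet run
Hello World!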

Cross-platform for the win!


Sponsor: Check out JetBrains Rider: a cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!


A multi-player server-side GameBoy Emulator written in .NET Core and Angular

March 5, '18 - Posted in Docker | DotNetCore | Javascript | Open Source

Server-side GameBoy

One of the great joys of sharing and discovering code online is when you stumble upon something so truly epic, so amazing, that you have to dig in. Head over to https://github.com/axle-h/Retro.Net and ask yourself why this GitHub project has only 20 stars?

Alex Haslehurst has created some retro hardware libraries in open source .NET Core with an Angular Front End!

Translation?

A multiplayer server-side Game Boy emulator. Epic.

You can run it in minutes with

docker run -p 2500:2500 alexhaslehurst/server-side-gameboy

Then just browse to http://localhost:2500 and play Tetris on the original GameBoy!

I love this for a number of reasons.

First, I love his perspective:

Please check out my GameBoy emulator written in .NET Core; Retro.Net. Yes, a GameBoy emulator written in .NET Core. Why? Why not. I plan to do a few write-ups about my experience with this project. Firstly: why it was a bad idea.

  1. Emulation on .NET
  2. Emulating the GameBoy CPU on .NET

The biggest issue one has trying to emulate a CPU with a platform like .NET is the lack of reliable high-precision timing. However, he manages a nice from-scratch emulation of the Z80 processor, modeling low-level things like registers in very high-level C#. I love that public class GameBoyFlagsRegister is a thing. ;) I did similar things when I ported a 15-year-old "Tiny CPU" to .NET Core/C#.

Address space diagram from https://ax-h.com/software/development/emulation/2017/12/03/emulating-the-gameboy-cpu-on-dot-net.html

Be sure to check out Alex's extremely detailed explanation on how he modeled the Z80 microprocessor.

Luckily the GameBoy CPU, a Sharp LR35902, is derived from the popular and very well documented Zilog Z80 - a microprocessor that is unbelievably still in production today, over 40 years after its introduction.

The Z80 is an 8-bit microprocessor, meaning that each operation is natively performed on a single byte. The instruction set does have some 16-bit operations but these are just executed as multiple cycles of 8-bit logic. The Z80 has a 16-bit wide address bus, which logically represents a 64K memory map. Data is transferred to the CPU over an 8-bit wide data bus but this is irrelevant to simulating the system at state machine level. The Z80 and the Intel 8080 that it derives from have 256 I/O ports for accessing external peripherals but the GameBoy CPU has none - favouring memory mapped I/O instead.

He didn't just create an emulator - there's lots of those - but uniquely he runs it on the server-side while allowing shared controls in a browser. "In between each unique frame, all connected clients can vote on what the next control input should be. The server will choose the one with the most votes… most of the time." Massively multi-player online GameBoy! Then he streams out the next frame! "GPU rendering is completed on the server once per unique frame, compressed with LZ4 and streamed out to all connected clients over websockets."

This is a great learning repository because:

  • It has complex business logic on the server side, but the front end uses Angular, WebSockets, and open web technologies.
  • It has a complete multi-stage Dockerfile that is itself a great example of how to build both .NET Core and Angular apps in Docker.
  • Extensive unit tests (thousands!) with the Shouldly assertion framework and the Moq mocking framework.
  • Great example usages of reactive programming.
  • Unit testing on both server AND client, using Karma for the Angular side.

Here are a few favorite elegant code snippets from this huge repository.

The Reactive Button Presses:

_joyPadSubscription = _joyPadSubject
    .Buffer(FrameLength)
    .Where(x => x.Any())
    .Subscribe(presses =>
                {
                    var (button, name) = presses
                        .Where(x => !string.IsNullOrEmpty(x.name))
                        .GroupBy(x => x.button)
                        .OrderByDescending(grp => grp.Count())
                        .Select(grp => (button: grp.Key, name: grp.Select(x => x.name).First()))
                        .FirstOrDefault();
                    joyPad.PressOne(button);
                    Publish(name, $"Pressed {button}");

                    Thread.Sleep(ButtonPressLength);
                    joyPad.ReleaseAll();
                });

The GPU Renderer:

private void Paint()
{
    var renderSettings = new RenderSettings(_gpuRegisters);

    var backgroundTileMap = _tileRam.ReadBytes(renderSettings.BackgroundTileMapAddress, 0x400);
    var tileSet = _tileRam.ReadBytes(renderSettings.TileSetAddress, 0x1000);
    var windowTileMap = renderSettings.WindowEnabled ? _tileRam.ReadBytes(renderSettings.WindowTileMapAddress, 0x400) : new byte[0];

    byte[] spriteOam, spriteTileSet;
    if (renderSettings.SpritesEnabled) {
        // If the background tiles are read from the sprite pattern table then we can reuse the bytes.
        spriteTileSet = renderSettings.SpriteAndBackgroundTileSetShared ? tileSet : _tileRam.ReadBytes(0x0, 0x1000);
        spriteOam = _spriteRam.ReadBytes(0x0, 0xa0);
    }
    else {
        spriteOam = spriteTileSet = new byte[0];
    }

    var renderState = new RenderState(renderSettings, tileSet, backgroundTileMap, windowTileMap, spriteOam, spriteTileSet);

    var renderStateChange = renderState.GetRenderStateChange(_lastRenderState);
    if (renderStateChange == RenderStateChange.None) {
        // No need to render the same frame twice.
        _frameSkip = 0;
        _framesRendered++;
        return;
    }

    _lastRenderState = renderState;
    _tileMapPointer = _tileMapPointer == null ? new TileMapPointer(renderState) : _tileMapPointer.Reset(renderState, renderStateChange);
    var bitmapPalette = _gpuRegisters.LcdMonochromePaletteRegister.Pallette;
    for (var y = 0; y < LcdHeight; y++) {
        for (var x = 0; x < LcdWidth; x++) {
            _lcdBuffer.SetPixel(x, y, (byte) bitmapPalette[_tileMapPointer.Pixel]);

            if (x + 1 < LcdWidth) {
                _tileMapPointer.NextColumn();
            }
        }

        if (y + 1 < LcdHeight){
            _tileMapPointer.NextRow();
        }
    }
    
    _renderer.Paint(_lcdBuffer);
    _frameSkip = 0;
    _framesRendered++;
}

The GameBoy Frames are composed on the server side then compressed and sent to the client over WebSockets. He's got backgrounds and sprites working, and there's still work to be done.

The Raw LCD is an HTML5 canvas:

<canvas #rawLcd [width]="lcdWidth" [height]="lcdHeight" class="d-none"></canvas>
<canvas #lcd
        [style.max-width]="maxWidth + 'px'"
        [style.max-height]="maxHeight + 'px'"
        [style.min-width]="minWidth + 'px'"
        [style.min-height]="minHeight + 'px'"
        class="lcd"></canvas>

I love this whole project because it has everything. TypeScript, 2D JavaScript Canvas, retro-gaming, and so much more!

const raw: HTMLCanvasElement = this.rawLcdCanvas.nativeElement;
const rawContext: CanvasRenderingContext2D = raw.getContext("2d");
const img = rawContext.createImageData(this.lcdWidth, this.lcdHeight);

for (let y = 0; y < this.lcdHeight; y++) {
  for (let x = 0; x < this.lcdWidth; x++) {
    const index = y * this.lcdWidth + x;
    const imgIndex = index * 4;
    const colourIndex = this.service.frame[index];
    if (colourIndex < 0 || colourIndex >= colours.length) {
      throw new Error("Unknown colour: " + colourIndex);
    }

    const colour = colours[colourIndex];

    img.data[imgIndex] = colour.red;
    img.data[imgIndex + 1] = colour.green;
    img.data[imgIndex + 2] = colour.blue;
    img.data[imgIndex + 3] = 255;
  }
}
rawContext.putImageData(img, 0, 0);

context.drawImage(raw, lcdX, lcdY, lcdW, lcdH);

I would encourage you to go STAR and CLONE https://github.com/axle-h/Retro.Net and give it a run with Docker! You can then use Visual Studio Code and .NET Core to compile and run it locally. He's looking for help with GameBoy sound and a Debugger.


Sponsor: Get the latest JetBrains Rider for debugging third-party .NET code, Smart Step Into, more debugger improvements, C# Interactive, new project wizard, and formatting code in columns.


Why should I care about Kubernetes, Docker, and Container Orchestration?

February 8, '18 - Posted in Docker | Kubernetes

A person at work chatted me, commenting on my recent blog posts on the Raspberry Pi Kubernetes Clusters that are being built, and wondered "why should I care about Kubernetes or Docker or any of that stuff?"

WOCinTech Chat pic used under CC

Great question, and I'm figuring it out myself. There are lots of resources out there, but none that spoke my language, so here are my thoughts and how I explain it.

"Hey, I have this great new blog app!"

"Fab, gimme!"

"Sure, first make sure you have this version of Windows/Linux, this version of .NET/Python/Node, and these prerequisites."

"Hang on, lemme call you next week when that's handled."

This is how software was built for years. Now let's deploy it.

"Here's the code/dlls/application zipped up."

"Lemme FTP/SFTP/Drag this from one Explorer Window to another."

"Is this version of that file set to this?"

"Wait, what?"

"Make sure that system/boss/dll/nounjs is version 4.5.4.1, they patched it."

"Ok, Imma shush* into production."

Again, we've all been there. It's 2018 and there's more folks doing this than you care to admit.

Enter Virtual Machines! Way better, right? Here's a USB key with a file that is EVERYTHING you need. Handled.

"Forget that, use this. It's better than a computer, it's a Virtual Machine. But be aware, It doesn't know it's Virtual, so respect the lie."

"OK, email it to me."

"Well, it's 32 gigs. Lemme UPS it."

Your app is only 100 megs, and this VM is tens of gigs. Why does a 150-pound person need a 6,000-pound Hummer? Isolation, I guess.

"The app is getting more complex, but it's cool. There's four VMs now. One for the DB, one for Redis, and a front end one, and the shopping cart gets one. It's microservices!"

"I'm loving it."

"Here's a 2 TB drive."

Nice that we're breaking it up, but not so nice that we're getting bloated. Now we have to run apt upgrade/windows update on all these things and maintain them. Why drive a Hummer when I can get a Lyft?

"Ok I got them all running on this beefy machine under my desk."

"Cool, we're moving to the cloud."

"Sigh. I need to update all these connection strings and start uploading VMs."

"It'll be great. It's like a machine under your desk, except your desk is in the cloud."

"What's the cloud?"

"It's a server room you can't see. Basically it's the computers under your desk. But invisible."

Most VM infrastructure is pretty sloppy: hard-coded IP addresses, poorly named VMs living in the same subnets. Then we move them to the cloud (lift and shift!) and they're still messy, but they're in the Cloud™, right?

"You know, all these VMs are heavy. I have to maintain and move a bunch of stuff that ISN'T the app. Containers are the way. Just define the app's base requirement and share everything else."

"I've been hearing about this. I can type "docker run hello-world" and on any machine it'll load the hello world image (based on Ubuntu) from a central hub and run it in a mostly isolated way. Guaranteed to work and run, even as time passes."

"Nice, because more and more parts of our app are in .NET Core on Linux, but there's also some Python and node."

"Yep and it'll all just run as the prerequisites are clearly listed in the container...and the prereqs are in fact references to other container images."

"It's containers all the way down."

Now the DB, Redis, the front end, and the shopping cart can all be defined in some simple text files. Rather than your Host OS (the main computer...the metal) loading up a bunch of Guest OSes (literally copies!) and then loading all the apps and prerequisites, you'll share OSes and, when appropriate, the binaries and libraries.

"OK, now we have a bunch of containers running in Docker, but sometimes they go down or stop."

"Run them again?"

"It's more that that, we need to sometimes have 3 shopping cart containers, and other times we need 2 or more DB containers. Plus their IPs sometimes change"

"So we need something to keep them running, scale or auto-scale them, as well manage networking and naming/dns."

Enter a container orchestrator. There's Docker Swarm, Mesos/Marathon, Azure Service Fabric, and others, but for this post we'll use Kubernetes.

"So Kubernetes runs my containers, keeps them running, and helps manage the network?"

"Yes, and no. Parts of Kubernetes - or k8s, as cool people like me who have been using it for nearly 3 hours say - are part of the master components, like etcd for key value storage, and the kube-scheduler for selecting what node to run a "pod" on (a pod is cooler to say than container, but sometimes a pod is more than one container. Still, very cool.)

"I'll need to make a glossary."

"Darn tootin' you will."

Kubernetes has basically pluggable everything. Don't like their networking setup? There are literally over a dozen options. Want better charts and graphs? A whole world of options.

Just as one Dockerfile can declare what's needed to run an app, a Kubernetes YAML file describes not only the containers, but the ports needed, the number of replicas of each (think web farm), names, environment variables, and more. Here's a file that shows a front end, back end, and load balancer. Everything is there: connection strings become internal DNS lookups, every service has a load balancer (if you like), and you can scale manually or auto-scale.
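
Here's a hedged sketch of what such a YAML file looks like (every name and the image are hypothetical, made up for illustration), applied straight from stdin:

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: theapp-frontend
spec:
  replicas: 3                      # think web farm
  selector:
    matchLabels:
      app: theapp-frontend
  template:
    metadata:
      labels:
        app: theapp-frontend
    spec:
      containers:
      - name: web
        image: example/theapp:1.0  # hypothetical image
        ports:
        - containerPort: 80
        env:
        - name: DB_HOST
          value: theapp-db         # connection strings become internal DNS lookups
---
apiVersion: v1
kind: Service
metadata:
  name: theapp-frontend
spec:
  type: LoadBalancer               # every service can get a load balancer if you like
  selector:
    app: theapp-frontend
  ports:
  - port: 80
EOF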

"Ok so why should I care?"

"A few reasons. In the past, to install our app I'd need to give you a Word document and a weekend. Now you type kubectl apply theapp.yaml and it's running in less than a minute."

"I'm still billing for the weekend."

Simply stated, we are at the beginning of a new phase of DevOps. One that is programmatic, elastic, and declarative. It's consistent and clear and modular.

I recommend you check out Julia Evans' "Reasons Kubernetes is cool" as well as reading up on how to make a Kubernetes cluster (and the management VMs are free) in Azure.

* I'm trying to make shush a thing. We don't Es Es Eaytch into machines! We shush in! It's pronounced somewhere between shush and shoosh. Make sure you throw in a little petit jeté when you say it.

* Pic used under CC


Sponsor: Unleash a faster Python Supercharge your applications performance on future forward Intel® platforms with The Intel® Distribution for Python. Available for Windows, Linux, and macOS. Get the Intel® Distribution for Python* Now!


Trying out new .NET Core Alpine Docker Images

November 22, '17 - Posted in Docker | DotNetCore | Open Source

Docker Containers

I blogged recently about optimizing .NET and ASP.NET Docker image sizes. .NET Core 2.0 images have previously been built on Debian, but today there is a preview image with .NET Core 2.1 nightlies using Alpine. You can read the announcement about this new Alpine preview image here. There's also a good rollup post on .NET and Docker.

They have added two new images:

  • 2.1-runtime-alpine
  • 2.1-runtime-deps-alpine

Alpine support is part of the .NET Core 2.1 release. .NET Core 2.1 images are currently provided at the microsoft/dotnet-nightly repo, including the new Alpine images. .NET Core 2.1 images will be promoted to the microsoft/dotnet repo when released in 2018.

NOTE: The -runtime-deps- image contains the dependencies needed for a .NET Core application, but NOT the .NET Core runtime itself. This is the image you'd use if your app were a self-contained application that includes a copy of the .NET Core runtime - that is, apps published with -r [runtimeid]. Most folks will use the -runtime- image that includes the full .NET Core runtime. To be clear:

- The runtime image contains the .NET Core runtime and is intended to run Framework-Dependent Deployed applications - see sample

- The runtime-deps image contains just the native dependencies needed by .NET Core and is intended to run Self-Contained Deployed applications - see sample
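
To make the distinction concrete, here's roughly what each publish looks like (a sketch; on these 2.x SDKs, specifying -r produces a self-contained publish by default, and the RID shown is illustrative):

# Framework-dependent: just your app; the base image must contain the .NET Core runtime (-runtime-).
dotnet publish -c Release

# Self-contained: your app plus a copy of the runtime; the base image only needs native deps (-runtime-deps-).
dotnet publish -c Release -r linux-x64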

It's best with .NET Core to use multi-stage build files, so you have one container that builds your app and one that contains the results of that build. That way you don't end up shipping an image with a bunch of SDKs and compilers you don't need.

NOTE: Read this to learn more about image versions in Dockerfiles so you can pick the right tag and digest for your needs. Ideally you'll pick a Dockerfile that rolls forward to include the latest servicing patches.

Given this Dockerfile, we build with the SDK image, then publish, and the result is about 219 megs.

FROM microsoft/dotnet:2.0-sdk as builder  

RUN mkdir -p /root/src/app/dockertest
WORKDIR /root/src/app/dockertest

COPY dockertest.csproj .
RUN dotnet restore ./dockertest.csproj

COPY . .
RUN dotnet publish -c release -o published

FROM microsoft/dotnet:2.0.0-runtime

WORKDIR /root/
COPY --from=builder /root/src/app/dockertest/published .
ENV ASPNETCORE_URLS=http://+:5000
EXPOSE 5000/tcp
CMD ["dotnet", "./dockertest.dll"]

Then I'll save this as Dockerfile.debian and build like this:

> docker build . -t shanselman/dockertestdeb:0.1 -f dockerfile.debian

With a standard ASP.NET app this image ends up being 219 megs.

Now I'll just change one line and use the 2.1 Alpine runtime:

FROM microsoft/dotnet-nightly:2.1-runtime-alpine

And build like this:

> docker build . -t shanselman/dockertestalp:0.1 -f dockerfile.alpine

and compare the two:

> docker images | find /i "dockertest"
shanselman/dockertestalp 0.1 3f2595a6833d 16 minutes ago 82.8MB
shanselman/dockertestdeb 0.1 0d62455c4944 30 minutes ago 219MB
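
That find /i filter is the Windows flavor; on a Linux or macOS host the equivalent is grep:

docker images | grep -i dockertest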

Nice. About 83 megs now rather than 219 megs for a Hello World web app. Now the idea of a microservice is more feasible!

Please do head over to the GitHub issue here https://github.com/dotnet/dotnet-docker-nightly/issues/500 and offer your thoughts and results as you test these Alpine images. Also, are you interested in a "-debian-slim?" It would be halfway to Alpine but not as heavy as just -debian.

Lots of great stuff happening around .NET and Docker. Be sure to also check out Jeff Fritz's post on creating a minimal ASP.NET Core Windows Container to see how you can squish .NET (full) Framework applications running on Windows containers as well. For example, the Windows Nano Server images are just 93 megs compressed.


Sponsor: Get the latest JetBrains Rider preview for .NET Core 2.0 support, Value Tracking and Call Tracking, MSTest runner, new code inspections and refactorings, and the Parallel Stacks view in debugger.


Docker and Linux Containers on Windows, with or without Hyper-V Virtual Machines

November 20, '17 - Posted in Docker | Win10

Containers are lovely, in case you haven't heard. They are a nice and clean way to get a reliable and guaranteed deployment, no matter the host system.

If I want to run my ASP.NET Core application, I can just type "docker run -p 5000:80 shanselman/demos" at the command line, and it'll start up! I don't have any concerns that it won't run. It'll run, and run well.

Some container naysayers say, sure, we could do the same thing with Virtual Machines, but even today a VHD (virtual hard drive) is rather an unruly thing and includes a ton of overhead that a container doesn't have. Containers are happening and you should be looking hard at them for your deployments.

docker run shanselman/demos

Historically on Windows, however, Linux Containers have run inside a Hyper-V virtual machine. This can be a good thing or a bad thing, depending on what your goals are. Running Containers inside a VM gives you significant isolation with some overhead. This is nice for servers but less so for my laptop. Docker for Windows hides the VM for the most part, but it's there. Your Container runs inside a Linux VM that runs within Hyper-V on Windows proper.

HyperV on Windows

With the latest version of Windows 10 (or Windows Server) and the beta of Docker for Windows, there's native Linux Container support on Windows. That means there's no Virtual Machine or Hyper-V involved (unless you want one), so Linux Containers run on Windows itself using Windows 10's built-in container support.

For now you have to switch "modes" between Hyper-V and native Containers, and you can't (yet) run Linux and Windows Containers side by side. The word on the street is that this is just a point-in-time thing, and that Docker will at some point support running Linux and Windows Containers in parallel. That's pretty sweet because it opens up all kinds of cool hybrid scenarios. I could run a Windows Server container with a .NET Framework ASP.NET app that talks to a Linux Container running Redis or Postgres. I could then put them all up into Kubernetes in Azure, for example.

Once I've turned on Linux Containers on Windows within Docker, everything just works and there's one less moving part.

Linux Containers on Docker

I can easily and quickly run busybox or real Ubuntu (although Windows 10 already supports Ubuntu natively with WSL):

docker run -ti busybox sh

Even more useful is to run the Azure Command Line with no install! Just "docker run -it microsoft/azure-cli" and it's running in a Linux Container.

Azure CLI in a Container

I can even run nyancat! (Thanks Thomas!)

docker run -it supertest2014/nyan

nyancat!

Speculating - I look forward to the day I can run "minikube start --vm-driver=windows" (or something) and easily set up a Kubernetes development system locally using Windows-native Linux Container support rather than Hyper-V Virtual Machines, if I choose to.


Sponsor: Why miss out on version controlling your database? It’s easier than you think because SQL Source Control connects your database to the same version control tools you use for applications. Find out how.


Disclaimer: The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.