Scott Hanselman

Where's DNVM? Safely running multiple versions of the .NET Core SDK and Tooling with global.json

July 10, '16 Posted in DotNetCore | Open Source

On June 27th both the ASP.NET Core and .NET Core 1.0 runtimes were officially released. They are now version 1.0 and are both supported frameworks. However, the "tooling" around .NET Core remains in a Preview state. Fortunately, it's really easy and safe to swap between command-line tooling versions.

  • .NET Core SDK = Develop apps with .NET Core and the SDK+CLI (Software Development Kit/Command Line Interface) tools
  • .NET Core = Run apps with the .NET Core runtime

You'll see over on the .NET Advanced Downloads page the complete list of downloads, including those for Windows, Mac, and several flavors of Linux. It's even supported on Red Hat Enterprise Linux...it's surreal to see that Red Hat even has .NET Core docs on their site.

Where's DNVM List?

A year ago, before ASP.NET Core and .NET Core fully merged and the "dotnet" command line was created, there was a command line tool called "dnvm," the .NET Version Manager. It would give you a list of the .NET Core runtimes you had installed and let you switch between them. While that exact style of functionality may return as the SDK and tools continue development, you can easily have multiple .NET Core SDKs and CLIs installed and switch between them on a per-project basis.

For now, if you want the equivalent of "dnvm list" to see what .NET Core SDKs are installed at a system level, you'll look here.

Where is the .NET Core SDK installed?

When you install the .NET Core SDK on Windows it shows up in C:\Program Files\dotnet\sdk.
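If you just want a quick stand-in for "dnvm list," listing that folder does the trick. Here's a sketch from my machine (the version numbers below are illustrative; yours will differ):

C:\>dir /b "C:\Program Files\dotnet\sdk"
1.0.0-preview1-002702
1.0.0-preview2-003121
1.0.0-preview2-003131
1.0.0-preview3-003180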

C:\Program Files\dotnet\sdk

In this screenshot I have four .NET Core SDKs installed. The SDK that ships with .NET Core 1.0 is 1.0.0-preview2-003121. However, you'll note that I have two newer .NET SDKs installed. Since it's all open source, you can head over to https://github.com/dotnet/cli and scroll down a bit.

There are CI (continuous integration) builds and a complete table of versions that you can download. Be sure to check the Build Status and see that things are passing and healthy, but also have a reason for downloading a daily build.

Know WHY you want a daily build of the .NET Core SDK. Are you checking on a specific bug to see if it's fixed? Is there a new feature that you require?


I noticed a specific bug that was bothering me in the Preview 2 tooling. I like to use the new logging system and I like that it uses ANSI colors when logging to the console. When I run "dotnet run" I get very nice ANSI-colored output. However, when I use "dotnet test" or "dotnet watch," I lose all my ANSI colors from the same logging calls and just get plain text. I commented on the GitHub issue here as it's clearly a bug.

ANSI Colors are lost with dotnet watch

It's a cosmetic bug in the way dotnet.exe works with child processes, but it was still annoying to me. The cool part is that when it was/is fixed, as it was with this pull request, I can get a build and install it without fear.

Side by Side .NET Core SDK installs and global.json

I can check the version at the command line like this:

C:\>dotnet --version
1.0.0-preview2-003121

C:\>dotnet --info
.NET Command Line Tools (1.0.0-preview2-003121)

Product Information:
 Version:           1.0.0-preview2-003121
 Commit SHA-1 hash: 1e9d529bc5

Runtime Environment:
 OS Name:     Windows
 OS Version:  10.0.143xx
 OS Platform: Windows
 RID:         win10-x64

Here I've got the version that shipped with .NET Core 1.0. I want to use the latest one, then go back to my app and use "dotnet watch" or "dotnet test" and see if the bug was really fixed in this version. But what if I want my app to be driven by this new dotnet CLI?

I've got a global.json in the root of my solution in c:\lab2 that looks like this. I'm going to change the version to the new one in a moment.

{
  "projects": [ "src", "test" ],
  "sdk": {
    "version": "1.0.0-preview2-003121"
  }
}

My projects are in src and my tests are in test, all underneath the main solution folder that contains this global.json file. If the "sdk" section didn't exist, running dotnet --version would pick up the latest installed SDK.

If the sdk is "pinned" to a specific version, that means that when I run dotnet --version while in this folder or below, I'll get the specific version I've asked for.

Now I'll go to https://github.com/dotnet/cli and install (for example) 1.0.0-preview3-003180. This daily build has the fix for that ANSI bug I care about. Again, you can see this version is installed by looking in the first Windows Explorer screenshot above, and in c:\program files\dotnet\sdk.

Remember that the global.json in my c:\lab2 folder pins the SDK to preview2? Now running dotnet.exe looks and works like this...read carefully.

C:\lab2>dotnet --version
1.0.0-preview2-003121

C:\lab2>cd ..

C:\>dotnet --version
1.0.0-preview3-003180

C:\>where dotnet
C:\Program Files\dotnet\dotnet.exe

See that? I get preview2 inside the lab2 folder but I get the latest anywhere else. But how?

A little-known Windows command line trick is the "where" command. You can say "where notepad" and if there's more than one on the PATH, you'll get a list. However, here there's just one dotnet.exe, yet I get different results when I run it in different folders. Exactly how this works is explained in exquisite detail in Matt Warren's post "How the dotnet CLI tooling runs your code" but it LOOKS like this, as viewed in Process Explorer:

DotNet.exe picks up the SDK version from global.json

And when I change the version in global.json to the daily I downloaded?

Here dotnet.exe uses a newer installed SDK
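For concreteness, here's a sketch of that updated global.json; it's the same file with just the version string changed:

{
  "projects": [ "src", "test" ],
  "sdk": {
    "version": "1.0.0-preview3-003180"
  }
}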

The dotnet.exe application will look at global.json and then do the right thing. This way I can have lots of projects being driven by different versions of the "dotnet" command without having to type anything other than "dotnet run" or "dotnet test."

It also allows me to keep using the .NET Core 1.0 runtime that is released and supported, while quickly testing new tooling features and checking on fixed bugs like this ANSI one that was annoying me.



Is your stuff backed up? Recovering from a hardware failure

July 7, '16 Posted in Musings

I had a massive hardware failure over the holiday weekend. I can only tell you what I think happened. I'm not an electrical engineer, so if you know more (and I'm sure you do), do feel free to share your thoughts in the comments.

For my main machine in my home office I'm running the Ultimate PC that Jeff Atwood was so kind to build for me a few years back. It's not the Ultimate anymore, but I will say it's still VERY competent. I've upgraded the memory, video card, and SSD (more on the SSD in a moment) and it's a great machine and very very fast.

The tower was connected to an APC 850VA UPS that had worked nicely for a while. I replace just the batteries in my UPSes every 18-24 months. This last week the UPS started a low "scream" so I went over to check it out. It had turned off suddenly, so the PC and all the accessories were also off. I turned it back on and there was a series of snaps and pops, another scream, followed by a non-trivial amount of smoke and a burning electrical smell.

When it was all over, the main fuse of the house had popped, the UPS was dead and smoking, and the power supply in my computer was dead and smoking. Ugh.

I headed down to the local electronics shop and bought a new 1000W power supply and a new APC BR1000G, and went to town rebuilding my machine. After redoing all the cables and stuff...it didn't boot. I didn't even see the hard drive (SSD). The drive is a Crucial C300. I loved this drive and it worked great for like 6 years...and now it's dead. Turns out these Crucial drives are known to die when they lose power suddenly. I tried to bring it back to life using all the various forums and whitepapers about this known issue, but nope. It's dead.

OK, so have I lost data? What now? Fortunately, I back up my systems, and I hope you, Dear Reader, do as well.

Stop reading this now and please, think about your backups. Do you have one? Have you tested to see if you can restore from your backups?

Backups always succeed. It's restores that fail. Test your backups by restoring from them.

I've got a number of backups because I practice the Backup Rule of Three.

  • 3 copies of anything you care about - Two isn't enough if it's important.
  • 2 different formats - Example: Dropbox+DVDs or Hard Drive+Memory Stick or CD+CrashPlan, or more
  • 1 off-site backup - If the house burns down, how will you get your memories back?

Here's what my backup situation is/was and how I restored.

While you can use Imaging Software and restore an entire image of Windows or Mac, I find that reinstalling Windows takes less than an hour. I keep a bootable USB key of Windows 10 around. You can also download an ISO and make a USB key quickly. You don't usually need an activation key if you're reinstalling Windows. In my case, I installed the new drive, booted off the USB, signed into Windows with my Live ID (Microsoft Account) and it picked up my Windows license already.

Windows File History

I have a 4TB external drive on my desk that uses Windows 10 File History. This is like the Mac Time Machine feature. It's one of the best little "hidden" features of Windows 10 and everyone should use it. It's actually been around for years. My Documents, Desktop, and any other folders I want are automatically backed up as often as I want. I have a backup going every 30 min and I never think about it. It just works, and I don't notice any performance issues.

In this case, I *did* have crap on my desktop that wasn't in Dropbox and wasn't yet backed up to the cloud. I just hooked up the drive and restored from File History. I literally lost nothing. All my desktop crap was restored in place. If you have an external drive that's always hooked up but not really getting used, set up File History; it takes just minutes.

Multiple Cloud-based Backups

I have a number of clouds in my backup rotation:

  • GitHub - I have github repos, both private and public for code.
  • Dropbox - My primary cloud files backup
  • OneDrive for Business - My work cloud files backup
  • Synology - I love my Synology. It's a complete home NAS Server with massive storage, RAID, VPN, Docker, and so much more. A daily joy and a local cloud.
  • CrashPlan - I keep TBs up there and pay them happily for the service.

Related Links

Here's some additional reading on ways to back up your system. Please do also help non-technical relatives back up their stuff as well. Every week I hear about someone working on their PhD thesis losing their whole life's work in an instant. Backup is a system and it CAN be automatic.

What do you do for backup?



VIDEO: How to run Linux and Bash on "Windows 10 Anniversary Update"

July 1, '16 Posted in Win10

Ya, I'm not a fan of the name Windows 10 "Anniversary Update" but it has been a year since Windows 10 came out. It's my daily driver and it gets better every month. This year it's gonna get better (like Windows 10.1 better if you ask me) with an update that's coming August 2nd!

In that update (or in the Windows 10 Insider Builds you can get if you're a techie or adventurous) you're going to get a lot of nice polish AND the ability to optionally run Linux (ELF) binaries on Windows 10 at the command line. The feature is the Windows Subsystem for Linux, often called "Bash on Windows" or sometimes "Ubuntu on Windows." Call it what you like, they're real, and they're spectacular.

We first saw Bash on Windows 10 in March of this year at the BUILD conference.

Developers can run all their Linux user-mode developer tools like Redis or even TensorFlow (without GPU support).

I went and recorded a 20 min video screencast showing what you need to do to enable and some cool stuff that just scratches the surface of this new feature. Personally, I love that I can develop with Rails on Windows and it actually works and isn't a second class citizen. If you're a developer of any kind this opens up a whole world where you can develop for Windows and Linux without compromise and without the weight of a VM.
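If you want to skip ahead, here's a minimal sketch of the one-time setup, assuming you're on the Anniversary Update (or a recent Insider build) with Developer Mode enabled in Settings:

# From an elevated PowerShell prompt, enable the feature, then reboot:
PS C:\> Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux

# Then from any command prompt; the first run downloads the Ubuntu user-mode image:
C:\>bash
$ sudo apt-get update
$ sudo apt-get install redis-server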

I hope you enjoy this video! Also check out (and share) my other Windows 10 videos or my Windows 10 playlist at http://hanselman.com/windows10.


.NET Core 1.0 is now released!

June 27, '16 Posted in ASP.NET | Open Source

I feel like it's the culmination of all these years of work in .NET and Open Source. This is why I came to work at Microsoft; we wanted to open source as much as we could and build a community around .NET and open source at Microsoft. 15 years and the work of thousands of people later, today we released .NET Core 1.0.

Take a moment and head over to http://dot.net and check out the download page. It's got a really nice page where you can try out C# directly in the browser without having to install anything! There's also a great C# Tutorial with interactive browser-based tools as well.

.NET Core 1.0 runs on Windows, Mac, and several flavors of Linux including Red Hat Enterprise Linux and Ubuntu. It supports C#, VB, and F# and modern constructs like generics, Language Integrated Query (LINQ), async support and more. The Core Runtime, libraries, compiler, languages and tools are all open source on GitHub, where contributions are accepted, tested and fully supported.

Getting started with .NET Core

What is .NET Core? Here's some details from the .NET Blog:

.NET Core is a new cross-platform .NET product. The primary points of .NET Core are:

  • Cross-platform: Runs on Windows, macOS and Linux.
  • Flexible deployment: Can be included in your app or installed side-by-side user- or machine-wide.
  • Command-line tools: All product scenarios can be exercised at the command-line.
  • Compatible: .NET Core is compatible with .NET Framework, Xamarin and Mono, via the .NET Standard Library.
  • Open source: The .NET Core platform is open source, using MIT and Apache 2 licenses. Documentation is licensed under CC-BY. .NET Core is a .NET Foundation project.
  • Supported by Microsoft: .NET Core is supported by Microsoft, per .NET Core Support

.NET Core is composed of the following parts:

  • A .NET runtime, which provides a type system, assembly loading, a garbage collector, native interop and other basic services.
  • A set of framework libraries, which provide primitive data types, app composition types and fundamental utilities.
  • A set of SDK tools and language compilers that enable the base developer experience, available in the .NET Core SDK.
  • The ‘dotnet’ app host, which is used to launch .NET Core apps. It selects and hosts the runtime, provides an assembly loading policy and launches the app. The same host is also used to launch SDK tools in the same way.
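To make those parts concrete, here's roughly what a first session with the 1.0 CLI looks like, from "dotnet new" scaffolding a Hello World console app through running it (a sketch, with output trimmed):

C:\myapp>dotnet new
C:\myapp>dotnet restore
C:\myapp>dotnet run
Hello World!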

Blogs

Here are the major blogs carrying the announcement.

We are also releasing .NET documentation today at docs.microsoft.com, the new documentation service for Microsoft. The documentation you see there is just a start. You can follow our progress at core-docs on GitHub. ASP.NET Core documentation is also available and open source.

Have fun!



Adding a Custom Inline Route Constraint in ASP.NET Core 1.0

June 23, '16 Posted in ASP.NET | ASP.NET MVC

ASP.NET Core supports both attribute routing and centralized routes. That means you can decorate your Controller Methods with your routes if you like, or you can map routes all in one place.

Here's an attribute route as an example:

[Route("home/about")]
public IActionResult About()
{
//..
}

And here's one that is centralized. This might be in Startup.cs or wherever you collect your routes. Yes, there are better examples, but you get the idea. You can read about the fundamentals of ASP.NET Core Routing in the docs.

routes.MapRoute("about", "home/about",
    new { controller = "Home", action = "About" });

A really nice feature of routing in ASP.NET Core is inline route constraints. Useful URLs contain more than just paths; they have identifiers, parameters, and more. As with all user input, you want to limit or constrain those inputs, and you want to catch any bad input as early as possible. Ideally the route won't even "fire" if the URL doesn't match.

For example, you can create a route like

files/{filename}.{ext?}

This route matches a filename with an optional extension.

Perhaps you want a DateTime in the URL. You can make a route like:

person/{dob:datetime}

Or perhaps a Regular Expression for a Social Security Number like this (although it's stupid to put an SSN in the URL ;) ):

user/{ssn:regex(\d{3}-\d{2}-\d{4})}

There is a whole table of constraint names you can use to very easily limit your routes. Constraints are more than just types like dateTime or int, you can also do min(value) or range(min, max).
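Constraints can also be chained with colons. For example (these routes are illustrative, not from any particular app):

products/{id:int:min(1)}

orders/{code:alpha:length(4,10)}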

However, the real power and convenience happens with Custom Inline Route Constraints. You can define your own, name them, and reuse them.

Let's say my application has some custom identifier scheme with IDs like:

/product/abc123

/product/xyz456

Here we see three letters followed by three numbers. We could create a route like this using a regular expression, of course, or we could create a new class called CustomIdRouteConstraint that encapsulates the logic. Maybe the logic needs to be more complex than a RegEx. Your class can do whatever it needs to.
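For comparison, here's roughly what the pure inline-regex version would look like (a hypothetical example, not code from my app). One gotcha: curly braces inside a route template's regex have to be doubled to escape them:

[Route("product/{id:regex(^[A-Za-z]{{3}}[0-9]{{3}}$)}")]
public IActionResult Product(string id)
{
    // ...
    return View();
}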

Because ASP.NET Core is open source, you can read the code for all the included ASP.NET Core Route Constraints on GitHub. Marius Schultz has a great blog post on inline route constraints as well.

Here's how you'd make a quick and easy {customid} constraint and register it. I'm doing the easiest thing by deriving from RegexRouteConstraint, but again, I could choose another base class if I wanted, or do the matching manually.

namespace WebApplicationBasic
{
    public class CustomIdRouteConstraint : RegexRouteConstraint
    {
        public CustomIdRouteConstraint() : base(@"([A-Za-z]{3})([0-9]{3})$")
        {
        }
    }
}

In your ConfigureServices in your Startup.cs you just configure the route options and map a string like "customid" with your new type like CustomIdRouteConstraint.

public void ConfigureServices(IServiceCollection services)
{
    // Add framework services.
    services.AddMvc();
    services.Configure<RouteOptions>(options =>
        options.ConstraintMap.Add("customid", typeof(CustomIdRouteConstraint)));
}

Once that's done, my app knows about "customid" so I can use it in my Controllers in an inline route like this:

[Route("home/about/{id:customid}")]
public IActionResult About(string customid)
{
// ...
return View();
}

If I request /Home/About/abc123 it matches and I get a page. If I tried /Home/About/999asd I would get a 404! This is ideal because it compartmentalizes the validation. The controller doesn't need to sweat it. If you create an effective route with an effective constraint you can rest assured that the Controller Action method will never get called unless the route matches.

If the route doesn't fire it's a 404
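A quick smoke test from the command line might look something like this, assuming you have curl handy and the app is listening on Kestrel's default http://localhost:5000:

C:\>curl -s -o NUL -w "%{http_code}" http://localhost:5000/home/about/abc123
200
C:\>curl -s -o NUL -w "%{http_code}" http://localhost:5000/home/about/999asd
404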

Unit Testing Custom Inline Route Constraints

You can unit test your custom inline route constraints as well. Again, take a look at the source code for how ASP.NET Core tests its own constraints. There is a class called ConstraintsTestHelper that you can borrow/steal.

I make a separate project and set up xUnit and the xUnit runner so I can call "dotnet test."
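For reference, the test project's project.json looks roughly like this, trimmed to the test-relevant bits. The version numbers are illustrative for the preview2 timeframe; check NuGet for current ones:

{
  "testRunner": "xunit",
  "dependencies": {
    "xunit": "2.1.0",
    "dotnet-test-xunit": "1.0.0-rc2-build10025",
    "Moq": "4.6.38-alpha"
  }
}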

Here are my tests. The "Theory" attribute lets xUnit run a single test method against multiple inputs, one per "InlineData." Note we're using Moq to mock the HttpContext.

public class TestProgram
{
    [Theory]
    [InlineData("abc123", true)]
    [InlineData("xyz456", true)]
    [InlineData("abcdef", false)]
    [InlineData("totallywontwork", false)]
    [InlineData("123456", false)]
    [InlineData("abc1234", false)]
    public void TestMyCustomIDRoute(
        string parameterValue,
        bool expected)
    {
        // Arrange
        var constraint = new CustomIdRouteConstraint();

        // Act
        var actual = ConstraintsTestHelper.TestConstraint(constraint, parameterValue);

        // Assert
        Assert.Equal(expected, actual);
    }
}

public class ConstraintsTestHelper
{
    public static bool TestConstraint(IRouteConstraint constraint, object value,
        Action<IRouter> routeConfig = null)
    {
        var context = new Mock<HttpContext>();

        var route = new RouteCollection();

        if (routeConfig != null)
        {
            routeConfig(route);
        }

        var parameterName = "fake";
        var values = new RouteValueDictionary() { { parameterName, value } };
        var routeDirection = RouteDirection.IncomingRequest;

        return constraint.Match(context.Object, route, parameterName, values, routeDirection);
    }
}

Now note the output as I run "dotnet test". One test with six results. Now I'm successfully testing my custom inline route constraint, as a unit, in isolation.

xUnit.net .NET CLI test runner (64-bit .NET Core win10-x64)
Discovering: CustomIdRouteConstraint.Test
Discovered: CustomIdRouteConstraint.Test
Starting: CustomIdRouteConstraint.Test
Finished: CustomIdRouteConstraint.Test
=== TEST EXECUTION SUMMARY ===
CustomIdRouteConstraint.Test Total: 6, Errors: 0, Failed: 0, Skipped: 0, Time: 0.328s

Lots of fun!


Sponsor: Working with DOC, XLS, PDF or other business files in your applications? Aspose.Total Product Family contains robust APIs that give you everything you need to create, manipulate and convert business files along with many other formats in your applications. Stop struggling with multiple vendors and get everything you need in one place with Aspose.Total Product Family. Start a free trial today.

About Scott

Scott Hanselman is a former professor, former Chief Architect in finance, now speaker, consultant, father, diabetic, and Microsoft employee. He is a failed stand-up comic, a cornrower, and a book author.

facebook twitter subscribe
About   Newsletter
Sponsored By
Hosting By
Dedicated Windows Server Hosting by ORCS Web

Disclaimer: The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.