Scott Hanselman

Using the ASP.NET Core Environment Feature to manage Development vs. Production for any config file type

October 22, 2020 - Posted in ASP.NET

ASP.NET Core can understand what "environment" it's running under. For me, that's "development," "test," "staging," "production," but for you it can be whatever makes you happy. By default, ASP.NET understands Development, Staging, and Production.

You can then change how your app behaves by checking "IsDevelopment" before doing certain things. For example:

if (env.IsDevelopment())
{
    app.UseDeveloperExceptionPage();
}

if (env.IsProduction() || env.IsStaging() || env.IsEnvironment("Staging_2"))
{
    app.UseExceptionHandler("/Error");
}

There are helpers for the standard environments, or I can just pass in a string.
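
For context, here's a minimal sketch (my assumption of the surrounding code; the post doesn't show it) of where that "env" comes from. The framework injects an IWebHostEnvironment into Startup.Configure:

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    // The environment-specific branches from above live here.
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }
    else
    {
        app.UseExceptionHandler("/Error");
    }
}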

You can also make environment-based decisions with Tag Helpers like this in your Views/Razor Pages. I did this when I dynamically generated my robots.txt files:

@page
@{
    Layout = null;
    this.Response.ContentType = "text/plain";
}
# /robots.txt file for http://www.hanselman.com/
User-agent: *
<environment include="Development,Staging">Disallow: /</environment>
<environment include="Production">Disallow: /blog/private
Disallow: /blog/secret
Disallow: /blog/somethingelse</environment>

This is a really nice way to include things like banners or JavaScript only when your site is running in a certain environment. These are easily set as environment variables if you're running in a container, and if you're running in an Azure App Service you set the environment from the Configuration blade.
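
Under the hood (standard ASP.NET Core behavior, not something specific to this post), the host reads the ASPNETCORE_ENVIRONMENT variable at startup, so a container or App Service setting is all it takes. You can also force an environment in code while testing locally; a minimal sketch:

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args) // picks up ASPNETCORE_ENVIRONMENT automatically
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>();
            // Override for local testing only; normally the variable wins:
            // webBuilder.UseEnvironment(Environments.Staging);
        });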

Now that I've moved this blog to Azure, we have a number of config files that are specific to this blog. Since the configuration features of ASP.NET Core are so flexible, it was easy to extend this idea of environments to our own config files.

Our Startup class sets up the filenames of our various config files. Note the envname line: if we have no environment, we just look for the regular file name.

public Startup(IWebHostEnvironment env)
{
    hostingEnvironment = env;

    var envname = string.IsNullOrWhiteSpace(hostingEnvironment.EnvironmentName)
        ? "." : $".{hostingEnvironment.EnvironmentName}.";

    SiteSecurityConfigPath = Path.Combine("Config", $"siteSecurity{envname}config");
    IISUrlRewriteConfigPath = Path.Combine("Config", $"IISUrlRewrite{envname}config");
    SiteConfigPath = Path.Combine("Config", $"site{envname}config");
    MetaConfigPath = Path.Combine("Config", $"meta{envname}config");
    AppSettingsConfigPath = "appsettings.json";

    ...
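
To make the convention concrete, here's a hypothetical helper (my illustration, not the blog's actual code) that captures the same naming scheme:

// No environment yields "site.config"; Development yields "site.Development.config".
static string GetConfigPath(string baseName, string environmentName)
{
    var envname = string.IsNullOrWhiteSpace(environmentName)
        ? "." : $".{environmentName}.";
    return Path.Combine("Config", $"{baseName}{envname}config");
}

// GetConfigPath("site", "Development") -> Config/site.Development.config
// GetConfigPath("site", null)          -> Config/site.config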

Here are the files in my Visual Studio. Note that another benefit of this naming structure is that the files nest nicely underneath their parent file.

Nested config files

The formalization of environments is not a new thing, but adopting it deeply into our application at every level has allowed us to move from dev to staging to production very easily. It's very likely that you have done this in your application, but you may have rolled your own solution. Take a look and see if you can remove code by adopting this built-in technique.

Here are some articles I've already written on the subject of moving this blog to the cloud:

If you find any issues with this blog like

  • Broken links and 404s where you wouldn't expect them
  • Broken images, zero byte images, giant images
  • General oddness

Please file them here https://github.com/shanselman/hanselman.com-bugs and let me know!




Don't ever break a URL if you can help it

October 20, 2020 - Posted in DasBlog | Open Source

Back in 2017 I said "URLs are UI" and I stand by it. At the time, however, I was running this 18-year-old blog using ASP.NET WebForms, and the URL was, ahem, https://www.hanselman.com/blog/URLsAreUI.aspx

The blog post got on Hacker News, and folks were not impressed with my PascalCasing but were particularly offended by the .aspx extension shouting "this is the technology this blog is written in!" A valid complaint, to be clear.

ASP.NET has supported extensionless URLs for nearly a decade but I have been just using and enjoying my blog. I've been slowly moving my three "Hanselman, Inc" (it's not really a company) sites over to Azure, to Linux, and to ASP.NET Core. You can actually scroll to the bottom of this site and see the git commit hash AND CI/CD Build (both private links) that this production instance was built and deployed from.

As tastes change, from angle brackets to curly braces to significant whitespace, they also change in URL styles: from .cgi extensions, to my PascalCased.aspx, to the more 'modern' lowercased kebab-casing of today.

But how does one change 6000 URLs without breaking their Google Juice? I have history here. Here's a 17-year-old blog post...the URL isn't broken. It's important to never change a URL, and if you do, always offer a redirect.

When Mark Downie and I discussed moving the venerable .NET blog engine "DasBlog" over to .NET Core, we decided that no matter what, we'd allow for choice in URL style without breaking URLs. His blog also runs DasBlog Core and applies these same techniques.

We decided on two layers of URL management.

  • An optional and configurable XML file in the older IIS Rewrite format that users can update to taste.
    • Why? Users with old blogs like me already have rules in this IIS Rewrite format. Even though I now run on Linux and there's no IIS to be found, the file exists and works, because ASP.NET Core's URL rewriting middleware can consume these files. It's a wonderful compatibility feature of ASP.NET Core.
  • The core/base Endpoints that DasBlog would support on its own. This would include a matrix of every URL format that DasBlog has ever supported in the last 10 years.

Here's that code. There may be terser ways to express this, but this is super clear. With or without extension, with or without year/month/day.

app.UseEndpoints(endpoints =>
{
    endpoints.MapHealthChecks("/healthcheck");

    if (dasBlogSettings.SiteConfiguration.EnableTitlePermaLinkUnique)
    {
        endpoints.MapControllerRoute(
            "Original Post Format",
            "~/{year:int}/{month:int}/{day:int}/{posttitle}.aspx",
            new { controller = "BlogPost", action = "Post", posttitle = "" });

        endpoints.MapControllerRoute(
            "New Post Format",
            "~/{year:int}/{month:int}/{day:int}/{posttitle}",
            new { controller = "BlogPost", action = "Post", posttitle = "" });
    }
    else
    {
        endpoints.MapControllerRoute(
            "Original Post Format",
            "~/{posttitle}.aspx",
            new { controller = "BlogPost", action = "Post", posttitle = "" });

        endpoints.MapControllerRoute(
            "New Post Format",
            "~/{posttitle}",
            new { controller = "BlogPost", action = "Post", posttitle = "" });
    }

    endpoints.MapControllerRoute(
        name: "default",
        pattern: "~/{controller=Home}/{action=Index}/{id?}");
});

If someone shows up at any of the half dozen URL formats I've had over the years they'll get a 301 permanent redirect to the canonical one.

UPDATE: Great tip from Tune in the comments: "After moving several websites to new navigation and url structures, I've learned to start redirecting with harmless temporary redirects (http 302) and replace it with a permanent redirect (http 301), only after the dust has settled…"
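
If you wanted to follow Tune's advice with the ASP.NET Core rewriter, a sketch might look like this (my example with a made-up rule, not one of this blog's actual rules):

var options = new RewriteOptions()
    // Start with a temporary 302 while you shake out the rules...
    .AddRedirect(@"^old-path/(.*)", "new-path/$1", statusCode: 302);
    // ...then switch statusCode to 301 once the dust has settled.
app.UseRewriter(options);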

The old IIS format is added to our site with just two lines:

var options = new RewriteOptions().AddIISUrlRewrite(env.ContentRootFileProvider, IISUrlRewriteConfigPath);
app.UseRewriter(options);

And it offers redirects for everything that used to be. Even thousands of old RSS readers (yes, truly) that continually hit my blog will get the right new clean URLs with rules like this:

<rule name="Redirect RSS syndication" stopProcessing="true">
<match url="^SyndicationService.asmx/GetRss" />
<action type="Redirect" url="/blog/feed/rss" redirectType="Permanent" />
</rule>

Or even when posts used GUIDs (not sure what we were thinking, Clemens!):

<rule name="Very old perm;alink style (guid)" stopProcessing="true">
<match url="^PermaLink.aspx" />
<conditions>
<add input="{QUERY_STRING}" pattern="&amp;?guid=(.*)" />
</conditions>
<action type="Redirect" url="/blog/post/{C:1}" redirectType="Permanent" />
</rule>

We also always try to express rel="canonical" to tell search engines which link is the official (canonical) one. We've also autogenerated Google Sitemaps for over 14 years.
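
As a sketch of the canonical idea (hypothetical code, not DasBlog's actual implementation), a controller can compute the one true URL and hand it to the layout:

// Hypothetical: post.Slug is a stand-in for however you derive the clean title.
ViewData["CanonicalUrl"] = $"https://www.hanselman.com/blog/{post.Slug}";
// ...and in the layout's <head>:
// <link rel="canonical" href="@ViewData["CanonicalUrl"]" />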

What's the point here? I care about my URLs. I want them to stick around. Every 404 is someone having a bad experience, and some thoughtful rules at multiple layers, with the flexibility to easily add others, will ensure that even 10-20 year old references to my blog still resolve!

Oh, and that article that they didn't like over on Hacker News? It's automatically now https://www.hanselman.com/blog/urls-are-ui so that's nice, too!

Here are some articles I've already written on the subject of moving this blog to the cloud:

If you find any issues with this blog like

  • Broken links and 404s where you wouldn't expect them
  • Broken images, zero byte images, giant images
  • General oddness

Please file them here https://github.com/shanselman/hanselman.com-bugs and let me know!




Upgrading the Storage Pool, Drives, and File System in a Synology to Btrfs

October 15, 2020 - Posted in Reviews | Tools

I recently moved my home NAS over from a Synology DS1511 that I got in May of 2011 to a DS1520 that just came out.

I have blogged about the joy of having a home server over these last nearly 10 years in a number of posts.

That migration to the new Synology is complete, and I used the existing 2TB Seagate drives from before. These were Seagate 2TB Barracudas, which are quite affordable. They aren't NAS-rated, though, and I'm starting to generate a LOT of video since working from home. I've also recently set up Synology Active Backup on the machines in the house, so everyone's system is imaged weekly, plus I've got our G-Suite accounts backed up locally.

REFERRAL LINKS: I use Amazon links in posts like this. When you use them, you're supporting this blog, my writing, and helping pay for hosting. Thanks!

I wanted to get reliable large drives that are also NAS-rated (vibration and duty cycle), and the sweet spot right now for LARGE drives is a 10TB Seagate IronWolf NAS drive. You can also get 4TB drives for under $100! I'm "running a business" here, so I'm going to deduct these drives and make the investment; I got 4 drives. I could have also got two 18TBs, or three 12TBs, to similar effect. These drives will be added to the pool and become roughly 21TB of RAID'ed storage.

My Synology was running the ext4 file system on Volume1, so the process to migrate to all-new drives and an all-new file system was very manual, but very possible:

  • Use a spare slot and add one drive.
    • I had a hot spare in my 5 drive NAS so I removed it to make a spare slot. At this point I have my 4x2TB and 1x10TB in slots.
  • Make a new Storage Pool on the one drive
  • Make a new Volume with the newer Btrfs file system to get snapshots, self-healing, and better mirroring.
  • Copy everything from Volume1 to Volume2.
    • I copied from my /volume1 to /volume2. I made all new shares that were "Videos2" and "Software2" with the intention to rename them to be the primaries later.
  • Remove Volume1 by removing a drive at a time until the Synology decides it's "failed" and can be totally forgotten.
    • As I removed each 2TB drive, I replaced it with a 10TB and expanded the new Storage Pool and Volume2. These expansions take time, as there's a complete consistency check.
    • Repeat this step for each drive.
  • You can either leave a single drive as Volume1 and keep your Synology applications on it, or move them to the new volume.
  • Once I'd removed the final Storage Pool (as seen in the pic below) and my apps were either reinstalled on Volume2 or moved, I renamed all my shares from "Software2" etc. back to "Software," removing the appended "2."

The whole process took a few days, with checkpoints in between. Have a plan, go slow, and execute on that plan, checking in as the file system consistency checks itself.

Removing drives

To be clear, another way would have been to copy EVERYTHING off to a single external drive, torch the whole Synology install, install the new drives, and copy back to the new install. There would have been a momentary risk there, with that single external drive holding everything. It's up to you, depending on your definitions of "easy" and "hassle." My way was somewhat tedious, but relatively risk-free. Net net - it worked. Consider what works for you before you do anything drastic. Make a LOT OF BACKUPS. Practice the Backup Rule of Three.

Note you CAN remove all but one drive from a Synology, as the "OS" seems to be mirrored on each drive. However, your apps are almost always on /volume1/@apps.

Some Synology devices have 10Gbps connectors, but the one I have has 4x1Gbps. Next, I'll link-aggregate those 4 ports and, with a 10Gbps desktop network card, should be able to get 300-400MB/s disk access between my main desktop and the NAS. (4x1Gbps is 4Gbps, about 500MB/s theoretical, so 300-400MB/s real-world is plausible.)

The Seagate drives have worked great so far. My only criticism is that the drives are somewhat louder (clickier) than their Western Digital counterparts. This isn't a problem as the NAS is in a closet, but I suspect I'd notice the sound if I had 4 or 5 drives going full speed with the NAS sitting on my desk.

Here are my other Synology posts:

Hope this helps!




Classic Path.DirectorySeparatorChar gotchas when moving from .NET Core on Windows to Linux

October 13, 2020 - Posted in Azure | DotNetCore | Linux

An important step in moving my blog to Azure was to consider getting this .NET app, now a .NET Core app, to run on Linux AND Windows. Being able to run on both would give me and others a wider choice of hosting, allow hosting in Linux containers, and, for me, save money, as Linux hosting tends to be cheaper, even on Azure.

Getting something to compile on Linux is not the same as getting it to run, of course.

Additionally, something might run well in one context and not another. My partner on this project, Mark (poppastring), has been running this code on .NET for a while, albeit on Windows. He also runs on IIS in /blog as a subapplication. I run on Linux on Azure, and while I'm also on /blog, my site is behind Azure Front Door as a reverse proxy, which handles domain/blog/path and forwards domain/path along to the app.

Long story short, it's worked on both his blog and mine, until I tried to post a new blog post.

I use Open Live Writer (the open-sourced version of Windows Live Writer) to make MetaWebLog API calls to my blog. There are multiple calls to upload the binaries (PNGs), and a path is returned. A newly uploaded binary might have a path like https://hanselman.com/blog/content/binary/something.png. The file on disk (from the server's perspective) might be d:\whatever\site\wwwroot\content\binary\something.png.

This is 15-year-old ASP.NET 1 code, so there's some idiomatic stuff going on here that isn't modern, plus the vars have been added for watch-window debugging, but do you see the potential issue?

private string GetAbsoluteFileUri(string fullPath, out string relFileUri)
{
    var relPath = fullPath.Replace(contentLocation, "").TrimStart('\\');
    var relUri = new Uri(relPath, UriKind.Relative);
    relFileUri = relUri.ToString();
    return new Uri(binaryRoot, relPath).ToString();
}

That '\\' is making a big assumption. A reasonable one in 2003, but a big one today. It's trimming a backslash off the start of the passed-in string. Then the Uri constructor starts combining things, we're mixing and matching \ and /, and we end up with truncated URLs that don't resolve.

Assumptions about path separators are a top issue when moving .NET code to Linux or Mac, and they're often buried deep in utility methods like this. Here's the fix:

var relPath = fullPath.Replace(contentLocation, String.Empty).TrimStart(Path.DirectorySeparatorChar);

We can use the correct constant, Path.DirectorySeparatorChar, or the little-known AltDirectorySeparatorChar, as Windows supports both. That's why this code worked on Mark's Windows deployment and didn't break until it ran on my Linux deployment.
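
An even more defensive variant (my sketch, not the blog's actual fix) trims either separator, since a Windows-style path can sneak in even when the server is Linux:

var relPath = fullPath
    .Replace(contentLocation, string.Empty)
    .TrimStart('\\', '/')  // trim either separator, whatever the OS
    .Replace('\\', '/');   // URIs always want forward slashes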

DOCS: Note that Windows supports either the forward slash (which is returned by the AltDirectorySeparatorChar field) or the backslash (which is returned by the DirectorySeparatorChar field) as path separator characters, while Unix-based systems support only the forward slash.

It's also worth noting that each OS has different invalid path chars. I have some 404'ed images because some of my files have leading spaces on Linux but underscores on Windows. More on that (and other obscure but fun bugs/behaviors) in future posts. Here's a little program that prints the separator chars and invalid path chars for the OS it runs on:

using System;
using System.IO;

static void Main()
{
    Console.WriteLine($"Path.DirectorySeparatorChar: '{Path.DirectorySeparatorChar}'");
    Console.WriteLine($"Path.AltDirectorySeparatorChar: '{Path.AltDirectorySeparatorChar}'");
    Console.WriteLine($"Path.PathSeparator: '{Path.PathSeparator}'");
    Console.WriteLine($"Path.VolumeSeparatorChar: '{Path.VolumeSeparatorChar}'");

    var invalidChars = Path.GetInvalidPathChars();
    Console.WriteLine("Path.GetInvalidPathChars:");
    for (int ctr = 0; ctr < invalidChars.Length; ctr++)
    {
        Console.Write($" U+{Convert.ToUInt16(invalidChars[ctr]):X4} ");
        if ((ctr + 1) % 10 == 0) Console.WriteLine(); // wrap the output every 10 chars
    }
    Console.WriteLine();
}
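
For what it's worth, on Linux I'd expect this to print '/' for both DirectorySeparatorChar and AltDirectorySeparatorChar, ':' for PathSeparator, and a single invalid path char (U+0000), while Windows reports '\', '/', ';', ':' and a much longer invalid list. Run it on your own targets to confirm.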

Here are some articles I've already written on the subject of legacy migrations to the cloud.

If you find any issues with this blog like

  • Broken links and 404s where you wouldn't expect them
  • Broken images, zero byte images, giant images
  • General oddness

Please file them here https://github.com/shanselman/hanselman.com-bugs and let me know!

Oh, and please subscribe to my YouTube and tell your friends. It's lovely.




Migrating this blog to Azure. It's done. Now the work begins.

October 08, 2020 - Posted in ASP.NET | Azure

I have been running this blog, https://hanselman.com/blog, for almost 20 years. Like coming up on 19, I believe.

Recently it moved from being:

  • a 13(?) year old .NET Framework app called DasBlog running on ASP.NET and a Windows Server on real metal hardware

to

  • an ASP.NET Core app running on Linux in an Azure App Service

Finally. This blog, the main site, and the podcast site are all running on Azure Web Apps, built in Azure DevOps, managed by Azure Front Door, and watched by Application Insights. Yes, I pay for it with cash; I have no unlimited free Azure credits other than my $100 MSDN account.

Mark and I have been pairing on this for months and having a wonderful time. In fact, it's been about a year since this started.

Moving this blog is a significant achievement for a number of reasons, IMHO.

  • If we did it right:
    • you didn't notice anything
    • The URLs look cooler.
    • We broke nothing in SEO.
    • Perf is better.
    • Before I could deploy the site a few times a year, and was afraid of it. Yesterday I deployed 11 times.
  • It was .NET 1.1, then 2.0, then 3.5, then 4.0, then stuck for 8 years.
    • It ran on a real Windows Server 2008 machine (no VM) at Sherweb who has been a great partner for years. Extremely reliable hosting!
    • Now it's on Azure under Linux
  • We upgraded the ASP.NET WebForms app to ASP.NET Core with Mark's genius idea of splitting the app responsibilities such that the original DasBlog blog templating language could be converted to simple Razor pages and we could use ASP.NET TagHelpers to replace WebForms controls.
    • This allowed me to port my template over in a day with minimal changes.
    • Once it compiled under .NET Core, it was easy to move it from Windows to Linux, testing in WSL first.
    • We then moved the other dependent projects to .NET Standard 2 and compiled the whole thing as a .NET Core 3.1 LTS (Long Term Support) app. In fact, scroll down to the VERY bottom of this page and you can see what version we're on.
  • I set up CI/CD for the main site hanselman.com, this blog, and hanselminutes.com.
    • There are 3 sites now, all behind Azure Front Door as a reverse proxy to handle SSL, firewalls, and more.

Next steps? Keep it running, watch for errors (5xx and 4xx), and make small incremental changes. The pages are still heavy; while ASP.NET has server response times under 20ms, there's still 2 seconds of JavaScript and a bunch of old crap to clean up. I've also got two decades of links, so I'm fixing 404s as they're reported or as they show up in Application Insights. I made a dashboard to keep an eye on them.


I'm going to spend the next month or so blogging about the process and experience in as much detail as I can.

Here are some articles I've already written on the subject:

If you find any issues with this blog like

  • Broken links and 404s where you wouldn't expect them
  • Broken images, zero byte images, giant images
  • General oddness

Please file them here https://github.com/shanselman/hanselman.com-bugs and let me know!

Oh, and please subscribe to my YouTube and tell your friends. It's lovely.



About Scott

Scott Hanselman is a former professor, former Chief Architect in finance, now speaker, consultant, father, diabetic, and Microsoft employee. He is a failed stand-up comic, a cornrower, and a book author.


Disclaimer: The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.