Scott Hanselman

How to deal with Extreme Physical Pain

October 27, 2020 - Posted in Musings

I'm in a LOT of pain right now. It's hard to say that, especially considering that everyone experiences pain be it emotional or physical. I don't want to make unneeded comparisons or consider my pain as being more important than anyone else's. I'm not burned. I'm not dying of cancer. I am blessed.

But I'm hurting. A lot. It's mine and it's now and it's not clear when it will stop.

It's hard to think. It's hard to move. I can't sleep. Oxycodone makes me feel sick. Advil does nothing.

I've just had my second frozen shoulder surgery (adhesive capsular release) in 4 years. Frozen shoulder is idiopathic (who knows why it happens) and it's known to be quite painful. I can attest that it is. I've been unable to move my right arm for nearly a year. Not just that I couldn't move it, I mean it couldn't be moved by anyone. I couldn't fit the deodorant into my armpit because the whole joint was hardened.

I had cortisone shots. No result. I finally had formal capsular release surgery, where the surgeon goes in there, tidies up, and removes scar tissue.

Then the months of physical therapy and forced stretching start.

I'm going to physical therapy five days a week for an hour a day, and working at home stretching myself for another 1-2 hours. It's overwhelming and consuming. I just want to be able to pick up a cup from a high shelf. I have basic arm-usage requirements. This is going to be a marathon, and this is the second time this has happened.

Why am I telling you this?

A few reasons. I need the outlet. It's my blog. Because I appreciate you all and you've been here, some of you, for nearly 20 years. Not everything is code.

I had a nerve block in my neck that turned off my right side for a week. That was an extraordinary experience as it was an opportunity to experience a significant, albeit temporary, physical disability. Before I had no ability to move my arm but I had feeling. Now I had zero use of my right arm. It was a numb cadaver arm - dangerously dead weight. I used the time to play Xbox with my feet using the Xbox Adaptive Controller.

This nerve block is wearing off and it's gone from itching, to tingling, to the feeling of an ice pick shoved into my deltoid and armpit every few minutes.

I burst into tears at physical therapy today. The year just hit me all at once. It hurts. Between diabetes and this temporary paralysis, it’s been a week. 2020 is ass. It’s OK. Happens a few times a decade. Maybe it happens to you twice a week. Let it out, listen to your body.

Why am I telling you this?

It's OK to tell people you hurt. You're human. Talk about your pain. Cry. Yell. Sob. Talk some more.

When I'm done yelling, I'm trying to sit quietly and meditate about this pain. What is it trying to tell me? Can I mentally follow the nerve from the location (referred pain or otherwise) to my brain and determine what the body wants me to know? Am I being told there's danger?

I'm finding that there is soft tissue tolerance - what I can handle - and that doesn't always line up with what I'm feeling. I'm feeling near intolerable pain in PT (physical therapy). Like torture with an unknown end date, it's taken me to the level of pain where vomiting is the only escape and then it starts again. However, I persist. I breathe. I try to listen and trust the process and know that if I want to regain the full use of my arms, this is a medically known and studied process. Physical therapy works if you do it.

The cognitive dissonance is overwhelming. Your body says you're actively dying but your conscious brain can - must - override it and let the pain flow freely. You observe it, rather than obstruct it.

I hate this process but I'm going to learn from it. I'm learning and listening to my body and how I react to something so extreme.

The pain is important to acknowledge because this pain is gonna make me better and stronger. But it still hurts. Here we go.

I hope that you, Dear Reader, are not in pain. But if you are, I hope it passes and that you come out better on the other side. I'm going to use this Bad Input for Good.

BTW: Thanks to Volterra for sponsoring the blog this week. I suspect they didn't know what blog post(s) their ad would land on, but I appreciate their support and understanding as not every blog post is about code. This one is about people and their pain. Give them a click.


Sponsor: Need a multi-cluster load balancer and API gateway? Try VoltMesh: built for modern and distributed apps that require automation, performance and visibility. Start for free today.


Using the ASP.NET Core Environment Feature to manage Development vs. Production for any config file type

October 22, 2020 - Posted in ASP.NET

ASP.NET Core can understand what "environment" it's running in. For me, that's "development," "test," "staging," and "production," but for you it can be whatever makes you happy. By default, ASP.NET Core understands Development, Staging, and Production.

You can then change how your app behaves by asking "IsDevelopment" to do certain things. For example:

if (env.IsDevelopment())
{
    app.UseDeveloperExceptionPage();
}

if (env.IsProduction() || env.IsStaging() || env.IsEnvironment("Staging_2"))
{
    app.UseExceptionHandler("/Error");
}

There are helpers for the standard environments, or I can just pass in a string.

You can also make environment-based decisions with Tag Helpers like this in your Views/Razor Pages. I did this when I dynamically generated my robots.txt files:

@page
@{
    Layout = null;
    this.Response.ContentType = "text/plain";
}
# /robots.txt file for http://www.hanselman.com/
User-agent: *
<environment include="Development,Staging">Disallow: /</environment>
<environment include="Production">Disallow: /blog/private
Disallow: /blog/secret
Disallow: /blog/somethingelse</environment>

This is a really nice way to include things like banners or JavaScript only when your site is running in a certain environment. The environment itself is easily set as an environment variable (ASPNETCORE_ENVIRONMENT) if you're running in a container, and if you're running in an Azure App Service you set it from the Config blade.
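You can also pin the environment in code, which can be handy for a quick local experiment. Here's a minimal sketch of a generic host setup (illustrative only, not this blog's actual Program.cs); normally you'd let the environment variable drive it:

// using Microsoft.AspNetCore.Hosting;
// using Microsoft.Extensions.Hosting;

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        // ASPNETCORE_ENVIRONMENT (or DOTNET_ENVIRONMENT) normally sets this for you,
        // but UseEnvironment overrides it explicitly:
        .UseEnvironment(Environments.Staging)
        .ConfigureWebHostDefaults(webBuilder => webBuilder.UseStartup<Startup>());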

Now that I've moved this blog to Azure, we have a number of config files that are specific to this blog. Since the configuration features of ASP.NET are so flexible it was easy to extend this idea of environments to our own config files.

Our Startup class sets up the filenames of our various config files. Note the envname logic: if we have no environment set, we just look for the regular file name.

public Startup(IWebHostEnvironment env)
{
    hostingEnvironment = env;

    var envname = string.IsNullOrWhiteSpace(hostingEnvironment.EnvironmentName) ?
        "." : string.Format($".{hostingEnvironment.EnvironmentName}.");

    SiteSecurityConfigPath = Path.Combine("Config", $"siteSecurity{envname}config");
    IISUrlRewriteConfigPath = Path.Combine("Config", $"IISUrlRewrite{envname}config");
    SiteConfigPath = Path.Combine("Config", $"site{envname}config");
    MetaConfigPath = Path.Combine("Config", $"meta{envname}config");
    AppSettingsConfigPath = $"appsettings.json";

    ...

Here are the files in my Visual Studio. Note that another benefit of this naming structure is that the files nest nicely underneath their parent file.

Nested config files

The formalization of environments is not a new thing, but adopting it deeply into our application at every level has allowed us to move from dev to staging to production very easily. It's very likely that you have done this in your application, but you may have rolled your own solution. Take a look and see if you can remove code by adopting this built-in technique.
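If you have rolled your own, the built-in appsettings pattern is worth a look. Here's a minimal sketch of the stock ASP.NET Core convention (not this blog's exact Startup), where the environment-specific file simply layers over the base one:

// using Microsoft.Extensions.Configuration;

var config = new ConfigurationBuilder()
    .SetBasePath(env.ContentRootPath)
    // Always load the base settings...
    .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
    // ...then layer the environment-specific file over it, if it exists.
    .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true, reloadOnChange: true)
    .AddEnvironmentVariables()
    .Build();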

Here are some articles I've already written on the subject of moving this blog to the cloud.

If you find any issues with this blog like

  • Broken links and 404s where you wouldn't expect them
  • Broken images, zero byte images, giant images
  • General oddness

Please file them here https://github.com/shanselman/hanselman.com-bugs and let me know!




Don't ever break a URL if you can help it

October 20, 2020 - Posted in DasBlog | Open Source

Back in 2017 I said "URLs are UI" and I stand by it. At the time, however, I was running this 18-year-old blog on ASP.NET WebForms and the URL was, ahem, https://www.hanselman.com/blog/URLsAreUI.aspx

The blog post got on Hacker News and folks were not impressed with my PascalCasing, but they were particularly offended by the .aspx extension shouting "this is the technology this blog is written in!" A valid complaint, to be clear.

ASP.NET has supported extensionless URLs for nearly a decade but I have been just using and enjoying my blog. I've been slowly moving my three "Hanselman, Inc" (it's not really a company) sites over to Azure, to Linux, and to ASP.NET Core. You can actually scroll to the bottom of this site and see the git commit hash AND CI/CD Build (both private links) that this production instance was built and deployed from.

As tastes change, from anglebrackets to curly braces to significant whitespace, they also change in URL styles, from .cgi extensions, to my PascalCased.aspx, to the more 'modern' lowercased kebab-casing of today.

But how does one change 6000 URLs without breaking their Google Juice? I have history here. Here's a 17 year old blog post...the URL isn't broken. It's important to never change a URL and if you do, always offer a redirect.

When Mark Downie and I discussed moving the venerable .NET blog engine "DasBlog" over to .NET Core, we decided that no matter what, we'd allow for choice in URL style without breaking URLs. His blog runs DasBlog Core also and applies these same techniques.

We decided on two layers of URL management.

  • An optional and configurable XML file in the older IIS Rewrite format that users can update to taste.
    • Why? Users with old blogs like me already have rules in this IISRewrite format. Even though I now run on Linux and there's no IIS to be found, the file exists and works. So we use the IIS Rewrite Module to consume these files. It's a wonderful compatibility feature of ASP.NET Core.
  • The core/base Endpoints that DasBlog would support on its own. This would include a matrix of every URL format that DasBlog has ever supported in the last 10 years.

Here's that code. There may be terser ways to express this, but this is super clear. With or without extension, with or without year/month/day.

app.UseEndpoints(endpoints =>
{
    endpoints.MapHealthChecks("/healthcheck");

    if (dasBlogSettings.SiteConfiguration.EnableTitlePermaLinkUnique)
    {
        endpoints.MapControllerRoute(
            "Original Post Format",
            "~/{year:int}/{month:int}/{day:int}/{posttitle}.aspx",
            new { controller = "BlogPost", action = "Post", posttitle = "" });

        endpoints.MapControllerRoute(
            "New Post Format",
            "~/{year:int}/{month:int}/{day:int}/{posttitle}",
            new { controller = "BlogPost", action = "Post", posttitle = "" });
    }
    else
    {
        endpoints.MapControllerRoute(
            "Original Post Format",
            "~/{posttitle}.aspx",
            new { controller = "BlogPost", action = "Post", posttitle = "" });

        endpoints.MapControllerRoute(
            "New Post Format",
            "~/{posttitle}",
            new { controller = "BlogPost", action = "Post", posttitle = "" });
    }

    endpoints.MapControllerRoute(
        name: "default", "~/{controller=Home}/{action=Index}/{id?}");
});

If someone shows up at any of the half dozen URL formats I've had over the years they'll get a 301 permanent redirect to the canonical one.

UPDATE: Great tip from Tune in the comments: "After moving several websites to new navigation and url structures, I've learned to start redirecting with harmless temporary redirects (http 302) and replace it with a permanent redirect (http 301), only after the dust has settled…"
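If you want to follow Tune's advice with the ASP.NET Core rewrite middleware, a minimal sketch might look like this (the regex and target here are illustrative, not DasBlog's actual rules):

var options = new RewriteOptions()
    // Start with a temporary (302) redirect while you confirm nothing broke...
    .AddRedirect(@"^URLsAreUI\.aspx$", "blog/urls-are-ui", statusCode: 302);
    // ...then flip it to a permanent (301) once the dust has settled:
    // .AddRedirect(@"^URLsAreUI\.aspx$", "blog/urls-are-ui", statusCode: 301);

app.UseRewriter(options);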

The old IIS format is added to our site with just two lines:

var options = new RewriteOptions().AddIISUrlRewrite(env.ContentRootFileProvider, IISUrlRewriteConfigPath);
app.UseRewriter(options);

And offers rewrites to everything that used to be. Even thousands of old RSS readers (yes, truly) that continually hit my blog will get the right new clean URLs with rules like this:

<rule name="Redirect RSS syndication" stopProcessing="true">
<match url="^SyndicationService.asmx/GetRss" />
<action type="Redirect" url="/blog/feed/rss" redirectType="Permanent" />
</rule>

Or even when posts used GUIDs (not sure what we were thinking, Clemens!):

<rule name="Very old perm;alink style (guid)" stopProcessing="true">
<match url="^PermaLink.aspx" />
<conditions>
<add input="{QUERY_STRING}" pattern="&amp;?guid=(.*)" />
</conditions>
<action type="Redirect" url="/blog/post/{C:1}" redirectType="Permanent" />
</rule>

We also always try to express rel="canonical" to tell search engines which link is the official - canonical - one. We've also autogenerated Google Sitemaps for over 14 years.

What's the point here? I care about my URLs. I want them to stick around. Every 404 is someone having a bad experience, and some thoughtful rules at multiple layers, with the flexibility to easily add more, will ensure that even 10-20 year old references to my blog still resolve!

Oh, and that article that they didn't like over on Hacker News? It's automatically now https://www.hanselman.com/blog/urls-are-ui so that's nice, too!





Upgrading the Storage Pool, Drives, and File System in a Synology to Btrfs

October 15, 2020 - Posted in Reviews | Tools

Making a 21TB Synology Storage Pool

I recently moved my home NAS over from a Synology DS1511 that I got in May of 2011 to a DS1520 that just came out.

I have blogged about the joy of having a home server over these last nearly 10 years in a number of posts.

That migration to the new Synology is complete, and I used the existing 2TB Seagate drives from before. These were Seagate 2TB Barracudas, which are quite affordable. They aren't NAS-rated though, and I'm starting to generate a LOT of video since working from home. I've also recently set up Synology Active Backup on the machines in the house, so everyone's system is imaged weekly, plus I've got our G-Suite accounts backed up locally.

REFERRAL LINKS: I use Amazon links in posts like this. When you use them, you're supporting this blog, my writing, and helping pay for hosting. Thanks!

I wanted to get reliable large drives that are also NAS-rated (vibration and duty cycle), and the sweet spot right now for LARGE drives is a 10TB Seagate IronWolf NAS drive. You can also get 4TB drives for under $100! I'm "running a business" here so I'm going to deduct these drives and make the investment, so I got 4 drives. I could have also gotten two 18TBs, or three 12TBs, to similar effect. These drives will be added to the pool and become roughly 21TB of RAID'ed storage.

My Synology was running the ext4 file system on Volume1, so the process to migrate to all new drives and an all new file system was very manual, but very possible:

  • Use a spare slot and add one drive.
    • I had a hot spare in my 5 drive NAS so I removed it to make a spare slot. At this point I have my 4x2TB and 1x10TB in slots.
  • Make a new Storage Pool on the one drive
  • Make a new Volume with the newer Btrfs file system to get snapshots, self-healing, and better mirroring.
  • Copy everything from Volume1 to Volume2.
    • I copied from my /volume1 to /volume2. I made all new shares that were "Videos2" and "Software2" with the intention to rename them to be the primaries later.
  • Remove Volume1 by removing a drive at a time until the Synology decides it's "failed" and can be totally forgotten.
    • As I removed each 2TB drive, I replaced it with a 10TB and expanded the new Storage Pool and Volume2. These expansions take time as there's a complete consistency check.
    • Repeat this step for each drive.
  • You can either leave a single drive as Volume1 and keep your Synology applications on it, or you can reinstall them (or move them) onto the new volume.
  • Once I'd removed the final Storage Pool (as seen in the pic below) and my apps were either reinstalled on Volume2 or moved over, I renamed all my shares from "Software2" etc. back to "Software," removing the appended "2."

The whole process took a few days with checkpoints in between. Have a plan, go slow, and execute on that plan, checking in as the file system runs its consistency checks.

Removing drives

To be clear, another way would have been to copy EVERYTHING off to a single external drive, torch the whole Synology install, install the new drives, and copy back to the new install. There would have been a momentary risk there, with the single external holding everything. It's up to you, depending on your definitions of "easy" and "hassle." My way was somewhat tedious, but relatively risk free. Net net - it worked. Consider what works for you before you do anything drastic. Make a LOT OF BACKUPS. Practice the Backup Rule of Three.

Note you CAN remove all but one drive from a Synology as the "OS" seems to be mirrored on each drive. However, your apps are almost always on /volume1/@apps.

Some Synology devices have 10Gbps connectors, but the one I have has 4x1Gbps. Next, I'll link aggregate those 4 ports, and with a 10Gbps desktop network card I should be able to get 300-400MB/s disk access between my main desktop and the NAS.

The Seagate drives have worked great so far. My only criticism is that the drives are somewhat louder (clickier) than their Western Digital counterparts. This isn't a problem as the NAS is in a closet, but I suspect I'd notice the sound if I had 4 or 5 drives going full speed with the NAS sitting on my desk.

Here are my other Synology posts.

Hope this helps!




Classic Path.DirectorySeparatorChar gotchas when moving from .NET Core on Windows to Linux

October 13, 2020 - Posted in Azure | DotNetCore | Linux

It's a Unix System, I know this!

An important step in moving my blog to Azure was to consider getting this .NET app, now a .NET Core app, to run on Linux AND Windows. Being able to run on Linux and Windows would give me and others a wider choice of hosting, allow hosting in Linux Containers, and, for me, save money, as Linux hosting tends to be cheaper, even on Azure.

Getting something to compile on Linux is not the same as getting it to run, of course.

Additionally, something might run well in one context and not another. My partner Mark (poppastring) on this project has been running this code on .NET for a while, albeit on Windows, and he runs on IIS in /blog as a subapplication. I run on Linux on Azure, and while I'm also on /blog, my site is behind Azure Front Door as a reverse proxy, which handles domain/blog/path and forwards domain/path along to the app.

Long story short, it's worked on both his blog and mine, until I tried to post a new blog post.

I use Open Live Writer (the open-sourced version of Windows Live Writer) to make a MetaWebLog API call to my blog. There are multiple calls to upload the binaries (PNGs), and a path is returned. A newly uploaded binary might have a path like https://hanselman.com/blog/content/binary/something.png. The file on disk (from the server's perspective) might be d:\whatever\site\wwwroot\content\binary\something.png.

This is 15-year-old ASP.NET 1 code, so there's some idiomatic stuff going on here that isn't modern, plus the vars have been added for watch-window debugging, but do you see the potential issue?

private string GetAbsoluteFileUri(string fullPath, out string relFileUri)
{
    var relPath = fullPath.Replace(contentLocation, "").TrimStart('\\');
    var relUri = new Uri(relPath, UriKind.Relative);
    relFileUri = relUri.ToString();
    return new Uri(binaryRoot, relPath).ToString();
}

That '\\' is making a big assumption. A reasonable one in 2003, but a big one today. It's trimming a backslash off the start of the passed-in string. Then the Uri constructor starts combining things, and we're mixing and matching \ and /, and we end up with truncated URLs that don't resolve.

Assumptions about path separators are a top issue when moving .NET code to Linux or Mac, and they're often buried deep in utility methods like this one. Here's the fix:

var relPath = fullPath.Replace(contentLocation, String.Empty).TrimStart(Path.DirectorySeparatorChar);

We can use the correct constant for Path.DirectorySeparatorChar, or the little-known AltDirectorySeparatorChar as Windows supports both. That's why this code works on Mark's Windows deployment but doesn't break until it runs on my Linux deployment.

DOCS: Note that Windows supports either the forward slash (which is returned by the AltDirectorySeparatorChar field) or the backslash (which is returned by the DirectorySeparatorChar field) as path separator characters, while Unix-based systems support only the forward slash.
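Another option, on .NET Core 2.0 or later, is to sidestep the string munging entirely with Path.GetRelativePath, which understands whichever separator the current OS uses. Here's a minimal sketch reusing the contentLocation and binaryRoot fields from the snippet above; treat it as illustrative rather than the exact fix we shipped:

private string GetAbsoluteFileUri(string fullPath, out string relFileUri)
{
    // GetRelativePath handles either separator style for the current OS...
    var relPath = Path.GetRelativePath(contentLocation, fullPath)
                      .Replace(Path.DirectorySeparatorChar, '/'); // ...and URIs always want forward slashes
    relFileUri = new Uri(relPath, UriKind.Relative).ToString();
    return new Uri(binaryRoot, relPath).ToString();
}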

It's also worth noting that each OS has different invalid path chars. I have some 404'ed images because some of my files have leading spaces on Linux but underscores on Windows. More on that (and other obscure but fun bugs/behaviors) in future posts. Here's a quick console app that prints the separator characters and invalid path chars for whatever OS it's running on:

static void Main()
{
    Console.WriteLine($"Path.DirectorySeparatorChar: '{Path.DirectorySeparatorChar}'");
    Console.WriteLine($"Path.AltDirectorySeparatorChar: '{Path.AltDirectorySeparatorChar}'");
    Console.WriteLine($"Path.PathSeparator: '{Path.PathSeparator}'");
    Console.WriteLine($"Path.VolumeSeparatorChar: '{Path.VolumeSeparatorChar}'");
    var invalidChars = Path.GetInvalidPathChars();
    Console.WriteLine($"Path.GetInvalidPathChars:");
    for (int ctr = 0; ctr < invalidChars.Length; ctr++)
    {
        Console.Write($" U+{Convert.ToUInt16(invalidChars[ctr]):X4} ");
        if ((ctr + 1) % 10 == 0) Console.WriteLine();
    }
    Console.WriteLine();
}
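For what it's worth (this is from memory, so run it yourself to confirm): on Linux that little program prints '/' for both DirectorySeparatorChar and AltDirectorySeparatorChar, ':' for PathSeparator, '/' for VolumeSeparatorChar, and only U+0000 in the invalid list, while on Windows you get '\' and '/', ';' and ':', and a much longer list of invalid characters. Running it on both OSes is a quick way to see how different the assumptions really are.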

Here are some articles I've already written on the subject of legacy migrations to the cloud.


Oh, and please subscribe to my YouTube and tell your friends. It's lovely.



About Scott

Scott Hanselman is a former professor, former Chief Architect in finance, now speaker, consultant, father, diabetic, and Microsoft employee. He is a failed stand-up comic, a cornrower, and a book author.


Disclaimer: The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.