Scott Hanselman

Upgrading a 10 year old site to ASP.NET Core's Razor Pages using the URL Rewriting Middleware

February 25, '18 Comments [6] Posted in ASP.NET | ASP.NET MVC | DotNetCore

My podcast has over 600 episodes (every week for many years, you do the math! And subscribe!) and its website was written in ASP.NET Web Pages many years ago. "Web Pages" (horrible name) was its own thing. It wasn't ASP.NET Web Forms, nor was it ASP.NET MVC. The open-source and cross-platform ASP.NET Core uses the "MVC" pattern, but it's an integrated architecture that supports pages created in the model-view-controller style, Web APIs that return JSON/whatever from controllers, and a routing system that works across all of these. It also includes "Razor Pages."

At first blush, you'd think Razor Pages is "Web Pages" part two. I thought that, but it's not. It's an alternative model to MVC, but it's built on MVC. Let me explain.

My podcast site has a home page, a single episode page, and an archives page. It's pretty basic. Back in the day I felt an MVC-style site would just be overkill, so I did it in a page model. However, the code ended up (no disrespect intended) very 90s-style PHPy: basically one super-page with everything from state management to URL cracking happening at the top of the page.

What I wanted was a page-focused model without the ceremony of MVC, while still being able to dip down into the flexibility and power of MVC when appropriate. That's Razor Pages. Best of all worlds and simply another tool in my toolbox. And the pages (.cshtml) are Razor, so I could port 90% of my very old existing code. In fact, I just made a new site with .NET Core using "dotnet new razor," opened up Visual Studio Code, and started copying over from (gasp) my WebMatrix project. I updated the code to be cleaner (a lot has happened to C# since then) and had 80% of my site going in a few hours. I'll switch Hanselminutes.com over in the next few weeks. This will mean I'll have a proper git checkin/deploy process rather than the "publish from WebMatrix" system I use today. I can containerize the site, run it on Linux, and finally add unit testing, since I can use the pervasive Dependency Injection that's built into ASP.NET Core.
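If you haven't seen one, a Razor Page is just a .cshtml file with a PageModel class behind it. Here's a minimal sketch (the file and property names are mine for illustration; "dotnet new razor" scaffolds the real thing):

Pages/Index.cshtml:

@page
@model IndexModel
<h1>@Model.Title</h1>

Pages/Index.cshtml.cs:

using Microsoft.AspNetCore.Mvc.RazorPages;

public class IndexModel : PageModel
{
    public string Title { get; private set; }

    // OnGet handles GET requests for this page. No controller, no route table.
    public void OnGet()
    {
        Title = "My Podcast";
    }
}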

Merging the old and the new with the ASP.NET Core's URL Rewriting Middleware

Here's the thing, though. There are parts of my existing site that are 10 years old, sure, but they also WORK. For example, I have existing URL Rewrite rules from IIS that have been around that long. I'm pretty obsessive about making old URLs work. Never break a URL. No excuses.

There are still links around with horrible URLs in the VERY original format that (not my fault) used database IDs, like https://hanselminutes.com/default.aspx?ShowID=18570. Well, that database doesn't exist anymore, but I don't break URLs. I have these old URLs stored alongside my new system, and along with dozens of existing rewrite rules I have an "IISUrlRewrite.xml" file. This was IIS-specific and used with the IIS URL Rewrite Module, but you've all seen files like these before, with things like Apache's mod_rewrite. Those files are often loved and managed and carried around for years. They work. A lot of work went into them. Sure, I could rewrite all these rules with ASP.NET Core's routing and custom middleware, but again, they already work. I just want them to continue to work. They can, with ASP.NET Core's URL Rewriting Middleware, which supports Apache mod_rewrite AND IIS URL Rewrite without using Apache or IIS!

Here's a complex and very complete example of mixing and matching. Mine is far simpler.

public void Configure(IApplicationBuilder app)
{
    using (StreamReader apacheModRewriteStreamReader =
        File.OpenText("ApacheModRewrite.txt"))
    using (StreamReader iisUrlRewriteStreamReader =
        File.OpenText("IISUrlRewrite.xml"))
    {
        var options = new RewriteOptions()
            .AddRedirect("redirect-rule/(.*)", "redirected/$1")
            .AddRewrite(@"^rewrite-rule/(\d+)/(\d+)", "rewritten?var1=$1&var2=$2",
                skipRemainingRules: true)
            .AddApacheModRewrite(apacheModRewriteStreamReader)
            .AddIISUrlRewrite(iisUrlRewriteStreamReader)
            .Add(MethodRules.RedirectXMLRequests)
            .Add(new RedirectImageRequests(".png", "/png-images"))
            .Add(new RedirectImageRequests(".jpg", "/jpg-images"));

        app.UseRewriter(options);
    }

    app.Run(context => context.Response.WriteAsync(
        "Rewritten or Redirected Url: " +
        $"{context.Request.Path + context.Request.QueryString}"));
}

Remember, I have URLs like default.aspx?ShowID=18570, but I don't use default.aspx any more (it literally doesn't exist on disk) and I don't use those IDs (they're just stored as metadata in the new system).

NOTE: Just want to point out that last line above, where it shows the rewritten URL. Putting that in the logs, or bypassing everything and outputting it as text, is a nice way to debug and develop with this middleware; then comment it out as you get things refined and working.
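For instance, here's a minimal sketch of the logging version, instead of short-circuiting the pipeline with app.Run (this is my illustration, not from the docs; it assumes you let ASP.NET Core inject an ILogger into Configure):

public void Configure(IApplicationBuilder app, ILogger<Startup> logger)
{
    var options = new RewriteOptions(); // ...add your rules as shown above
    app.UseRewriter(options);

    // Log the post-rewrite URL, then continue down the pipeline as usual
    app.Use(async (context, next) =>
    {
        logger.LogInformation("Rewritten or redirected URL: {Url}",
            context.Request.Path + context.Request.QueryString);
        await next();
    });
}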

I have an IIS rewrite rule that looks like this. It lives in an XML file along with dozens of other rules. Reminder: there's no IIS in this scenario. We are talking about the format, and reusing that format. I load my rewrite rules in my Configure() method in Startup:

using (StreamReader iisUrlRewriteStreamReader =
    File.OpenText("IISUrlRewrite.xml"))
{
    var options = new RewriteOptions()
        .AddIISUrlRewrite(iisUrlRewriteStreamReader);

    app.UseRewriter(options);
}

The middleware lives in the "Microsoft.AspNetCore.Rewrite" package, which I added to my csproj with "dotnet add package Microsoft.AspNetCore.Rewrite."
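That command just adds a PackageReference to the csproj, something like this (the exact version will vary):

<ItemGroup>
  <PackageReference Include="Microsoft.AspNetCore.Rewrite" Version="2.0.0" />
</ItemGroup>

And here's the rule I use (one of many in the old XML file):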

<rule name="OldShowId">
  <match url="^.*(?:Default.aspx).*$" />
  <conditions>
    <add input="{QUERY_STRING}" pattern="ShowID=(\d+)" />
  </conditions>
  <action type="Rewrite" url="/{C:1}?handler=oldshowid" appendQueryString="false" />
</rule>

I capture that show ID, and I rewrite (not redirect... we rewrite and continue on to the next segment of the pipeline) it to /18570?handler=oldshowid. That handler is a magic internal part of Razor Pages. Usually, if you have a page called foo.cshtml it will have a method called OnGet or OnPost or OnHTTPVERB. But if you want multiple handlers per page, you'll have OnGetHANDLERNAME. So I have OnGet() for regular stuff, and I have OnGetOldShowId for this rare but important URL type. But notice that my implementation isn't specific to that URL style. Razor Pages doesn't even know about the old URL format. It just knows that these weird IDs have their own handler.

public async Task<IActionResult> OnGetOldShowId(int id)
{
    var allShows = await _db.GetShows();

    // The old database IDs live on as the tail end of each show's Guid
    string idAsString = id.ToString();
    LastShow = allShows.FirstOrDefault(c => c.Guid.EndsWith(idAsString));
    if (LastShow == null) return Redirect("/"); // catch-all error case, 302 to home
    return RedirectPermanent(LastShow.ShowNumber.ToString()); // 301 to /showid
}
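One detail that's easy to miss: for that id parameter to bind from the path, the page's route template needs an id segment. A sketch of the directive at the top of the .cshtml (assuming the episode page is Pages/Index.cshtml):

@page "{id:int?}"

With that in place, /18570?handler=oldshowid routes to the page, binds 18570 to id, and dispatches to OnGetOldShowId.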

That's it. I have a ton more to share as I keep upgrading my podcast site, coming soon.



az webapp new - Azure CLI extension to create and deploy a .NET Core or nodejs site in one command

February 22, '18 Comments [3] Posted in Azure

The Azure CLI 2.0 (command line interface) is a clean little command line tool to query the Azure back-end APIs (which are JSON). It's easy to install and cross-platform.

Once you've got it installed, run "az login" and get authenticated. Also note that the most important switch (IMHO) is --output:

usage: az [-h] [--output {json,tsv,table,jsonc}] [--verbose] [--debug]

You can get json (or the more condensed jsonc), table (for humans), or tsv (tab-separated values) for your awks and seds.

A nice first command after "az login" is "az configure" which will walk you through a bunch of questions interactively to set up defaults.

Then I can "az noun verb" like "az webapp list" or "az vm list" and see things like this:

C:\Users\scott> az webapp list
Name Location State ResourceGroup DefaultHostName
------------------------ ---------------- ------- -------------------------- ------------------------------------------
Hanselminutes North Central US Running Default-Web-NorthCentralUS Hanselminutes.azurewebsites.net
HanselmanBandData North Central US Running Default-Web-NorthCentralUS hanselmanbanddata.azurewebsites.net
myEchoHub-WestEurope West Europe Running Default-Web-WestEurope myechohub-westeurope.azurewebsites.net
myEchoHub-SouthEastAsia Southeast Asia Stopped Default-Web-SoutheastAsia myechohub-southeastasia.azurewebsites.net

The Azure CLI supports extensions (plugins) that you can easily add, and the Azure CLI team is experimenting with a few ideas that they are implementing as extensions. "az webapp new" is one of them so I thought I'd take a look. All of this is open source and on GitHub at https://github.com/Azure/azure-cli and is discussed in the GitHub issues for azure-cli-extensions.

You can install the webapp extension with:

az extension add --name webapp

The new command "new" (I'm not sure about that name...maybe deploy? or createAndDeploy?) is basically:

az webapp new --name [app name] --location [optional Azure region name] --dryrun

Now, from a directory, I can make a little node/express app or a little .NET Core app (with "dotnet new razor" and "dotnet build"), and it'll make a resource group and web app, zip up the current folder, and just deploy it. The idea being to "JUST DO IT."
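So the whole end-to-end flow, from empty folder to running site, is just three commands (all shown above; the app name is whatever you pick):

C:\some\folder> dotnet new razor
C:\some\folder> dotnet build
C:\some\folder> az webapp new --name somewebappforme

Here's that last command in action: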

C:\Users\scott\desktop\somewebapp> az webapp new --name somewebappforme
Resource group 'appsvc_rg_Windows_CentralUS' already exists.
App service plan 'appsvc_asp_Windows_CentralUS' already exists.
App 'somewebappforme' already exists
Updating app settings to enable build after deployment
Creating zip with contents of dir C:\Users\scott\desktop\somewebapp ...
Deploying and building contents to app.This operation can take some time to finish...
All done. {
"location": "Central US",
"name": "somewebappforme",
"os": "Windows",
"resourcegroup": "appsvc_rg_Windows_CentralUS ",
"serverfarm": "appsvc_asp_Windows_CentralUS",
"sku": "FREE",
"src_path": "C:\\Users\\scott\\desktop\\somewebapp ",
"version_detected": "2.0",
"version_to_create": "dotnetcore|2.0"
}

I'd even like it to make up a name so I could maybe do "az webapp up" or even just "az up." For now it'll make a Free site by default, so you can try it without worrying about paying. If you want to upgrade or change it, do so with the az command or in the Azure portal. Also, the site ends up at <name>.azurewebsites.net!

DO NOTE that these extensions are living things, so you can update after installing with

az extension update --name webapp

like I just did!

Again, it's super beta/alpha, but it's an interesting experiment. Go discuss on their GitHub issues.



The Squishy Side of Open Source

February 21, '18 Comments [9] Posted in Open Source

A few months back my friend Keeley Hammond and I did a workshop for Women Who Code Portland called The Squishy Side of Open Source. We'd done a number of workshops before on how to use Git and the Command Line, and I've done a documentary film with Rob Conery called Get Involved In Tech: The Social Developer (watch it free!), but Keeley and I wanted to really dive into the interpersonal "soft," or squishy, parts. We think that we all need to work to bring kindness back into open source.

Contributing to open source for the first time can be scary and a little overwhelming. In addition to the technical skills required, the social dynamics of contributing to a library and participating in a code review can seem strange. That means how people talk to each other, what to do when pull requests go south, and what happens when issues heat up due to misunderstandings.

Keeley has published the deck on SpeakerDeck. In this workshop, we talked about the work and details that go into maintaining an open source community, told real stories from our experiences, and went over what to expect when contributing to open source and how to navigate it.

Key Takeaways:

  • Understanding the work that open source maintainers do, and how to show respect for them.
  • Understanding Codes of Conduct and Style Guides for OSS repos and how to abide by them.
  • Tips for communicating clearly, and dealing with uncomfortable or hostile communication.

Good communication is a key part of contributing to open source.

  • Give context.
  • Do your homework beforehand. It’s OK not to know things, but before asking for help, check a project’s README, documentation, issues (open or closed) and search the internet for an answer.
  • Keep requests short and direct. Many projects have more incoming requests than people available to help. Be concise.
  • Keep all communication public.
  • It’s okay to ask questions (but be patient!). Show them the same patience that you’d want them to show to you.
  • Keep it classy. Context gets lost across languages, cultures, geographies, and time zones. Assume good intentions in these conversations.

Where to start?

What are some good resources you've found for understanding the squishy side of open source?



One Email Rule - Have a separate Inbox and an Inbox CC to reduce email stress. Guaranteed.

February 19, '18 Comments [16] Posted in Productivity

I've mentioned this tip before, but once more for the folks in the back. This email productivity tip is a game-changer for most information workers: two folders in your email client, one called "Inbox" and one called "Inbox - CC."

We all struggle with email.

  • Some of us just declare Email Bankruptcy every few months. Ctrl-A, delete, right? They'll send it again.
  • Some of us make detailed and amazing Rube Goldbergian email rules and deliberately file things away into folders we will never open again.
  • Some of us just decide that if an email scrolls off the screen, well, it's gone.

Don't let the psychic weight of 50,000 unread emails give you headaches. Go ahead, declare email bankruptcy - you're already in debt - then try this one email rule.

One Email Rule

Email in your inbox is only for email where you are on the TO: line.

All other emails (BCC'ed or CC'ed) should go into a folder called "Inbox - CC."

That's it.

I just got back from a week away. Look at my email: 728 emails. Ugh. But just 8 were sent directly to me. Perhaps that's not a realistic ratio for you, sure. Maybe it'd be more like 300 and 400. Or 100 and 600.

Point is, emails you are CC'ed on are FYI (for your information) emails. They aren't Take Action Now emails. Now, if they ARE, then you need to take a moment and train your team. Very simple, just reply and say, "oops, I didn't see this immediately because I was cc'ed. If you need me to see something now, please to: me." It'll just take a moment to "train" your coworkers because this is a fundamentally intuitive way to work. They'll say, "oh, makes sense. Cool."

Try this out and I guarantee it'll change your workflow. Next, do this. Check your Inbox - CC less often than your Inbox. I check CC'ed email a few times a week, while I may check Inbox a few times a day.

If you like this tip, check out my complete list of Productivity Tips!



Surface Book 2 Developer Impressions and the Magic of USB-C

February 15, '18 Comments [27] Posted in Reviews

Surface Book 2 15"I recently got a updated laptop for work, a 15" Surface Book 2. It's quickly become my go-to machine, and I'm often finding myself using it more than my main desktop machine.

I considered myself reasonably familiar with the Surface product line as I bought a Surface Pro 3 a few years back for myself (not a work machine), but I am genuinely impressed with this Surface Book 2 - and that surprised me.

Here's a random list of tips, tricks, things I didn't realize, and general feelings about the 15" Surface Book 2.

15" is a NICE size

After years of "Ultrabooks" I missed an actual high-powered desktop replacement laptop. It's just 4.2 lbs and it doesn't feel unwieldy at all.

There are TWO Surface Connect ports

Legit had no idea. You can charge and dock the tablet part alone.

There's a full sized SD card reader and a 3.5mm headphone jack

Which sadly is more than I can say for my iPhone 8+.

Having a 15" screen again makes me wonder how you 11" MacBook Air people can even concentrate.

3240 x 2160 (260 PPI) is a weird resolution to be sure, but it's a hell of a lot of pixels. It's a 15" retina display.

The high resolution issues in Windows are 90% handled IMHO

I wrote back in 2014 about how running at any DPI greater than 96dpi on Windows has historically sucked, but literally every little Windows Update and Office update improves it. Only the oldest apps I run have any real issues. Even WinForms has been updated to support HighDPI, so I have zero HighDPI issues in my daily life in 2018.

More RAM is always nice, but 16 gigs is today's sweet spot.

I have had zero RAM issues, and I'm running Kubernetes and lots of Docker containers alongside VS, VS Code, Outlook, Office, Edge, Chrome, etc. Not one memory issue.


Battery Life and Management is WAY better

Battery life on my Surface Pro 3 was "fine." You know? Fine. It wasn't amazing. Maybe 4-6 hours, depending. However, the new battery slider in the Windows 10 Creators Update makes a simple and measurable difference. You can see the CPU GHz and brightness ratchet up and down. I set it to best battery life and it'll go 8+ hours easy. The CPU will hang out around 0.85 GHz and I can type all day at 40% brightness. Then, when I want to compile, I pull it up to bursts of 3.95 GHz and take care of business.

HD Camera FTW

Having a 1080p front facing camera makes Skype/Zoom/etc calls excellent. I even used the default Camera app today during an on-stage presentation and someone later commented on how clear the camera was.

USB-C - I didn't believe it, but it's really a useful thing

Honestly, I wasn't feeling the hype around USB-C being "one connector to rule them all," but today, here at the Webstock conference in New Zealand, I was about to pull out my HDMI and Ethernet dongles when the organizers mentioned they'd been using a Dell USB-C dock all day. I plugged in one cable (I didn't even use my Surface power brick) and got HDMI, a USB hub, Ethernet *and* power going back into the Surface Book. I think a solution like this will/should become standard for conferences. It was absolutely brilliant.

I have read some concerns about charging the Surface Book 2 (and other laptops with USB-C), and there's a reddit thread with some detail. One poster says the Apple USB-C charger he bought charges the Surface Book at 72% of the speed of the primary charger. My takeaway is: ok, the included charger will always charge fastest, but this will not only work in a pinch, it's a perfectly reasonable desk-bound or presenter solution. Just as my iPhone will charge (slowly) with aftermarket USB chargers. If you're interested in the gritty details, you can read about the conversation that the Surface has with an Apple charger over USB as they negotiate how much power to give and take. Nutshell: USB-C chargers that can do 60W will work, but 90+W is ideal, and the Dell dock handles this well, which makes it a great flexible solution for conferences.

Also worth pointing out: there wasn't any perceptible "driver install" step. I got all the Dell dock's benefits just by plugging it in at the conference. Note that I use a Surface Dock (the original/only one?) at home. In fact, the same Surface Dock I got for my personal Surface Pro 3 is in use by my new Surface Book 2. Presumably it doesn't output the full 95W that the Surface Book 2 can use, but in daily 10+ hour use it's been a non-issue. There are articles about how you can theoretically drain a Surface Book 2's battery if you're using more power than it's getting from the power supply, but I haven't had that level of sustained power usage, so I haven't needed to give it a thought.

The i7 has an NVidia 1060 with 6 gigs of RAM, so you can install GeForce and run apps on the discrete GPU

You can go in and control which apps run on which GPU (for power savings, or graphical power) on a program-by-program basis, or you can right-click any app and choose Run with graphics processor.

It has an Xbox Wireless Adapter built in

I got this for work, so it's not a gaming machine... BUT it's got that NVidia 1060 GPU, and I just discovered there's an Xbox Wireless Adapter built in. I thought this was just Bluetooth, but it's some magical low-latency thing. You can buy the $25 USB Xbox Wireless Adapter for your PC and use all your Xbox controllers with it, BUT it's built in here, so that's handled. What this means for me as a road warrior is that I can throw an Xbox controller into my bag and play Xbox Play Anywhere games in my hotel.

Conclusion

All in all, I've had no issues with the Surface Book 2, given that I stay on released software (no Windows 10 Insiders Fast on this machine). It runs 2 external monitors (3 if you count its 15" display), compiles fast, and plays games well.



Disclaimer: The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.