Scott Hanselman

13 hours debugging a segmentation fault in .NET Core on Raspberry Pi and the solution was...

July 18, '17 Comments [63] Posted in Bugs | DotNetCore

Debugging is a satisfying and special kind of hell. You really have to live it to understand it. When you're deep into it you never know when it'll be done. When you do finally escape it's almost always a DOH! moment.

I spent an entire day debugging an issue and the solution ended up being a checkbox.

NOTE: If you get a third of the way through this blog post and already figured it out, well, poop on you. Where were you after lunch WHEN I NEEDED YOU?

I wanted to use a Raspberry Pi in a tech talk I'm doing tomorrow at a conference. I was going to show .NET Core 2.0 and ASP.NET running on a Raspberry Pi so I figured I'd start with Hello World. How hard could it be?

You'll write and build a .NET app on Windows or Mac, then publish it to the Raspberry Pi. I'm using a preview build of the .NET Core 2.0 command-line SDK (CLI) that I got from here.

C:\raspberrypi> dotnet new console
C:\raspberrypi> dotnet run
Hello World!
C:\raspberrypi> dotnet publish -r linux-arm
Microsoft Build Engine version for .NET Core

raspberrypi1 -> C:\raspberrypi\bin\Debug\netcoreapp2.0\linux-arm\raspberrypi.dll
raspberrypi1 -> C:\raspberrypi\bin\Debug\netcoreapp2.0\linux-arm\publish\

Notice the simplified publish. You'll get a folder for linux-arm in this example, but you could also publish osx-x64, etc. You'll want to take the files from the publish folder (not the folder above it) and move them to the Raspberry Pi. This is a self-contained application that targets ARM on Linux, so after the prerequisites that's all you need.

I grabbed a mini-SD card, headed over to https://www.raspberrypi.org/downloads/ and downloaded the latest Raspbian image. I used etcher.io - a lovely image burner for Windows, Mac, or Linux - and wrote the image to the SD Card. I booted up and got ready to install some prereqs. I'm only 15 min in at this point. Setting up a Raspberry Pi 2 or Raspberry Pi 3 is VERY smooth these days.

Here are the prereqs for .NET Core 2.0 on Ubuntu or Debian/Raspbian. Install them from the terminal, natch.

sudo apt-get install libc6 libcurl3 libgcc1 libgssapi-krb5-2 libicu-dev liblttng-ust0 libssl-dev libstdc++6 libunwind8 libuuid1 zlib1g

I also added an FTP server and ran vncserver, so I'd have a few ways to talk to the Raspberry Pi. Yes, I could also SSH in but I have a spare monitor, and with that monitor plus VNC I didn't see a need.

sudo apt-get install pure-ftpd
vncserver

Then I fire up FileZilla - my preferred FTP client - and FTP the publish output folder from my dotnet publish above. I put the files in a folder off my ~/Desktop.

FTPing files

Then from a terminal I ran:

pi@raspberrypi:~/Desktop/helloworld $ chmod +x raspberrypi

(or whatever the name of your published "exe" is. It'll be the name of your source folder/project with no extension.) As this is a self-contained published app, again, all the .NET Core runtime stuff is in the same folder with the app.

pi@raspberrypi:~/Desktop/helloworld $ ./raspberrypi 
Segmentation fault

The crash was instant...not a pause and a crash, but it showed up as soon as I pressed enter. Shoot.

I ran "strace ./raspberrypi" and got this output. I figured maybe I missed one of the prerequisite libraries, and I just needed to see which one and apt-get it. I can see the ld.so.nohwcap error, but that's a historical Debian-ism and more of a warning than a fatal.

strace on a bad exe in Linux

I used to be able to read straces 20 years ago but much like my Spanish, my skills are only good at Chipotle. I can see it just getting started loading libraries, seeking around in them, checking file status, mapping files to memory, setting memory protection, and then it all falls apart. Perhaps we tried to do something inappropriate with some memory that just got protected? We are dereferencing a null pointer.

Maybe you can read this and you already know what is going to happen! I did not.

I run it under gdb:

pi@raspberrypi:~/Desktop/WTFISTHISCRAP $ gdb ./raspberrypi 
GNU gdb (Raspbian 7.7.1+dfsg-5+rpi1) 7.7.1
Copyright (C) 2014 Free Software Foundation, Inc.
This GDB was configured as "arm-linux-gnueabihf".
"/home/pi/Desktop/helloworldWRONG/./raspberrypi1": not in executable format: File truncated
(gdb)

Ok, sick files?

I called Peter Marcu from the .NET team and we chatted about how he got it working and compared notes.

I was using a Raspberry Pi 2, he a Pi 3. Ok, I'll try a 3. 30 minutes later, new SD card, new burn, new boot, pre-reqs, build, FTP, run, SAME RESULT - segfault.

Weird.

Maybe corruption? Here's a thread about Corrupted Files on Raspbian Jessie 2017-07-05! That's the version I have. OK, I'll try the build of Raspbian from a week before.

30 minutes later, burn another SD card, new boot, pre-reqs, build, FTP, run, SAME RESULT - segfault.

BUT IT WORKS ON PETER'S MACHINE.

Weird.

Maybe a bad nuget.config? No.

Bad daily .NET build? No.

BUT IT WORKS ON PETER'S MACHINE.

Ok, I'll try Ubuntu Mate for Raspberry Pi. TOTALLY different OS.

30 minutes later, burn another SD card, new boot, pre-reqs, build, FTP, run, SAME RESULT - segfault.

What's the common thread here? Ok, I'll try from another Windows machine.

SAME RESULT - segfault.

I call Peter back and we figure it's gotta be prereqs...but the strace doesn't show we're even trying to load any interesting libraries. We fail FAST.

Ok, let's get serious.

We both have Raspberry Pi 3s. Check.

What kind of SD card does he have? SanDisk? Ok, I'll use SanDisk. But disk corruption makes no sense at that level...because the OS booted!

What did he burn with? He used Win32diskimager and I used Etcher. Fine, I'll bite.

30 minutes later, burn another SD card, new boot, pre-reqs, build, FTP, run, SAME RESULT - segfault.

He sends me HIS build of a HelloWorld and I FTP it over to the Pi. SAME RESULT - segfault.

Peter is freaking out. I'm deeply unhappy and considering quitting my job. My kids are going to sleep because it's late.

I ask him what he's FTPing with, and he says WinSCP. I use FileZilla, ok, I'll try WinSCP.

WinSCP's New Session dialog starts here:

SFTP is Default

I say, WAIT. Are you using SFTP or FTP? Peter says he's using SFTP so I turn on SSH on the Raspberry Pi and SFTP into it with WinSCP and copy over my Hello World.

IT FREAKING WORKS. IMMEDIATELY.

Hello World on a Raspberry Pi

BUT WHY.

I make a folder called GOOD and a folder called BAD. I copy with FileZilla to BAD and with WinSCP to GOOD. Then I run a compare. Maybe some part of .NET Core got corrupted? Maybe a supporting native library?

pi@raspberrypi:~/Desktop $ diff --brief -r helloworld/ helloworldWRONG/
Files helloworld/raspberrypi1 and helloworldWRONG/raspberrypi1 differ

Wait, WHAT? The executables are different? One is 67,684 bytes and the bad one is 69,632 bytes.

Time for a visual compare.

All the 0Ds are gone

At this point I saw it IMMEDIATELY.

0D is CR (13) and 0A is LF (10). I know this because I'm old and I've written printer drivers for printers that had both carriages and lines to feed. Why do YOU know this? Likely because you've transferred files between Unix and Windows once or thrice, perhaps with FTP or Git.

All the CRs are gone. From my binary file.

Why?

I went straight to settings in FileZilla:

Treat files without extensions as ASCII files

See it?

Treat files without extensions as ASCII files

That's the default in FileZilla. It'll treat files that are just chilling, minding their own business, as ASCII, and then just randomly strip out their carriage returns. What could go wrong? And it doesn't even look for CR LF pairs! No, it just looks for CRs and strips them. Classy.
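
If you ever want to confirm this kind of damage without a visual diff tool, a few lines of C# will do it. Here's a hypothetical check (not something I ran that night) that just counts the CR (0x0D) bytes in each copy of the binary:

// Hypothetical sanity check: count CR (0x0D) bytes in each copy of the binary.
// An ASCII-mode transfer strips every CR it sees - CRLF pair or not - so the
// mangled copy should report zero.
using System;
using System.IO;
using System.Linq;

class CrCheck
{
    static void Main()
    {
        foreach (var path in new[] { "helloworld/raspberrypi1", "helloworldWRONG/raspberrypi1" })
        {
            byte[] bytes = File.ReadAllBytes(path);
            int crCount = bytes.Count(b => b == 0x0D);   // 0x0D = CR, 0x0A = LF
            Console.WriteLine($"{path}: {bytes.Length:N0} bytes, {crCount:N0} CRs");
        }
    }
}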

In retrospect I should have known this, but it wasn't really the switch to SFTP that fixed it, it was the switch to a transfer program with different defaults.

This bug/issue/whatever burned my whole Monday. But it'll never burn another Monday, Dear Reader, because now I've seen it before.

FAIL FAST FAIL OFTEN my friends!

Why does experience matter? It means I've failed a lot in the past and it's super useful if I remember those bugs because then next time this happens it'll only burn a few minutes rather than a day.

Go forth and fail a lot, my loves.

Oh, and FTP sucks.



Ubuntu now in the Windows Store: Updates to Linux on Windows 10 and Important Tips

July 10, '17 Comments [15] Posted in Linux | Win10

I noticed this blog post about Ubuntu over at the Microsoft Command Line blog. Ubuntu is now available from the Windows Store for builds of Windows over 16215.


You can run "Winver" to see your build number of Windows. If you run Windows 10 you can certainly sign up for the Windows Insiders builds, or you can wait a few months until these features make their way to the mainstream. I've been running Windows 10 Insiders "Fast ring" for a while with a few issues but nothing blocking.

The addition of Ubuntu to the Windows Store may initially seem confusing or even a little bizarre. However, given a minute to understand the larger architecture, it makes a lot of sense. That said, for those of us who have been beta-testing these features, the move to the Windows Store will require some manual steps in order to reap the benefits.

Here's how I see it.

  • For the early betas of the Windows Subsystem for Linux you typed bash from anywhere and it ran Ubuntu on Windows.
  • Ubuntu on Windows hides its filesystem in C:\Users\scott\AppData\Local\somethingetcetc and you shouldn't go there or touch it.
  • Moving the tar files and Linux distro installation into the Store lets us users take advantage of the Store's CDN (Content Distribution Network) to get distros quickly and easily. 
    • Just turn on the feature and REBOOT
      Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux

then hit the store to get the binaries!

Ok, now this is where and why it gets interesting.

Soon (later this month I'm told) we will be able to have n number of native Linux distros on our Windows 10 machines at one time. You can install as many as you like from the store. No VMs, just fast Linux...on Windows!

Windows 10 includes a utility for the Windows Subsystem for Linux called "wslconfig".

C:\>wslconfig
Performs administrative operations on Windows Subsystem for Linux

Usage:
/l, /list [/all] - Lists registered distributions.
/all - Optionally list all distributions, including distributions that
are currently being installed or uninstalled.
/s, /setdefault <DistributionName> - Sets the specified distribution as the default.
/u, /unregister <DistributionName> - Unregisters a distribution.

C:\WINDOWS\system32>wslconfig /l
Windows Subsystem for Linux Distributions:
Ubuntu (Default)
Fedora
OpenSUSE

At this point when I type "bash" at the regular Windows command prompt or PowerShell I will be launching my default Linux. I can also just type "Ubuntu" or "Fedora," etc to get a specific one.

If I wanted to test my Linux code (.NET, node, go, ruby, whatever) I could script it from Windows and run my tests on n number of distros. Slick for developers.

TODOs if you have WSL and Bash from earlier betas

If you already have "bash" on your Windows 10 machine and want to move to the "many distros" model, you'll just install the Ubuntu distro from the store and then move your distro customizations out of the "legacy/beta bash" over to the "new train but beta although getting closer to release WSL." I copied my ~/ folder over to /mnt/c/Users/Scott/Desktop/WSLBackup, then opened Ubuntu and copied my .rc files and whatnot back in. Then I removed my original bash with lxrun /uninstall. Once that's done, my distros are managed by the store and I can have as many as I like. Other than customizations, it's really easy (like, it's not a big deal and it's fast) to add or remove Linuxes on Windows 10, so fear not. Back up your stuff and this will be a 10 min operation, plus whatever apt-get installs you need to redo. Everything else is the same and you'll still want to continue storing and sharing files via /mnt/c.

NOTE: I did a YouTube video called Editing code and files on Windows Subsystem for Linux on Windows 10 that I'd love if you checked out and shared on social media!

Enjoy!



URLs are UI

July 7, '17 Comments [42] Posted in Musings

What a great title. "URLs are UI." Pithy, clear, crisp. Very true. I've been saying it for years. Someone on Twitter said "this is the professional quote of 2017" because they agreed with it.

Except Jakob Nielsen said it in 1999. And Tim Berners-Lee said "Cool URIs don't change" in 1998.

So many folks spend time on their CSS and their UX/UI but still come up with URLs that are, at best, comically long and, at worst, user hostile.

Search Results that aren't GETs - Make it easy to share

Even a non-technical parent or partner thinks URLs are UI. How do I know? How many times has a relative emailed you something like this:

"Check out this house we found!
https://www.somerealestatesite.com/
homes/for_sale/
search_results.asp"

That's not meant to tease the non-technical relative! It's not their fault! The URL is the UI for them. It's totally reasonable for them to copy-paste from the box that represents where they are and give it to you so you can go there too!

Make it a priority that your website supports shareable URLs.

URLs that are easy to shorten - Can you easily shorten a URL?

I love Stack Overflow's URLs. Here's an example: https://stackoverflow.com/users/6380/scott-hanselman 

The only thing that matters there is the 6380. Try it: https://stackoverflow.com/users/6380 works, and https://stackoverflow.com/users/6380/fancy-pants also works. SO will even support this! http://stackoverflow.com/u/6380.

Genius. Why? Because they decided it matters.

Here's another: https://stackoverflow.com/questions/701030/whats-the-significance-of-oct-12-1999 - again, the text after the ID doesn't matter: https://stackoverflow.com/questions/701030/

This is a great model for URLs where you want to use a unique ID but the text/title in the URL may change. I use this for my podcasts, so https://hanselminutes.com/587/brandon-bouier-on-the-defense-digital-service-and-deploying-code-in-a-war-zone is the same as https://hanselminutes.com/587.
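
If you're building that pattern in ASP.NET Core, the routing is essentially one attribute. Here's a sketch - an illustration only, not Stack Overflow's or hanselminutes.com's actual code, and IEpisodeStore is a made-up lookup service:

using Microsoft.AspNetCore.Mvc;

public class Episode
{
    public int Id { get; set; }
    public string Title { get; set; }
}

// Hypothetical lookup service - any store keyed by the numeric ID works.
public interface IEpisodeStore
{
    Episode FindById(int id);
}

public class EpisodesController : Controller
{
    private readonly IEpisodeStore _episodes;
    public EpisodesController(IEpisodeStore episodes) { _episodes = episodes; }

    // /587 and /587/any-title-at-all both land here. The slug is optional,
    // purely for humans, and never used for the lookup.
    [HttpGet("/{id:int}/{slug?}")]
    public IActionResult Show(int id, string slug = null)
    {
        var episode = _episodes.FindById(id);
        if (episode == null) return NotFound();
        return View(episode);
    }
}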

Unnecessarily long or unintuitive URLs - Human Readable and Human Guessable

Sometimes if you want context to be carried in the URL you have to, well, carry it along. There was a little debate on Twitter recently about URLs like this https://fabrikam.visualstudio.com/_projects. What's wrong with it? The _ is not intuitive at all. Why not https://fabrikam.visualstudio.com/projects? Because obscure technical reason. In fact, all the top level menu items for doing stuff in VSTS start with _. Not /menu/ or /action or whatever. My code is https://fabrikam.visualstudio.com/_git/FabrikamVSO and I clone from here https://fabrikam.visualstudio.com/DefaultCollection/_git/FabrikamVSO. That's weird. Where did Default Collection come from? Why can't I just add a ".git" extension to my project's URL and clone that? Well, maybe they want the paths to be nice in the URL.

Nope. https://fabrikam.visualstudio.com/_git/FabrikamVSO?path=%2Fsrc%2Fsetup%2Fcleanup.local.ps1&version=GBmaster&_a=contents is a file. Compare that to https://github.com/shanselman/TinyOS/blob/master/readme.md at GitHub. Again, I am sure there is a good, and perhaps very valid, technical reason. But another, very frank, reason is that URLs weren't a UX priority.

Same with OneDrive https://onedrive.live.com/?id=CD0633A7367371152C%21172&cid=CD06A73371152C vs. DropBox https://www.dropbox.com/home/Games

As a programmer, I am sympathetic. As a user, I have zero sympathy. Now I have to remember that there is a _ and it's a thing.

I propose this: URLs are rarely a tech problem. They are an organizational willpower problem. You care a lot about the evocative 2meg jpg hero image on your website. You change fonts, move CSS around ad infinitum, and agonize over single pixels. You should also care about your URLs.

SIDE NOTE: Yes, I am fully aware of my own hypocrisy with this issue. My blog software was written by a bunch of us in 2002 and our URLs are close to OK, but their age is showing. I need to find a balance between "Cool URLs don't change" and "should I change totally uncool URLs." Ideally I'd change my blog's URLs to be all lowercase, use hyphens for spaces instead of CamelCase, and I'd hide the technology. No need (other than 17 year old historical technical ones) to have .aspx or .php at the end of your URL. It's on my list.

What is your advice, Dear Reader for good URLs?




Review: The AmpliFi HD (High-Density) Home Wi-Fi Mesh Networking System

July 6, '17 Comments [35] Posted in Reviews

The AmpliFi Router is a cute small white box with a black circular touchscreen

I've been very happy with the TP-Link AC3200 Router I got two years ago. It's been an excellent and solid router. However, as the kids get older and the number of mobile devices (and smart(ish) devices) in the house increases, the dead wifi spots have become more and more noticeable. Additionally I've found myself wanting more control over the kids' internet access.

There are a number of great WiFi Survey Apps, but I was impressed with the simplicity of this Windows 10 WiFi Survey app, so I used it to measure the signals around my house, superimposed on a picture of the floor plan.

Here's the signal strength of the TP-Link. Note that when you're using a WiFi Survey app you need to take into consideration whether you're measuring 2.4GHz, which gives you better distance at slower speeds, or 5GHz, which can give you a much faster connection at the cost of range. As a general rule in a single room or small house, 5GHz is better and you'll absolutely notice it with video streaming like Netflix.

Below is a map of the 5GHz signal for my single TP-Link router. It's "fine" but it's not epic if you move around. You can guess from the map that the router is under the stairs in the middle.

My older router's wifi map shows mostly Yellow

You can also guess where concrete walls are, as well as the angles of certain vectors that pass through thick walls diagonally and affect the signal. Again, it's OK but it's starting to be annoying and I wanted to see if I could fix it.

SIDE BAR: It is certainly possible to take two routers and combine them into one network with a shared SSID. If you know how to do this kind of thing (and enjoy it) then more power to you. I tried it out in 2010 and it worked OK, but I want my network to "just work" 100% of the time, out of the box. I like the easy setup of a consumer device with minimal moving parts. Mesh Networking products are reaching the consumer at a solid price point with solid tech so I thought it was time to make the switch.

Below is the same map with the same locations, except using the AmpliFi HD (High-Density) Home Wi-Fi System from Ubiquiti Networks. This is the consumer (or "prosumer") version of the technology that Ubiquiti (UBNT) uses in their commercial products.

AmpliFi HD includes the router and two "mesh points." These are extenders that use a mesh tech called 3x3 MIMO. They can transmit and receive via 3 streams at a low level. MIMO is part of the 802.11n spec.

The signal from the AmpliFi HD is fantastic

Note that this improvement is JUST using the AmpliFi main router. When you do a WiFi Survey the "Mesh Points" will show up as the same SSID (the same wireless network) but they'll have different MAC addresses. That means in my list of networks in the Survey tool my "HanselMesh" network appears three times. Don't worry, it's one SSID and your computers will only see ONE network - it's just advanced tools that see each point. It's that "meshing" of n number of access points that is the whole point.

These two maps below are the relative strengths of just the mesh points. It's the union of all three of these maps that gives the clear picture. For example, one mesh point covers the living area fantastically (as does the router itself) while the other covers the garage (not that it needs it) and the entire office.

The mesh points make the signal better in parts of the house

Between the main router and the two included mesh points there are NO dead spots in the house. I'll find the kids in odd corners with an iPad, behind a couch in the play room where they couldn't get signal before. I'm finding myself sitting in different rooms than I did before just because I can roam without thinking about it.

I would suspect I could get away with buying just the AmpliFi Router (around US$133) and maybe one mesh point extender but the price for all three (router + 2 mesh points) is decent. The slick part is that you can add mesh points OR a second router. It's the second router idea that is most compelling for multi-floor buildings that also have a wired network. For example, I could add a second router (not a mesh point) upstairs and plug it into the wall (so it's "wire backed").

The mesh points plug into the wall and just sit there. You can adjust them, bend them to point towards the router, and best of all - move them at will. For example, when I set up the network initially I put the two mesh points where I thought they'd work best. But one didn't and Netflix was dropping. I literally unplugged it and moved it into the hallway and plugged it in. A minute later that whole area was full speed. This means if I did/do find a dead spot, I could just move the mesh point either temporarily or permanently.

The router is adorable. Like "I wish it wasn't in a closet" adorable. It's pretty enough that you'll want it on your desk. It has a great LCD touchscreen and a lighted base. The touchscreen shows your IP, total bandwidth this month (very useful, in fact), and bandwidth currently used.

The router is best set up with an iPhone/iPad or Android device. There is a VERY minimal web interface but you really can't manage the AmpliFi (as of the time of this writing) with a web browser - it really is designed to be administered with a mobile app. And frankly, I'm OK with it because the app is excellent.

The AmpliFi App says "Everything is Great"
35Mbs up/down

The download/upload numbers there aren't the maximum speed - they're the bandwidth being used right now. You can test the speed elsewhere in the app. I have 35Mb/s up and down (usually) in my house, but Gigabit inside (which is useful as I have a Synology server internally).

There are a lot of ways to restrict internet for the kids. I like that the AmpliFi lets me group devices and apply time-limits to them. Here the Xbox and two tablets can't use the internet until 9am and they turn off at bedtime.

Notice the pause buttons as well. I can temporarily pause internet on any one device (or group of devices) whenever.


When you're setting up the network and positioning the mesh points you can see near-realtime signals updates in the app.

100% signal on this Mesh Point
72% signal on this Mesh Point

And once it's all done, you can impose a basic QoS (Quality of Service) on individual devices by telling the AmpliFi what they are used for. Here I've setup a device for multi-player gaming, while some iPads are used mostly for streaming.

Setting up Streaming in AmpliFi
New Updates are available

Setup is a snap. It took longer to go to each device and connect them to the new network than it did to set up the network. I suppose I could have kept the same SSID and password as the old network but I wanted a fresh start and easier A/B testing.

So far I have been 100% thrilled with the AmpliFi HD. It's important to point out again that AmpliFi is the consumer arm of Ubiquiti (UBNT) and that a dozen programmer/techie-types on Twitter insisted that I needed these Enterprise/Commercial Access Points. I get it. They are more advanced, fancier, offer more stats and more control. But honestly, my house isn't that big, the data I'm pushing around isn't that complex, and I don't want a Commercial Level of control. I was (and am) thoroughly impressed with the consumer stuff. The app is excellent and improving. The coverage is complete and fast. The AmpliFi is rated at 450 Mbps for 2.4 GHz and 1.3 Gbps for 5 GHz. Even if I upgrade my internet to my locality's max of 150 Mbps (I only pay for 35 Mbps today) I'm not anywhere near that limit externally, and I'm not doing anything close internally.

That said, here's some things I'd like in future updates:

  • Simpler port-forwarding with common rules. "This xbox/that service"
  • An open source VPN server. I'd like to VPN directly into the Ubiquiti, rather than into my Synology.
  • More quality of service/prioritization details. "The office server always has preferred packets, period"
  • Mobile alerts - I'd like to know if I go over x bandwidth, or if we are streaming at x Mbs for y hours.
  • A fully featured administration web console.

And yes, I realize NOW I should have called the Network "Hanselmesh." Missed opportunity.

I highly recommend the AmpliFi HD. I frankly have no complaints other than my small wish list above. Buy one via my Amazon referral links so I can keep blogging in my spare time AND buy tacos. Your use of these links gives me walking around money. Thanks for reading!




Porting a 15 year old .NET 1.1 Virtual CPU Tiny Operating System school project to .NET Core 2.0

July 2, '17 Comments [10] Posted in DotNetCore | Learning .NET

The 2002 TinyOS in C# is now on .NET Core in 2017 running on Ubuntu

I've had a number of great guests on the podcast lately. One topic that has come up a number of times is the "toy project." I've usually kept mine private - never putting them on GitHub - somewhat concerned that people would judge me and my code. However, hypocrite that I am (aren't we all?), I have advocated that others put their "Garage Sale Code" online. So here's some crappy code. ;)

The Preamble

While I've been working as an engineer for 25 years this year, I didn't graduate from school with a 4 year degree until 2003 - I just needed to get it done, for myself. I was poking around recently and found my project from OIT's CST352 "Operating Systems" class. One of the projects was to create a "Virtual CPU and OS." This is kind of a thought exercise. It's not really a parser/lexer - although it has both - and it's not a real OS. But it needs to be able to take in a made-up quasi-assembly-language instruction set and execute it on a virtual CPU while managing virtual memory of arbitrary size. Again, a thought exercise made real to confirm that the student understands the responsibilities of a CPU.

Here's an example "application." Confused yet? Here's the original spec I was given in 2002 that includes the 36 instructions the "CPU" should understand. It has 10 general-purpose 32bit registers address as 1 through 10. Register 10 is the stack pointer. There are two bit flag registers - sign flag and zero flag.

Instructions are "opcode arg1 arg2" with constants prefixed with "$."

11 r8        ;Print r8
6 r1 $10 ;Move 10 into r1
6 r2 $6 ;Move 6 into r2
6 r3 $25 ;Move 25 into r3
23 r1 ;Acquire lock in r1 (currently 10)
11 r3 ;Print r3 (currently 25)
24 r1 ;Release r1 (currently 10)
25 r3 ;Sleep r3 (currently 25)
11 r3 ;Print r3 (currently 25)
27 ;Exit
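
Parsing a line in that format is straightforward. Here's a minimal, purely illustrative sketch of decoding one instruction - not the actual parser from the 2002 project:

// Illustrative only: decode one "opcode arg1 arg2" line with a trailing comment.
using System;

class InstructionDemo
{
    static void Main()
    {
        string line = "6 r1 $10 ;Move 10 into r1";

        // Strip the comment, then split on whitespace.
        string code = line.Split(';')[0].Trim();
        string[] parts = code.Split(new[] { ' ', '\t' }, StringSplitOptions.RemoveEmptyEntries);

        int opcode = int.Parse(parts[0]);                   // 6 = Move
        string destRegister = parts[1];                     // "r1"
        int constant = int.Parse(parts[2].TrimStart('$'));  // "$10" -> 10

        Console.WriteLine($"opcode={opcode} dest={destRegister} value={constant}");
    }
}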

I wrote my homework assignment in 2002 in the idiomatic C# of the time on .NET 1.1. That means no Generics<T> - I had to make my own strongly typed collections. Since then, C# has gained dozens of (if not a hundred) language and syntax improvements. I didn't use a Unit Testing Framework as TDD was just starting around 1999 during the XP (eXtreme Programming) days and NUnit was just getting started. It also uses "unsafe" to pin down memory in a few places. I'm sure there are WAY WAY WAY better and more sophisticated ways to do this today in idiomatic C# of 2017. Those are excuses; the real reasons are my own ignorance and ability, combined with some night-school laziness.

One of the more fun parts of this exercise was moving from physical memory (a byte array as I recall) to a full-on Memory Manager where each Process thought it could address a whole bunch of Virtual Memory while actual Physical Memory was arbitrarily sized. Then - as a joke - I would swap out memory pages as XML! ;) Yes, to be clear, it was a joke and I still love it.

You can run an "app" by passing in the total physical memory along with the text file containing the program, but you can also run an arbitrary number of programs by passing in an arbitrary number  of text files! The "TinyOS" will handle each process thinking it has its own memory and will time

If you are more of a visual learner, perhaps you'd prefer this 20-slide PowerPoint on this Tiny CPU that I presented in Malaysia later that year. You dig those early 2000-era slides? I KNOW YOU DO.

Tiny OS Memory Slides

Updating a .NET 1.1 app to cross-platform .NET Core 2.0

Step 1 was to download the original code from my own blog. ;) This is also Reason #4134 why you should have a blog.

I decided to use Visual Studio 2017 to upgrade it, and even worse I decided to use .NET Core 2.0 which is currently in Preview. I wanted to use .NET Core 2.0 not just because it's cross-platform but also because it promises to have a pretty large API surface area and I want this to "just work." The part about getting my old application running on Linux is going to be awesome, though.

Visual Studio then pops a scary dialog about upgrading files. NOTE that another totally valid way to do this (that I will end up doing later in this blog post) is to just make a new project and move the source files into it. Natch.


Visual Studio says it's targeting .NET 2.0 Full Framework, but I ratchet it up to 4.6 to see what happens. It builds, but with a bunch of warnings about obsolete methods, the most interesting one being this one:

Warning CS0618    
'ConfigurationSettings.AppSettings' is obsolete:
'This method is obsolete, it has been replaced by
System.Configuration!System.Configuration.ConfigurationManager.AppSettings'
C:\Users\scott\Downloads\TinyOSOLDOLD\OS Project\CPU.cs 72

That's telling me that my .NET 1/2 API will work but has been replaced in .NET 4.x, but I'm more interested in .NET Core 2.0. I could make my EXE a LIB and target .NET Standard 2.0 or I could make a .NET Core 2.0 app and perhaps get a few more APIs. I didn't do a formal analysis with the .NET Portability Analyzer but I will add that to the list of Things To Do. I may be able to make a library that works on an iPhone - a product that didn't exist when I started this assignment. That would be Just Cool(tm).

I decided to just make a new empty .NET Core 2.0 app and copy the source .cs files into it. A few interesting things.

  • My app also used "unsafe" code (it pins memory down and accesses it directly).
  • It has extensive inline documentation in comments that I used to use NDoc to make a CHM Help file. I'd like that doc to turn into HTML at some point.
  • It also has an appsettings.json file that needs to get copied to the output folder when it compiles.
  • While I could publish it to a self-contained .NET Core exe, for now I'm running it like this in my test batch files - example:
    • dotnet netcoreapp2.0/TinyOSCore.dll 512 scott13.txt

Here's the resulting csproj file.

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp2.0</TargetFramework>
    <GenerateDocumentationFile>true</GenerateDocumentationFile>
  </PropertyGroup>

  <PropertyGroup>
    <AllowUnsafeBlocks>true</AllowUnsafeBlocks>
  </PropertyGroup>

  <ItemGroup>
    <None Remove="appsettings.json" />
  </ItemGroup>

  <ItemGroup>
    <Content Include="appsettings.json">
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
    </Content>
  </ItemGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.Extensions.Configuration" Version="2.0.0-preview2-final" />
    <PackageReference Include="Microsoft.Extensions.Configuration.Json" Version="2.0.0-preview2-final" />
    <PackageReference Include="Microsoft.Extensions.DependencyInjection" Version="2.0.0-preview2-final" />
    <PackageReference Include="Microsoft.Extensions.Options.ConfigurationExtensions" Version="2.0.0-preview2-final" />
  </ItemGroup>

</Project>

Other than the obsolete configuration warning and a few malformed XML comments, the app compiled and ran! You can actually "watch" the nightmare process here https://github.com/shanselman/TinyOS/commits/Core2Port in the form of GitHub commits. I also moved the docs from a 2002 Word Doc to Markdown so be sure to explore the fairly extensive spec https://github.com/shanselman/TinyOS.

The only significant change was loading the config. Configuration on .NET Core 2.0 is quite different from the Full Framework. It's FAR more, ahem, configurable. I could have used "Options," or I could have written my own config provider if it was important to keep the file format.

This little TinyOS has a bunch of config options that come in from a .exe.config file in XML like this (truncated):

<configuration>
  <appSettings>
    <!--
    Must be a factor of 4
    This is the total Physical Memory in bytes that the CPU can address.
    This should not be confused with the amount of total or addressable memory
    that is passed in on the command line.
    -->
    <add key="PhysicalMemory" value="128" />
    <!--
    Must be a factor of 4
    This is the ammount of memory in bytes each process is allocated
    Therefore, if this is 256 and you want to load 4 processes into the OS,
    you'll need to pass a number > 1024 as the total ammount of addressable memory
    on the command line.
    -->
    <add key="ProcessMemory" value="384" />
    <add key="DumpPhysicalMemory" value="true" />
    <add key="DumpInstruction" value="true" />
    <add key="DumpRegisters" value="true" />
    <add key="DumpProgram" value="true" />
    <add key="DumpContextSwitch" value="true" />
    <add key="PauseOnExit" value="false" />

I have a few choices. I could make a Configuration Provider and teach .NET Core to read this format (there's an XML adapter, in fact) or make the code porting easier by moving these "name/value" pairs to a JSON file like this:

{
  "PhysicalMemory": "128",
  "ProcessMemory": "384",
  "DumpPhysicalMemory": "true",
  "DumpInstruction": "true",
  "DumpRegisters": "true",
  "DumpProgram": "true",
  "DumpContextSwitch": "true",
  "PauseOnExit": "false",
  "SharedMemoryRegionSize": "16",
  "NumOfSharedMemoryRegions": "4",
  "MemoryPageSize": "16",
  "StackSize": "16",
  "DataSize": "16"
}

This was just a few minutes of search and replace to change the XML to JSON. I could have also written a little app or shell script. By changing the config (rather than writing an adapter) I could then keep the code 99% the same.

My code was doing things like this (all over...there was no DI container yet):

bytesOfPhysicalMemory = uint.Parse(ConfigurationSettings.AppSettings["PhysicalMemory"]);

And I'd like to avoid major refactoring - yet. I added this bit of .NET Core configuration at the top of the EntryPoint and saved away an IConfigurationRoot:

var builder = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json");
Configuration = builder.Build();

That gives me a dictionary-style IConfiguration object called "Configuration." So now I just do this in a dozen places and the app compiles again:

bytesOfPhysicalMemory = uint.Parse(Configuration["PhysicalMemory"]);
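
If I ever do refactor, the "Options" route mentioned above would bind these settings onto a typed class instead of indexing into the configuration everywhere. Here's a sketch of what that might look like - the class and property names are hypothetical, and it's not what the port does today:

// Hypothetical sketch of the Options-style alternative; not what the port does.
public class TinyOSSettings
{
    public uint PhysicalMemory { get; set; }
    public uint ProcessMemory { get; set; }
    public bool DumpRegisters { get; set; }
    // ...one property per key in appsettings.json
}

// Then, once at the top of the EntryPoint (Bind comes from the
// Microsoft.Extensions.Configuration.Binder package):
var settings = new TinyOSSettings();
Configuration.Bind(settings);
bytesOfPhysicalMemory = settings.PhysicalMemory;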

This brings up that feeling we all have when we look at old code - especially our own old code. I should have abstracted that away! Why didn't I use an interface? Why so many statics? What was I thinking?

We can beat ourselves up or we can feel good about ourselves and remember this. The app worked. It still works. There is value in it. I learned a lot. I'm a better programmer now. I don't know how far I'll take this old code but I had a lovely afternoon porting it to .NET Core 2.0 and I may refactor the heck out of it or I may not.

TinyOS on Ubuntu

For now I did update the smoke tests to run on both Windows and Linux and I'm happy with the experiment.


Have YOU done a project like this, either in school or on your own?



About Scott

Scott Hanselman is a former professor, former Chief Architect in finance, now speaker, consultant, father, diabetic, and Microsoft employee. He is a failed stand-up comic, a cornrower, and a book author.


Disclaimer: The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.