Scott Hanselman

Blocking ads before they enter your house at the DNS level with pi-hole and a cheap Raspberry Pi

April 11, '19 Comments [15] Posted in Hardware | Open Source

Lots of folks ask me about Raspberry Pis - how many I have and what I use them for. At last count there are at least 22 Raspberry Pis in use in our house.

A Pi-hole is a Raspberry Pi appliance that takes the form of a DNS blocker at the network level. You image a Pi, set up your network to use that Pi as its DNS server, and maybe whitelist a few sites when things don't work.

I was initially skeptical, but I'm giving it a try. It doesn't process all your network traffic; it's a DNS hop on the way out that intercepts DNS requests for known problematic sites and serves back nothing.
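
If you want to sanity-check that the blocking is actually happening from any machine on your network, a plain DNS lookup will tell you. Here's a quick C# sketch of my own - the domain is just an example of a commonly blocked ad host, and blocked names typically resolve to 0.0.0.0 (or the Pi-hole's own address, depending on its blocking mode):

using System;
using System.Net;
using System.Threading.Tasks;

class PiHoleCheck
{
    static async Task Main()
    {
        // Uses the system resolver, so this reflects whatever DNS server your network hands out
        var addresses = await Dns.GetHostAddressesAsync("doubleclick.net");

        foreach (var address in addresses)
        {
            // 0.0.0.0 (or the Pi-hole's IP) means the lookup was sinkholed
            Console.WriteLine(address);
        }
    }
}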

Installation is trivial if you just run unread and untrusted code from the 'net ;)

curl -sSL https://install.pi-hole.net | bash

Otherwise, follow their instructions and download the installer, study it, and run it.

I put my Pi-hole installation on the metal, but there's also a very nice Docker Pi-hole setup if you prefer that. You can even go further if, like me, you have a Synology NAS, which can also run Docker, which can in turn run a Pi-hole.

Within the admin interface you can tail the logs for the entire network, which is also amazing to see. You think you know what's talking to the internet from your house - you don't. Everything is logged and listed. After installing the Pi-hole roughly 18% of the DNS queries heading out of my house were blocked. At one point over 23% were blocked. Oy.

NOTE: If you're using an Amplifi HD or any "clever" router, you'll want to change the "Bypass DNS cache" setting; otherwise the Amplifi will remain the DNS server of choice on your network. Leaving it unchanged also confuses the Pi-hole's stats and you'll end up with just one "client" of the Pi-hole - the router itself.

For me it's less about advertising - especially on small blogs or news sites I want to support - and more about obnoxious tracking cookies and JavaScript. I'm going to keep using Pi-hole for a few months and see how it goes. Do be aware that some things WILL break. It could be a kid's iPhone free-to-play game that won't work unless it can download an ad, or it could be your company's VPN. You'll need to log into http://pi.hole/admin (make sure you save your password when you first install; you can only change it later at the SSH command line with "pihole -a -p") and sometimes disable blocking for a few minutes to test, then whitelist certain domains. I suspect after a few weeks I'll have it nicely dialed in.


Sponsor: Seq delivers the diagnostics, dashboarding, and alerting capabilities needed by modern development teams - all on your infrastructure. Download at https://datalust.co/seq.


Displaying your realtime Blood Glucose from NightScout on an AdaFruit PyPortal

March 29, '19 Comments [8] Posted in Hardware | Open Source

AdaFruit makes an adorable tiny little Circuit Python IoT device called the PyPortal that's just about perfect for the kids - and me. It's a little dakBoard, if you will - a tiny, totally programmable display with Wi-Fi and lots of possibilities and sensors. Even better, you can just plug it in over USB and edit the code.py file directly on the drive that will appear. When you save code.py, the device soft reboots and runs your code.

I've been using Visual Studio Code to program Circuit Python and it's become my most favorite IoT experience so far because it's just so easy. The "Developer's Inner Loop" of code, deploy, debug is so fast.

As you may know, I use a Dexcom CGM (Continuous Glucose Meter) to manage my Type 1 Diabetes. I feed the data every 5 minutes into an instance of the Nightscout Open Source software hosted in Azure. That gives me a REST API to my own body.

I use that REST API to make "glanceable displays" where I - or my family - can see my blood sugar quickly and easily.
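
If you'd rather hit that same endpoint from .NET, here's a minimal C# sketch of mine for the call the PyPortal code below makes - grab the latest entry and pull out the sgv (glucose value) and direction fields. The NIGHTSCOUTWEBSITE placeholder is the same one used in the code below:

using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

class NightscoutCheck
{
    static async Task Main()
    {
        using var http = new HttpClient();
        // count=1 returns a one-element array with the most recent reading
        var json = await http.GetStringAsync("https://NIGHTSCOUTWEBSITE/api/v1/entries.json?count=1");

        using var doc = JsonDocument.Parse(json);
        var entry = doc.RootElement[0];

        var sgv = entry.GetProperty("sgv").GetInt32();               // blood glucose in mg/dl
        var direction = entry.GetProperty("direction").GetString();  // e.g. "Flat", "SingleUp"

        Console.WriteLine($"{sgv} mg/dl, trending {direction}");
    }
}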

I've already put my blood sugar on a bunch of glanceable displays, and today it's on a tiny PyPortal device too. The code is simple - noting that I don't speak Python, so Pull Requests are always appreciated.

import time
import board
from adafruit_pyportal import PyPortal

# Set up where we'll be fetching data from
DATA_SOURCE = "https://NIGHTSCOUTWEBSITE/api/v1/entries.json?count=1"
BG_VALUE = [0, 'sgv']
BG_DIRECTION = [0, 'direction']

RED = 0xFF0000
ORANGE = 0xFFA500
YELLOW = 0xFFFF00
GREEN = 0x00FF00

def get_bg_color(val):
    if val > 200:
        return RED
    elif val > 150:
        return YELLOW
    elif val < 60:
        return RED
    elif val < 80:
        return ORANGE
    return GREEN

def text_transform_bg(val):
    return str(val) + ' mg/dl'

def text_transform_direction(val):
    if val == "Flat":
        return "→"
    if val == "SingleUp":
        return "↑"
    if val == "DoubleUp":
        return "↑↑"
    if val == "DoubleDown":
        return "↓↓"
    if val == "SingleDown":
        return "↓"
    if val == "FortyFiveDown":
        return "→↓"
    if val == "FortyFiveUp":
        return "→↑"
    return val

# the current working directory (where this file is)
cwd = ("/"+__file__).rsplit('/', 1)[0]
pyportal = PyPortal(url=DATA_SOURCE,
                    json_path=(BG_VALUE, BG_DIRECTION),
                    status_neopixel=board.NEOPIXEL,
                    default_bg=0xFFFFFF,
                    text_font=cwd+"/fonts/Arial-Bold-24-Complete.bdf",
                    text_position=((90, 120),   # VALUE location
                                   (140, 160)), # DIRECTION location
                    text_color=(0x000000,       # sugar text color
                                0x000000),      # direction text color
                    text_wrap=(35,              # characters to wrap for sugar
                               0),              # no wrap for direction
                    text_maxlen=(180, 30),      # max text size for sugar & direction
                    text_transform=(text_transform_bg, text_transform_direction),
                    )

# speed up projects with lots of text by preloading the font!
pyportal.preload_font(b'mg/dl012345789')
pyportal.preload_font((0x2191, 0x2192, 0x2193))
#pyportal.preload_font()

while True:
    try:
        value = pyportal.fetch()
        pyportal.set_background(get_bg_color(value[0]))
        print("Response is", value)
    except RuntimeError as e:
        print("Some error occured, retrying! -", e)

    time.sleep(180)

I've put the code up at https://github.com/shanselman/NightscoutPyPortal. I want to get (make a custom?) a larger BDF (Bitmap Font) that is about twice the size AND includes 45 degree arrows ↗ and ↘ as the font I have is just 24 point and only includes arrows at 90 degrees. Still, great fun and took just an hour!

NOTE: I used the Chortkeh BDF Font viewer to look at the Bitmap Fonts on Windows. I still need to find a larger 48+ PT Arial.

What information would YOU display on a PyPortal?


Sponsor: Manage GitHub Pull Requests right from the IDE with the latest JetBrains Rider. An integrated performance profiler on Windows comes to the rescue as well.


What is Blazor and what is Razor Components?

March 19, '19 Comments [31] Posted in ASP.NET | Javascript | Open Source

I've blogged a little about Blazor, showing examples like Compiling C# to WASM with Mono and Blazor then Debugging .NET Source with Remote Debugging in Chrome DevTools as well as very early on asking questions like .NET and WebAssembly - Is this the future of the front-end?

Let's back up and level-set.

What is Blazor?

Blazor is a single-page app framework for building interactive client-side Web apps with .NET. Blazor uses open web standards without plugins or code transpilation. Blazor works in all modern web browsers, including mobile browsers.

You write C# in place of JavaScript, and you can use most of the .NET ecosystem of open source libraries. For the most part, if it's .NET Standard, it'll run in the browser. (Of course, if you call a Windows API or a Linux-specific API and it doesn't exist in the client-side browser world, it's not gonna work, but you get the idea.)

The .NET code runs inside the context of WebAssembly. You're running "a .NET" inside your browser on the client-side with no plugins, no Silverlight, Java, Flash, just open web standards.

WebAssembly is a compact bytecode format optimized for fast download and maximum execution speed.

Here's a great diagram from the Blazor docs.

Blazor runs inside your browser, no plugins needed

Here's where it could get a little confusing. Blazor is the client-side hosting model for Razor Components. I can write Razor Components. I can host them on the server or host them on the client with Blazor.

You may have written Razor in the past in .cshtml files, or more recently in .razor files. You can create and share components using Razor - which is a mix of standard C# and standard HTML, and you can host these Razor Components on either the client or the server.

In this diagram from the docs you can see that the Razor Components are running on the server, and SignalR (over WebSockets, etc.) is remoting them and updating the DOM on the client. This doesn't require WebAssembly on the client; the .NET code runs in the .NET Core CLR (Common Language Runtime) and has full compatibility - you can do anything you'd like, as you are no longer limited by the browser's sandbox.

Here's Razor Components running on the server

Per the docs:

Razor Components decouples component rendering logic from how UI updates are applied. ASP.NET Core Razor Components in .NET Core 3.0 adds support for hosting Razor Components on the server in an ASP.NET Core app. UI updates are handled over a SignalR connection.

Here's the canonical "click a button update some HTML" example.

@page "/counter"

<h1>Counter</h1>

<p>Current count: @currentCount</p>

<button class="btn btn-primary" onclick="@IncrementCount">Click me</button>

@functions {
    int currentCount = 0;

    void IncrementCount()
    {
        currentCount++;
    }
}

You can see this running entirely in the browser, with the C# .NET code running on the client side. .NET DLLs (assemblies) are downloaded and executed by the CLR that's been compiled into WASM and running entirely in the context of the browser.

Note also that I'm stopped at a BREAKPOINT in C# code, except the code is running in the browser and mapped back into JS/WASM world.

Debugging Razor Components on the Client Side

But if I host my app on the server as hosted Razor Components, the C# code runs entirely on the server side and the client-side DOM is updated over a SignalR link. Here I've clicked the button on the client side and hit the breakpoint on the server side in Visual Studio. No, there's no POST and no POST-back. This isn't WebForms - it's Razor Components. It's a SPA app written in C#, not JavaScript, and I can change where the logic runs while the UI remains standard HTML and CSS.

Debugging Razor Components on the Server Side
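
For reference, here's roughly what wiring up the server-side hosting model looks like in an ASP.NET Core 3.0 Startup class. The exact method names moved around during the previews (this is the shape that eventually shipped as AddServerSideBlazor/MapBlazorHub), so treat it as a sketch rather than the exact preview bits:

using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddRazorPages();
        // Registers the services that run Razor Components on the server
        services.AddServerSideBlazor();
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseStaticFiles();
        app.UseRouting();
        app.UseEndpoints(endpoints =>
        {
            // The SignalR hub that pushes UI updates down to the browser
            endpoints.MapBlazorHub();
            endpoints.MapFallbackToPage("/_Host");
        });
    }
}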

It's a pretty exciting time on the open web. There's a lot of great work happening in this space and I'm very interested to see how frameworks like Razor Components/Blazor and Phoenix LiveView change (or don't change) how we write apps for the web.


Sponsor: Manage GitHub Pull Requests right from the IDE with the latest JetBrains Rider. An integrated performance profiler on Windows comes to the rescue as well.


Converting an Excel Worksheet into a JSON document with C# and .NET Core and ExcelDataReader

March 6, '19 Comments [22] Posted in DotNetCore | Open Source

Excel isn't a database, except when it is

I've been working on a little idea where I'd have an app (maybe a mobile app with Xamarin or maybe a SPA, I haven't decided yet) for easily accessing and searching across the 500+ videos from https://azure.microsoft.com/en-us/resources/videos/azure-friday/

HOWEVER. I don't have access to the database that hosts the metadata and while I'm trying to get at least read-only access to it (long story) the best I can do is a giant Excel spreadsheet dump that I was given that has all the video details.

This, of course, is sub-optimal, but regardless of how you feel about it, it's a database. Or, a data source at the very least! Additionally, since it was always going to end up as JSON in a cached in-memory database regardless, it doesn't matter much to me.

In real-world business scenarios, sometimes the authoritative source is an Excel sheet, sometimes it's a SQL database, and sometimes it's a flat file. Who knows?

What's most important (after clean data) is that the process one builds around that authoritative source is reliable and repeatable. For example, if I want to build a little app or one-page website, yes, ideally I'd have a direct connection to the SQL back end. Other alternative sources could be a JSON file sitting on a simple storage endpoint accessible with a single HTTP GET. If the Excel sheet is on OneDrive/SharePoint/DropBox/whatever, I could have a small serverless function run when the file changes (or on a daily schedule) that would convert the Excel sheet into a JSON file and drop it onto storage. Hopefully you get the idea. The goal here is clean, reliable pragmatism. I'll deal with the larger business process issue and/or system architecture and/or permissions issue later. For now the "interface" for my app is JSON.

So I need some JSON and I have this Excel sheet.

Turns out there's a lovely open source project and NuGet package called ExcelDataReader. There have been ways to get data out of Excel for decades. Literally decades. One of my first jobs was automating Microsoft Excel with Visual Basic 3.0 via COM Automation. I even blogged about getting data out of Excel into ASP.NET 16 years ago!

Today I'll use ExcelDataReader. It's really nice and it took less than an hour to get exactly what I wanted. I haven't gone and made it super clean and generic or refactored out a bunch of helper functions, so I'm interested in your thoughts. After I get this tight and reliable I'll drop it into an Azure Function and then focus on getting the JSON directly from the source.

A few gotchas that surprised me. I got a "System.NotSupportedException: No data is available for encoding 1252." Windows-1252 or CP-1252 (code page) is an old school text encoding (it's effectively ISO 8859-1). Turns out newer .NETs like .NET Core need the System.Text.Encoding.CodePages package as well as a call to System.Text.Encoding.RegisterProvider(System.Text.CodePagesEncodingProvider.Instance); to set it up for success. Also, that extra call to reader.Read at the start to skip over the Title row had me pause a moment.

using System;
using System.IO;
using ExcelDataReader;
using System.Text;
using Newtonsoft.Json;

namespace AzureFridayToJson
{
    class Program
    {
        static void Main(string[] args)
        {
            Encoding.RegisterProvider(CodePagesEncodingProvider.Instance);

            var inFilePath = args[0];
            var outFilePath = args[1];

            using (var inFile = File.Open(inFilePath, FileMode.Open, FileAccess.Read))
            using (var outFile = File.CreateText(outFilePath))
            {
                using (var reader = ExcelReaderFactory.CreateReader(inFile, new ExcelReaderConfiguration()
                    { FallbackEncoding = Encoding.GetEncoding(1252) }))
                using (var writer = new JsonTextWriter(outFile))
                {
                    writer.Formatting = Formatting.Indented; //I likes it tidy
                    writer.WriteStartArray();
                    reader.Read(); //SKIP FIRST ROW, it's TITLES.
                    do
                    {
                        while (reader.Read())
                        {
                            //peek ahead? Bail before we start anything so we don't get an empty object
                            var status = reader.GetString(0);
                            if (string.IsNullOrEmpty(status)) break;

                            writer.WriteStartObject();
                            writer.WritePropertyName("Status");
                            writer.WriteValue(status);

                            writer.WritePropertyName("Title");
                            writer.WriteValue(reader.GetString(1));

                            writer.WritePropertyName("Host");
                            writer.WriteValue(reader.GetString(6));

                            writer.WritePropertyName("Guest");
                            writer.WriteValue(reader.GetString(7));

                            writer.WritePropertyName("Episode");
                            writer.WriteValue(Convert.ToInt32(reader.GetDouble(2)));

                            writer.WritePropertyName("Live");
                            writer.WriteValue(reader.GetDateTime(5));

                            writer.WritePropertyName("Url");
                            writer.WriteValue(reader.GetString(11));

                            writer.WritePropertyName("EmbedUrl");
                            writer.WriteValue($"{reader.GetString(11)}player");
                            /*
                              <iframe src="https://channel9.msdn.com/Shows/Azure-Friday/Erich-Gamma-introduces-us-to-Visual-Studio-Online-integrated-with-the-Windows-Azure-Portal-Part-1/player" width="960" height="540" allowFullScreen frameBorder="0"></iframe>
                            */

                            writer.WriteEndObject();
                        }
                    } while (reader.NextResult());
                    writer.WriteEndArray();
                }
            }
        }
    }
}

The first pass is on GitHub at https://github.com/shanselman/AzureFridayToJson and the resulting JSON looks like this:

[
  {
    "Status": "Live",
    "Title": "Introduction to Azure Integration Service Environment for Logic Apps",
    "Host": "Scott Hanselman",
    "Guest": "Kevin Lam",
    "Episode": 528,
    "Live": "2019-02-26T00:00:00",
    "Url": "https://azure.microsoft.com/en-us/resources/videos/azure-friday-introduction-to-azure-integration-service-environment-for-logic-apps",
    "EmbedUrl": "https://azure.microsoft.com/en-us/resources/videos/azure-friday-introduction-to-azure-integration-service-environment-for-logic-appsplayer"
  },
  {
    "Status": "Live",
    "Title": "An overview of Azure Integration Services",
    "Host": "Lara Rubbelke",
    "Guest": "Matthew Farmer",
    "Episode": 527,
    "Live": "2019-02-22T00:00:00",
    "Url": "https://azure.microsoft.com/en-us/resources/videos/azure-friday-an-overview-of-azure-integration-services",
    "EmbedUrl": "https://azure.microsoft.com/en-us/resources/videos/azure-friday-an-overview-of-azure-integration-servicesplayer"
  },
  ...SNIP...

Thoughts? There's a dozen ways to have done this. How would you do this? Dump it into a DataSet and serialize objects to JSON, make an array and do the same, automate Excel itself (please don't do this), and on and on.
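
For what it's worth, the DataSet route is only a few lines if you pull in the ExcelDataReader.DataSet companion package alongside Json.NET. Here's a rough sketch of my own - it loads the whole sheet into memory and skips the column mapping the code above does:

using System.IO;
using System.Text;
using ExcelDataReader;
using Newtonsoft.Json;

class Program
{
    static void Main(string[] args)
    {
        Encoding.RegisterProvider(CodePagesEncodingProvider.Instance);

        using (var inFile = File.Open(args[0], FileMode.Open, FileAccess.Read))
        using (var reader = ExcelReaderFactory.CreateReader(inFile))
        {
            // AsDataSet comes from the ExcelDataReader.DataSet package;
            // UseHeaderRow treats the first row as column names instead of data
            var dataSet = reader.AsDataSet(new ExcelDataSetConfiguration
            {
                ConfigureDataTable = _ => new ExcelDataTableConfiguration { UseHeaderRow = true }
            });

            File.WriteAllText(args[1], JsonConvert.SerializeObject(dataSet.Tables[0], Formatting.Indented));
        }
    }
}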

Certainly this would be easier if I could get a CSV file or something from the business person, but the issue is that I'm regularly getting new drops of this same sheet with new records added. Getting the suit to Save As | CSV reliably and regularly isn't sustainable.


Sponsor: Stop wasting time trying to track down the cause of bugs. Sentry.io provides full stack error tracking that lets you monitor and fix problems in real time. If you can program it, we can make it far easier to fix any errors you encounter with it.


Learning about .NET Core futures by poking around at David Fowler's GitHub

February 22, '19 Comments [9] Posted in ASP.NET | DotNetCore | Open Source

A picture of David silently judging my code, but with love

David Fowler is the ASP.NET Core Architect (and an amazing, highly technical public speaker) and I've learned a lot from watching him code. However, what's the best way for YOU to learn from folks like David if you can't sit on their shoulder? Why, look at their GitHub!

Since .NET Core (and most of Microsoft) is not only open source but also developed in the open now on GitHub, we can actually watch folks in their day to day work as they commit code to projects like the C# compiler, .NET Core, and ASP.NET Core.

Even more interestingly, we can look at David's GitHub here https://github.com/davidfowl and then, under Repositories, see what he's up to, filter by language and type, and explore! Sometimes I just explore the Pull Requests on projects like ASP.NET Core.

You can have Private repositories on GitHub, as I do, and as I'm sure David does. But GitHub is a social network for code, and it's more fun and a better learning experience when we can see each other's code and read it. Read with a critical eye, but without judgment, as you may not have all the context that the author does. If you went to my GitHub, https://github.com/shanselman, you might be disappointed, but you also may be missing the big picture. Just consider that as you Follow people and explore their code.

David is an advanced .NET developer, while, for example, I am comparatively intermediate. So I realize that not all of David's code is FOR me. It's a scratchpad, not educational how-to workshops. However, I can pick up cool idioms, interesting directions the tech may be going, and more importantly - prototypes and spikes. Spikes are folks testing out technical ideas. They may not be complete. In fact, they may never be complete. But some may be harbingers of things to come.

Here's a few things I learned today.

gRPC for .NET Core

For example, at https://github.com/davidfowl/grpc-dotnet I can see David has forked (copied) gRPC for dotnet, and his goal is working with the gRPC folks to make a fully supported version of gRPC for production workloads with .NET Core! Here are the stated goals:

  • We plan to implement a fully-managed version of gRPC for .NET that will be built on top of ASP.NET Core HTTP/2 server.
  • Good integration with the rest of ASP.NET Core ecosystem
  • High-performance (we plan to utilize some of the cutting edge performance features from ASP.NET Core and in the .NET platform itself)

That sounds cool! I can go learn that gRPC is a modern (Google-sponsored) Remote Procedure Call framework that can run anywhere. It's used by Netflix and Square and supports basically any language and any environment. Nice for this microservice world we are entering, and hopefully it has learned from the sins of DCOM and CORBA and RMI, because I was there and it sucked.
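
To make that concrete, here's roughly the shape this took once Grpc.AspNetCore eventually shipped - a service class derives from a base generated out of your .proto file (Greeter here is the stock template example, not anything from David's repo), and it plugs into the regular ASP.NET Core pipeline:

using System.Threading.Tasks;
using Grpc.Core;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public class GreeterService : Greeter.GreeterBase
{
    // Greeter, HelloRequest and HelloReply are generated from the .proto file by the gRPC tooling
    public override Task<HelloReply> SayHello(HelloRequest request, ServerCallContext context)
    {
        return Task.FromResult(new HelloReply { Message = "Hello " + request.Name });
    }
}

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddGrpc();
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseRouting();
        app.UseEndpoints(endpoints =>
        {
            // gRPC calls ride over HTTP/2 on the same Kestrel server
            endpoints.MapGrpcService<GreeterService>();
        });
    }
}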

Nothing to see here but moving to a new JSON serializer

This Web.Framework sounds fun, and I'll be sure to take the description to heart.

says "Lame name, just a prototype, nothing to see here (move along)"

You can see David and James Newton-King kicking ideas around as you explore the commit log. However, the most interesting commit IMHO is when David moves this little spike from using JSON.NET (the ubiquitous 3rd party JSON serializer) to the new emerging official System.Text.Json. Here is the commit with unified differences.

It's a small change but it also makes me feel good about the API underneath this new JSON API that's coming. My takeaway is that it's not as scary as I'd assumed. Looks like a Good Thing(tm).

A diff of code shows that just one line is changed to move JSON serializers

 

Cool!
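
If you haven't poked at System.Text.Json yet, the swap really is that mechanical for simple cases. Here's a side-by-side sketch with a toy type of my own, using the API names as they eventually shipped (the spike predates them):

using System;
using Newtonsoft.Json;

public class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}

class Program
{
    static void Main()
    {
        var person = new Person { Name = "David", Age = 42 };

        // The ubiquitous third-party serializer
        Console.WriteLine(JsonConvert.SerializeObject(person));

        // The new built-in serializer - same idea, different namespace
        Console.WriteLine(System.Text.Json.JsonSerializer.Serialize(person));
    }
}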

Multi-Protocol ASP.NET Core

This looks interesting.

"The following sample shows how you can host a TCP server and HTTP server in the same ASP.NET Core application. Under the covers, it's the same server (Kestrel) running different protocols on different ports. The ConnectionHandler is a new primitive introduced in ASP.NET Core 2.1 to support non-HTTP protocols."

I didn't know you could do that! Looks like this sample hasn't changed much since it was conceived of in 2018, but then in the last month it's been updated twice and it appears to be part of a larger, slow-moving architectural issue called Bedrock that's moving forward.

I learned that Kestrel (the ASP.NET Core web server) has a "ListenLocalhost" option on its options object!

WebHost.CreateDefaultBuilder(args)
    .ConfigureServices(services =>
    {
        // This shows how a custom framework could plug in an experience without using Kestrel APIs directly
        services.AddFramework(new IPEndPoint(IPAddress.Loopback, 8009));
    })
    .UseKestrel(options =>
    {
        // TCP 8007
        options.ListenLocalhost(8007, builder =>
        {
            builder.UseConnectionHandler<MyEchoConnectionHandler>();
        });

        // HTTP 5000
        options.ListenLocalhost(5000);

        // HTTPS 5001
        options.ListenLocalhost(5001, builder =>
        {
            builder.UseHttps();
        });
    })
    .UseStartup<Startup>();

I can see here that TCP port 8007 is custom and uses a custom ConnectionHandler, which I also didn't know existed! I can then look at the implementation of that handler and it's cool how clean the API is. You can get the result cleanly off the Transport buffer. You're doing low-level TCP but it doesn't feel low level.

using System.Threading.Tasks;
using Microsoft.AspNetCore.Connections;
using Microsoft.Extensions.Logging;

namespace KestrelTcpDemo
{
    public class MyEchoConnectionHandler : ConnectionHandler
    {
        private readonly ILogger<MyEchoConnectionHandler> _logger;

        public MyEchoConnectionHandler(ILogger<MyEchoConnectionHandler> logger)
        {
            _logger = logger;
        }

        public override async Task OnConnectedAsync(ConnectionContext connection)
        {
            _logger.LogInformation(connection.ConnectionId + " connected");

            while (true)
            {
                var result = await connection.Transport.Input.ReadAsync();
                var buffer = result.Buffer;

                foreach (var segment in buffer)
                {
                    await connection.Transport.Output.WriteAsync(segment);
                }

                if (result.IsCompleted)
                {
                    break;
                }

                connection.Transport.Input.AdvanceTo(buffer.End);
            }

            _logger.LogInformation(connection.ConnectionId + " disconnected");
        }
    }
}

Pretty slick. This just echoes what is sent to that port, but not only has it educated me about a thing I didn't know about, it's something I can mentally file away until I need it!
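
As a quick usage example, you can poke at that port with a plain TcpClient from another console app - whatever you send should come straight back (the port number just matches the sample above):

using System;
using System.Net.Sockets;
using System.Text;
using System.Threading.Tasks;

class EchoClient
{
    static async Task Main()
    {
        using var client = new TcpClient();
        await client.ConnectAsync("localhost", 8007);

        var stream = client.GetStream();
        var message = Encoding.UTF8.GetBytes("Hello, Kestrel!");
        await stream.WriteAsync(message, 0, message.Length);

        var buffer = new byte[message.Length];
        var read = await stream.ReadAsync(buffer, 0, buffer.Length);

        Console.WriteLine(Encoding.UTF8.GetString(buffer, 0, read)); // "Hello, Kestrel!"
    }
}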

All of these things I learned in just 30 minutes of exploring someone's public repository.

What kinds of code do you like to read and what have you learned from just poking around?


Sponsor: Get the latest JetBrains Rider for remote debugging via SSH, SQL injections, a new Search Everywhere popup, and improved Unity support.


Disclaimer: The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.