Sarah Mei had a great series of tweets last week. She's a founder of RailsBridge, a Director of Ruby Central, and the Chief Consultant of DevMynd, so she's experienced with work both "on the job" and "on the side." Like me, she organizes OSS projects and conferences, and, like me, she also has a life.
If you're reading this blog, it's likely that you have gone to a User Group or Conference, or done some "on the side" tech activity in some way. It could be that you have a blog, or you tweet, or you make videos, or you volunteer at a school.
With Sarah's permission, I want to take a moment and call out some of these tweets and share my thoughts about them. I think this is an important conversation to have.
My career has had a bunch of long cycles (months or years in length) of involvement & non-involvement in tech stuff outside of work.
This is vital. Life is cyclical. You aren't required or expected to be ON 130 hours a week your entire working life. It's unreasonable to expect that of yourself. Many of you have emailed me about this in the past. "How do you do _____, Scott?" How do you deal with balance, hang with your kids, do your work, do videos, etc.
I don't.
Sometimes I just chill. Sometimes I play video games. Last week I was in bed before 10pm two nights. I totally didn't answer email that night either. Balls were dropped and the world kept spinning.
Sometimes you need to be told it's OK to stop, Dear Reader. Slow down, breathe. Take a knee. Hell, take a day.
When we pathologize the non-involvement side of the cycle as "burnout," we imply that the involvement side is the positive, natural state.
Here's where it gets really real. We hear a lot about "burnout." Are you REALLY burnt? Maybe you just need to chill. Maybe going to three User Groups a month (or a week!) is too much? Maybe you're just not that into the tech today/this week/this month. Sometimes I'm so amped on 3D printing and sometimes I'm just...not.
Am I burned out? Nah. Just taking a break.
But you know what? Your kids are only babies once (thank goodness). Those rocks won't climb themselves. Etc. And tech will still be here.
I realize that not everyone with children in their lives can get/afford a sitter but I do also want to point out that if you can, REST. RESET. My wife and I have Date Night. Not once a month, not occasionally. Every week. As we tell our kids: We were here before you and we'll be here after you leave, so this is our time to talk to each other. See ya!
Date Night, y'all. Every week. Self care. Take the time, schedule it, pay the sitter. You'll thank yourself.
Scott Hanselman is a former professor, former Chief Architect in finance, now speaker, consultant, father, diabetic, and Microsoft employee. He is a failed stand-up comic, a cornrower, and a book author.
A little Linux VM on Azure is like $13 a month. You can get little Linux machines all over for between $10 and $15 a month. On Linode they're about $10 a month, so I figured it would be interesting to set up an ASP.NET Core website running on .NET Core. As you may know, .NET Core is free, open source, cross platform and runs basically everywhere.
Step 0 - Get a cheap host
I went to Linode (or anywhere) and got the cheapest Linux machine they offered. In this case it's an Ubuntu 14.04 LTS Profile, 64-bit, 4.6.5 Kernel.
Since I'm on Windows but I want to SSH into this Linux machine, I'll need an SSH client. There's a bunch of options.
It's always a good idea to avoid being root. After logging into the system as root, I made a new user and gave them sudo (super user do):
adduser scott
usermod -aG sudo scott
Then I'll logout and go back in as scott.
Step 1 - Get .NET Core on your Linux Machine
Head over to http://dot.net to get .NET Core and follow the instructions. There are at least 8 Linuxes supported in 6 flavors, so you should have no trouble. I followed the Ubuntu instructions.
To make sure it works after you've set it up, make a quick console app like this and run it.
mkdir testapp
cd testapp
dotnet new
dotnet restore
dotnet run
If it runs, then you've got .NET Core installed and you can move on to making a web app and exposing it to the internet.
Step 2 - Make an ASP.NET Core website

Today, this default site uses npm, gulp, and bower to manage JavaScript and CSS dependencies. In the future there will be options that don't require as much extra stuff, but for now, in order to dotnet restore this site I'll need npm and whatnot, so I'll do this to get node, npm, etc.
Now I can dotnet restore easily and run my web app to test. It will usually start up on localhost:5000.
$ dotnet restore
$ dotnet run
scott@ubuntu:~/dotnettest$ dotnet run
Project dotnettest (.NETCoreApp,Version=v1.0) was previously compiled. Skipping compilation.
info: Microsoft.Extensions.DependencyInjection.DataProtectionServices[0]
      User profile is available. Using '/home/scott/.aspnet/DataProtection-Keys' as key repository; keys will not be encrypted at rest.
Hosting environment: Production
Content root path: /home/scott/dotnettest
Now listening on: http://localhost:5000
Of course, having something startup on localhost:5000 doesn't help me as I'm over here at home so I can't test a local website like this. I want to expose this site (via a port) to the outside world. I want something like http://mysupermachine -> inside my machine -> localhost:5000.
Step 3 - Expose your web app to the outside.
I could tell Kestrel - that's the .NET web server - to listen on Port 80 directly, although you usually want another process between you and the outside world.
You can do this a few ways. You can open Program.cs with an editor like "pico" and add a .UseUrls() call to the WebHostBuilder like this:
var host = new WebHostBuilder()
    .UseKestrel()
    .UseUrls("http://*:80")
    .UseContentRoot(Directory.GetCurrentDirectory())
    .UseStartup<Startup>()
    .Build();
Here the * binds to all the network adapters and it listens on Port 80. Putting http://0.0.0.0:80 also works.
You might have permission issues binding to port 80 and need to elevate the dotnet process, which is its own problem, so let's keep the app on a high internal port and reverse proxy the traffic with something like Nginx or Apache. We'll pull the hard-coded port out of the code and change Program.cs to use a .json config file.
public static void Main(string[] args)
{
    var config = new ConfigurationBuilder()
        .SetBasePath(Directory.GetCurrentDirectory())
        .AddJsonFile("hosting.json", optional: true)
        .Build();

    var host = new WebHostBuilder()
        .UseKestrel()
        .UseConfiguration(config)
        .UseContentRoot(Directory.GetCurrentDirectory())
        .UseStartup<Startup>()
        .Build();

    host.Run();
}
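The contents of hosting.json aren't shown here; a minimal sketch might be the following. Note the key name is an assumption and varied across early ASP.NET Core versions ("server.urls" around the 1.0 timeframe, plain "urls" later), and 5123 matches the internal port used with Nginx in this walkthrough:

```json
{
  "server.urls": "http://localhost:5123"
}
```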
NOTE: I'm doing this work in a folder under my home folder ~ for now. I'll later compile and "publish" this website to something like /var/dotnettest when I want it seen.
sudo apt-get install nginx
sudo service nginx start
I'm going to change the default Nginx site to point to my (future) running ASP.NET Core web app. I'll open and change /etc/nginx/sites-available/default and make it look like this. Note the port number. Nginx is a LOT more complex than this and has a lot of nuance, so when you are ready to go into Super Official Production, be sure to explore what the perfect Nginx Config File looks like and change it to your needs.
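The changed config file itself isn't shown here, but a minimal reverse-proxy sketch might look like this, assuming the app will listen on localhost:5123 (again, not a hardened production config):

```nginx
# /etc/nginx/sites-available/default - minimal sketch, assuming the app runs on localhost:5123
server {
    listen 80;
    location / {
        proxy_pass http://localhost:5123;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
```

After editing, sudo nginx -t checks the syntax and sudo service nginx reload picks up the change.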
The website isn't up and running on localhost:5123 yet (unless you've run it yourself and kept it running!) so we'll need an app or a monitor to run it and keep it running. There's an app called Supervisor that is good at that so I'll add it.
sudo apt-get install supervisor
Here is where you/we/I/errbody needs to get the paths and names right, so be aware. I'm over in ~/testapp or something. I need to publish my site into a final location, so I'm going to run dotnet publish, then copy the results into /var/dotnettest where it will live.
dotnet publish
publish: Published to /home/scott/dotnettest/bin/Debug/netcoreapp1.0/publish

sudo cp -a /home/scott/dotnettest/bin/Debug/netcoreapp1.0/publish /var/dotnettest
Now I'm going to make a file (again, I use pico because I'm not as awesome as emacs or vim) called /etc/supervisor/conf.d/dotnettest.conf to start my app and keep it running:
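The conf file contents aren't shown here; a minimal sketch might look like this. The program name, dll name, and user are assumptions - match them to your actual publish output in /var/dotnettest:

```ini
; /etc/supervisor/conf.d/dotnettest.conf - minimal sketch; dll name and user are assumptions
[program:dotnettest]
command=/usr/bin/dotnet /var/dotnettest/dotnettest.dll
directory=/var/dotnettest
autostart=true
autorestart=true
stderr_logfile=/var/log/dotnettest.err.log
stdout_logfile=/var/log/dotnettest.out.log
environment=ASPNETCORE_ENVIRONMENT=Production
user=www-data
stopsignal=INT
```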
Now we start and stop Supervisor and watch/tail its logs to see our app startup!
sudo service supervisor stop
sudo service supervisor start
sudo tail -f /var/log/supervisor/supervisord.log
# and the application logs if you like
sudo tail -f /var/log/dotnettest.out.log
If all worked out (if it didn't, it'll be a name or a path so keep trying!) you'll see the supervisor log with dotnet starting up, running your app.
Remember the relationships.
Dotnet - runs your website
Nginx or Apache - Listens on Port 80 and forwards HTTP calls to your website
Supervisor - Keeps your app running
Next, I might want to setup a continuous integration build, or SCP/SFTP to handle deployment of my app. That way I can develop locally and push up to my Linux machine.
Of course, there are a dozen other ways to publish an ASP.NET Core site, not to mention Docker. I'll post about Docker another time, but for now, I was able to get my ASP.NET Core website published to a cheap $10 host in less than an hour. You can use the same tools to manage a .NET Core site that you use to manage any site be it PHP, nodejs, Ruby, or whatever makes you happy.
Sponsor: Aspose makes programming APIs for working with files, like: DOC, XLS, PPT, PDF and countless more. Developers can use their products to create, convert, modify, or manage files in almost any way. Aspose is a good company and they offer solid products. Check them out, and download a free evaluation.
There's a lot of confusing terms in the Cloud space. And that's not counting the term "Cloud." ;)
IaaS (Infrastructure as a Service) - Virtual Machines and stuff on demand.
PaaS (Platform as a Service) - You deploy your apps but try not to think about the Virtual Machines underneath. They exist, but we pretend they don't until forced.
SaaS (Software as a Service) - Stuff like Office 365 and Gmail. You pay a subscription and you get email/whatever as a service. It Just Works.
"Serverless Computing" doesn't really mean there's no server. Serverless means there's no server you need to worry about. That might sound like PaaS, but it's higher level that than.
Serverless Computing is like this - Your code, a slider bar, and your credit card. You just have your function out there and it will scale as long as you can pay for it. It's as close to "cloudy" as The Cloud can get.
With Platform as a Service, you might make a Node or C# app, check it into Git, deploy it to a Web Site/Application, and then you've got an endpoint. You might scale it up (get more CPU/Memory/Disk) or out (have 1, 2, n instances of the Web App) but it's not seamless. It's totally cool, to be clear, but you're always aware of the servers.
New cloud systems like Amazon Lambda and Azure Functions have you upload some code and it's running seconds later. You can have continuous jobs, functions that run on a triggered event, or make Web APIs or Webhooks that are just a function with a URL.
I'm going to see how quickly I can make a Web API with Serverless Computing.
I'll go to http://functions.azure.com and make a new function. If you don't have an account you can sign up free.
You can make a function in JavaScript or C#.
Once you're into the Azure Function Editor, click "New Function" and you've got dozens of templates and code examples for things like:
Find a face in an image and store the rectangle of where the face is.
Run a function and comment on a GitHub issue when a GitHub webhook is triggered
Update a storage blob when an HTTP Request comes in
Load entities from a database or storage table
I figured I'd change the first example. It is a trigger that sees an image in storage, calls a cognitive services API to get the location of the face, then stores the data. I wanted to change it to:
Take an image as input from an HTTP Post
Draw a rectangle around the face
Return the new image
You can do this work from Git/GitHub but for easy stuff I'm literally doing it all in the browser. Here's what it looks like.
I code and iterate and save and fail fast, fail often. Here's the starter code I based it on. Remember that this is a starter function that runs on a triggered event, so note its Run()...I'm going to change this.
#r "Microsoft.WindowsAzure.Storage"
#r "Newtonsoft.Json"
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using Newtonsoft.Json;
using Microsoft.WindowsAzure.Storage.Table;
using System.IO;
public static async Task Run(Stream image, string name, IAsyncCollector<FaceRectangle> outTable, TraceWriter log)
{
    string result = await CallVisionAPI(image); //STREAM
    log.Info(result);
    if (String.IsNullOrEmpty(result))
    {
        return;
    }
    ImageData imageData = JsonConvert.DeserializeObject<ImageData>(result);
    foreach (Face face in imageData.Faces)
    {
        var faceRectangle = face.FaceRectangle;
        faceRectangle.RowKey = Guid.NewGuid().ToString();
        faceRectangle.PartitionKey = "Functions";
        faceRectangle.ImageFile = name + ".jpg";
        await outTable.AddAsync(faceRectangle);
    }
}
static async Task<string> CallVisionAPI(Stream image)
{
using (var client = new HttpClient())
{
var content = new StreamContent(image);
var url = "https://api.projectoxford.ai/vision/v1.0/analyze?visualFeatures=Faces";
client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", Environment.GetEnvironmentVariable("Vision_API_Subscription_Key"));
content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
var httpResponse = await client.PostAsync(url, content);
if (httpResponse.StatusCode == HttpStatusCode.OK){
return await httpResponse.Content.ReadAsStringAsync();
}
}
return null;
}
public class ImageData {
public List<Face> Faces { get; set; }
}
public class Face {
public int Age { get; set; }
public string Gender { get; set; }
public FaceRectangle FaceRectangle { get; set; }
}
public class FaceRectangle : TableEntity {
public string ImageFile { get; set; }
public int Left { get; set; }
public int Top { get; set; }
public int Width { get; set; }
public int Height { get; set; }
}
GOAL: I'll change this Run() and make this listen for an HTTP request that contains an image, read the image that's POSTed in (ya, I know, no validation), draw rectangle around detected faces, then return a new image.
public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    var image = await req.Content.ReadAsStreamAsync();
    ...
As for the body of this function, I'm about 20% sure I'm using too many MemoryStreams, but they are getting disposed, so take this code as an initial proof of concept. However, I DO need at least the two I have. Regardless, I'm happy to chat with those who know more, but it's more subtle than even I thought. That said, the function basically calls out to the API and gets back some face data that looks like this:
Then take that data and DRAW a Rectangle over the faces detected.
public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
var image = await req.Content.ReadAsStreamAsync();
MemoryStream mem = new MemoryStream();
image.CopyTo(mem); //make a copy since one gets destroyed in the other API. Lame, I know.
image.Position = 0;
mem.Position = 0;
string result = await CallVisionAPI(image);
log.Info(result);
if (String.IsNullOrEmpty(result)) {
return req.CreateResponse(HttpStatusCode.BadRequest);
}
ImageData imageData = JsonConvert.DeserializeObject<ImageData>(result);
MemoryStream outputStream = new MemoryStream();
using(Image maybeFace = Image.FromStream(mem, true))
{
using (Graphics g = Graphics.FromImage(maybeFace))
{
Pen yellowPen = new Pen(Color.Yellow, 4);
foreach (Face face in imageData.Faces)
{
var faceRectangle = face.FaceRectangle;
g.DrawRectangle(yellowPen,
faceRectangle.Left, faceRectangle.Top,
faceRectangle.Width, faceRectangle.Height);
}
}
maybeFace.Save(outputStream, ImageFormat.Jpeg);
}
var response = new HttpResponseMessage()
{
Content = new ByteArrayContent(outputStream.ToArray()),
StatusCode = HttpStatusCode.OK,
};
response.Content.Headers.ContentType = new MediaTypeHeaderValue("image/jpeg");
return response;
}
I also added a reference to System.Drawing using this syntax at the top of the file and added a few namespaces with usings like System.Drawing and System.Drawing.Imaging. I also changed the input in the Integrate tab to "HTTP."
#r "System.Drawing
Now I go into Postman and POST an image to my new Azure Function endpoint. Here I uploaded a flattering picture of me and an unflattering picture of The Oatmeal. He's pretty in real life just NOT HERE. ;)
So in just about 15 minutes with no IDE and armed with just my browser, Postman (also my browser), Google/StackOverflow, and Azure Functions, I've got a backend proof of concept.
Azure Functions supports Node.js, C#, F#, Python, PHP *and* Batch, Bash, and PowerShell, which really opens it up to basically anyone. You can use them for anything when you just want a function (or more) out there on the web. Send stuff to Slack, automate your house, update GitHub issues, act as a Webhook, etc. There's some great 3rd-party Azure Functions sample code in this GitHub repo as well. Inputs can come from basically anywhere and outputs can go basically anywhere. If those anywheres are also cloud services like Tables or Storage, you've got a "serverless backend" that is easy to scale.
I'm still learning, but I can see when I'd want a VM (total control) vs a Web App (near total control) vs a "Serverless" Azure Function (less control but I didn't need it anyway, just wanted a function in the cloud.)
I was really stressed out ten years ago. I felt that familiar pressure between my eyes and felt like all the things that remained undone were pressing on me. I called it "psychic weight." I have since then collected my Productivity Tips and written extensively on the topic of productivity and getting things done. I'm going to continue to remind YOU that Self-Care Matters in between my technical and coding topics. The essence of what I learned was to let go.
The Serenity Prayer:
God, grant me the serenity to accept the things I cannot change,
Courage to change the things I can,
And wisdom to know the difference.
Everyone has stress and everyone has pressure. There's no magic fix or silver bullet for stress, but I have found that some stressors have a common root cause. Things that stress me are things I think I need to do, handle, watch, take care of, worry about, sweat, deal with, or manage. These things press on me - right between my eyes - and the resulting feeling is what I call psychic weight.
For example: when the DVR (Digital Video Recorder) came out, it was a gift from on high. What? A smart VCR that would just tape and hold all the TV shows that I love? I don't have to watch shows at the time and day they air? Sign me up. What a time saver!
Fast forward a few years and the magical DVR is now an infinite todo list of TV shows. It's a guilt pile. A failure queue. I still haven't watched The Wire. (I know.) It presses on me. I've actually had conversations with my wife like "ok, if we bang out the first season by staying up tonight until 4am, we should be all ready when Season 2 drops next week." Seriously. Yes, I know, unwatched TV is a silly example. But you've binge-watched Netflix when you should have been working/reading/working out, so you can likely relate.
But I'm letting go. I'll watch The Wire another time. I'll delete it from my DVR. I'm never going to watch the second season of Empire. (Sorry, Cookie. I love you Taraji!) I'm not going to read that pile of technical books on my desk. So I'm going to declare that to the universe and I'm going to remove the pile of books that's staring at me. This book stack, this failure pile is no more. And I'm not even mad. I'm OK with it.
Every deletion like this from your life opens up your time - and your mind - for the more important things you need to focus on.
What are your goals? What can you delete from your list (and I mean, DROP, not just postpone) that will free up your internal resources so you can focus on your goal?
Delete those emails. Declare email bankruptcy. They'll likely email you again. Delete a few seasons of shows you won't watch. Delete Pokemon Go. Make that stack of technical books on your desk shorter. Now, what positive thing will you fill those gaps with?
You deserve it. Remove psychic weight and lighten up. Then sound off in the comments!
* Image Copyright Shea Parikh / used under license from getcolorstock.com
Jeffrey Snover predicted internally in 2014 that PowerShell would eventually be open sourced but it was the advent of .NET Core and getting .NET Core working on multiple Linux distros that kickstarted the process. If you were paying attention it's possible you could have predicted this move yourself. Parts of PowerShell have been showing up as open source:
To be clear, I'm told these are alpha-quality builds, and work continues with community support. An official Microsoft-supported "release" will come sometime later.
What's Possible?
This is my opinion and somewhat educated speculation, but it seems to me that they want to make it so you can manage anything from anywhere. Maybe you're a Unix person who has some Windows machines (either local or in Azure) that you need to manage. You can use PowerShell from Linux to do that. Maybe you've got some bash scripts at your company AND some PowerShell scripts. Use them both, interchangeably.
If you know PowerShell, you'll be able to use those skills on Linux as well. If you manage a hybrid environment, PowerShell isn't a replacement for bash but rather another tool in your toolkit. There are lots of shells in the *nix world (not just bash and zsh, but also shells built on ruby, python, etc.), so PowerShell will be in excellent company.
Related Links
Be sure to check out the coverage from around the web and lots of blog posts from different perspectives!
Have fun! This open source thing is kind of catching on at Microsoft isn't it?
Sponsor: Do you deploy the same application multiple times for each of your end customers? The team at Octopus have been trying to take the pain out of multi-tenant deployments. Check out their 3.4 beta release.