Scott Hanselman

Connecting my Particle Photon Internet of Things device to the Azure IoT Hub

December 20, '16 • Posted in Azure | Hardware

Particle Photon connected to the cloud

My vacation continues. Yesterday I had shoulder surgery (adhesive capsulitis release), so today I'm messing around with Azure IoT Hub. I had some devices on my desk - some of which I had never really gotten around to exploring - and I thought I'd see if I could accomplish something.

I've got a Particle Photon here, as well as a Tessel 2, a LattePanda, Funduino, and Onion Omega. A few days ago I was able to get the Onion Omega to show my blood sugar on a small OLED screen, which was cool. Tonight I'm going to try to hook the Particle Photon up to the Azure IoT hub for monitoring.

The Photon is a tiny little device with Wi-Fi built in. It's super easy to set up and it has a cloud-based IDE with tons of examples written in C and Node.js for you to use. Particle also has a Node.js-based command line. From there you can list out your Photons, see their available functions, and even call functions over the internet! A hacker's delight, to be sure.

Here's a standard "blink an LED" Hello world on a Photon. This one creates a cloud function called "led" and binds it to the "ledToggle" method. Those cloud methods take a string, so there's no enum for the on/off command.

int led1 = D0; // external LED on pin D0
int led2 = D7; // D7 also drives the Photon's onboard blue LED

void setup() {
    pinMode(led1, OUTPUT);
    pinMode(led2, OUTPUT);
    // Expose ledToggle() to the cloud as a function named "led"
    Spark.function("led", ledToggle);
    digitalWrite(led1, LOW);
    digitalWrite(led2, LOW);
}

void loop() {
}

int ledToggle(String command) {
    if (command == "on") {
        digitalWrite(led1, HIGH);
        digitalWrite(led2, HIGH);
        return 1;
    }
    else if (command == "off") {
        digitalWrite(led1, LOW);
        digitalWrite(led2, LOW);
        return 0;
    }
    else {
        return -1;
    }
}

From the command line I can use the Particle command line interface (CLI) to enumerate my devices:

C:\Users\scott>particle list
hansel_photon [390039000647xxxxxxxxxxx] (Photon) is online
Functions:
int led(String args)

See how it doesn't just enumerate devices, but also cloud methods that hang off devices? LOVE THIS.

I can get a secret API key from the Particle Photon's cloud-based Console. Then, using my Device ID and auth token, I can call the method...with an HTTP request! How much easier could this be?

C:\Users\scott\>curl https://api.particle.io/v1/devices/390039000647xxxxxxxxx/led -d access_token=31fa2e6f --insecure -d arg="on"
{
"id": "390039000647xxxxxxxxx",
"last_app": "",
"connected": true,
"return_value": 1
}

At this moment the LED on the Particle Photon turns on. I'm going to change the code a little and add some telemetry using the Particle's online code editor.

Editing Particle Photon Code online

They've got a great online code editor, but I could also edit and compile the code locally:

C:\Users\scott\Desktop>particle compile photon webconnected.ino

Compiling code for photon

Including:
webconnected.ino
attempting to compile firmware
downloading binary from: /v1/binaries/5858b74667ddf87fb2a2df8f
saving to: photon_firmware_1482209089877.bin
Memory use:
text data bss dec hex filename
6156 12 1488 7656 1de8
Compile succeeded.
Saved firmware to: C:\Users\scott\Desktop\photon_firmware_1482209089877.bin

I'll change the code to announce an "Event" when I turn on the LED.

if (command=="on") {
digitalWrite(led1,HIGH);
digitalWrite(led2,HIGH);

String data = "Amazing! Some Data would be here! The light is on.";
Particle.publish("ledBlinked", data);

return 1;
}

I can head back over to http://console.particle.io and see these events live on the web:

Particle Photons have great online charts
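
You can also tail that same event stream from the command line with curl, since Particle exposes it as server-sent events. Here's a sketch against their documented /v1/devices/events endpoint, reusing the placeholder access token from above:

C:\Users\scott\>curl https://api.particle.io/v1/devices/events?access_token=31fa2e6f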

Particle also supports integration with Google Cloud and Azure IoT Hub. Azure IoT Hub allows you to manage billions of devices and all their many billions of events. I just have a few, but we all have to start somewhere. ;)
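
Side note: the Photon reaches IoT Hub through Particle's integration, but if you're curious what the device side looks like in raw C#, it's just the Microsoft.Azure.Devices.Client NuGet package. Here's a minimal sketch - the connection string and JSON payload are placeholders, not my real ones:

using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;

class Device
{
    static async Task Main()
    {
        // Per-device connection string from the IoT Hub portal (placeholder)
        var device = DeviceClient.CreateFromConnectionString(
            "HostName=HanselIoT.azure-devices.net;DeviceId=hansel_photon;SharedAccessKey=...");

        // Device-to-cloud telemetry is just bytes wrapped in a Message envelope
        var payload = "{\"data\":\"The light is on.\"}";
        await device.SendEventAsync(new Message(Encoding.UTF8.GetBytes(payload)));
    }
}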

I created a free Azure IoT Hub in my Azure Account...

Azure IoT Hub has charts and graphs built in

And made a shared access policy for my Particle Devices.

Be sure to set all the Access Policy Permissions you need

Then I told Particle about Azure in their Integrations system.

Particle has Azure IoT Hub integration built in

The Azure IoT SDKs on GitHub at https://github.com/Azure/azure-iot-sdks/releases include both a Windows-based Azure IoT Explorer and a command-line one called IoT Hub Explorer.

I logged in to the IoT Hub Explorer using the connection string from the Azure Portal:

iothub-explorer login "HostName=HanselIoT.azure-devices.net;SharedAccessKeyName=particle-iot-hub;SharedAccessKey=rdWUVMXs="

Then I'll run "iothub-explorer monitor-events" passing in the device ID and the connection string for the shared access policy. Monitor-events is cool because it'll hang and just output the events as they're flowing through the whole system.

IoTHub-Explorer monitor-events command line

So I'm able to call methods on the Particle using their cloud, and monitor events from within Azure IoT Hub. I can explore diagnostics data and query huge amounts of device-to-cloud data that would potentially flow in from my hardware devices.
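
Under the covers, monitor-events is reading from the IoT Hub's built-in Event Hub-compatible endpoint, so you can do the same thing from C#. A sketch using the older WindowsAzure.ServiceBus package (full .NET Framework); the connection string is a placeholder, and real code would read from every partition, not just "0":

using System;
using System.Text;
using Microsoft.ServiceBus.Messaging;

class MonitorEvents
{
    static void Main()
    {
        // "messages/events" is the hub's built-in device-to-cloud endpoint
        var client = EventHubClient.CreateFromConnectionString(
            "HostName=HanselIoT.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=...",
            "messages/events");

        var receiver = client.GetDefaultConsumerGroup().CreateReceiver("0", DateTime.UtcNow);
        while (true)
        {
            EventData eventData = receiver.Receive(TimeSpan.FromSeconds(10));
            if (eventData == null) continue; // no event within the timeout; keep polling
            Console.WriteLine(Encoding.UTF8.GetString(eventData.GetBytes()));
        }
    }
}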

The IoT Hub Limits are very generous for free/hobbyist users as we learn to develop. I haven't paid anything yet. However, it can scale to thousands of messages a second per unit! That means millions of messages a second if you need it.

I can definitely see how the value of an IoT Hub solution like this would add up quickly once you've got more than one device. Text files don't really scale. Even if I just IoT'ed up my house, it would be nice to have all that data flowing into a single hub I could manage and query securely.



NoSQL .NET Core development using a local Azure DocumentDB Emulator

December 4, '16 • Posted in ASP.NET MVC | Azure

I was hanging out with Miguel de Icaza in New York a few weeks ago and he was sharing with me his ongoing love affair with a NoSQL database called Azure DocumentDB. I've looked at it a few times over the last year or so and thought it was cool, but I didn't feel like using it for a few reasons:

  • Can't develop locally - I'm often in low-bandwidth or airplane situations
  • No MongoDB support - I have existing apps written in Node that use Mongo
  • No .NET Core support - I'm doing mostly cross-platform .NET Core apps

Miguel told me to take a closer look. Looks like things have changed! DocumentDB now has:

  • Free local DocumentDB Emulator - I asked, and this is the SAME code that runs in Azure, with just changes like using the local file system for persistence. It's an "emulator" but it's really essentially the same core engine code. There is no cost and no sign-in for the local DocumentDB emulator.
  • MongoDB protocol support - This is amazing. I literally took an existing Node app, downloaded MongoChef and copied my collection over into Azure using a standard MongoDB connection string, then pointed my app at DocumentDB and it just worked. It's using DocumentDB for storage though, which gives me
    • Better Latency
    • Turnkey global geo-replication (like literally a few clicks)
    • A performance Service Level Agreement (SLA) with <10ms reads and <15ms writes
    • Metrics and Resource Management like every Azure Service
  • DocumentDB .NET Core Preview SDK that has feature parity with the .NET Framework SDK.

There are also Node, .NET, Python, Java, and C++ SDKs for DocumentDB, so it's nice for gaming on Unity, web apps, or any .NET app...including Xamarin mobile apps on iOS and Android, which is why Miguel is so hyped on it.

Azure DocumentDB Local Quick Start

I wanted to see how quickly I could get started. I spoke with the PM for the project on Azure Friday and downloaded and installed the local emulator. The lead on the project said it's Windows for now but they are looking for cross-platform solutions. After it was installed it popped up my web browser with a local web page - I wish more development tools would have such clean Quick Starts. There's also a nice quick start on using DocumentDB with ASP.NET MVC.

NOTE: This is a 0.1.0 release. Definitely Alpha level. For example, the sample included looks like it had the package name changed at some point so it didn't line up. I had to change "Microsoft.Azure.Documents.Client": "0.1.0" to "Microsoft.Azure.DocumentDB.Core": "0.1.0-preview" so a little attention to detail issue there. I believe the intent is for stuff to Just Work. ;)
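
For context, pointing code at the emulator is just a different endpoint and key. The emulator listens on https://localhost:8081 and ships with a single well-known, documented master key, so nothing below is a secret. Here's a sketch - assuming an SDK recent enough to have the *IfNotExists* helpers, and "ToDoList" and "Items" are names I'm borrowing from the sample:

using System;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

class EmulatorQuickStart
{
    // The emulator's fixed local endpoint and its documented well-known key
    const string Endpoint = "https://localhost:8081";
    const string Key = "C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==";

    static void Main()
    {
        var client = new DocumentClient(new Uri(Endpoint), Key);

        // First run? Create the database and collection if they aren't there yet.
        client.CreateDatabaseIfNotExistsAsync(new Database { Id = "ToDoList" }).Wait();
        client.CreateDocumentCollectionIfNotExistsAsync(
            UriFactory.CreateDatabaseUri("ToDoList"),
            new DocumentCollection { Id = "Items" }).Wait();
    }
}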

Nice DocumentDB Quick Start

The sample app is a pretty standard "ToDo" app:

ASP.NET MVC ToDo App using Azure Document DB local emulator

The local Emulator also includes a web-based local Data Explorer:


A Todo Item is really just a POCO (Plain Old CLR Object) like this:

namespace todo.Models
{
    using Newtonsoft.Json;

    public class Item
    {
        [JsonProperty(PropertyName = "id")]
        public string Id { get; set; }

        [JsonProperty(PropertyName = "name")]
        public string Name { get; set; }

        [JsonProperty(PropertyName = "description")]
        public string Description { get; set; }

        [JsonProperty(PropertyName = "isComplete")]
        public bool Completed { get; set; }
    }
}

The MVC Controller in the sample uses an underlying repository pattern so the code is super simple at that layer - as an example:

[ActionName("Index")]
public async Task<IActionResult> Index()
{
var items = await DocumentDBRepository<Item>.GetItemsAsync(d => !d.Completed);
return View(items);
}

[HttpPost]
[ActionName("Create")]
[ValidateAntiForgeryToken]
public async Task<ActionResult> CreateAsync([Bind("Id,Name,Description,Completed")] Item item)
{
if (ModelState.IsValid)
{
await DocumentDBRepository<Item>.CreateItemAsync(item);
return RedirectToAction("Index");
}

return View(item);
}

The repository that's abstracting away the complexities is itself not that complex. It's about 120 lines of code, and really more like 60 when you remove whitespace and curly braces. And half of that is just initialization and setup. It's also DocumentDBRepository<T>, so it's a generic you can change to meet your tastes and use however you'd like.

The only thing that stands out to me in this sample is the loop in GetItemsAsync that's hiding potential paging/chunking. It's nice that you can pass in a predicate, but I'll want to go in and add some paging logic for large collections - there's a sketch of that after the code below.

public static async Task<T> GetItemAsync(string id)
{
    try
    {
        Document document = await client.ReadDocumentAsync(UriFactory.CreateDocumentUri(DatabaseId, CollectionId, id));
        return (T)(dynamic)document;
    }
    catch (DocumentClientException e)
    {
        if (e.StatusCode == System.Net.HttpStatusCode.NotFound){
            return null;
        }
        else {
            throw;
        }
    }
}

public static async Task<IEnumerable<T>> GetItemsAsync(Expression<Func<T, bool>> predicate)
{
    IDocumentQuery<T> query = client.CreateDocumentQuery<T>(
        UriFactory.CreateDocumentCollectionUri(DatabaseId, CollectionId),
        new FeedOptions { MaxItemCount = -1 })
        .Where(predicate)
        .AsDocumentQuery();

    List<T> results = new List<T>();
    while (query.HasMoreResults){
        results.AddRange(await query.ExecuteNextAsync<T>());
    }

    return results;
}

public static async Task<Document> CreateItemAsync(T item)
{
    return await client.CreateDocumentAsync(UriFactory.CreateDocumentCollectionUri(DatabaseId, CollectionId), item);
}

public static async Task<Document> UpdateItemAsync(string id, T item)
{
    return await client.ReplaceDocumentAsync(UriFactory.CreateDocumentUri(DatabaseId, CollectionId, id), item);
}

public static async Task DeleteItemAsync(string id)
{
    await client.DeleteDocumentAsync(UriFactory.CreateDocumentUri(DatabaseId, CollectionId, id));
}

I'm going to keep playing with this, but so far I'm pretty happy that I can get this far while on an airplane. It's really easy (given I'm preferring NoSQL over SQL lately) to just throw objects at it and store them.

In another post I'm going to look at RavenDB, another great NoSQL document database that works on .NET Core and is also open source.



Publishing ASP.NET Core 1.1 applications to Azure using git deploy

November 23, '16 • Posted in ASP.NET | Azure

I took an ASP.NET Core 1.0 application and updated its package references to ASP.NET Core 1.1 today. I also updated my SDK to the Current version (not the LTS - Long Term Support - version). Everything worked great building and running locally, but then I made a new Web App in Azure and did a quick git deploy.

Deploying ASP.NET Core 1.1 to Azure

Basically:

c:\aspnetcore11app> azure site create "aspnetcore11test" --location "West US" --git
c:\aspnetcore11app> git add .
c:\aspnetcore11app> git commit -m "initial"
c:\aspnetcore11app> git push azure master

Then I watched the logs go by. You can watch them at the command line or from within the Azure Portal under "Log Stream."

The deployment failed when Azure was building the app and got this error:

Build started 11/25/2016 6:51:54 AM.
2016-11-25T06:51:55         1>Project "D:\home\site\repository\aspnet1.1.xproj" on node 1 (Publish target(s)).
2016-11-25T06:51:55         1>D:\home\site\repository\aspnet1.1.xproj(7,3): error MSB4019: The imported project "D:\Program Files (x86)\dotnet\sdk\1.0.0-preview3-004056\Extensions\Microsoft\VisualStudio\v14.0\DotNet\Microsoft.DotNet.Props" was not found. Confirm that the path in the <Import> declaration is correct, and that the file exists on disk.
2016-11-25T06:51:55         1>Done Building Project "D:\home\site\repository\aspnet1.1.xproj" (Publish target(s)) -- FAILED.

See where it says "1.0.0-preview3-004056" in there? Looks like Azure Web Apps has some build that isn't the one that I have. Theirs has the new csproj/msbuild stuff and I'm staying a few steps back waiting for things to bake.

Remember that unless you specify the SDK version, .NET Core will use whatever is the latest one on the box.

Well, what do I have locally?

C:\aspnetcore11app>dotnet --version
1.0.0-preview2-1-003177

OK, then that's what I need to use so I'll make a global.json at the root of my project, then move my code into a folder under "src" for example.

{
  "projects": [ "src", "test" ],
  "sdk": {
    "version": "1.0.0-preview2-1-003177"
  }
}

Here I'm "pinning" the same SDK version. This will tell Azure (or you, if I gave you the code) to use this SDK version when building.

Now I'll redeploy with a git add, commit, and push. It works.

I think this should be easier, but I'm not sure how it should be easier. Does this mean that everyone should have a global.json with a preferred version? If you have no preferred version should Azure give a smarter error if it has an incompatible newer one?

The learning here for me is that not having a global.json basically says "*.*" to any cloud build servers and you'll get whatever latest SDK they have. It could stop working any day.



Exploring Application Insights for disconnected or connected deep telemetry in ASP.NET Apps

October 19, '16 • Posted in Azure

Today on the ASP.NET Community Standup we learned about how you can use Application Insights in a disconnected scenario to get some cool - ahem - insights into your application.

Typically when someone sees Application Insights in the File | New Project dialog they assume it's a feature that only works in Azure, that will create some account for you and send your data to the cloud. While App Insights totally does do a lot of cool stuff when you have a cloud-hosted app and it does add a lot of value, it also supports a very useful "SDK only" mode that's totally offline.

Click "Add Application Insights to project" and then under "Send telemetry to" you click "Install SDK only" and no data gets sent to the cloud.

Application Insights dropdown - Install SDK only

Once you make your new project, you can learn more about AppInsights here.

For ASP.NET Core apps, Application Insights will include this package in your project.json - "Microsoft.ApplicationInsights.AspNetCore": "1.0.0" and it'll add itself into your middleware pipeline and register services in your Startup.cs. Remember, nothing is hidden in ASP.NET Core, so you can modify all this to your heart's content.

if (env.IsDevelopment())
{
    // Push telemetry through the Application Insights pipeline faster,
    // so you can view results immediately while debugging.
    builder.AddApplicationInsightsSettings(developerMode: true);
}

Request telemetry and Exception telemetry are added separately, as you like.
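
To make that concrete, here's roughly what the tooling wires into Startup.cs in a 2016-era ASP.NET Core project - take it as a sketch, but those two Use* calls are exactly that separate registration:

public void ConfigureServices(IServiceCollection services)
{
    // Registers the App Insights services; the instrumentation key (if any)
    // comes from configuration. "Install SDK only" means no key, so it's local.
    services.AddApplicationInsightsTelemetry(Configuration);
    services.AddMvc();
}

public void Configure(IApplicationBuilder app)
{
    // Request telemetry goes first so it sees every request; exception
    // telemetry is its own call, which is why they're "added separately."
    app.UseApplicationInsightsRequestTelemetry();
    app.UseApplicationInsightsExceptionTelemetry();
    app.UseMvc();
}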

Make sure you show the Application Insights Toolbar by right-clicking your toolbars and ensuring it's checked.

Application Insights Dropdown Menu

The button it adds is actually pretty useful.


Run your app and click the Application Insights button.

NOTE: I'm using Visual Studio Community. That's the free version of VS you can get at http://visualstudio.com/free. I use it exclusively and I think it's pretty cool that this feature works just great on VS Community.

You'll see the Search window open up in VS. You can keep it running while you debug and it'll fill with Requests, Traces, Exceptions, etc.


I added an exception-throwing case to /about, and here's what I see:

Searching for the last hour's traces

I can dig into each issue, filter, search, and explore deeper:

Unhandled Exception

And once I've found something interesting, I can explore around it with the full details of the HTTP Request. I find the "telemetry 5 minutes before and after" query to be very powerful.

Track Operations

Notice where it says "dependencies for this operation?" That's not dependencies like "Dependency Injection" - that's larger system-wide dependencies like "my app depends on this web service."

You can custom-instrument your application with the TrackDependency API if you like, and that will cause your system's dependencies to light up in AppInsights charts and reports. Here's a dumb example as I pretend that putting data in ViewData is a dependency. It should be calling a Web API or a database or something.

var telemetry = new TelemetryClient();

var success = false;
var startTime = DateTime.UtcNow;
var timer = System.Diagnostics.Stopwatch.StartNew();
try
{
    ViewData["Message"] = "Your application description page.";
    success = true; // the pretend "call" worked
}
finally
{
    timer.Stop();
    telemetry.TrackDependency("ViewDataAsDependency", "CallSomeStuff", startTime, timer.Elapsed, success);
}
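
A slightly less dumb version of the same pattern wraps a real outbound call. Here's a sketch - the URL is made up - where success reflects the actual HTTP status instead of staying hardcoded:

var telemetry = new TelemetryClient();

var success = false;
var startTime = DateTime.UtcNow;
var timer = System.Diagnostics.Stopwatch.StartNew();
try
{
    using (var http = new HttpClient())
    {
        // Hypothetical downstream service this app "depends on"
        var response = await http.GetAsync("https://example.com/api/stuff");
        success = response.IsSuccessStatusCode;
    }
}
finally
{
    timer.Stop();
    telemetry.TrackDependency("ExampleService", "GET /api/stuff", startTime, timer.Elapsed, success);
}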

Once I'm tracking external dependencies I can search for outliers, long durations, and categorize them, and they'll affect the generated charts and graphs if/when you connect your App Insights to the cloud. Here's what I see after making this one code change. I could build this kind of stuff into all my external calls AND instrument the JavaScript as well. (Note Client and Server in this chart.)

Application Insight Maps

And once it's all there, I can query all the insights like this:

Querying Data Live

To be clear, though, you don't have to host your app in the cloud. You can just send the telemetry to the cloud for analysis. Your existing on-premises IIS servers can run a "Status Monitor" app for instrumentation.

Application Insights Charts

There's a TON of good data here and it's REALLY easy to get started in any of these modes:

  • Totally offline (no cloud) and just query within Visual Studio
  • Somewhat online - Host your app locally and send telemetry to the cloud
  • Totally online - Host your app and telemetry in the cloud

All in all, I am pretty impressed. There are SDKs for Java, Node, Docker, and ASP.NET - there's a LOT here. I'm going to dig deeper.



What is Serverless Computing? Exploring Azure Functions

August 27, '16 • Posted in Azure | nodejs

There are a lot of confusing terms in the Cloud space. And that's not counting the term "Cloud." ;)

  • IaaS (Infrastructure as a Service) - Virtual Machines and stuff on demand.
  • PaaS (Platform as a Service) - You deploy your apps but try not to think about the Virtual Machines underneath. They exist, but we pretend they don't until forced.
  • SaaS (Software as a Service) - Stuff like Office 365 and Gmail. You pay a subscription and you get email/whatever as a service. It Just Works.

"Serverless Computing" doesn't really mean there's no server. Serverless means there's no server you need to worry about. That might sound like PaaS, but it's higher level that than.

Serverless Computing is like this - Your code, a slider bar, and your credit card. You just have your function out there and it will scale as long as you can pay for it. It's as close to "cloudy" as The Cloud can get.

Serverless Computing is like this. Your code, a slider bar, and your credit card.

With Platform as a Service, you might make a Node or C# app, check it into Git, deploy it to a Web Site/Application, and then you've got an endpoint. You might scale it up (get more CPU/Memory/Disk) or out (have 1, 2, n instances of the Web App) but it's not seamless. It's totally cool, to be clear, but you're always aware of the servers.

New cloud systems like Amazon Lambda and Azure Functions have you upload some code and it's running seconds later. You can have continuous jobs, functions that run on a triggered event, or make Web APIs or Webhooks that are just a function with a URL.
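
To make "a function with a URL" concrete, this is about the smallest HTTP-triggered C# function you could write in the portal's .csx style - a sketch along the lines of the built-in templates:

using System.Net;

// Hit the function's URL and this runs. No project file, no server to configure.
public static HttpResponseMessage Run(HttpRequestMessage req, TraceWriter log)
{
    log.Info("HTTP trigger fired!");
    return req.CreateResponse(HttpStatusCode.OK, "Hello from a serverless function");
}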

I'm going to see how quickly I can make a Web API with Serverless Computing.

I'll go to http://functions.azure.com and make a new function. If you don't have an account you can sign up free.

Getting started with Azure Functions

You can make a function in JavaScript or C#.

Getting started with Azure Functions - Create This Function

Once you're into the Azure Function Editor, click "New Function" and you've got dozens of templates and code examples for things like:

  • Find a face in an image and store the rectangle of where the face is.
  • Run a function and comment on a GitHub issue when a GitHub webhook is triggered
  • Update a storage blob when an HTTP Request comes in
  • Load entities from a database or storage table

I figured I'd change the first example. It is a trigger that sees an image in storage, calls a cognitive services API to get the location of the face, then stores the data. I wanted to change it to:

  • Take an image as input from an HTTP Post
  • Draw a rectangle around the face
  • Return the new image

You can do this work from Git/GitHub but for easy stuff I'm literally doing it all in the browser. Here's what it looks like.

Azure Functions can be done in the browser

I code and iterate and save and fail fast, fail often. Here's the starter code I based it on. Remember that this is a starter function that runs on a triggered event, so note its Run()...I'm going to change this.

#r "Microsoft.WindowsAzure.Storage"
#r "Newtonsoft.Json"

using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using Newtonsoft.Json;
using Microsoft.WindowsAzure.Storage.Table;
using System.IO; 

public static async Task Run(Stream image, string name, IAsyncCollector<FaceRectangle> outTable, TraceWriter log)
{
    string result = await CallVisionAPI(image); //STREAM
    log.Info(result);

    if (String.IsNullOrEmpty(result))
    {
        return;
    }

    ImageData imageData = JsonConvert.DeserializeObject<ImageData>(result);
    foreach (Face face in imageData.Faces)
    {
        var faceRectangle = face.FaceRectangle;
        faceRectangle.RowKey = Guid.NewGuid().ToString();
        faceRectangle.PartitionKey = "Functions";
        faceRectangle.ImageFile = name + ".jpg";
        await outTable.AddAsync(faceRectangle);
    }
}

static async Task<string> CallVisionAPI(Stream image)
{
    using (var client = new HttpClient())
    {
        var content = new StreamContent(image);
        var url = "https://api.projectoxford.ai/vision/v1.0/analyze?visualFeatures=Faces";
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", Environment.GetEnvironmentVariable("Vision_API_Subscription_Key"));
        content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
        var httpResponse = await client.PostAsync(url, content);

        if (httpResponse.StatusCode == HttpStatusCode.OK){
            return await httpResponse.Content.ReadAsStringAsync();
        }
    }
    return null;
}

public class ImageData {
    public List<Face> Faces { get; set; }
}

public class Face {
    public int Age { get; set; }
    public string Gender { get; set; }
    public FaceRectangle FaceRectangle { get; set; }
}

public class FaceRectangle : TableEntity {
    public string ImageFile { get; set; }
    public int Left { get; set; }
    public int Top { get; set; }
    public int Width { get; set; }
    public int Height { get; set; }
}

GOAL: I'll change this Run() and make it listen for an HTTP request that contains an image, read the image that's POSTed in (ya, I know, no validation), draw rectangles around the detected faces, then return a new image.

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    var image = await req.Content.ReadAsStreamAsync();

As for the body of this function, I'm 20% sure I'm using too many MemoryStreams, but they are getting disposed, so take this code as an initial proof of concept. However, I DO need at least the two I have. Regardless, I'm happy to chat with those who know more, but it's more subtle than even I thought. That said: basically, call out to the API and get back some face data that looks like this:

2016-08-26T23:59:26.741 {"requestId":"8be222ff-98cc-4019-8038-c22eeffa63ed","metadata":{"width":2808,"height":1872,"format":"Jpeg"},"faces":[{"age":41,"gender":"Male","faceRectangle":{"left":1059,"top":671,"width":466,"height":466}},{"age":41,"gender":"Male","faceRectangle":{"left":1916,"top":702,"width":448,"height":448}}]}

Then take that data and DRAW a Rectangle over the faces detected.

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    var image = await req.Content.ReadAsStreamAsync();

    MemoryStream mem = new MemoryStream();
    image.CopyTo(mem); //make a copy since one gets destroyed in the other API. Lame, I know.
    image.Position = 0;
    mem.Position = 0;
    
    string result = await CallVisionAPI(image); 
    log.Info(result); 

    if (String.IsNullOrEmpty(result)) {
        return req.CreateResponse(HttpStatusCode.BadRequest);
    }
    
    ImageData imageData = JsonConvert.DeserializeObject<ImageData>(result);

    MemoryStream outputStream = new MemoryStream();
    using(Image maybeFace = Image.FromStream(mem, true))
    {
        using (Graphics g = Graphics.FromImage(maybeFace))
        {
            Pen yellowPen = new Pen(Color.Yellow, 4);
            foreach (Face face in imageData.Faces)
            {
                var faceRectangle = face.FaceRectangle;
                g.DrawRectangle(yellowPen, 
                    faceRectangle.Left, faceRectangle.Top, 
                    faceRectangle.Width, faceRectangle.Height);
            }
        }
        maybeFace.Save(outputStream, ImageFormat.Jpeg);
    }
    
    var response = new HttpResponseMessage()
    {
        Content = new ByteArrayContent(outputStream.ToArray()),
        StatusCode = HttpStatusCode.OK,
    };
    response.Content.Headers.ContentType = new MediaTypeHeaderValue("image/jpeg");
    return response;
}

I also added a reference to System.Drawing using this syntax at the top of the file, along with a few using statements for namespaces like System.Drawing and System.Drawing.Imaging. I also changed the input in the Integrate tab to "HTTP".

#r "System.Drawing

Now I go into Postman and POST an image to my new Azure Function endpoint. Here I uploaded a flattering picture of me and an unflattering picture of The Oatmeal. He's pretty in real life, just NOT HERE. ;)

Image Recognition with Azure Functions

So in just about 15 minutes, with no IDE and armed with just my browser, Postman (also my browser), Google/StackOverflow, and Azure Functions, I've got a backend proof of concept.

Azure Functions supports Node.js, C#, F#, Python, PHP *and* Batch, Bash, and PowerShell, which really opens it up to basically anyone. You can use them for anything when you just want a function (or more) out there on the web. Send stuff to Slack, automate your house, update GitHub issues, act as a webhook, etc. There's some great third-party Azure Functions sample code in this GitHub repo as well. Inputs can come from basically anywhere and outputs can go basically anywhere. If those anywheres are also cloud services like Tables or Storage, you've got a "serverless backend" that is easy to scale.

I'm still learning, but I can see when I'd want a VM (total control) vs. a Web App (near-total control) vs. a "serverless" Azure Function (less control, but I didn't need it anyway - I just wanted a function in the cloud).


