Scott Hanselman

Hanselman's Newsletter of Wonderful Things: April 3rd, 2013

April 12, '13 Comments [10] Posted in Newsletter

I have a "whenever I get around to doing it" Newsletter of Wonderful Things. Why a newsletter? I dunno. It seems more personal somehow. Fight me.

You can view all the previous newsletters here. You can sign up for the Newsletter of Wonderful Things here, or just wait and get them later on the blog, which hopefully you have subscribed to. Email folks get it first!

Here's the MOST RECENT newsletter, delay-posted as I do.


Hi Interfriends,

Thanks again for signing up for this experiment. Here are some interesting things I've come upon this week. If you forwarded this (or if it was forwarded to you), a reminder: you can sign up at http://hanselman.com/newsletter and the archive of all previous Newsletters is here.

Scott Hanselman

(BTW, since you *love* email you can subscribe to my blog via email here: http://feeds.hanselman.com/ScottHanselman DO IT!)

P.P.S. You know you can forward this to your friends, right?

About Scott

Scott Hanselman is a former professor, former Chief Architect in finance, now speaker, consultant, father, diabetic, and Microsoft employee. He is a failed stand-up comic, a cornrower, and a book author.


Windows Task Manager shows wrong CPU speed when using Hyper-V

April 11, '13 Comments [21] Posted in Bugs | Tools

My buddy Damian and I both recently bought the Lenovo X1 Carbon Touch. It's got Intel SpeedStep technology, so it changes the CPU speed dynamically based on load. These two laptops of ours are identical. However, here's Damian's Task Manager when mostly idle.

Damian's Task Manager shows a speed of 0.60 GHz

Here's mine.

My Task Manager shows a speed of 2.49 GHz

What the heck is going on? His CPU is reporting 0.60 GHz out of a potential 2 GHz, indicating that the chip has chilled out. Mine is reporting "full speed ahead!" at a speed it doesn't even support: 2.49 GHz!

We went around and around on this for a while until we realized that I had turned on Hyper-V Virtualization for Windows Phone Development and my Ubuntu VM. He hadn't.

We installed CPU-Z, a low-level and very smart CPU utility, and got the truth. In fact, both machines are stepping down, but my kernel is running within the Hypervisor and its CPU speed is being reported incorrectly to Task Manager. Task Manager is showing the MAX speed, not the real (Hyper-V virtualized) speed.

CPU-Z screenshots from both laptops showing the actual, stepped-down clock speeds

NOTE: CPU-Z is lovely but the Download.com wrapper that they put around it is evil spyware and you need to really pay attention when you install or you'll end up installing a bunch of toolbars. Be warned.
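
If you want to double-check what Windows itself reports without installing anything, here's a minimal sketch that queries WMI's Win32_Processor class (via System.Management) for the current and max clock speeds. This is my own illustration, not something from the original troubleshooting session, and on a Hyper-V enabled machine the "current" value may be just as misleading as Task Manager's, so treat CPU-Z as the source of truth.

// Minimal sketch: ask WMI for the current vs. max clock speed of each CPU.
// Requires a reference to System.Management.
using System;
using System.Management;

class CpuSpeedCheck
{
    static void Main()
    {
        var searcher = new ManagementObjectSearcher(
            "SELECT Name, CurrentClockSpeed, MaxClockSpeed FROM Win32_Processor");

        foreach (ManagementObject cpu in searcher.Get())
        {
            Console.WriteLine("{0}: current {1} MHz, max {2} MHz",
                cpu["Name"], cpu["CurrentClockSpeed"], cpu["MaxClockSpeed"]);
        }
    }
}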

I hope this helps someone! It wasted 30 minutes of my life.


Streaming Diagnostics Trace Logging from the Azure Command Line (plus Glimpse!)

April 5, '13 Comments [28] Posted in Azure | Open Source
Streaming logs from Azure. That's insane!

I’ve long said when in doubt, turn on tracing. Sometimes "got here"-debugging is a great tool. I tend to use System.Diagnostics.Trace a lot in my code. Then I'll use ELMAH or Glimpse to get more insight.

Lately though, I've been doing a lot of Azure sites and have been wanting to get at trace data, sometimes at the Azure command line.

I'll do this to deploy (or deploy from Visual Studio):

azure site create mysite --git
git add .
git commit -m "initial deploy"
git push azure master

Then later, if I want to restart, start, or stop the site, I can certainly run:

azure site restart mysite

But I was talking to one of the devs a while back and said I really wanted

azure site log tail mysite

And they made it! Check this out. You can try it right now.

Add Tracing to your App

First, make an app that has some tracing. Here's mine. Any ASP.NET app is fine; MVC, Web Forms, or Web Pages, it doesn't matter. Note the Trace calls.

public class HomeController : Controller
{
    public ActionResult Index()
    {
        ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application.";
        System.Diagnostics.Trace.TraceError("ZOMG THIS IS BAD");
        return View();
    }

    public ActionResult About()
    {
        ViewBag.Message = "Your app description page.";
        System.Diagnostics.Trace.TraceInformation("Just chillin.");
        return View();
    }
}

Then, upload it to Azure. I did a Publish directly from VS in this case: just right-click, Publish, and import the Publish Profile that you download from the portal. You can publish however you like.

Download Publish Profile

Local Tracing with Trace.axd

You likely know that you can enable tracing locally with trace.axd in your ASP.NET app (and MVC apps) by adding trace listeners to your web.config:

<system.diagnostics>
  <trace>
    <listeners>
      <add name="WebPageTraceListener"
           type="System.Web.WebPageTraceListener, System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"/>
    </listeners>
  </trace>
</system.diagnostics>

So if I visit trace.axd locally, I see my traces:

Tracing shown via trace.axd

If you really wanted this available remotely, you could say so as well:

<trace enabled="true" writeToDiagnosticsTrace="true" localOnly="false" mostRecent="true" pageOutput="false" />

Streaming Logs from the Azure Command Line

When my app is in Azure, I can get to the tracing info as well. From the management portal, I can see where the log files are, right?

The locations of my logging files

And I can FTP in and see them, as I always could. Notice I am using Explorer to FTP in; I can just copy and paste the URL into Explorer itself, then enter my deployment credentials.

You can FTP in and get the logs with Explorer. Does anyone do that anymore?

I can also do this with my favorite FTP app, or the browser. The tracing files are inside the Application folder.

From the command line, I can do this, and the logs are streamed to me.

C:\>azure site log tail mysite
info: Executing command site log tail
2013-04-05T19:45:10 Welcome, you are now connected to log-streaming service.
2013-04-05T19:45:13 PID[2084] Error ZOMG THIS IS BAD

This works with both .NET apps and node.js apps, by the way. All logs written to the LogFiles folder can be streamed in real time. The application trace logs collected under the LogFiles/Application folder are streamed out by default. You can also get at IIS logs written to the LogFiles/Http folder. Any files created in a custom folder, e.g. LogFiles/<Custom>, will have their contents streamed as well.
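
For a custom folder, here's a minimal sketch of what the logging side could look like. This is my own illustration (the CustomLogger name and file name are made up, not part of the Azure SDK), and it assumes the HOME environment variable that Azure Web Sites sets points at the site root that contains the LogFiles folder.

// Hypothetical helper: append lines to a file under LogFiles\Custom so the
// log streaming service picks them up.
using System;
using System.IO;

public static class CustomLogger
{
    public static void Log(string message)
    {
        var home = Environment.GetEnvironmentVariable("HOME") ?? ".";
        var folder = Path.Combine(home, "LogFiles", "Custom");
        Directory.CreateDirectory(folder);

        File.AppendAllText(Path.Combine(folder, "my-log.txt"),
            DateTime.UtcNow.ToString("o") + " " + message + Environment.NewLine);
    }
}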

I can also filter for a specific string with --filter, like so:

C:\>azure site log tail loggingtest --filter ZOMG
info: Executing command site log tail
2013-04-05T19:45:10 Welcome, you are now connected to log-streaming service.
2013-04-05T19:45:13 PID[2084] Error ZOMG THIS IS BAD

I can also turn on Web Server Logging:

Turning on Web Server Logging

If you are using node.js, you'll need to turn on logging in your iisnode.yml. Note that this sample has logging disabled; set loggingEnabled to true:

# For security reasons, logging, dev errors, and debugging
# should be disabled in production deployments:
loggingEnabled: false
debuggingEnabled: false
devErrorsEnabled: false
node_env: production

And stream the raw IIS logs as well!

C:\>azure site log tail loggingtest -p http
info: Executing command site log tail
2013-04-05T20:03:59 Welcome, you are now connected to log-streaming service.
2013-04-05 20:04:15 LOGGINGTEST GET / X-ARR-LOG-ID=5a267b3f-6c0e-4a1d-9cb6-d872e
31a2f2e 80 - 166.147.88.43 Mozilla/5.0+(Windows+NT+6.2;+WOW64)+AppleWebKit/537.3
1+(KHTML,+like+Gecko)+Chrome/26.0.1410.43+Safari/537.31 ARRAffinity=edd1561bc28b
b0ea9133780b878994b30ed4697656295364ebc8aadc14f54d2;+WAWebSiteSID=57051e6cd07a4

I can also just download the logs directly to disk from the command line.

C:\>azure site log download loggingtest
info: Executing command site log download
+ Downloading diagnostic log
info: Writing to diagnostics.zip
info: site log download command OK

This feature is in Azure today, and in a few days the UI will appear in the management portal as well. It will look like this. The best part of this UI is that it will let you turn logging on and off and change the logging level without recycling the app domain.

Changing the web.config causes an app restart. Since you often want to change your log levels without a restart, these Azure-specific trace settings are stored in /site/diagnostics/settings.json within your instance. You can FTP in and see if you like.

Azure will use your existing trace settings from web.config unless these overriding settings exist.

The new Application Diagnostics logging switch

Remember, you can view these streamed logs on the client using Windows Azure PowerShell (Windows) or Windows Azure Cross Platform Command Line Interface (Windows, Mac and Linux).

Things to be aware of

Turning logging on will turn it on only for 12 hours. You don't usually want logs on forever. Conveniently, if you connect a streaming client, then logging gets auto enabled.

The defaults are to split log files at 128k and keep your app logs under 1MB and the whole logs folder under 30MB. If you need more, you can override some advanced settings directly in the portal.

Here I'm setting the log file splits to 10k and the max Application log to 5MB.

Overriding configuration settings in the Azure Portal

Here's some advanced settings you can override:

  • DIAGNOSTICS_LASTRESORTFILE - "logging-errors.txt"
    • The name (or relative path to the LogDirectory) of the file where internal errors are logged, for troubleshooting the listener.
  • DIAGNOSTICS_LOGGINGSETTINGSFILE - "..\diagnostics\settings.json"
    • The settings file, relative to the web app root.
  • DIAGNOSTICS_TEXTTRACELOGDIRECTORY - "..\..\LogFiles\Application"
    • The log folder, relative to the web app root.
  • DIAGNOSTICS_TEXTTRACEMAXLOGFILESIZEBYTES - 128 * 1024 (bytes)
    • Default: 128 KB log file
  • DIAGNOSTICS_TEXTTRACEMAXLOGFOLDERSIZEBYTES - 1024 * 1024 (bytes)
    • Default: 1 MB Application Folder (30 MB entire Logs Folder)

In the future, I expect we'll see easy ways to put logs in Azure table storage as well as command line querying by time, pid, etc. It would also be nice to be able to get to these logs from inside of Visual Studio.

Routing More Data to Tracing with Glimpse

If you haven't used Glimpse, you're in for a treat. I'll post again about Glimpse next week. Glimpse is a client-side debugging framework for your web app.

I used NuGet to bring in "Glimpse.Mvc4" (Be sure to get the right one for you, like Glimpse.Mvc3, or Glimpse.EF5, etc. Check out http://getglimpse.com for more details).

Glimpse doesn't do anything until you turn it on. Locally I hit http://localhost:xxxx/Glimpse.axd and turn it on. Now, I visit the Trace tab and the Trace from earlier is there.

There's my tracing in the Glimpse Trace tab

But if I go to the Timeline tab, I get way more information, including all the ASP.NET events that are interesting to me. These "bracketing" before-and-after events could be super useful if they were routed to System.Diagnostics.Trace.

Holy crap that Glimpse Timeline is full of good debugging info

How do I get this timeline view information routed to Tracing? Easy. I'll watch the Glimpse Timeline and route!

using Glimpse.Core.Extensibility;
using Glimpse.Core.Message;

public class TimelineTracer : IInspector
{
    public void Setup(IInspectorContext context)
    {
        context.MessageBroker.Subscribe<ITimelineMessage>(TraceMessage);
    }

    private void TraceMessage(ITimelineMessage message)
    {
        var output = string.Format(
            "{0} - {1} ms from beginning of request. Took {2} ms to execute.",
            message.EventName,
            message.Offset.Milliseconds,
            message.Duration.Milliseconds);

        System.Diagnostics.Trace.TraceInformation(output, message.EventCategory.Name);
    }
}

Now I get lots of great Glimpse-supplied timing info in my Trace log as well that I can stream from the command line.

C:\>azure site log tail loggingtest
info: Executing command site log tail
2013-04-05T20:22:51 Welcome, you are now connected to log-streaming service.
2013-04-05T20:23:32 PID[1992] Information Start Request - 0 ms from beginning of request. Took 0 ms to execute.
2013-04-05T20:23:32 PID[1992] Information Authorization - Home:Index - 224 ms from beginning of request. Took 0 ms to execute.
2013-04-05T20:23:32 PID[1992] Information Action:Executing - Home:Index - 239 ms from beginning of request. Took 0 ms to execute.
2013-04-05T20:23:32 PID[1992] Error ZOMG THIS IS BAD
2013-04-05T20:23:32 PID[1992] Information InvokeActionMethod - Home:Index - 289 ms from beginning of request. Took 29 ms to execute.
2013-04-05T20:23:32 PID[1992] Information Action:Executed - Home:Index - 320 ms from beginning of request. Took 0 ms to execute.

I'm pretty stoked that it was so easy to get subsystems like ASP.NET, Glimpse, and now Web Sites on Azure to work together and share information.

I'm not sure which way I'll finally end up using them, but I'm definitely planning on instrumenting my code and calling System.Diagnostics.Trace more often since I can so easily route the results.

Finally, it's worth mentioning, in case you didn't know, that all of the Azure SDK is open source and calls web services on the backend that you can call yourself. If you dig this log streaming feature, did you know you could have watched it get checked in from a Pull Request 3 months ago? Madness. It's a kinder, gentler Death Star over here at Microsoft.


Hanselman's Newsletter of Wonderful Things: March 19th, 2013

April 4, '13 Comments [6] Posted in Newsletter

I have a "whenever I get around to doing it" Newsletter of Wonderful Things. Why a newsletter? I dunno. It seems more personal somehow. Fight me.

You can view all the previous newsletters here. You can sign up for the Newsletter of Wonderful Things here, or just wait and get them later on the blog, which hopefully you have subscribed to.

Here's the LAST newsletter, delay-posted as I do.


Hi Interfriends,

Thanks again for signing up for this experiment. Here are some interesting things I've come upon this week. If you forwarded this (or if it was forwarded to you), a reminder: you can sign up at http://hanselman.com/newsletter and the archive of all previous Newsletters is here.

Scott Hanselman

(BTW, since you *love* email you can subscribe to my blog via email here: http://feeds.hanselman.com/ScottHanselman DO IT!)

P.P.S. You know you can forward this to your friends, right?


Pinching pennies when scaling in The Cloud

March 30, '13 Comments [44] Posted in Azure | Javascript
The new Hanselminutes Site

Running a site in the cloud and paying for CPU time with pennies and fractions of pennies is a fascinating way to profile your app. I mean, we all should be profiling our apps, right? Really paying attention to what the CPU does, how many database connections are made, and what memory usage is like. But we don't. If that code doesn't affect your pocketbook directly, you're less likely to bother.

Interestingly, I find myself performing optimizations while driving my hybrid car or dealing with my smartphone's limited data plan. When resources are truly scarce and (most importantly) money is on the line, one finds ways, both clever and obvious, to optimize and cut costs.

Sloppy Code in the Cloud Costs Money

I have an MSDN Subscription which includes quite a bit of free Azure Cloud time. Make sure you've turned this on if you have MSDN yourself. This subscription is under my personal account and I pay if it goes over (I don't get a free pass just because I work for the cloud group), so I want to pay as little as I can.

One of the classic and obvious rules of scaling a system is "do less, and you can do more of it." When you apply this idea to the money you're paying to host your system in the cloud, you want to do as little as possible to stretch your dollar. You pay pennies for CPU time, pennies for bandwidth, and pennies for database access, but it adds up. If you're opening database connections in a loop or transferring more data than necessary, you'll pay.
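
To make the "do less" rule concrete, here's a minimal sketch (the table and column names are hypothetical, not from my code): open one connection and issue one query up front, rather than opening a connection for every item inside a loop.

// Hypothetical example: one connection and one round trip for the whole list,
// instead of a connection-per-item loop.
using System.Collections.Generic;
using System.Data.SqlClient;

public static class ShowTitles
{
    public static List<string> Load(string connectionString)
    {
        var titles = new List<string>();

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("SELECT Title FROM Shows", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    titles.Add(reader.GetString(0));
                }
            }
        }
        return titles;
    }
}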

I recently worked with designer Jin Yang and redesigned this blog, made a new home page, and the Hanselminutes Podcast site. In the process we had the idea for a more personal archives page that makes the eponymous show less about me and visually more about the guests. I'm in the process of going through 360+ shows and getting pictures of the guests for each one.

I launched the site and I think it looks great. However, I noticed immediately that the Data Out was REALLY high compared to the old site. I host the MP3s elsewhere, but the images put out almost 500 megs in just hours after the new site was launched.

You can guess from the figure when I launched the new site.

I used way too much bandwidth this day

I *rarely* have to pay for extra bandwidth, but this wasn't looking good. One of the nice things about Azure is that you can view your bill any day, not just at the end of the month. I could see that I was inching up on the outgoing bandwidth. At this rate, I was going to have to pay extra at the end of the month.

Almost out of outbound bandwidth

I thought about it, then realized, duh, I'm loading 360+ images every time someone hits the archives page. It's obvious, of course, in retrospect. But remember that I moved my site into the cloud for two reasons.

  • Save money
  • Scale quickly when I need to

I added caching for all the database calls, which was trivial, but I thought about the images for a while. I could add paging, or I could make a "just in time" infinite scroll. I dislike paging in this instance, as I think folks like to CTRL-F on a large page when the dataset isn't overwhelmingly large.
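
That database caching was just a few lines. Here's a minimal sketch of the idea using System.Runtime.Caching; the ShowRepository and Show names and the 30-minute expiration are my own illustration, not the actual Hanselminutes code.

// Hypothetical sketch: cache the expensive database call so the archives page
// doesn't hit the database on every request.
using System;
using System.Collections.Generic;
using System.Runtime.Caching;

public class ShowRepository
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public IList<Show> GetAllShows()
    {
        var shows = Cache.Get("all-shows") as IList<Show>;
        if (shows == null)
        {
            shows = LoadShowsFromDatabase();                      // the expensive part
            Cache.Add("all-shows", shows, DateTimeOffset.Now.AddMinutes(30));
        }
        return shows;
    }

    private IList<Show> LoadShowsFromDatabase()
    {
        // ... actually query the database here ...
        return new List<Show>();
    }
}

public class Show
{
    public int Number { get; set; }
    public string Title { get; set; }
}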

ASIDE: It's on my list to add "topic tagging" and client-side sorting and filtering for the shows on Hanselminutes. I think this will be a nice addition to the back catalog. I'm also looking at better ways to utilize the growing transcript library. Any thoughts on that?

The easy solution was to lazy load the images as the user scrolls, thereby only using bandwidth for the images you see. I looked at Mika Tuupola's jQuery Lazy Load plugin as well as a number of other similar scripts. There's also Luis Almeida's lightweight Unveil if you want fewer bells and whistles. I ended up using the standard Lazy Load.

Implementation was trivial. Add the script:

<script src="//ajax.aspnetcdn.com/ajax/jQuery/jquery-1.7.2.min.js" type="text/javascript"></script>
<script src="/jquery.lazyload.min.js" type="text/javascript"></script>

Then, give the lazy load plugin a selector. You can say "just images in this div" or "just images with this class" however you like. I chose to do this:

<script>
$(function() {
    $("img.lazy").lazyload({effect: "fadeIn"});
});
</script>

The most important part is that the img element that you generate should include a TINY image for the src. That src= will always be loaded, no matter what, since that's what browsers do. Don't believe any lazy loading solution that says otherwise. I use the 1px gray image from the GitHub repo. Also, if you can, set the final height and width of the image you're loading to ease layout.

<a href="/363/html5-javascript-chrome-and-the-web-platform-with-paul-irish" class="showCard">
    <img data-original="/images/shows/363.jpg" class="lazy" src="/images/grey.gif" width="212" height="212" alt="HTML5, JavaScript, Chrome and the Web Platform with Paul Irish" />
    <span class="shownumber">363</span>
    <div class="overlay">HTML5, JavaScript, Chrome and the Web Platform with Paul Irish</div>
</a>

The image you're going to ultimately load is in the data-original attribute. It will be loaded when the area where the image is supposed to be enters the current viewport. You can set the threshold and cause the images to load a little earlier if you prefer, like perhaps 200 pixels before they're visible.

$("img.lazy").lazyload({ threshold : 200 });

After I added this change, I let it run for a day and it chilled out my server completely. There isn't that intense burst of 300+ requests for images and bandwidth is way down.


10 Websites in 1 VM vs 10 Websites in Shared Mode

I'm running 10 websites on Azure, currently. One of the things that isn't obvious in the pricing calculator (unless you read carefully) is that when you switch to a Reserved Instance on one of your Azure Websites, all of your other shared sites within that datacenter will jump into that reserved VM. You can actually run up to 100 (!) websites in that VM instance, and this ends up saving you money.

Aside: It's nice also that these websites are 'in' that one VM, but I don't need to manage or ever think about that VM. Azure Websites sits on top of the VM and it's automatically updated and managed for me. I deploy to some of these websites with Git, some with Web Deploy, and some I update via FTP. I can also scale the VM dynamically and all the Websites get the benefit.

Think about it this way, running 1 Small VM on Azure with up to 100 websites (again, I have 10 today)...

1 VM for $57.60

...is cheaper than running those same 10 websites in Shared Mode.

10 shared websites for $93.60

This pricing is as of March 30, 2013 for those of you in the future.

The point here being, you can squeeze a LOT out of a small VM, and I plan to keep squeezing, caching, and optimizing as much as I can so I can pay as little as possible in the Cloud. If I can get this VM to do less, then I will be able to run more sites on it and save money.

The result of all this is that I'm starting to code differently now that I can see the pennies add up based on specific changes.


Disclaimer: The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.