My buddy Damian and I both recently bought the Lenovo X1 Carbon Touch. It's got Intel SpeedStep technology so it changes the CPU speed dynamically based on load. These two laptops of ours are identical. However, here's Damian's Task Manager when mostly idle.
Here's mine.
What the heck is going on? His CPU is reporting 0.60 GHz of a potential speed of 2 GHz, indicating that the chip has chilled out. Mine is reporting "full speed ahead!" at a speed it doesn't even support: 2.49 GHz!
We went around and around on this for a while until we realized that I had turned on Hyper-V Virtualization for Windows Phone Development and my Ubuntu VM. He hadn't.
We installed CPU-Z, a low-level and very smart CPU utility, and got the truth. In fact, both machines are stepping down, but my kernel is running within the hypervisor and its CPU speed is being reported incorrectly to Task Manager. Task Manager is showing the MAX speed, not the real (Hyper-V virtualized) speed.
NOTE: CPU-Z is lovely but the Download.com wrapper that they put around it is evil spyware and you need to really pay attention when you install or you'll end up installing a bunch of toolbars. Be warned.
I hope this helps someone! It wasted 30 minutes of my life.
About Scott
Scott Hanselman is a former professor, former Chief Architect in finance, now speaker, consultant, father, diabetic, and Microsoft employee. He is a failed stand-up comic, a cornrower, and a book author.
I’ve long said when in doubt, turn on tracing. Sometimes "got here"-debugging is a great tool. I tend to use System.Diagnostics.Trace a lot in my code. Then I'll use ELMAH or Glimpse to get more insight.
Then later, if I want to restart, start, stop, etc., I can certainly run
azure site restart mysite
But I was talking to one of the devs a while back and said I really wanted
azure site log tail mysite
And they made it! Check this out. You can try it right now.
Add Tracing to your App
First, make an app that has some tracing. Here's mine. Any ASP.NET app is fine, MVC or Web Forms or Web Pages, doesn't matter. Note the Traces.
public class HomeController : Controller
{
    public ActionResult Index()
    {
        ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application.";
        System.Diagnostics.Trace.TraceError("ZOMG THIS IS BAD");
        return View();
    }
}
Then, upload it to Azure. I did a Publish directly from VS in this case, just right click, Publish and Import the Publish Profile that you download from the portal. You can publish however you like.
Local Tracing with Trace.axd
You likely know that you can enable tracing locally with trace.axd in your ASP.NET app (and MVC apps) by adding trace listeners to your web.config:
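For reference, the listener registration in web.config looks something like this (the System.Web version string here assumes .NET 4.0; adjust for your framework version):

```xml
<configuration>
  <!-- Enables trace.axd for local requests -->
  <system.web>
    <trace enabled="true" pageOutput="false" requestLimit="250" localOnly="true" />
  </system.web>
  <!-- Routes page-level trace output into System.Diagnostics tracing -->
  <system.diagnostics>
    <trace>
      <listeners>
        <add name="WebPageTraceListener"
             type="System.Web.WebPageTraceListener, System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
      </listeners>
    </trace>
  </system.diagnostics>
</configuration>
```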
When my app is in Azure, I can get to the tracing info as well. From the management portal, I can see where the log files are, right?
And I can FTP in and see them, as I always could. Notice I am using Explorer to FTP in. I can just copy paste the URL into Explorer itself, then enter my deployment credentials.
I can also do this with my favorite FTP app, or the browser. Inside the Application Folder is where the tracing files are.
From the command line, I can do this, and the logs are streamed to me.
C:\>azure site log tail mysite
info:    Executing command site log tail
2013-04-05T19:45:10  Welcome, you are now connected to log-streaming service.
2013-04-05T19:45:13  PID[2084] Error ZOMG THIS IS BAD
This works with both .NET apps and node.js apps, by the way. All logs written to the LogFiles folder can be streamed in real time. The Application trace logs collected under the LogFiles/Application folder are streamed out by default. You can also get at IIS logs written to the LogFiles/Http folder. Any files created in a custom folder, e.g. LogFiles/<Custom>, will have their contents streamed as well.
I can also filter for specific characters with --filter, so:
C:\>azure site log tail loggingtest --filter ZOMG
info:    Executing command site log tail
2013-04-05T19:45:10  Welcome, you are now connected to log-streaming service.
2013-04-05T19:45:13  PID[2084] Error ZOMG THIS IS BAD
I can also turn on Web Server Logging:
If you are using node.js, logging is controlled by your iisnode.yml. The defaults disable it for production, so make sure loggingEnabled is turned on:
# For security reasons, logging, dev errors, and debugging
# should be disabled in production deployments:
loggingEnabled: false
debuggingEnabled: false
devErrorsEnabled: false
node_env: production
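Assuming those defaults, the one line to change before you deploy is:

```yaml
# Turn application logging on so the streaming service has something to stream
loggingEnabled: true
```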
And stream the raw IIS logs as well!
C:\>azure site log tail loggingtest -p http
info:    Executing command site log tail
2013-04-05T20:03:59  Welcome, you are now connected to log-streaming service.
2013-04-05 20:04:15 LOGGINGTEST GET / X-ARR-LOG-ID=5a267b3f-6c0e-4a1d-9cb6-d872e31a2f2e 80 - 166.147.88.43 Mozilla/5.0+(Windows+NT+6.2;+WOW64)+AppleWebKit/537.31+(KHTML,+like+Gecko)+Chrome/26.0.1410.43+Safari/537.31 ARRAffinity=edd1561bc28bb0ea9133780b878994b30ed4697656295364ebc8aadc14f54d2;+WAWebSiteSID=57051e6cd07a4
I can also just download the logs directly to disk from the command line.
C:\>azure site log download loggingtest
info:    Executing command site log download
+ Downloading diagnostic log
info:    Writing to diagnostics.zip
info:    site log download command OK
This feature is in Azure today, and in a few days the UI will appear in the management portal as well. It will look like this. The best part of this UI is that it will allow you to turn it on and off plus change the logging level without recycling the app domain.
Changing the web.config causes an app restart. Since you often want to change your log levels without a restart, these Azure-specific trace settings are stored in /site/diagnostics/settings.json within your instance. You can FTP in and take a look if you like.
Azure will use your existing trace settings from web.config unless these overriding settings exist.
Remember, you can view these streamed logs on the client using Windows Azure PowerShell (Windows) or Windows Azure Cross Platform Command Line Interface (Windows, Mac and Linux).
Things to be aware of
Turning logging on enables it for only 12 hours; you don't usually want logs on forever. Conveniently, if you connect a streaming client, logging gets auto-enabled.
The defaults are to split log files at 128k and keep your app logs under 1MB and the whole logs folder under 30MB. If you need more, you can override some advanced settings directly in the portal.
Here I'm setting the log file splits to 10k and the max Application log to 5MB.
Here are some advanced settings you can override:
DIAGNOSTICS_LASTRESORTFILE - "logging-errors.txt"
The name (or relative path to the LogDirectory) of the file where internal errors are logged, for troubleshooting the listener.
In the future, I expect we'll see easy ways to put logs in Azure table storage as well as command line querying by time, pid, etc. It would also be nice to be able to get to these logs from inside of Visual Studio.
Routing More Data to Tracing with Glimpse
If you haven't used Glimpse, you're in for a treat. I'll post again about Glimpse next week. Glimpse is a client side debugging framework for your web app.
I used NuGet to bring in "Glimpse.Mvc4" (Be sure to get the right one for you, like Glimpse.Mvc3, or Glimpse.EF5, etc. Check out http://getglimpse.com for more details).
Glimpse doesn't do anything until you turn it on. Locally I hit http://localhost:xxxx/Glimpse.axd and turn it on. Now, I visit the Trace tab and the Trace from earlier is there.
But if I go to the Timeline Tab, I get way more information, including all the ASP.NET events that are interesting to me. These "bracketing" events about befores and afters could be super useful if they were routed to System.Diagnostics.Trace.
How do I get this timeline view information routed to tracing? Easy. I'll subscribe to the Glimpse timeline messages and route them myself!
using Glimpse.Core.Extensibility;
using Glimpse.Core.Message;

public class TimelineTracer : IInspector
{
    public void Setup(IInspectorContext context)
    {
        context.MessageBroker.Subscribe<ITimelineMessage>(TraceMessage);
    }

    private void TraceMessage(ITimelineMessage message)
    {
        var output = string.Format(
            "{0} - {1} ms from beginning of request. Took {2} ms to execute.",
            message.EventName,
            message.Offset.Milliseconds,
            message.Duration.Milliseconds);

        System.Diagnostics.Trace.TraceInformation(output);
    }
}
Now I get lots of great Glimpse-supplied timing info in my Trace log as well that I can stream from the command line.
C:\>azure site log tail loggingtest
info:    Executing command site log tail
2013-04-05T20:22:51  Welcome, you are now connected to log-streaming service.
2013-04-05T20:23:32  PID[1992] Information Start Request - 0 ms from beginning of request. Took 0 ms to execute.
2013-04-05T20:23:32  PID[1992] Information Authorization - Home:Index - 224 ms from beginning of request. Took 0 ms to execute.
2013-04-05T20:23:32  PID[1992] Information Action:Executing - Home:Index - 239 ms from beginning of request. Took 0 ms to execute.
2013-04-05T20:23:32  PID[1992] Error ZOMG THIS IS BAD
2013-04-05T20:23:32  PID[1992] Information InvokeActionMethod - Home:Index - 289 ms from beginning of request. Took 29 ms to execute.
2013-04-05T20:23:32  PID[1992] Information Action:Executed - Home:Index - 320 ms from beginning of request. Took 0 ms to execute.
I'm pretty stoked that it was so easy to get subsystems like ASP.NET, Glimpse, and now Web Sites on Azure to work together and share information.
I'm not sure which way I'll finally end up using them, but I'm definitely planning on instrumenting my code and calling System.Diagnostics.Trace more often since I can so easily route the results.
Finally, it's worth mentioning in case you didn't know, that all the Azure SDK is open source and is calling web services on the backend that you can call yourself. If you dig this log streaming feature, did you know you could have watched it get checked in from a Pull Request 3 months ago? Madness. It's a kinder, gentler Death Star over here at Microsoft.
Running a site in the cloud and paying for CPU time with pennies and fractions of pennies is a fascinating way to profile your app. I mean, we all should be profiling our apps, right? Really paying attention to what the CPU does, how many database connections are made, and what memory usage is like. But we don't. If that code doesn't affect your pocketbook directly, you're less likely to bother.
Interestingly, I find myself performing optimizations driving my hybrid car or dealing with my smartphone's limited data plan. When resources are truly scarce and (most importantly) money is on the line, one finds ways - both clever and obvious - to optimize and cut costs.
Sloppy Code in the Cloud Costs Money
I have an MSDN Subscription which includes quite a bit of free Azure cloud time. Make sure you've turned this on if you have MSDN yourself. This subscription is under my personal account and I pay if it goes over (I don't get a free pass just because I work for the cloud group), so I want to pay as little as possible.
One of the classic and obvious rules of scaling a system is "do less, and you can do more of it." When you apply this idea to the money you're paying to host your system in the cloud, you want to do as little as possible to stretch your dollar. You pay pennies for CPU time, pennies for bandwidth, and pennies for database access, but it adds up. If you're opening database connections in a loop or transferring more data than is necessary, you'll pay.
I recently worked with designer Jin Yang and redesigned this blog, made a new home page, and the Hanselminutes Podcast site. In the process we had the idea for a more personal archives page that makes the eponymous show less about me and visually more about the guests. I'm in the process of going through 360+ shows and getting pictures of the guests for each one.
I launched the site and I think it looks great. However, I noticed immediately that the Data Out was REALLY high compared to the old site. I host the MP3s elsewhere, but the images put out almost 500 megs in just hours after the new site was launched.
You can guess from the figure when I launched the new site.
I *rarely* have to pay for extra bandwidth, but this wasn't looking good. One of the nice things about Azure is that you can view your bill any day, not just at the end of the month. I could see that I was inching up on the outgoing bandwidth. At this rate, I was going to have to pay extra at the end of the month.
I thought about it, then realized, duh, I'm loading 360+ images every time someone hits the archives page. It's obvious, of course, in retrospect. But remember that I moved my site into the cloud for two reasons.
Save money
Scale quickly when I need to
I added caching for all the database calls, which was trivial, but thought about the images thing for a while. I could add paging, or I could make a "just in time" infinite scroll. I dislike paging in this instance, as I think folks like to CTRL-F on a large page when the dataset isn't overwhelmingly large.
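The caching idea itself is simple: remember query results in memory for a while so repeat page hits skip the database. Here's a minimal sketch of the pattern in JavaScript (the makeCachedFetcher helper is hypothetical; my site actually uses ASP.NET's built-in caching):

```javascript
// Minimal time-based cache sketch (hypothetical helper, not my actual ASP.NET code).
// Wraps an expensive fetch function so repeat calls within ttlMs are served from memory.
function makeCachedFetcher(fetch, ttlMs) {
  var cache = {}; // key -> { value, expires }
  return function (key) {
    var now = Date.now();
    var hit = cache[key];
    if (hit && hit.expires > now) {
      return hit.value; // cache hit: no expensive call
    }
    var value = fetch(key); // cache miss: do the expensive work (e.g. a DB query)
    cache[key] = { value: value, expires: now + ttlMs };
    return value;
  };
}
```

The same trick applies regardless of stack: the point is that the second, third, and hundredth request for the archives page shouldn't each cost you a database round trip.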
ASIDE: It's on my list to add "topic tagging" and client-side sorting and filtering for the shows on Hanselminutes. I think this will be a nice addition to the back catalog. I'm also looking at better ways to utilize the growing transcript library. Any thoughts on that?
Then, give the lazy load plugin a selector. You can say "just images in this div" or "just images with this class" however you like. I chose to do this:
The most important part is that the img element that you generate should include a TINY image for the src. That src= will always be loaded, no matter what, since that's what browsers do. Don't believe any lazy loading solution that says otherwise. I use the 1px gray image from the github repo. Also, if you can, set the final height and width of the image you're loading to ease layout.
<a href="/363/html5-javascript-chrome-and-the-web-platform-with-paul-irish" class="showCard">
    <img data-original="/images/shows/363.jpg" class="lazy" src="/images/grey.gif"
         width="212" height="212"
         alt="HTML5, JavaScript, Chrome and the Web Platform with Paul Irish" />
    <span class="shownumber">363</span>
    <div class="overlay">HTML5, JavaScript, Chrome and the Web Platform with Paul Irish</div>
</a>
The image you're ultimately going to load is in the data-original attribute. It will be loaded when the area where the image is supposed to be enters the current viewport. You can set a threshold and cause the images to load a little earlier if you prefer, perhaps 200 pixels before they're visible.
$("img.lazy").lazyload({ threshold : 200 });
After I added this change, I let it run for a day and it chilled out my server completely. There isn't that intense burst of 300+ requests for images and bandwidth is way down.
10 Websites in 1 VM vs 10 Websites in Shared Mode
I'm running 10 websites on Azure, currently. One of the things that isn't obvious in the pricing calculator (unless you read closely) is that when you switch one of your Azure Websites to a Reserved Instance, all of your other shared sites within that datacenter jump into that reserved VM. You can actually run up to 100 (!) websites in that VM instance, and this ends up saving you money.
Aside: It's nice also that these websites are 'in' that one VM, but I don't need to manage or ever think about that VM. Azure Websites sits on top of the VM and it's automatically updated and managed for me. I deploy to some of these websites with Git, some with Web Deploy, and some I update via FTP. I can also scale the VM dynamically and all the Websites get the benefit.
Think about it this way, running 1 Small VM on Azure with up to 100 websites (again, I have 10 today)...
...is cheaper than running those same 10 websites in Shared Mode.
This pricing is as of March 30, 2013 for those of you in the future.
The point here being, you can squeeze a LOT out of a small VM, and I plan to keep squeezing, caching, and optimizing as much as I can so I can pay as little as possible in the Cloud. If I can get this VM to do less, then I will be able to run more sites on it and save money.
The result of all this is that I'm starting to code differently now that I can see the pennies add up based on specific changes.
I've been spending the evenings and weekends lately redesigning the blog and the Hanselminutes podcast site. I hadn't realized how cheesy looking the podcast site was all these years. I'd like to get the show expanded to a wider audience as I feel that listenership has kind of flattened lately. I am in the process of adding faces for ALL 360+ shows going back 6 years.
I also wanted a nicer in-browser audio experience so I assumed I'd just drop in the audio tag and be done, right?
The HTML5 Audio tag is wonderful, right? Just works. This is the dream:
<audio id="audioplayer" preload="metadata" type="audio/mp3">
    <source src="http://s3.amazonaws.com/hanselminutes/hanselminutes_0363.mp3" type="audio/mp3"/>
    Your browser doesn't support the HTML audio tag. Be sad.
</audio>
Ya, Firefox currently doesn't support MP3 audio, so the player just flashes once, then disappears. Firefox will support MP3s in audio soon, though, by using the underlying operating system to play the stream rather than its own embedded code.
In Firefox 20 (the beta channel) on Windows 7 and above, you can test MP3 Audio support by turning on the preference media.windows-media-foundation.enabled in about:config.
The part I was disappointed in was more of an HTML5 specification issue. Notice that while I have fallback text present, I don't see it in Firefox. That's because fallback elements are only used if your browser doesn't support the audio tag at all.
It doesn't do what I would expect at all. What I want is "Can you support any of these audio sources? No? Fallback." This seems intuitive to me.
I talked to Chris Double via Christian Heilmann at Mozilla and he said "You'd need to raise the issue with WHATWG/W3C. It's been debated before in the past. " Indeed it has. From Oct 2009, more people saying that it's not intuitive to fall back in this way:
I expected (incorrectly, in this case) that if I only produced one source element (an MP4), Firefox would drop down to use the fallback content, as it does if I include an object element for a format not supported (for example, if I include a QuickTime object and QT is not installed, the user sees fallback content). As far as I can see, the only option in this situation is to rely on Javascript and the video element's canPlayType() function. - Kit Grose
This lack of an intuitive fallback means that I can't make an audio player that works everywhere using just HTML. I have to use JavaScript, which is a bummer for such a fundamental scenario.
Getting HTML5 audio to fall back correctly in all browsers
Instead you have to make an audio tag dynamically, then interrogate the tag. This applies to both audio and video tags. I ended up using some code from my friend Matt Coneybeare.
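The detection boils down to creating an audio element and asking canPlayType, which answers "", "maybe", or "probably" rather than a simple true/false. A sketch of the idea (the supportsMp3 helper name is mine, not Matt's):

```javascript
// canPlayType returns "", "maybe", or "probably"; "" (and the legacy "no") mean unsupported.
// Hypothetical helper; in the browser you'd pass document.createElement('audio').
function supportsMp3(audioTag) {
  return !!(audioTag.canPlayType) &&
         audioTag.canPlayType('audio/mpeg') !== 'no' &&
         audioTag.canPlayType('audio/mpeg') !== '';
}

// If the answer is false, embed the Flash fallback instead:
// if (!supportsMp3(document.createElement('audio'))) {
//   AudioPlayer.embed("audioplayer_1", { soundFile: "your.mp3" });
// }
```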
The AudioPlayer.embed at the end there is the WordPress AudioPlayer in standalone form. This way on Firefox I get the flash player since it answered false to canPlayType.
A Responsive and Touch-Friendly Audio Player in HTML5
However, the default audio player made by the <audio> tag is kind of lame, and I'd like it to better support touch, look great on tablets, etc. For this, I'll turn to Osvaldas Valutis's AudioPlayer. It's a nice little jQuery plugin that replaces the <audio> element with a lovely chunk of HTML. Since you can't actually style the HTML5 <audio> element, people just hide it, recreate it, then broker calls over to the hidden-but-still-working audio element.
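That hide-and-broker trick is worth seeing in miniature. A bare-bones sketch of the pattern (hypothetical wirePlayButton helper, not Osvaldas' actual plugin code): the styled button is what the user sees; the hidden native element does the playing.

```javascript
// Sketch: custom UI brokering calls to a hidden native <audio> element.
// audioEl is the hidden <audio>; buttonEl is the styled control the user actually sees.
function wirePlayButton(audioEl, buttonEl) {
  buttonEl.addEventListener('click', function () {
    if (audioEl.paused) {
      audioEl.play(); // the hidden element does the real work
      buttonEl.textContent = 'Pause';
    } else {
      audioEl.pause();
      buttonEl.textContent = 'Play';
    }
  });
}
```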
This plugin, along with a little CSS styling of its default colors, gives me a nice audio player that looks the same and works everywhere. Except Firefox 19/20, that is, until the next version of Firefox answers true to canPlayType; then it should just start working! Until then, it's the Flash fallback player, which works nicely as well.
The other problem is the QuickTime plugin that most Firefox users have installed. When styling with Osvaldas' AudioPlayer, the JavaScript interrogation would cause Firefox to prompt folks to install QuickTime in some cases if it's not there, and it still doesn't work if it is installed.
I ended up modifying Matt's detection a little to work with Osvaldas' styling. I realize the code could be more dynamic with fewer elements, but this was easier for me to read.
First, try the audio tag. Works? Great, style it with audioPlayer();
Can't do MP3 audio? Dynamically make a Flash player in that <p> element, and hide the audio player (likely not needed).
Unfortunately for readability, there's the ".audioPlayer" jQuery plugin that styles the HTML and there's the "AudioPlayer" flash embed. They are different but named the same. I didn't change them. ;)
<audio id="audioplayer" preload="auto" controls style="width:100%;">
    <source src="your.mp3" type="audio/mp3">
    Your browser doesn't support the HTML audio tag. You can still download the show, though!
</audio>
<p id="audioplayer_1"></p>
<script type="text/javascript">
    var audioTag = document.createElement('audio');
    /* Do we not support MP3 audio? If not, dynamically make a Flash SWF player. */
    if (!(!!(audioTag.canPlayType) && ("no" != audioTag.canPlayType("audio/mpeg")) && ("" != audioTag.canPlayType("audio/mpeg")))) {
        AudioPlayer.embed("audioplayer_1", { soundFile: "your.mp3", transparentpagebg: "yes" });
        $('#audioplayer').hide();
    }
    else {
        /* Ok, we do support MP3 audio; style the audio tag into a touch-friendly player. */
        /* If we didn't do the "is MP3 supported" check above, this call would prompt Firefox to install QuickTime! */
        $('#audioplayer').audioPlayer();
    }
</script>
All in all, it works pretty well so far.
ODD BUG: Chrome does seem to have some kind of hang where this audio player gets blocked while the comments load on my site. Any JavaScript experts want to weigh in? If you load a page - like this one - and hit play before the page is loaded, the audio doesn't play. This only happens in Chrome. Thoughts?
While you're here, check out the new http://hanselminutes.com and consider subscribing! It's "Fresh Air for Developers."