
Streaming Diagnostics Trace Logging from the Azure Command Line (plus Glimpse!)

April 5, '13 - Posted in Azure | Open Source
[Screenshot: Streaming logs from Azure. That's insane!]

I’ve long said when in doubt, turn on tracing. Sometimes "got here"-debugging is a great tool. I tend to use System.Diagnostics.Trace a lot in my code. Then I'll use ELMAH or Glimpse to get more insight.

Lately though, I've been doing a lot of Azure sites and have been wanting to get at trace data, sometimes at the Azure command line.

I'll do this to deploy (or deploy from Visual Studio):

azure site create mysite --git
git add .
git commit -m "initial deploy"
git push azure master

Then later, if I want to restart, start, or stop the site, I can certainly

azure site restart mysite

But I was talking to one of the devs a while back and said I really wanted

azure site log tail mysite

And they made it! Check this out. You can try it right now.
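
If you don't have the azure command line tool yet, it ships with the Azure SDK downloads; if I remember the package name right, you can also grab the cross-platform version via npm:

npm install azure-cli -g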

Add Tracing to your App

First, make an app that has some tracing. Here's mine. Any ASP.NET app is fine: MVC, Web Forms, or Web Pages, doesn't matter. Note the Trace calls.

public class HomeController : Controller
{
    public ActionResult Index()
    {
        ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application.";
        System.Diagnostics.Trace.TraceError("ZOMG THIS IS BAD");
        return View();
    }

    public ActionResult About()
    {
        ViewBag.Message = "Your app description page.";
        System.Diagnostics.Trace.TraceInformation("Just chillin.");
        return View();
    }
}

Then, upload it to Azure. I did a Publish directly from VS in this case: just right-click, Publish, and import the publish profile that you download from the portal. You can publish however you like.

[Screenshot: Download Publish Profile]

Local Tracing with Trace.axd

You likely know that you can enable tracing locally with trace.axd in your ASP.NET app (MVC apps included) by adding a trace listener to your web.config:

<system.diagnostics>
  <trace>
    <listeners>
      <add name="WebPageTraceListener"
           type="System.Web.WebPageTraceListener, System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"/>
    </listeners>
  </trace>
</system.diagnostics>

So if I visit trace.axd locally, I see my traces:

[Screenshot: Tracing shown via trace.axd]

If you really wanted this available remotely, you could say so as well:

<trace enabled="true" writeToDiagnosticsTrace="true" localOnly="false" mostRecent="true" pageOutput="false" />
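
For context, that <trace> element lives inside the <system.web> section of web.config, alongside your other ASP.NET settings:

<system.web>
  <trace enabled="true" writeToDiagnosticsTrace="true" localOnly="false" mostRecent="true" pageOutput="false" />
</system.web>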

Streaming Logs from the Azure Command Line

When my app is in Azure, I can get to the tracing info as well. From the management portal, I can see where the log files are, right?

[Screenshot: The locations of my logging files]

And I can FTP in and see them, as I always could. Notice I'm using Explorer to FTP in; I can just copy-paste the URL into Explorer itself, then enter my deployment credentials.

[Screenshot: FTPing in and getting the logs with Explorer. Does anyone do that anymore?]

I can also do this with my favorite FTP app, or the browser. The tracing files live inside the Application folder.

From the command line, I can do this, and the logs are streamed to me.

C:\>azure site log tail mysite
info: Executing command site log tail
2013-04-05T19:45:10 Welcome, you are now connected to log-streaming service.
2013-04-05T19:45:13 PID[2084] Error ZOMG THIS IS BAD

This works with both .NET and node.js apps, by the way. All logs written to the LogFiles folder can be streamed in real time. The application trace logs collected under the LogFiles/Application folder are streamed by default, and you can also get at IIS logs written to the LogFiles/Http folder. Any files created in a custom folder, e.g. LogFiles/<Custom>, will have their contents streamed as well.
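
As a sketch of that last point, here's roughly how you might write to a custom folder yourself. I'm assuming the HOME environment variable on an Azure Websites instance points at the root that contains LogFiles, and the folder and file names here are mine:

// A minimal sketch, assuming Azure Websites exposes a HOME environment
// variable pointing at the site root that contains LogFiles.
using System;
using System.IO;

public static class CustomLog
{
    public static void Write(string message)
    {
        var home = Environment.GetEnvironmentVariable("HOME") ?? ".";
        var dir = Path.Combine(home, "LogFiles", "Custom");
        Directory.CreateDirectory(dir);

        // Anything appended here should show up in the streamed output.
        File.AppendAllText(Path.Combine(dir, "app.log"),
            DateTime.UtcNow.ToString("o") + " " + message + Environment.NewLine);
    }
}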

I can also filter for a specific string with --filter, so:

C:\>azure site log tail loggingtest --filter ZOMG
info: Executing command site log tail
2013-04-05T19:45:10 Welcome, you are now connected to log-streaming service.
2013-04-05T19:45:13 PID[2084] Error ZOMG THIS IS BAD

I can also turn on Web Server Logging:

[Screenshot: Turning on Web Server Logging]

If you are using node.js, you'll need to turn on logging in your iisnode.yml, which ships with logging disabled by default:
# For security reasons, logging, dev errors, and debugging
# should be disabled in production deployments:
loggingEnabled: false
debuggingEnabled: false
devErrorsEnabled: false
node_env: production
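
So for streaming, flip that first switch:

loggingEnabled: true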

And stream the raw IIS logs as well!

C:\>azure site log tail loggingtest -p http
info: Executing command site log tail
2013-04-05T20:03:59 Welcome, you are now connected to log-streaming service.
2013-04-05 20:04:15 LOGGINGTEST GET / X-ARR-LOG-ID=5a267b3f-6c0e-4a1d-9cb6-d872e
31a2f2e 80 - 166.147.88.43 Mozilla/5.0+(Windows+NT+6.2;+WOW64)+AppleWebKit/537.3
1+(KHTML,+like+Gecko)+Chrome/26.0.1410.43+Safari/537.31 ARRAffinity=edd1561bc28b
b0ea9133780b878994b30ed4697656295364ebc8aadc14f54d2;+WAWebSiteSID=57051e6cd07a4

I can also just download the logs directly to disk from the command line.

C:\>azure site log download loggingtest
info: Executing command site log download
+ Downloading diagnostic log
info: Writing to diagnostics.zip
info: site log download command OK

This feature is in Azure today, and in a few days the UI will appear in the management portal as well. It will look like this. The best part of this UI is that it lets you turn logging on and off and change the logging level without recycling the app domain.

Changing the web.config causes an app restart. Since you often want to change your log levels without a restart, these Azure-specific trace settings are stored in /site/diagnostics/settings.json within your instance. You can FTP in and take a look if you like.

Azure will use your existing trace settings from web.config unless these overriding settings exist.

[Screenshot: The new Application Diagnostics logging switch]

Remember, you can view these streamed logs on the client using Windows Azure PowerShell (Windows) or the Windows Azure Cross-Platform Command Line Interface (Windows, Mac, and Linux).

Things to be aware of

Turning logging on enables it for only 12 hours; you don't usually want logs on forever. Conveniently, if you connect a streaming client, logging gets auto-enabled.

The defaults split log files at 128 KB, keep your application logs under 1 MB, and keep the whole logs folder under 30 MB. If you need more, you can override some advanced settings directly in the portal.

Here I'm setting the log file splits to 10k and the max Application log to 5MB.

[Screenshot: Overriding configuration settings in the Azure Portal]

Here are some advanced settings you can override; a command-line sketch for setting one follows the list:

  • DIAGNOSTICS_LASTRESORTFILE - "logging-errors.txt"
    • The name (or relative path to the LogDirectory) of the file where internal errors are logged, for troubleshooting the listener.
  • DIAGNOSTICS_LOGGINGSETTINGSFILE - "..\diagnostics\settings.json"
    • The settings file, relative to the web app root.
  • DIAGNOSTICS_TEXTTRACELOGDIRECTORY - "..\..\LogFiles\Application"
    • The log folder, relative to the web app root.
  • DIAGNOSTICS_TEXTTRACEMAXLOGFILESIZEBYTES - 128 * 1024 (bytes)
    • Default: 128 KB log file
  • DIAGNOSTICS_TEXTTRACEMAXLOGFOLDERSIZEBYTES - 1024 * 1024 (bytes)
    • Default: 1 MB Application Folder (30 MB entire Logs Folder)
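
Since these behave like app settings, you should be able to set them from the command line too. A hedged sketch; the exact config subcommand may differ across CLI versions:

C:\>azure site config add DIAGNOSTICS_TEXTTRACEMAXLOGFILESIZEBYTES=10240 loggingtest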

In the future, I expect we'll see easy ways to put logs in Azure table storage as well as command line querying by time, pid, etc. It would also be nice to be able to get to these logs from inside of Visual Studio.

Routing More Data to Tracing with Glimpse

If you haven't used Glimpse, you're in for a treat; I'll post again about Glimpse next week. Glimpse is a client-side debugging framework for your web app.

I used NuGet to bring in "Glimpse.Mvc4". (Be sure to get the right one for you, like Glimpse.Mvc3 or Glimpse.EF5; check out http://getglimpse.com for more details.)

Glimpse doesn't do anything until you turn it on. Locally I hit http://localhost:xxxx/Glimpse.axd and turn it on. Now, I visit the Trace tab and the Trace from earlier is there.

[Screenshot: There's my tracing in the Glimpse Trace tab]

But if I go to the Timeline Tab, I get way more information, including all the ASP.NET events that are interesting to me. These "bracketing" events about befores and afters could be super useful if they were routed to System.Diagnostics.Trace.

[Screenshot: Holy crap, that Glimpse Timeline is full of good debugging info]

How do I get this timeline view information routed to Tracing? Easy. I'll watch the Glimpse Timeline and route!

using Glimpse.Core.Extensibility;
using Glimpse.Core.Message;

public class TimelineTracer : IInspector
{
    public void Setup(IInspectorContext context)
    {
        // Listen in on every timeline message Glimpse publishes.
        context.MessageBroker.Subscribe<ITimelineMessage>(TraceMessage);
    }

    private void TraceMessage(ITimelineMessage message)
    {
        // TotalMilliseconds (not Milliseconds, which wraps at one second)
        // keeps the offsets correct for longer requests.
        var output = string.Format(
            "{0} - {1} ms from beginning of request. Took {2} ms to execute.",
            message.EventName,
            (int)message.Offset.TotalMilliseconds,
            (int)message.Duration.TotalMilliseconds);

        System.Diagnostics.Trace.TraceInformation(output);
    }
}

Glimpse picks up IInspector implementations on its own (no registration needed, as far as I can tell), so now I get lots of great Glimpse-supplied timing info in my Trace log as well that I can stream from the command line.

C:\>azure site log tail loggingtest
info: Executing command site log tail
2013-04-05T20:22:51 Welcome, you are now connected to log-streaming service.
2013-04-05T20:23:32 PID[1992] Information Start Request - 0 ms from beginning of request. Took 0 ms to execute.
2013-04-05T20:23:32 PID[1992] Information Authorization - Home:Index - 224 ms from beginning of request. Took 0 ms to execute.
2013-04-05T20:23:32 PID[1992] Information Action:Executing - Home:Index - 239 ms from beginning of request. Took 0 ms to execute.
2013-04-05T20:23:32 PID[1992] Error ZOMG THIS IS BAD
2013-04-05T20:23:32 PID[1992] Information InvokeActionMethod - Home:Index - 289 ms from beginning of request. Took 29 ms to execute.
2013-04-05T20:23:32 PID[1992] Information Action:Executed - Home:Index - 320 ms from beginning of request. Took 0 ms to execute.

I'm pretty stoked that it was so easy to get subsystems like ASP.NET, Glimpse, and now Web Sites on Azure to work together and share information.

I'm not sure which way I'll finally end up using them, but I'm definitely planning on instrumenting my code and calling System.Diagnostics.Trace more often since I can so easily route the results.

Finally, it's worth mentioning, in case you didn't know, that the entire Azure SDK is open source and calls web services on the backend that you can call yourself. If you dig this log streaming feature, did you know you could have watched it get checked in from a Pull Request 3 months ago? Madness. It's a kinder, gentler Death Star over here at Microsoft.


Hanselman's Newsletter of Wonderful Things: March 19th, 2013

April 4, '13 - Posted in Newsletter

I have a "whenever I get around to doing it" Newsletter of Wonderful Things. Why a newsletter? I dunno. It seems more personal somehow. Fight me.

You can view all the previous newsletters here. You can sign up for the Newsletter of Wonderful Things here, or just wait and get them later on the blog, which hopefully you have subscribed to.

Here's the LAST newsletter, delay-posted as I do.


Hi Interfriends,

Thanks again for signing up for this experiment. Here's some interesting things I've come upon this week. If you forwarded this (or if it was forwarded to you) a reminder: You can sign up at http://hanselman.com/newsletter and the archive of all previous Newsletters is here.

Scott Hanselman

(BTW, since you *love* email you can subscribe to my blog via email here: http://feeds.hanselman.com/ScottHanselman. DO IT!)

P.P.S. You know you can forward this to your friends, right?


Pinching pennies when scaling in The Cloud

March 30, '13 - Posted in Azure | Javascript
[Screenshot: The new Hanselminutes site]

Running a site in the cloud and paying for CPU time with pennies and fractions of pennies is a fascinating way to profile your app. I mean, we all should be profiling our apps, right? Really paying attention to what the CPU does, how many database connections are made, and what memory usage is like. But we don't. If that code doesn't affect your pocketbook directly, you're less likely to bother.

Interestingly, I find myself performing these same optimizations while driving my hybrid car or dealing with my smartphone's limited data plan. When resources are truly scarce and (most importantly) money is on the line, one finds ways - both clever and obvious - to optimize and cut costs.

Sloppy Code in the Cloud Costs Money

I have an MSDN subscription, which includes quite a bit of free Azure cloud time. Make sure you've turned this on if you have MSDN yourself. This subscription is under my personal account and I pay if it goes over (I don't get a free pass just because I work for the cloud group), so I want to pay as little as possible if I can.

One of the classic and obvious rules of scaling a system is "do less, and you can do more of it." When you apply this idea to the money you're paying to host your system in the cloud, you want to do as little as possible to stretch your dollar. You pay pennies for CPU time, pennies for bandwidth and pennies for database access - but it adds up. If you're opening a database connection in a loop or transferring more data than is necessary, you'll pay.

I recently worked with designer Jin Yang to redesign this blog, make a new home page, and redesign the Hanselminutes Podcast site. In the process we had the idea for a more personal archives page that makes the eponymous show less about me and visually more about the guests. I'm in the process of going through 360+ shows and getting pictures of the guests for each one.

I launched the site and I think it looks great. However, I noticed immediately that the Data Out was REALLY high compared to the old site. I host the MP3s elsewhere, but the images put out almost 500 megs in just hours after the new site was launched.

You can guess from the figure when I launched the new site.

[Screenshot: I used way too much bandwidth this day]

I *rarely* have to pay for extra bandwidth, but this wasn't looking good. One of the nice things about Azure is that you can view your bill any day, not just at the end of the month. I could see that I was inching up on the outgoing bandwidth. At this rate, I was going to have to pay extra at the end of the month.

[Screenshot: Almost out of outbound bandwidth]

I thought about it, then realized, duh, I'm loading 360+ images every time someone hits the archives page. It's obvious, of course, in retrospect. But remember that I moved my site into the cloud for two reasons.

  • Save money
  • Scale quickly when I need to

I added caching for all the database calls, which was trivial, but thought about the images for a while. I could add paging, or I could make a "just in time" infinite scroll. I dislike paging in this instance, as I think folks like to Ctrl-F on a large page when the dataset isn't overwhelmingly large.
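
For the curious, the database caching was along these lines. This is a minimal sketch with System.Runtime.Caching, and GetAllShows is a hypothetical stand-in for the real data call:

using System;
using System.Runtime.Caching;

public static class ShowCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    // Return the cached value if present; otherwise fetch it and cache it.
    public static T GetOrAdd<T>(string key, Func<T> fetch, TimeSpan ttl) where T : class
    {
        var cached = Cache.Get(key) as T;
        if (cached != null)
            return cached;

        var value = fetch();
        Cache.Set(key, value, DateTimeOffset.UtcNow.Add(ttl));
        return value;
    }
}

// Usage: var shows = ShowCache.GetOrAdd("shows", () => db.GetAllShows(), TimeSpan.FromMinutes(10));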

ASIDE: It's on my list to add "topic tagging" and client-side sorting and filtering for the shows on Hanselminutes. I think this will be a nice addition to the back catalog. I'm also looking at better ways to utilize the growing transcript library. Any thoughts on that?

The easy solution was to lazy load the images as the user scrolls, thereby only using bandwidth for the images you see. I looked at Mika Tuupola's jQuery Lazy Load plugin as well as a number of other similar scripts. There's also Luis Almeida's lightweight Unveil if you want fewer bells and whistles. I ended up using the standard Lazy Load.

Implementation was trivial. Add the script:

<script src="//ajax.aspnetcdn.com/ajax/jQuery/jquery-1.7.2.min.js" type="text/javascript"></script>
<script src="/jquery.lazyload.min.js" type="text/javascript"></script>

Then, give the lazy load plugin a selector. You can say "just images in this div" or "just images with this class" however you like. I chose to do this:

<script>
  $(function() {
    $("img.lazy").lazyload({effect: "fadeIn"});
  });
</script>

The most important part is that the img element you generate should include a TINY image for the src. That src will always be loaded, no matter what, since that's what browsers do; don't believe any lazy loading solution that says otherwise. I use the 1px gray image from the GitHub repo. Also, if you can, set the final height and width of the image you're loading to ease layout.

<a href="/363/html5-javascript-chrome-and-the-web-platform-with-paul-irish" class="showCard">
  <img data-original="/images/shows/363.jpg" class="lazy" src="/images/grey.gif" width="212" height="212" alt="HTML5, JavaScript, Chrome and the Web Platform with Paul Irish" />
  <span class="shownumber">363</span>
  <div class="overlay">HTML5, JavaScript, Chrome and the Web Platform with Paul Irish</div>
</a>
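
If you're generating those cards server-side, a tiny helper keeps the pattern consistent. This is a hypothetical sketch, not the site's actual code; the show number and title parameters are mine:

using System.Web;

public static class LazyImageHelper
{
    // Emit the lazy-load pattern: a tiny placeholder in src and the
    // real image in data-original.
    public static string LazyImage(int showNumber, string title)
    {
        return string.Format(
            "<img data-original=\"/images/shows/{0}.jpg\" class=\"lazy\" " +
            "src=\"/images/grey.gif\" width=\"212\" height=\"212\" alt=\"{1}\" />",
            showNumber,
            HttpUtility.HtmlEncode(title));
    }
}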

The image you're ultimately going to load is in the data-original attribute. It will be loaded when the area where the image is supposed to be enters the current viewport. You can set the threshold and cause the images to load a little earlier if you prefer, like perhaps 200 pixels before they're visible.

$("img.lazy").lazyload({ threshold : 200 });

After I added this change, I let it run for a day and it chilled out my server completely. There isn't that intense burst of 300+ requests for images and bandwidth is way down.


10 Websites in 1 VM vs 10 Websites in Shared Mode

I'm currently running 10 websites on Azure. One of the things that isn't obvious in the pricing calculator (unless you read closely) is that when you switch one of your Azure Websites to a Reserved Instance, all of your other shared sites within that datacenter jump into that reserved VM. You can actually run up to 100 (!) websites in that VM instance, and this ends up saving you money.

Aside: It's nice also that these websites are 'in' that one VM, but I don't need to manage or ever think about that VM. Azure Websites sits on top of the VM and it's automatically updated and managed for me. I deploy to some of these websites with Git, some with Web Deploy, and some I update via FTP. I can also scale the VM dynamically and all the Websites get the benefit.

Think about it this way: running 1 small VM on Azure with up to 100 websites (again, I have 10 today)...

[Screenshot: 1 VM for $57.60]

...is cheaper than running those same 10 websites in Shared Mode.

[Screenshot: 10 shared websites for $93.60]

This pricing is as of March 30, 2013, for those of you in the future. Doing the arithmetic over a 720-hour month, that works out to about $0.08/hour for the small VM versus about $0.013/hour for each shared website, so ten shared sites at $93.60 cost more than the one VM at $57.60.

The point here being, you can squeeze a LOT out of a small VM, and I plan to keep squeezing, caching, and optimizing as much as I can so I can pay as little as possible in the Cloud. If I can get this VM to do less, then I will be able to run more sites on it and save money.

The result of all this is that I'm starting to code differently, now that I can see the pennies add up based on specific changes.


Fallback HTML5 audio tags for a simple MP3 podcast are harder than you'd think

March 26, '13 - Posted in HTML5 | Javascript

I've been spending the evenings and weekends lately redesigning the blog and the Hanselminutes podcast site. I hadn't realized how cheesy-looking the podcast site was all these years. I'd like to get the show expanded to a wider audience, as I feel listenership has kind of flattened lately. I am in the process of adding faces for ALL 360+ shows going back 6 years.

A big thanks to Lynsey Smith from Portland Girl Geek Dinners, by the way, for her hard work in finding pics for me!

I also wanted a nicer in-browser audio experience so I assumed I'd just drop in the audio tag and be done, right?

The HTML5 Audio tag is wonderful, right? Just works. This is the dream:

<audio id="audioplayer" preload="metadata" type="audio/mp3">
  <source src="http://s3.amazonaws.com/hanselminutes/hanselminutes_0363.mp3" type="audio/mp3"/>
  Your browser doesn't support the HTML audio tag. Be sad.
</audio>

You can try that live at http://jsfiddle.net/CdxbX/ if you like.

Except it's not nearly that easy.

Here's what you'll see on IE9+:

[Screenshot: IE's default audio controls]

Here's Chrome:

[Screenshot: Chrome's default audio controls]

Here's Firefox, version 19:

Ya, Firefox currently doesn't support MP3 audio so it just flashes once then disappears. Firefox will support MP3s in audio soon though by using the underlying operating system to play the stream rather than its own embedded code.

In Firefox 20 (the beta channel) on Windows 7 and above, you can test MP3 Audio support by turning on the preference media.windows-media-foundation.enabled in about:config.

The part I was disappointed in was more of an HTML5 specification issue. Notice that while I have fallback text present, I don't see it in Firefox. That's because fallback elements are only used if your browser doesn't support the audio tag at all.

It doesn't do what I would expect at all. What I want is "Can you support any of these audio sources? No? Fallback." This seems intuitive to me.

I talked to Chris Double via Christian Heilmann at Mozilla and he said "You'd need to raise the issue with WHATWG/W3C. It's been debated before in the past." Indeed it has. From Oct 2009, more people saying that it's not intuitive to fall back in this way:

I expected (incorrectly, in this case) that if I only produced one source element (an MP4), Firefox would drop down to use the fallback content, as it does if I include an object element for a format not supported (for example, if I include a QuickTime object and QT is not installed, the user sees fallback content). As far as I can see, the only option in this situation is to rely on Javascript and the video element's canPlayType() function. - Kit Grose

This lack of an intuitive fallback means that I can't make an audio player that works everywhere using just HTML. I have to use JavaScript, which is a bummer for such a fundamental scenario.

Getting HTML5 audio to fall back correctly in all browsers

Instead, you have to make an audio tag dynamically and then interrogate it. This applies to both audio and video tags. I ended up using some code from my friend Matt Coneybeare.

<audio id="audioplayer" preload controls loop>
  <source src="audio.mp3">
</audio>
<script type="text/javascript">
  var audioTag = document.createElement('audio');
  // canPlayType returns "", "maybe", or "probably" (and "no" in some very
  // old implementations), so treat "" and "no" as "can't play MP3".
  if (!(!!(audioTag.canPlayType) && ("no" != audioTag.canPlayType("audio/mpeg")) && ("" != audioTag.canPlayType("audio/mpeg")))) {
      AudioPlayer.embed("audioplayer", {soundFile: "audio.mp3"});
  }
</script>

The AudioPlayer.embed at the end there is the WordPress AudioPlayer in standalone form. This way, on Firefox, I get the Flash player, since canPlayType reported it can't play MP3.

[Screenshot: Flash audio player in Firefox]

A Responsive and Touch-Friendly Audio Player in HTML5

However, the default audio player made by the <audio> tag is kind of lame, and I'd like it to better support touch, look great on tablets, etc. For this, I'll turn to Osvaldas Valutis's AudioPlayer. It's a nice little jQuery plugin that replaces the <audio> element with a lovely chunk of HTML. Since you can't actually style the HTML5 <audio> element, people just hide it, recreate it, and then broker calls over to the hidden-but-still-working audio element.

This plugin, along with a little CSS styling of its default colors, gives me a nice audio player that looks the same and works everywhere - except Firefox 19/20. Once the next version of Firefox answers true to canPlayType, it should just start working! Until then, it's the Flash fallback player, which works nicely as well.


The other problem is the QuickTime plugin that most Firefox users have installed. When styling with Osvaldas' AudioPlayer, the JavaScript interrogation would cause Firefox to prompt folks to install QuickTime in some cases if it's not there, and it still doesn't work if it is installed.

I ended up modifying Matt's detection a little to work with Osvaldas' styling. I realize the code could be more dynamic with fewer elements, but this was easier for me to read.

  • First, try the audio tag. Works? Great, style it with audioPlayer();
  • Can't do MP3 audio? Dynamically make a Flash player inside that <p> element and hide the audio element (likely not needed).

Unfortunately for readability, there's the ".audioPlayer" jQuery plugin that styles the HTML and there's the "AudioPlayer" Flash embed. They are different but named the same. I didn't change them. ;)

<audio id="audioplayer" preload="auto" controls style="width:100%;">
  <source src="your.mp3" type="audio/mp3">
  Your browser doesn't support the HTML audio tag. You can still download the show, though!
</audio>
<p id="audioplayer_1"></p>
<script type="text/javascript">
  var audioTag = document.createElement('audio');
  /* Do we not support MP3 audio? If not, dynamically make a Flash SWF player. */
  if (!(!!(audioTag.canPlayType) && ("no" != audioTag.canPlayType("audio/mpeg")) && ("" != audioTag.canPlayType("audio/mpeg")))) {
      AudioPlayer.embed("audioplayer_1", {soundFile: "your.mp3", transparentpagebg: "yes"});
      $('#audioplayer').hide();
  }
  else { /* OK, we do support MP3 audio; style the audio tag into a touch-friendly player. */
      /* If we didn't do the "if mp3 supported" check above, this call would prompt Firefox to install QuickTime! */
      $('#audioplayer').audioPlayer();
  }
</script>

All in all, it works pretty well so far.

ODD BUG: Chrome does seem to have some kind of hang where this audio player gets blocked while the comments load on my site. Any JavaScript experts want to weigh in? If you load a page - like this one - and hit play before the page is loaded, the audio doesn't play. This only happens in Chrome. Thoughts?

While you're here, check out the new http://hanselminutes.com and consider subscribing! It's "Fresh Air for Developers."


Changing ASP.NET web.config inheritance when mixing versions of child applications

March 26, '13 - Posted in ASP.NET | Bugs | IIS

[Screenshot: Mixed application pools in IIS]

My blog and all the sites in and around it are a mix of .NET 2.0, 3.5 and 4. This blog engine is currently .NET 3.5 and runs at http://hanselman.com/blog, but the application at http://hanselman.com/ (the root) is .NET 4.

You can happily mix and match applications across .NET versions on a single IIS instance. You can see how mixed my system is in the screenshot at right.

However, things got messy when I changed the parent / application to .NET 4 but kept the child /blog as .NET 3.5 (the 2.0 CLR). I got lots of errors like:

  • Unrecognized attribute ‘targetFramework’. Note that attribute names are case-sensitive.

The targetFramework attribute was inherited from the root .NET 4 web.config in the Default Web Site root via ASP.NET configuration inheritance, and it confused the /blog .NET 2 application.

I didn't want to change the /blog application's web.config. I just wanted to stop it from inheriting the settings from the parent application. It turns out you can wrap whole sections in a location tag and then tell that scoped tag to prevent child applications from inheriting.

What you do is change the parent .NET 4 app's web.config to indicate its settings shouldn't flow down to the children, like the .NET 2/3.5 /blog app.

<location path="." inheritInChildApplications="false">
  <system.web>
    ...your system.web stuff goes here
  </system.web>
</location>

You can actually read about this in detail in the ASP.NET 4 "breaking changes" documentation. Of course YOU read those closely, don't you? ;)

I chose to change this setting for all of system.web, but you could do it on a per-section basis if you preferred.
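
For example, a per-section version might wrap only the piece that's causing trouble. A sketch, assuming compilation is the section you want to stop inheriting:

<location path="." inheritInChildApplications="false">
  <system.web>
    <compilation targetFramework="4.0" />
  </system.web>
</location>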

Hope this helps you!


Disclaimer: The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.