Scott Hanselman

Start a movement! Donate all your Google AdSense Revenue to Earthquake Relief

December 28, '04 Comments [5] Posted in Musings

Greg Hughes and I were talking about this idea. The power of blogging isn't citizen journalism, it's the power to start a movement.

Nick Bradbury is donating his profits to the Red Cross. Kudos, Nick. Let's ALL take our passive Google AdSense revenue for the year and donate it directly to earthquake relief. Mine so far is US$ [omitted due to Google policy] since I started ads in June. I'm sure hundreds of thousands, even millions, could be raised quickly in this manner.

To that end, let's pressure Google into allowing us to automatically donate our revenue from their side! Spread the word and trackback this link.

To me, spreading an idea like this is the power of blogging, more than citizen journalism.


More on Assembly Binding, Strong Naming, the GAC, Publisher Policy and Dynamically Loaded Assemblies

December 28, '04 Comments [0] Posted in ASP.NET | NCover

Certainly Suzanne Cook is the definitive source for details on Assembly.Load and Binding Contexts, a black art tantamount to voodoo that few people understand. Patrick and I have been hip-deep in it lately, and have discovered/uncovered/made clear to ourselves how some of this voodoo works. Here's an (annotated) writeup from Patrick that was sent out internally. Another great resource is Mike Gunderloy's article on Binding Policy in .NET.

  • Assemblies will only EVER be loaded from the GAC based on a full bind (name, version, and public key token).  A partial bind with name and public key token only WON’T load from the GAC. 
    • If you reference an assembly with VS.NET you're asking for a full bind. If you say Assembly.Load("foo") you're asking for a partial bind.
  • However, the way this usually works is…
    • You do a partial bind on assembly name, or name and public key token with Assembly.Load
    • Fusion (the code name for the Assembly Loader/Binder) starts walking the probing path looking for an assembly that matches the partial bind.
    • Counterintuitive: If it finds one while probing (the first one), it will then attempt to use the strong name of the one it found to do a full bind against the GAC.
    • If it’s in the GAC, that’s the one that gets loaded.
    • Any assemblies referenced by that loaded assembly will try to load from the GAC first, without going to the probing path, since the embedded references constitute a full bind.
    • If they aren’t found in the GAC, then it will start probing.
    • It’ll grab the first one it finds in the probing path.  If the versions don’t match, Fusion fails.  If they do match, Fusion loads that one.
    • So, if you specify a partial name, and the file is in the GAC, but not the probing path, the load fails, since there’s no way to do a full bind.  
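
To make the partial vs. full bind distinction concrete, here's a minimal sketch. The assembly name, version, and public key token are purely illustrative, not a real assembly:

using System.Reflection;

public class BindingDemo
{
    public static void Main()
    {
        // Partial bind: display name only (optionally plus public key token).
        // Fusion walks the probing path first; only if it finds a candidate
        // does it use that file's strong name to attempt a full bind against the GAC.
        Assembly partialBind = Assembly.Load("Foo.Bar");

        // Full bind: name, version, culture, and public key token are all supplied,
        // so Fusion can check the GAC directly before falling back to probing.
        Assembly fullBind = Assembly.Load(
            "Foo.Bar, Version=2.0.205.0, Culture=neutral, PublicKeyToken=59388ae2d2746794");
    }
}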

All this is mostly an issue for plugins that we load dynamically.  It shouldn't be an issue for compile-time dependencies, since they use full binds.  One way to make sure you get what you expect is to specify a full bind in your config files via an Assembly Qualified Name (AQN) like "Foo.Bar, Version=2.0.205.0, Culture=neutral, PublicKeyToken=59388ae2d2746794", and then do something like this:

public static object CreateInstance(string assemblyandtype)
{
    // Type.GetType with an assembly qualified name is a full bind,
    // so the GAC is checked without relying on the probing path.
    Type type = Type.GetType(assemblyandtype);
    if (type == null)
    {
        throw new TypeLoadException("Could not load type: " + assemblyandtype);
    }

    // Create an instance by invoking the type's default constructor.
    return type.InvokeMember(String.Empty, BindingFlags.CreateInstance, null, null, null);
}
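
To tie it together, a hypothetical config entry and call might look something like this; the "pluginType" key and the plugin's type name are made up for illustration:

// Assuming web.config contains an appSettings entry like:
//   <add key="pluginType"
//        value="Foo.Bar, Version=2.0.205.0, Culture=neutral, PublicKeyToken=59388ae2d2746794" />
string typeName = System.Configuration.ConfigurationSettings.AppSettings["pluginType"];
object plugin = CreateInstance(typeName);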

Thanks to Patrick for the writeup!


[OT] 22,000 dead, and This White Guy - Western Media and the Devaluing of Brown People

December 27, '04 Comments [19] Posted in Musings


This is a technical blog and I don't do this often, so forgive me ahead of time if you don't like my rants.

I'm looking at the home page of CNN.com.  There are currently over 22,000 brown people dead, and apparently this white guy. I feel for his family, but I feel more for the countless hundreds of thousands of displaced and suffering others.

I am tired of the American media (last night it was ABC News) spending five minutes on a massive natural disaster and then glossing over it once we're told "and no Americans were injured." Notice the text of this CNN blurb - 22,000 dead and 27 Western people - which details the counts of British, French, and Italians.

This is unspeakably ethnocentric and it makes me a little ill.  I'm not trying to be P.C. here, but these are humans, and whether it was a hundred Somali fishermen or this guy from Illinois, I expect more from a leading news organization. This is like a hometown newspaper concerned only with its native son away on a mission.

We must never forget that tomorrow isn't promised to us. One day there will be an earthquake off the coast of Oregon. No doubt that will get media coverage.

God help us all, but thanks for the time I've had.


Lutz Roeder's C# XML Documenter lives on in Travis Illig's CodeRush Plugin "CR_Documentor"

December 21, '04 Comments [0] Posted in ASP.NET | XML | CodeRush | Tools

Travis has rev'ed CR_Documentor to 1.1.0.1220, including these new features:

  • Has been updated for NDoc 1.3 tags
  • Provides the option for what level of "tag compatibility" to follow (Microsoft tags only or NDoc 1.3)
  • Provides the option for how to handle "unrecognized" tags
  • Has updated styles to match NDoc 1.3
  • Has been updated to work with CodeRush 1.1.6 / DXCore 1.1.8 or higher

If you're not familiar with this project, Travis talked to Lutz and informally "took over" the applet originally known as Documentor, which let a developer see what the compiled results of XML documentation comments would look like.  Travis has since extended it, documented it, and made it into a CodeRush plugin that runs in a tool window within Visual Studio. It gives you a real-time preview of your comments as you type.
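
If you haven't used them, XML documentation comments are the triple-slash comments above a member. A made-up example of the kind of thing CR_Documentor previews:

/// <summary>
/// Calculates the total price of an order, including tax.
/// </summary>
/// <param name="subtotal">The pre-tax order amount.</param>
/// <param name="taxRate">The tax rate as a decimal, e.g. 0.08 for 8%.</param>
/// <returns>The order total with tax applied.</returns>
public decimal CalculateTotal(decimal subtotal, decimal taxRate)
{
    return subtotal * (1 + taxRate);
}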

If you are trying to do extensive C# doc or if you use NDoc, this is a great tool for you. Thanks Travis!


ASP.NET Performance Tuning - Making dasBlog faster...

December 18, '04 Comments [2] Posted in ASP.NET | DasBlog | XML | Bugs | Tools

Greg Hughes and I stayed up late last night tuning and installing a custom build of dasBlog. If you remember, dasBlog is Clemens' rewrite/reimagining of Chris Anderson's BlogX codebase; it has been moved over to GotDotNet and is under the supervision of Omar Shahine.

ORCSWeb, my awesome ISP, and Scott Forsyth (ASP.NET MVP who works there) had complained to me that as my traffic increased, my website instance was being a poor citizen on the shared server. My site is on a server with something like a dozen other sites. While I'd survived slashdotting, my traffic lately has been getting big enough to bother the server.

ScottF had noticed that my blog had these unfortunate characteristics (remember these are bad):

  • CPU Threads that were taking minutes to complete their work
  • Caused 9,538,000 disk reads during a time period while another site on the same server with twice as many visitors had 47,000 reads.
  • My process was tied for CPU time with "system."
  • I used 2 hours, 20 minutes of CPU time one day. My nearest competitor had used only 20 seconds.
  • I was 2nd for disk reads, and 11th for disk writes (the writes weren't bad)
  • In a day, I surpassed even the backup process, which had been running for a WEEK.

These bullets, of course, are quite BAD. So, during my recent burst of creativity when I added a number of features to dasBlog including a comment spam solution, a referral spam solution, and an IP address blacklist, I did some formal performance work.

If you're familiar with dasBlog, I yanked the need for entryCache.xml, categoryCache.xml and blogData.xml, which were older BlogX holdovers, and moved them into thread-safe in-memory storage. I changed the EntryIDCache and other internal caches, and added output caching for RSS, Atom, and permalinks.
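
dasBlog's actual cache classes aren't reproduced here, but the general shape is a lock-protected in-memory dictionary along these lines; the class and member names are illustrative only, not dasBlog's real code:

using System.Collections;

// Illustrative sketch, not dasBlog's actual implementation: an in-memory
// cache that replaces repeated reads of entryCache.xml and friends.
public class InMemoryEntryCache
{
    private Hashtable entries = new Hashtable();
    private object syncRoot = new object();

    public object Get(string entryId)
    {
        lock (syncRoot)
        {
            return entries[entryId];   // null on a cache miss
        }
    }

    public void Put(string entryId, object entry)
    {
        lock (syncRoot)
        {
            entries[entryId] = entry;
        }
    }
}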

According to ScottF and the folks at ORCSWeb, based on their initial measurements, "from what I can tell today, this *is* 250% better. CPU used only 20 minutes [as opposed to nearly 2.5 hours] of time by the end of the day and disk IO was much less than normal." This is early, but we'll see if these numbers hold.

I seem to have a few other bugs to work out, so holler at me if the site's goofed, but otherwise I hope to get Omar to integrate these changes into his own great new stuff coming in dasBlog 1.7.

During this perf work, I used Perfmon, CLR Profiler and other tools, but mostly I thought. I literally just sat down and thought about it. I tried to understand the full call stack of a single request. Once you really know what's going on, and can visualize it, you're in a much better position to profile.

Since you are a technical group, here are a few tidbits I found during this process.

  • If some condition can allow you to avoid accesses to expensive resources and bail early, do. For this blog, if an entry isn't found (based on the GUID in the URL) in my cache, I now won't even look in the XML database. Additionally, I'll send a 404, use Response.SuppressContent and End the Response.
    if (WeblogEntryId.Length == 0) //example condition
    {
        Response.StatusCode = 404;
        Response.SuppressContent = true;
        Response.End();
        return null; //save us all the time
    }
  • Lock things only as long as needed and be smart about threading/locking (see the sketch after this list).
  • If you're serving content, caching even for a few seconds or a minute can save you time. Not caching is just wasting time. Certainly if I update a post, I can wait 60 seconds or so before it's seen updated on the site. However, if a post is hit hard, either by slashdot'ing or a DoS attack, caching for a minute will save mucho CPU.
    <%@ OutputCache Duration="60" VaryByParam="*" %>
    at the top of one of my pages will cache the page for a minute using all combinations of URL parameters. To be thorough (at the cost of more memory), one would add VaryByHeader for Accept-Language and Accept-Encoding, but this is handled in my base page.
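
On the locking bullet above, here's a minimal sketch of what "only as long as needed" means in practice. The cache, lock object, and LoadEntryFromDisk helper are hypothetical names for illustration, not dasBlog's actual code:

private Hashtable entryCache = new Hashtable();
private object cacheLock = new object();

public object GetEntry(string entryId)
{
    object entry;

    // Hold the lock only while touching the shared cache...
    lock (cacheLock)
    {
        entry = entryCache[entryId];
    }

    if (entry == null)
    {
        // ...and do the expensive work (disk I/O) outside of it.
        object loaded = LoadEntryFromDisk(entryId);   // hypothetical helper
        lock (cacheLock)
        {
            entryCache[entryId] = loaded;
        }
        entry = loaded;
    }

    return entry;
}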

 

 

About Scott

Scott Hanselman is a former professor, former Chief Architect in finance, now speaker, consultant, father, diabetic, and Microsoft employee. He is a failed stand-up comic, a cornrower, and a book author.


Disclaimer: The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.