Scott Hanselman

More on Assembly Binding, Strong Naming, the GAC, Publisher Policy and Dynamically Loaded Assemblies

December 27, 2004 · Posted in ASP.NET | NCover

Certainly Suzanne Cook is the definitive source for details on Assembly.Load and Binding Contexts, a black art tantamount to voodoo that few people understand. Patrick and I have been hip-deep in it lately, and have discovered/uncovered/made clear to ourselves how some of this voodoo works. Here's an annotated writeup from Patrick that was sent out internally. Another great resource is Mike Gunderloy's article on Binding Policy in .NET.

  • Assemblies will only EVER be loaded from the GAC based on a full bind (name, version, and public key token).  A partial bind with name and public key token only WON’T load from the GAC. 
    • If you reference an assembly with VS.NET, you're asking for a full bind. If you say Assembly.Load("foo"), you're asking for a partial bind (see the sketch after this list).
  • However, the way this usually works is…
    • You do a partial bind on assembly name, or name and public key token with Assembly.Load
    • Fusion (the code name for the Assembly Loader/Binder) starts walking the probing path looking for an assembly that matches the partial bind.
    • Counterintuitive: if Fusion finds one while probing (the first one it hits), it will then attempt to use the strong name of the one it found to do a full bind against the GAC.
    • If it’s in the GAC, that’s the one that gets loaded.
    • Any references of that loaded assembly will try to load from the GAC first, without going to the probing path, since the embedded references constitute a full bind.
    • If they aren’t found in the GAC, then it will start probing.
    • It’ll grab the first one it finds in the probing path.  If the versions don’t match, Fusion fails.  If they do match, Fusion loads that one.
    • So, if you specify a partial name, and the file is in the GAC, but not the probing path, the load fails, since there’s no way to do a full bind.  
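To make the distinction concrete, here's a minimal sketch of the two kinds of Assembly.Load calls (the Foo.Bar name and token are just the illustrative values used later in this post):

    using System.Reflection;

    // Partial bind: only the simple name is given, so Fusion starts by probing
    // the application base for Foo.Bar.dll rather than going to the GAC; if
    // probing finds nothing, the load fails even if the assembly is in the GAC.
    Assembly partialBind = Assembly.Load("Foo.Bar");

    // Full bind: name, version, culture, and public key token are all
    // specified, so the GAC is consulted directly.
    Assembly fullBind = Assembly.Load(
        "Foo.Bar, Version=2.0.205.0, Culture=neutral, PublicKeyToken=59388ae2d2746794");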

All this is mostly an issue for plugins that we load dynamically. It shouldn't be an issue for compile-time dependencies, since they use full binds. One way to make sure you get what you expect is to specify a full bind in your config files via an Assembly Qualified Name (AQN) like "Foo.Bar, Version=2.0.205.0, Culture=neutral, PublicKeyToken=59388ae2d2746794" and then do something like this:

    public static object CreateInstance(string assemblyandtype)
    {
        // Type.GetType honors the assembly-qualified name, so this requests a
        // full bind (name, version, culture, and public key token).
        Type type = Type.GetType(assemblyandtype);

        // CreateInstance through InvokeMember calls the type's default constructor.
        object instance = type.InvokeMember(String.Empty, BindingFlags.CreateInstance, null, null, null);
        return instance;
    }
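As a rough usage sketch (the "PluginType" appSettings key is a hypothetical name, not something defined anywhere in particular), the assembly-qualified name would typically come out of a config file rather than being hard-coded:

    // Hypothetical usage: keep the full assembly-qualified name in
    // web.config/app.config so deployment controls exactly which version gets
    // loaded. ConfigurationSettings is the .NET 1.x API for appSettings.
    string typeName = System.Configuration.ConfigurationSettings.AppSettings["PluginType"];
    object plugin = CreateInstance(typeName);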

Thanks to Patrick for the writeup!


[OT] 22,000 dead, and This White Guy - Western Media and the Devaluing of Brown People

December 27, 2004 · Posted in Musings

[Image: earthquake1.jpg]

This is a technical blog and I don't do this kind of post often, so forgive me ahead of time if you don't like my rants.

I'm looking at the home page of CNN.com.  There are currently over 22,000 brown people dead, and apparently this white guy. I feel for his family, but I feel more for the countless hundreds of thousands of displaced and suffering others.

I am tired of the American media (last night it was ABC News) who spend five minutes on a massive natural disaster and then gloss over it once we are told "and no Americans were injured." Notice the text of this CNN blurb: 22,000 dead, and 27 Western people, which details the counts of British, French, and Italian victims.

This is unspeakably ethnocentric and it makes me a little ill. I'm not trying to be P.C. here, but these are humans, and whether it was a hundred Somali fishermen or this guy from Illinois, I expect more from a leading news organization. This is like a home-town newspaper concerned about its native son who is away on a mission.

We must never forget that tomorrow isn't promised to us. One day there will be an earthquake off the coast of Oregon. No doubt that will get media coverage.

God help us all, but thanks for the time I've had.


Lutz Roeder's C# XML Documenter lives on in Travis Illig's CodeRush Plugin "CR_Documentor"

December 21, 2004 · Posted in ASP.NET | XML | CodeRush | Tools

Travis has rev'ed CR_Documentor to 1.1.0.1220, including these new features:

  • Has been updated for NDoc 1.3 tags
  • Provides the option for what level of "tag compatibility" to follow (Microsoft tags only or NDoc 1.3)
  • Provides the option for how to handle "unrecognized" tags
  • Has updated styles to match NDoc 1.3
  • Has been updated to work with CodeRush 1.1.6 / DXCore 1.1.8 or higher

If you're not familiar with this project, Travis talked to Lutz and informally "took over" the applet originally known as Documentor, which let a developer see what the compiled results of XML documentation comments would look like. Travis has since extended it, documented it, and made it into a CodeRush plugin that runs in a tool window within Visual Studio. It lets you see a preview of your comments in real time as you type.
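For context, here's a minimal (made-up) example of the kind of XML documentation comments the tool renders a preview for; the method itself is just an illustration:

    /// <summary>
    /// Calculates the total price including tax.
    /// </summary>
    /// <param name="price">The pre-tax price.</param>
    /// <returns>The price with an 8% tax applied.</returns>
    public decimal AddTax(decimal price)
    {
        return price * 1.08m;
    }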

If you are trying to do extensive C# doc or if you use NDoc, this is a great tool for you. Thanks Travis!


ASP.NET Performance Tuning - Making dasBlog faster...

December 18, 2004 · Posted in ASP.NET | DasBlog | XML | Bugs | Tools

Greg Hughes and I stayed up late last night tuning and installing a custom build of dasBlog. If you remember, dasBlog is Clemens' rewrite/reimagining of Chris Anderson's BlogX codebase, which has been moved over to GotDotNet and is now under the supervision of Omar Shahine.

ORCSWeb, my awesome ISP, and Scott Forsyth (ASP.NET MVP who works there) had complained to me that as my traffic increased, my website instance was being a poor citizen on the shared server. My site is on a server with something like a dozen other sites. While I'd survived slashdotting, my traffic lately has been getting big enough to bother the server.

ScottF had noticed that my blog had these unfortunate characteristics (remember these are bad):

  • CPU Threads that were taking minutes to complete their work
  • Caused 9,538,000 disk reads during a time period while another site on the same server with twice as many visitors had 47,000 reads.
  • My process was tied for CPU time with "system."
  • I used 2 hours, 20 seconds of CPU time one day. My nearest competitor had used only 20 seconds.
  • I was 2nd for disk reads, and 11th for disk writes (the writes weren't bad)
  • In a day, I surpassed even the backup process, which had been running for a WEEK.

These bullets, of course, are quite BAD. So, during my recent burst of creativity when I added a number of features to dasBlog including a comment spam solution, a referral spam solution, and an IP address blacklist, I did some formal performance work.

If you're familiar with dasBlog, I yanked the need for entryCache.xml, categoryCache.xml, and blogData.xml, which were older BlogX holdovers, and moved that data into thread-safe in-memory storage (a rough sketch of the idea is below). I changed the EntryIDCache and other internal caches, and added output caching for RSS, Atom, and permalinks.
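Here's a sketch of the general idea (this is not the actual dasBlog code, just the .NET 1.1 idiom of a Hashtable guarded by a dedicated lock object):

    using System.Collections;

    // Illustrative in-memory cache replacing an on-disk XML cache file.
    public sealed class InMemoryEntryCache
    {
        private static readonly Hashtable entries = new Hashtable();
        private static readonly object entriesLock = new object();

        public static object Get(string entryId)
        {
            lock (entriesLock)
            {
                return entries[entryId]; // null if the entry isn't cached
            }
        }

        public static void Put(string entryId, object entry)
        {
            lock (entriesLock)
            {
                entries[entryId] = entry;
            }
        }
    }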

According to ScottF and the folks at ORCSWeb, based on their initial measurements, "from what I can tell today, this *is* 250% better. CPU used only 20 minutes [as opposed to nearly 2.5 hours] of time by the end of the day and disk IO was much less than normal." It's early, but we'll see if these numbers hold.

I seem to have a few other bugs to work out, so holler at me if the site's goofed, but otherwise I hope to get Omar to integrate these changes into his own great new stuff coming in dasBlog 1.7.

During this perf work, I used Perfmon, the CLR Profiler, and other tools, but mostly I thought. I literally just sat down and thought about it. I tried to understand the full call stack of a single request. Once you really know what's going on, and can visualize it, you're in a much better position to profile.

Since you are a technical group, here are a few tidbits I found during this process.

  • If some condition allows you to avoid accessing expensive resources and bail early, do so. For this blog, if an entry isn't found (based on the GUID in the URL) in my cache, I now won't even look in the XML database. Additionally, I'll send a 404, use Response.SuppressContent, and end the Response:
    if (WeblogEntryId.Length == 0) // example condition
    {
        Response.StatusCode = 404;
        Response.SuppressContent = true;
        Response.End();
        return null; // save us all the time
    }
  • Lock things only as long as needed and be smart about threading/locking.
  • If you're serving content, caching even for a few seconds or a minute can save you time; not caching is just wasting time. Certainly if I update a post, I can wait 60 seconds or so before the update shows on the site. However, if a post is hit hard, whether by slashdotting or a DoS attack, caching for a minute will save mucho CPU.
    <%@ OutputCache Duration="60" VaryByParam="*" %>
    at the top of one of my pages will cache the page for a minute across all combinations of URL parameters. To be more thorough, at the cost of more memory, one would also add VaryByHeader for Accept-Language and Accept-Encoding (a sketch follows this list), but this is handled in my base page.
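For reference, here's roughly what that more thorough directive might look like (a sketch, not the exact directive my base page emits):

    <%@ OutputCache Duration="60" VaryByParam="*" VaryByHeader="Accept-Language;Accept-Encoding" %>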



Brute force check to ensure access to the TEMP directory in order to use the XmlSerializer

December 18, 2004 · Posted in ASP.NET | Web Services | XmlSerializer

If you want to use the XmlSerializer, you (or rather the ASPNET user) will need write access to the Windows\Temp folder. Otherwise you may see this error as the temporary assembly fails to be saved:

File or assembly name zmp0husw.dll, or one of its dependencies, was
not found.
Description: An unhandled exception occurred during the execution of
the current web request. Please review the stack trace for more
information about the error and where it originated in the code.

Exception Details: System.IO.FileNotFoundException: File or assembly
name zmp0husw.dll, or one of its dependencies, was not found.

Internally, there are security demands to see if you have the "right" from a Code Access Security point of view, but no one actually CHECKS to see if you have the ACL rights.

So, here's a brute-force way, run once per AppDomain, to check whether you have access. I used Reflector to look into XmlSerializer and find out what path it writes to. It was using GetTempPath from kernel32.dll, so we could P/Invoke as well. (Update: However, Kevin Dente points out that Path.GetTempPath() will do the P/Invoke for you. I was mirroring XmlSerializer's code, but as long as we do the same thing in essence, we're OK. The code below has been updated. Thanks Kevin!)

    using System.IO;

    public sealed class SomeUtilityThingieClass
    {
        const string ErrorMessage = "We need write access to: {0}";

        private static bool tempFileAccess = false;
        private static object tempFileAccessLock = new object();

        public static bool EnsureTempFileAccess()
        {
            // Fast path: a previous call already proved we can write.
            if (tempFileAccess == true)
            {
                return true;
            }

            lock (tempFileAccessLock)
            {
                // Double-check inside the lock in case another thread got here first.
                if (tempFileAccess == false)
                {
                    string tempFile = Path.Combine(Path.GetTempPath(), "WriteTest.txt");
                    try
                    {
                        using (StreamWriter file = File.CreateText(tempFile))
                        {
                            file.Write("This is a test to see if we can write to this TEMP folder and consequently make XmlSerializer assemblies without trouble.");
                        }
                    }
                    catch (System.IO.IOException ex)
                    {
                        throw new System.IO.IOException(String.Format(ErrorMessage, tempFile), ex);
                    }
                    catch (UnauthorizedAccessException ex)
                    {
                        throw new UnauthorizedAccessException(String.Format(ErrorMessage, tempFile), ex);
                    }

                    if (File.Exists(tempFile))
                    {
                        File.Delete(tempFile);
                    }
                    tempFileAccess = true;
                }
            }
            return tempFileAccess;
        }
    }

Once the write has worked once, we cache the success in a static and return that. If multiple threads get in here together, only one will make it past the lock; the others will wait. After the first thread learns of success or failure, the threads that were waiting on the lock will check the (now changed) tempFileAccess boolean and find it set. The file write will happen only once per AppDomain. Rather than calling "throw;" or not catching the exceptions at all, I add a polite message and wrap and re-throw. Callers won't know the line number that went wrong, but they WILL get a nice message.

The most interesting part for the beginner is the classic check/lock/double-check thread-safety pattern. Note also that we have an explicit tempFileAccessLock object that exists ONLY for the purpose of locking.
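As a usage sketch (the Global.asax hook and the SomeDataClass type are just illustrative assumptions), you'd call this once early in the AppDomain's life, before the first XmlSerializer gets constructed:

    // Hypothetical Global.asax.cs usage: fail fast at startup with a clear
    // message if the ASPNET account can't write to TEMP, instead of hitting
    // the cryptic FileNotFoundException later inside XmlSerializer.
    protected void Application_Start(object sender, EventArgs e)
    {
        SomeUtilityThingieClass.EnsureTempFileAccess();

        // Safe to construct serializers now (or let them be created lazily).
        XmlSerializer serializer = new XmlSerializer(typeof(SomeDataClass));
    }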

About Scott

Scott Hanselman is a former professor, former Chief Architect in finance, now speaker, consultant, father, diabetic, and Microsoft employee. He is a failed stand-up comic, a cornrower, and a book author.

facebook bluesky subscribe
About   Newsletter
Hosting By
Hosted on Linux using .NET in an Azure App Service

Disclaimer: The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.