Scott Hanselman

[OT] 22,000 dead, and This White Guy - Western Media and the Devaluing of Brown People

December 28, 2004 | Posted in Musings


This is a technical blog, and I don't do this kind of post often, so forgive me ahead of time if you don't like my rants.

I'm looking at the home page of CNN.com.  There are currently over 22,000 brown people dead, and apparently this white guy. I feel for his family, but I feel more for the countless hundreds of thousands of displaced and suffering others.

I am tired of the American media (last night it was ABC News) spending five minutes on a massive natural disaster and then glossing over it once we are told "and no Americans were injured." Notice the text of this CNN blurb: 22,000 dead, and then the counts of 27 Westerners, broken down into British, French, and Italians.

This is unspeakably ethnocentric and it makes me a little ill. I'm not trying to be P.C. here, but these are humans, and whether it was a hundred Somali fishermen or this guy from Illinois, I expect more from a leading news organization. This is like a home-town newspaper concerned only about its native son away on a mission.

We must never forget that tomorrow isn't promised to us. One day there will be an earthquake off the coast of Oregon. No doubt that will get media coverage.

God help us all, but thanks for the time I've had.


Lutz Roeder's C# XML Documenter lives on in Travis Illig's CodeRush Plugin "CR_Documentor"

December 22, 2004 | Posted in ASP.NET | XML | CodeRush | Tools

Travis has revved CR_Documentor to 1.1.0.1220, including these new features:

  • Has been updated for NDoc 1.3 tags
  • Provides the option for what level of "tag compatibility" to follow (Microsoft tags only or NDoc 1.3)
  • Provides the option for how to handle "unrecognized" tags
  • Has updated styles to match NDoc 1.3
  • Has been updated to work with CodeRush 1.1.6 / DXCore 1.1.8 or higher

If you're not familiar with this project: Travis talked to Lutz and informally "took over" the applet originally known as Documentor, which let a developer see what the compiled results of an XML documentation comment would look like. Travis has since extended it, documented it, and made it into a CodeRush plugin that runs in a tool window within Visual Studio. It lets you see a preview of your comments in real time as you type.

If you write extensive C# documentation or use NDoc, this is a great tool for you. Thanks Travis!
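If you've never written XML documentation comments, here's a minimal, hypothetical example of the kind of input CR_Documentor previews. The method is made up for illustration; the <summary>, <param>, <returns>, and <remarks> tags are all standard Microsoft tags (NDoc 1.3 layers additional tags on top of these):

    /// <summary>
    /// Converts a temperature from degrees Fahrenheit to degrees Celsius.
    /// </summary>
    /// <param name="fahrenheit">The temperature in degrees Fahrenheit.</param>
    /// <returns>The equivalent temperature in degrees Celsius.</returns>
    /// <remarks>
    /// Uses the standard formula C = (F - 32) * 5 / 9.
    /// </remarks>
    public static double ToCelsius(double fahrenheit)
    {
        return (fahrenheit - 32.0) * 5.0 / 9.0;
    }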


ASP.NET Performance Tuning - Making dasBlog faster...

December 18, 2004 | Posted in ASP.NET | DasBlog | XML | Bugs | Tools

Greg Hughes and I stayed up late last night tuning and installing a custom build of dasBlog. If you remember, dasBlog is Clemens' rewrite/reimagining of Chris Anderson's BlogX codebase; it has since moved over to GotDotNet and is under the supervision of Omar Shahine.

ORCSWeb, my awesome ISP, and Scott Forsyth (an ASP.NET MVP who works there) had complained to me that as my traffic increased, my website instance was being a poor citizen on the shared server. My site is on a server with something like a dozen other sites. While I'd survived slashdotting, my traffic lately has been getting big enough to bother the server.

ScottF had noticed that my blog had these unfortunate characteristics (remember these are bad):

  • CPU threads that were taking minutes to complete their work.
  • 9,538,000 disk reads during a period in which another site on the same server, with twice as many visitors, had 47,000 reads.
  • My process was tied for CPU time with "system."
  • I used 2 hours, 20 minutes of CPU time in one day. My nearest competitor had used only 20 seconds.
  • I was 2nd for disk reads and 11th for disk writes (the writes weren't bad).
  • In a day, I surpassed even the backup process, which had been running for a WEEK.

These bullets, of course, are quite BAD. So, during my recent burst of creativity when I added a number of features to dasBlog including a comment spam solution, a referral spam solution, and an IP address blacklist, I did some formal performance work.

If you're familiar with dasBlog: I yanked the need for entryCache.xml, categoryCache.xml, and blogData.xml, which were older BlogX holdovers, and moved them into thread-safe in-memory storage. I changed the EntryIDCache and other internal caches, and added output caching for RSS, Atom, and permalinks.
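To give a flavor of what "thread-safe in-memory storage" means here, this is a hypothetical sketch (not the actual dasBlog code; the class and member names are made up). The core idea is a static Hashtable guarded by a lock, so requests hit memory instead of re-reading XML files from disk:

    using System.Collections;

    // Hypothetical stand-in for the kind of cache described above.
    public sealed class InMemoryEntryCache
    {
        private static readonly Hashtable entries = new Hashtable();
        private static readonly object entriesLock = new object();

        public static object Get(string entryId)
        {
            lock (entriesLock)
            {
                return entries[entryId]; // null if we haven't cached it yet
            }
        }

        public static void Put(string entryId, object entry)
        {
            lock (entriesLock)
            {
                entries[entryId] = entry;
            }
        }
    }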

According to ScottF and the folks at ORCSWeb, their initial measurements say "from what I can tell today, this *is* 250% better. CPU used only 20 minutes [as opposed to nearly 2.5 hours] of time by the end of the day and disk IO was much less than normal." It's early, but we'll see if these numbers hold.

I seem to have a few other bugs to work out, so holler at me if the site's goofed. Otherwise, I hope to get Omar to integrate these changes into his own great new stuff coming in dasBlog 1.7.

During this perf work, I used Perfmon, the CLR Profiler, and other tools, but mostly I thought. I just literally sat down and thought about it. I tried to understand the full call stack of a single request. Once you really know what's going on and can visualize it, you're in a much better position to profile.

Since you are a technical group, here are a few tidbits I found during this process.

  • If some condition allows you to avoid access to expensive resources and bail early, do so. For this blog, if an entry isn't found in my cache (based on the GUID in the URL), I now won't even look in the XML database. Additionally, I'll send a 404, use Response.SuppressContent, and end the response:
        if (WeblogEntryId.Length == 0) // example condition
        {
            Response.StatusCode = 404;
            Response.SuppressContent = true;
            Response.End();
            return null; // save us all the time
        }
  • Lock things only as long as needed and be smart about threading/locking.
  • If you're serving content, caching even for a few seconds or a minute can save you time; not caching is just wasting it. Certainly if I update a post, I can wait 60 seconds or so before the update is visible on the site. However, if a post is hit hard, whether by slashdotting or a DoS attack, caching for a minute will save mucho CPU. Adding
    <%@ OutputCache Duration="60" VaryByParam="*" %>
    at the top of one of my pages will cache that page for a minute across all combinations of URL parameters. To be thorough (at the cost of more memory), one would also add VaryByHeader for Accept-Language and Accept-Encoding, as in the sketch after this list; in my case that is handled in my base page.
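For completeness, the more thorough directive mentioned in that last bullet might look like this (a sketch; whether varying on these headers is worth the extra memory depends on whether your output actually differs by language or encoding):

    <%@ OutputCache Duration="60" VaryByParam="*" VaryByHeader="Accept-Language;Accept-Encoding" %>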


Brute force check to ensure access to the TEMP directory in order to use the XmlSerializer

December 18, 2004 | Posted in ASP.NET | Web Services | XmlSerializer

If you want to use the XmlSerializer, the ASPNET user will need write access to the Windows\Temp folder. Otherwise you may see this error as the temporary assembly fails to be saved:

File or assembly name zmp0husw.dll, or one of its dependencies, was not found.

Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

Exception Details: System.IO.FileNotFoundException: File or assembly name zmp0husw.dll, or one of its dependencies, was not found.

Internally, there are security Demands to see if you have the "right" from a Code Access Security point of view, but no one actually CHECKS to see if you have the ACL rights.

So, here's a brute-force way to check, once per AppDomain, whether you have access. I Reflector'ed into XmlSerializer to find out what it does to decide what path to write to. It was using GetTempPath from kernel32.dll, so we could P/Invoke as well. (Update: However, Kevin Dente points out that Path.GetTempPath() will do the P/Invoke for you. I was mirroring XmlSerializer's code, but as long as we do the same thing in essence, we're OK. Code updated below. Thanks Kevin!)

    using System;
    using System.IO;

    public sealed class SomeUtilityThingieClass
    {
        const string ErrorMessage = "We need write access to: {0}";

        private static bool tempFileAccess = false;
        private static object tempFileAccessLock = new object();

        public static bool EnsureTempFileAccess()
        {
            // Fast path: once the write has succeeded, skip the lock entirely.
            if (tempFileAccess)
            {
                return true;
            }

            lock (tempFileAccessLock)
            {
                // Double-check inside the lock in case another thread got here first.
                if (!tempFileAccess)
                {
                    string tempFile = Path.Combine(Path.GetTempPath(), "WriteTest.txt");
                    try
                    {
                        using (StreamWriter file = File.CreateText(tempFile))
                        {
                            file.Write("This is a test to see if we can write to this TEMP folder and consequently make XmlSerializer assemblies without trouble.");
                        }
                    }
                    catch (IOException ex)
                    {
                        throw new IOException(string.Format(ErrorMessage, tempFile), ex);
                    }
                    catch (UnauthorizedAccessException ex)
                    {
                        throw new UnauthorizedAccessException(string.Format(ErrorMessage, tempFile), ex);
                    }

                    if (File.Exists(tempFile))
                    {
                        File.Delete(tempFile);
                    }
                    tempFileAccess = true;
                }
            }
            return tempFileAccess;
        }
    }

Once the write has worked once, we cache the success in a static and return that. If multiple threads get in here together, only one will make it past the lock; the others will wait. After the first thread learns of success or failure, the threads that were waiting on the lock will check the (now changed) tempFileAccess boolean and find it set. The file write will happen only once per AppDomain. Rather than calling "throw;" or not catching the exceptions at all, I add a polite extra message, then wrap and rethrow. Callers won't know the line number that went wrong, but they WILL get a nice message.

The most interesting stuff for the beginner is the classic check/lock/double-check thread-safety pattern. Note also that we have an explicit tempFileAccessLock object that exists ONLY for the purpose of locking.
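As a usage sketch (the call site is hypothetical; Application_Start is the standard Global.asax event), you'd call this once at startup so a misconfigured TEMP ACL fails fast with the polite message rather than surfacing later as the cryptic FileNotFoundException:

    // In Global.asax.cs (hypothetical call site)
    protected void Application_Start(object sender, EventArgs e)
    {
        // Throws with a clear message if the ASPNET user can't write to TEMP.
        SomeUtilityThingieClass.EnsureTempFileAccess();
    }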


ASP.NET Params Collection vs. QueryString, Forms vs. Request["index"] and Double Decoding

December 17, 2004 | Posted in ASP.NET

In ASP.NET you can yank a value out of the QueryString like this, where QueryString is of type NameValueCollection but is internally an HttpValueCollection that includes some extra helper methods.

string foo = Request.QueryString["foo"];

But you can also go like this:

string foo = Request["foo"];

And folks know (it's been passed along through myth and legend) that the line above will search through the QueryString, Form, Cookies, and ServerVariables collections. However, it's important (for performance) to know the order in which the collections are searched. Here's the code from Reflector:

    public string get_Item(string key)
    {
        string text1 = this.QueryString[key];
        if (text1 != null)
        {
            return text1;
        }
        text1 = this.Form[key];
        if (text1 != null)
        {
            return text1;
        }
        HttpCookie cookie1 = this.Cookies[key];
        if (cookie1 != null)
        {
            return cookie1.Value;
        }
        text1 = this.ServerVariables[key];
        if (text1 != null)
        {
            return text1;
        }
        return null;
    }

So now you can see the order in which things are searched. Personally, though, I don't like this default Item indexer; I prefer to be more explicit. I'd hate to accidentally retrieve a cookie because a QueryString variable was missing. It's always better to be explicit and ask for what you want.

Interestingly, there is ANOTHER collection of QueryString, Form, Cookies, and ServerVariables, but rather than a "pseudo-collection" as we see above, this is an actual combined collection.

    public NameValueCollection Params
    {
        get
        {
            InternalSecurityPermissions.AspNetHostingPermissionLevelLow.Demand();
            if (this._params == null)
            {
                this._params = new HttpValueCollection();
                this.FillInParamsCollection();
                this._params.MakeReadOnly();
            }
            return this._params;
        }
    }

    private void FillInParamsCollection()
    {
        this._params.Add(this.QueryString);
        this._params.Add(this.Form);
        this._params.Add(this.Cookies);
        this._params.Add(this.ServerVariables);
    }

The internal "_params" collection is a special derived NameValueCollection of type HttpValueCollection, exposed as a plain NameValueCollection.

Important Note: The constructor for HttpRequest will parse the actual query string and UrlDecode the values for you. Be careful not to DOUBLE DECODE. Know what's encoded, when, and who does the decoding; most likely you don't need to do anything. If you double decode, you can get into some weird situations. Ben Suter reminded me that if you pass in /somepage.aspx?someParam=A%2bB, you expect to get "A+B", since that param is the equivalent of HttpUtility.UrlEncode("A+B"). But if you make a mistake and call HttpUtility.UrlDecode(Request.Params["someParam"]), you'll get "A B", because the + was double-decoded as a space.
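To see the double decode in action, here's a hypothetical snippet from a page's code-behind (the parameter name is made up; HttpUtility lives in System.Web):

    // Request URL: /somepage.aspx?someParam=A%2bB
    string once = Request.QueryString["someParam"];
    // once == "A+B" -- ASP.NET already UrlDecoded %2b into + for us.

    string twice = HttpUtility.UrlDecode(once);
    // twice == "A B" -- the second decode turned the + into a space. Oops.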

Here's the trick, though: if you have BOTH a QueryString parameter "Foo=Bar1" AND a Form item "Foo=Bar2", then string foo = Request.Params["Foo"]; gives you back "Bar1,Bar2"! It's a NameValueCollection, not a Hashtable, so duplicate keys have their values joined with commas. Never make assumptions when you use HttpRequest.Params, or you will get in trouble. If there's a chance you could get multiple values back, consider using an explicit collection or be smart about your string.Split() code.
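One way to stay out of trouble is to skip the joined string entirely; NameValueCollection.GetValues returns each value separately, or you can just name the collection you actually mean (a sketch, assuming the Foo=Bar1/Foo=Bar2 setup above):

    // With ?Foo=Bar1 in the URL and Foo=Bar2 posted from a form:
    string joined = Request.Params["Foo"];           // "Bar1,Bar2"
    string[] all = Request.Params.GetValues("Foo");  // { "Bar1", "Bar2" }

    // Better: be explicit about which collection you mean.
    string fromQuery = Request.QueryString["Foo"];   // "Bar1"
    string fromForm = Request.Form["Foo"];           // "Bar2"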

About Scott

Scott Hanselman is a former professor, former Chief Architect in finance, now speaker, consultant, father, diabetic, and Microsoft employee. He is a failed stand-up comic, a cornrower, and a book author.


Disclaimer: The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.