Scott Hanselman

Spontaneous New York Ethiopian Nerd Dinner

March 31, 2008 Comment on this post [17] Posted in Internationalization | Musings

I'm going to be in NYC for 3 short days (travel, talk, travel), and Dmitry Lyalin and I were thinking of doing a dinner thing on Tuesday evening.

I've got an early flight out at 9am on Weds, so we'll be at Queen of Sheba NYC on Tuesday around 6:30pm. Hopefully we won't get kicked out for not having a reservation.

For the last 10 years, every time I go to ANY town, anywhere in the world, I go to the nearest Ethiopian Restaurant. Consequently, if your town has a habesha me'gub beyt I've probably eaten there.

Ethiopian food is my grub. I could eat it all day long. I'm also into the Amharic Language, and recently Aleme from Beteseb.com read about my interest on my blog and was kind enough to send me a Fidel (The Ethiopian Alphabet is often hung as art), bringing my collection of Fidel to three. Time to find a better place to hang them. (Note, my wife is Ndebele, not Habesha, but I learned the language before I met her).

So, if you've never had Ethiopian Food, here's a good enough opportunity to come hang out and try it. (Of course, we're all going Dutch.)

RSVP in the Comments!


Hanselminutes Podcast 106 - Inside Outsourcing

March 31, 2008 Comment on this post [12] Posted in Podcast

My one-hundred-and-sixth podcast is up. This was an unusual show as I was at Mix and saw two Regional Director buddies of mine, Vinod Unny (profile) and Venkatarangan TNC (profile) and we started chatting. Another person listening in thought the topic was interesting and said we ought to record it, so I busted out the recording gear and we did an on-the-spot recording on the effects of outsourcing from both the American and Indian perspective.

Subscribe: Subscribe to Hanselminutes Subscribe to my Podcast in iTunes

If you have trouble downloading, or your download is slow, do try the torrent with µtorrent or another BitTorrent Downloader.

Do also remember the complete archives are always up and they have PDF Transcripts, a little-known feature that shows up a few weeks after each show.

Telerik is our sponsor for this show.

Check out their UI Suite of controls for ASP.NET. It's very hardcore stuff. One of the things I appreciate about Telerik is their commitment to completeness. For example, they have a page about their Right-to-Left support while some vendors have zero support, or don't bother testing. They also are committed to XHTML compliance and publish their roadmap. It's nice when your controls vendor is very transparent.

As I've said before, this show comes to you with the audio expertise and stewardship of Carl Franklin. The name comes from Travis Illig, but the goal of the show is simple. Avoid wasting the listener's time. (and make the commute less boring)

Enjoy. Who knows what'll happen in the next show?


Hanselminutes Podcast 105 - Rocky Lhotka on Data Access Mania, LINQ and CSLA.NET

March 28, 2008 Comment on this post [7] Posted in LINQ | Podcast | Programming

My one-hundred-and-fifth podcast is up. I got a chance to sit down with Rocky Lhotka (blog) and talk about the direction data access, business objects and multi-tier development are going, as well as where he thinks LINQ fits into his view of CSLA.NET. CSLA.NET is Rocky's application development framework that supports his multi-tiered view of business application development.

Subscribe: Subscribe to Hanselminutes Subscribe to my Podcast in iTunes

If you have trouble downloading, or your download is slow, do try the torrent with µtorrent or another BitTorrent Downloader.

Do also remember the complete archives are always up and they have PDF Transcripts, a little-known feature that shows up a few weeks after each show.

Telerik is our sponsor for this show.

Check out their UI Suite of controls for ASP.NET. It's very hardcore stuff. One of the things I appreciate about Telerik is their commitment to completeness. For example, they have a page about their Right-to-Left support while some vendors have zero support, or don't bother testing. They also are committed to XHTML compliance and publish their roadmap. It's nice when your controls vendor is very transparent.

As I've said before, this show comes to you with the audio expertise and stewardship of Carl Franklin. The name comes from Travis Illig, but the goal of the show is simple. Avoid wasting the listener's time. (and make the commute less boring)

Enjoy. Who knows what'll happen in the next show?


The Weekly Source Code 22 - C# and VB .NET Libraries to Digg, Flickr, Facebook, YouTube, Twitter, Live Services, Google and other Web 2.0 APIs

March 27, 2008 Comment on this post [31] Posted in ASP.NET | Programming | Source Code | Web Services | XML

Someone emailed me recently saying that they couldn’t find enough examples in .NET for talking to the recent proliferation of “Web 2.0 APIs” so I thought I’d put together a list and look at some source. I think that a nice API wrapper is usually a useful thing, but since these APIs are so transparent and basic, there's not really a huge need given LINQ to XML, but I understand the knee-jerk reaction to hunt for a wrapper when faced with the word "API."

One thing to point out is that 99.9% of these APIs are calling

HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);

under the covers, then doing something with the resulting string. Some hide the URL creation, some use XmlDocuments, others use XmlSerialization. When you use a random API you find on the net, you're usually getting what you pay for. You're getting one person's free view of how they think a certain API should be called. Some will likely be more performant than others. Some might be better thought out than others.

I'll try to juxtapose a few differences between them, but I just want you to remember that we're talking about pushing Angle Brackets around, and little else. You can always do it yourself.
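For instance, here's a rough sketch of "doing it yourself" with nothing but HttpWebRequest and LINQ to XML. The URL and the element names are made up, not any particular API:

using System;
using System.IO;
using System.Linq;
using System.Net;
using System.Xml.Linq;

class DoItYourself
{
    static void Main()
    {
        // A made-up endpoint; substitute any XML-over-HTTP API here.
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://api.example.com/stories.xml");

        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            XElement root = XElement.Parse(reader.ReadToEnd());

            // "story" and "title" are hypothetical element names.
            var titles = from story in root.Elements("story")
                         select (string)story.Element("title");

            foreach (string title in titles)
                Console.WriteLine(title);
        }
    }
}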

And so, Dear Reader, I present to you the twenty-second in an infinite number of posts of "The Weekly Source Code."

Digg

Digg is a community-voted controlled explosion of news stories. Their API is "REST" and speaks XML or JSON on the wire.

DiggApiNET is a .NET Wrapper for the Digg API. It has no releases, so you'll have to get the source code. It was last updated in May of 2007. There's also another at CodeProject called, creatively, digg API.NET.

Let's talk philosophy of design and look at the first library. Here are some snippets pulled from all over the code. This API builds the URL and loads the results of the call into an XmlDocument, holds it for a second and SelectNodes the values into Digg-specific objects. These objects know about the existence of System.Xml.

private const string get_popular = "http://services.digg.com/stories/popular/comments/{0}";

public DiggComments GetPopular()
{
    return GetPopular(new Hashtable());
}

public DiggComments GetPopular(Hashtable args)
{
    string uri = String.Format(get_popular, HttpBuildUrl(args));
    return new DiggComments(Request(uri));
}

public DiggComments(XmlDocument xml_doc) : base(xml_doc, "events")
{
    _comments = new List<DiggComment>();
    if (xml_doc.SelectSingleNode("events") == null
        || xml_doc.SelectSingleNode("events").SelectNodes("comment") == null) {
        throw new DiggApiException("XML response appears to be malformed, or contains unexpected data.");
    }
    foreach (XmlNode node in xml_doc.SelectSingleNode("events").SelectNodes("comment")) {
        _comments.Add(new DiggComment(node));
    }
}

This is a pretty straightforward, if not totally "clean," way to do it. SelectSingleNode and SelectNodes aren't too fast, but we're looking at tiny chunks of data, probably under 100k. I'd probably do it with either XmlReader or XmlSerializer, or more likely, LINQ to XML. I'd make a service that handles the wire protocol, and make the objects know less.
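To make that concrete, here's a minimal sketch of the LINQ to XML version, with a service that owns the wire protocol and a dumb object that doesn't know System.Xml exists. The "user" attribute is a hypothetical name, not Digg's actual schema:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Xml.Linq;

// A plain DTO; no System.Xml in sight.
public class Comment
{
    public string User { get; set; }
    public string Text { get; set; }
}

// The service owns the URL building, the HTTP call and the parsing.
public static class DiggService
{
    public static List<Comment> GetPopularComments(string queryString)
    {
        string uri = String.Format(
            "http://services.digg.com/stories/popular/comments/{0}", queryString);

        using (WebClient client = new WebClient())
        {
            XElement events = XElement.Parse(client.DownloadString(uri));

            // "user" is a guess at an attribute name, for illustration only.
            return (from c in events.Elements("comment")
                    select new Comment
                    {
                        User = (string)c.Attribute("user"),
                        Text = c.Value
                    }).ToList();
        }
    }
}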

Facebook

Facebook has a very sophisticated and deep API and there's lots of support for it on .NET that is well explained by Nikhil. You can develop for Facebook using the free Express Visual Studio editions.

There are quite a few available.

Nikhil's Facebook client APIs feel well factored, with separate services for each major Facebook service and a FacebookSession object providing contextual state. Requests are pulled out into FacebookRequest and include asynchronous options, which is thoughtful.

Here's an edited (for brevity) example of a WinForm that allows you to set your Facebook status. I like the IsPermissionGranted call, which I think is clean and clever, given that there is a large enum of permissions.

public partial class StatusForm : Form {

    private const string _apiKey = "[Your API Key]";
    private const string _secret = "[Your Secret]";

    private FacebookService _fbService;
    private bool _loggingIn;

    private void LoadStatus() {
        _nameLabel.Text = "Loading...";

        User user = _fbService.Users.GetUser(null, "name,status");
        if (user != null) {
            _nameLabel.Text = user.Name;

            _statusTextBox.Text = user.Status.Message;
            _dateLabel.Text = user.Status.UpdateDate.ToLocalTime().ToString("g");
        }

        bool canSetStatus = _fbService.Permissions.IsPermissionGranted(Permission.SetStatus);
        _permissionsLink.Visible = !canSetStatus;
        _updateButton.Enabled = canSetStatus;
        _statusTextBox.ReadOnly = !canSetStatus;
    }

    protected override void OnActivated(EventArgs e) {
        base.OnActivated(e);

        if ((_fbService == null) && (_loggingIn == false)) {
            _loggingIn = true;

            try {
                FacebookClientSession fbSession = new FacebookClientSession(_apiKey, _secret);
                if (fbSession.Initialize(this)) {
                    _fbService = new FacebookService(fbSession);
                    LoadStatus();
                }
            }
            finally {
                _loggingIn = false;
            }
        }
    }

    private void OnUpdateButtonClick(object sender, EventArgs e) {
        string text = _statusTextBox.Text.Trim();

        _fbService.Users.SetStatus(text, /* includesVerb */ true);
        LoadStatus();
    }
}

Interestingly, the Facebook API also includes its own JsonReader and JsonWriter, rather than using the new JsonSerializer, presumably because the lib was written a year ago.
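If you'd rather lean on the framework, .NET 3.5's DataContractJsonSerializer will hydrate objects from JSON for you. A minimal sketch against a made-up payload shape:

using System;
using System.IO;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Json;
using System.Text;

// A hypothetical payload shape, just to show the mechanics.
[DataContract]
public class Status
{
    [DataMember(Name = "message")]
    public string Message { get; set; }

    [DataMember(Name = "time")]
    public long Time { get; set; }
}

public static class JsonDemo
{
    public static Status Parse(string json)
    {
        DataContractJsonSerializer serializer = new DataContractJsonSerializer(typeof(Status));
        using (MemoryStream stream = new MemoryStream(Encoding.UTF8.GetBytes(json)))
        {
            return (Status)serializer.ReadObject(stream);
        }
    }
}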

Windows Live Services

There's a bunch of info on http://dev.live.com/ and a bunch of complete sample apps with source, as well as a Live SDK interactive site. Take the Live Contacts API, for example. Unfortunately, there are no .NET samples I can find for the Contacts API that include wrappers around the angle brackets, so you'll be parsing in whatever way you prefer.

The objects that are provided in the Alpha SDK are really focused initially on security and permissions. For example, before I was able to access my contacts programmatically, I had to explicitly allow access and choose a length of time to allow it. I allowed it for a day to be extra secure.

Once you've retrieved some data, it's very simple, so a request like https://cumulus.services.live.com/wlddemo@hotmail.com/LiveContacts would give you:

<LiveContacts>
   <Owner>
      <FirstName/>
      <LastName/>
      <WindowsLiveID/>
   </Owner>
   <Contacts>
      <Contact>
         <ID>{ContactID}</ID>
         <WindowsLiveID>{Passport Member Name}</WindowsLiveID>
         <Comment>comment here</Comment>
         <Profiles/>
         <Emails/>
         <Phones/>
         <Locations/>
      </Contact>
   </Contacts>
</LiveContacts>
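Parsing that shape with LINQ to XML is only a few lines. This sketch assumes you've already fetched the document with HttpWebRequest and the delegated-auth plumbing the SDK walks you through:

using System;
using System.Xml.Linq;

static class LiveContactsParser
{
    // 'xml' is the LiveContacts document shown above.
    public static void PrintContacts(string xml)
    {
        XElement root = XElement.Parse(xml);

        foreach (XElement contact in root.Element("Contacts").Elements("Contact"))
        {
            Console.WriteLine("{0}: {1}",
                (string)contact.Element("ID"),
                (string)contact.Element("WindowsLiveID"));
        }
    }
}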

The Live Search API speaks SOAP and has samples in six languages including C#, VB, Ruby, PHP, Python, and Java.

YouTube

YouTube has two different versions of their API, but the original/old version is officially deprecated. Now that they are Google, the YouTube APIs are all GData style, replacing their REST/XML-RPC APIs.

There is a .NET Library that speaks the GData XML format, and querying YouTube with C# is fairly simple from there. You can even upload videos programmatically to YouTube like this gentleman.
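Even the generic GData client is enough for simple queries. Here's a rough sketch using the library's Service and FeedQuery types; the feed URL and the vq search parameter come from YouTube's GData docs, and the application name string is made up:

using System;
using Google.GData.Client;

class YouTubeSearch
{
    static void Main()
    {
        // The application name is an arbitrary identifier you pick.
        Service service = new Service("youtube", "exampleCo-exampleApp-1");

        FeedQuery query = new FeedQuery();
        query.Uri = new Uri("http://gdata.youtube.com/feeds/api/videos?vq=monkeys&max-results=10");

        AtomFeed feed = service.Query(query);
        foreach (AtomEntry entry in feed.Entries)
        {
            Console.WriteLine(entry.Title.Text);
        }
    }
}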

This fellow eschews GData's uber libraries and uses a StringBuilder to build the GData payload and that's OK. :)

private string GetHeader(string title, string description, Catagory catagory,
                         string keywords, string videoFileName)
{
    StringBuilder xml = new StringBuilder();
    xml.Append(boundary + lineTerm + "Content-Type: application/atom+xml; charset=UTF-8" + lineTerm + lineTerm);
    xml.Append("<?xml version=\"1.0\"?><entry xmlns=\"http://www.w3.org/2005/Atom\" ");
    xml.Append("xmlns:media=\"http://search.yahoo.com/mrss/\" xmlns:yt=\"http://gdata.youtube.com/schemas/2007\">");
    xml.AppendFormat("<media:group><media:title type=\"plain\">{0}</media:title>", title);
    xml.AppendFormat("<media:description type=\"plain\">{0}</media:description>", description);
    xml.AppendFormat("<media:category scheme=\"http://gdata.youtube.com/schemas/2007/categories.cat\">{0}</media:category>", catagory);
    xml.AppendFormat("<media:keywords>{0}</media:keywords>", keywords);
    xml.Append("</media:group></entry>" + lineTerm);
    xml.Append(boundary + lineTerm + "Content-Type: video/*" + lineTerm + "Content-Transfer-Encoding: binary" + lineTerm + lineTerm);
    return xml.ToString();
}

GData

GData is Google's standard protocol for moving data around via XML and HTTP. There are GData endpoints for Blogger, Google Calendar, Notebook, Spreadsheets, Documents, Picasa, etc. From their site:

.NET Developer Guides exist for specific Data APIs. They can be found under the page for each Data API.

The GData C# client is written by Google, so I was really interested to read their code as their interview process is legendary and I assume everyone is a 17-year-old PhD. The code is exceedingly object-oriented with more than 165 files over 10 folders (not counting unit tests and project stuff). It's also VERY well commented, but interestingly, not always commented using the standard XML comments most MSFT Programmers use, but rather a different format I'm not familiar with.

All the APIs are fairly similar. Here's a GData sample that queries the Calendar for events within a date range.

static void DateRangeQuery(CalendarService service, DateTime startTime, DateTime endTime)
{
    EventQuery myQuery = new EventQuery(feedUri);
    myQuery.StartTime = startTime;
    myQuery.EndTime = endTime;

    EventFeed myResultsFeed = service.Query(myQuery) as EventFeed;

    Console.WriteLine("Matching events from {0} to {1}:",
        startTime.ToShortDateString(),
        endTime.ToShortDateString());
    Console.WriteLine();
    for (int i = 0; i < myResultsFeed.Entries.Count; i++)
    {
        Console.WriteLine(myResultsFeed.Entries[i].Title.Text);
    }
    Console.WriteLine();
}

Here's an example that downloads all the pictures from a specific username in Picasa using C#. Everything in GData is an "AtomEntry" and many have extensions. You can handle the GData types or use specific sub-classes like PhotoQuery, or whatever, to make things easier.

private static void DownAlbum(string UserN, string AlbumN)
{
    string fileName;
    Uri uriPath;
    WebClient HttpClient = new WebClient();

    // Three important elements of the PicasaWeb API are
    // PhotoQuery, PicasaService and PicasaFeed
    PhotoQuery query = new PhotoQuery();
    query.Uri = new Uri(PhotoQuery.CreatePicasaUri(UserN, AlbumN));
    PicasaService service = new PicasaService("Sams PicasaWeb Explorer");
    PicasaFeed feed = (PicasaFeed)service.Query(query);

    Directory.SetCurrentDirectory("c:\\");
    foreach (AtomEntry aentry in feed.Entries)
    {
        uriPath = new Uri(aentry.Content.Src.ToString());
        fileName = uriPath.LocalPath.Substring(uriPath.LocalPath.LastIndexOf('/') + 1);
        try
        {
            Console.WriteLine("Downloading: " + fileName);
            HttpClient.DownloadFile(aentry.Content.Src.ToString(), fileName);
            Console.WriteLine("Download Complete");
        }
        catch (WebException we)
        {
            Console.WriteLine(we.Message);
        }
    }
}

You can also certainly use any standard System.Xml APIs if you like.

GData is an extension of the Atom Pub protocol. Atom Pub is used by Astoria (ADO.NET Data Services), which can be accessed basically via "LINQ to REST."
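As a taste of that "LINQ to REST" style, here's a hedged sketch of an Astoria client query. The service URL and the Product entity are invented for illustration; the context translates the LINQ query into a REST URL under the covers:

using System;
using System.Data.Services.Client;
using System.Linq;

// A hypothetical entity matching an entity set on the service.
public class Product
{
    public int ID { get; set; }
    public string Name { get; set; }
}

class AstoriaDemo
{
    static void Main()
    {
        // A made-up Astoria endpoint.
        DataServiceContext ctx = new DataServiceContext(new Uri("http://example.com/catalog.svc"));

        var widgets = from p in ctx.CreateQuery<Product>("Products")
                      where p.Name == "Widget"
                      select p;

        foreach (Product p in widgets)
            Console.WriteLine(p.Name);
    }
}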

Flickr

Flickr has a nice API and WackyLabs has a CodePlex project for their FlickrNET API Library written in C#. It's also confirmed to work on Compact Framework and Mono as well as .NET 1.1 and up. There's a fine Coding4Fun article on this library.

This API couldn't be much easier to use. For example, this searches for photos tagged blue and sky and makes sure it returns the DateTaken and OriginalFormat.

PhotosSearchOptions options = new PhotosSearchOptions();
options.Tags = "blue,sky";
options.Extras |= PhotoSearchExtras.DateTaken | PhotoSearchExtras.OriginalFormat;
Photos photos = flickr.PhotosSearch(options);

The PhotosSearch() method includes dozens of overloads taking date ranges, paging and other options. All the real work happens in GetResponse() via GetResponseCache(). The URL is built all in one method, and the response is retrieved and deserialized via XmlSerializer. This API is the closest to the way I'd do it. It's pragmatic and uses as much of the underlying libraries as possible. It's not really extensible or overly OO, but it gets the job done cleanly.
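For completeness, here's roughly what the whole round trip looks like, including newing up the Flickr object with your API key. Treat this as a sketch; the collection property names may differ slightly between versions of the library:

using System;
using FlickrNet;

class FlickrDemo
{
    static void Main()
    {
        // You get an API key from Flickr's developer site.
        Flickr flickr = new Flickr("yourApiKeyHere");

        PhotosSearchOptions options = new PhotosSearchOptions();
        options.Tags = "blue,sky";
        options.Extras |= PhotoSearchExtras.DateTaken | PhotoSearchExtras.OriginalFormat;

        Photos photos = flickr.PhotosSearch(options);
        foreach (Photo photo in photos.PhotoCollection)
        {
            Console.WriteLine("{0} taken {1}", photo.Title, photo.DateTaken);
        }
    }
}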

Since Flickr is a data-intensive thing, this library also includes a thread-safe PersistentCache for storing all that data. I'd probably just have used System.Web.Cache because it can live in any application, even ones outside ASP.NET. However, theirs is a persistent one, saving huge chunks of data to a configurable location. It's actually an interesting enough class that it could be used outside of this lib, methinks. It stores everything in a super "poor man's database," basically a serialized Hashtable of blobs, à la (gasp) OLE Structured Storage.
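If you did go the System.Web.Cache route, it's reachable outside ASP.NET via HttpRuntime.Cache; you'd trade their persistence for in-memory simplicity. A minimal sketch (the key and payload are placeholders):

using System;
using System.Web;          // reference System.Web.dll
using System.Web.Caching;

class CacheDemo
{
    static void Main()
    {
        // HttpRuntime.Cache works in any AppDomain, not just ASP.NET,
        // but it's memory-only: nothing survives a restart.
        Cache cache = HttpRuntime.Cache;

        cache.Insert("photo:123", "some expensive payload",
            null,                              // no dependencies
            DateTime.UtcNow.AddMinutes(10),    // absolute expiration
            Cache.NoSlidingExpiration);

        string payload = (string)cache["photo:123"];
        Console.WriteLine(payload ?? "evicted");
    }
}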

WordPress and XML-RPC based Blogs

Most blogs use either the Blogger or MetaWeblog APIs and they are easy to call with .NET. That includes MSN Spaces, DasBlog, SubText, etc. There are samples deep on MSDN on how to call XML-RPC with C# or VB.

Windows Live Writer and BlogJet use these APIs to talk to blogs when you're authoring a post, so I'm using .NET and XML-RPC right now. ;)

A very simple example in VB.NET using the very awesome XML-RPC.NET library is here. Here's a more complete example and here's a mini blogging client.

DasBlog uses this library to be an XML-RPC Server.

In this sample, the type "IWP" derives from XmlRpcProxy and uses the category structure. The library handles all the mappings and deserialization such that calling XML-RPC feels like using any Web Service, even though XML-RPC is a precursor to SOAP and not the SOAP you're used to.

Dim proxy As IWP = XmlRpcProxyGen.Create(Of IWP)()
Dim args() As String = {"http://myblog.blogstogo.com", _
                        "username", "password"}
Dim categories() As category
categories = proxy.getCategories(args)

You can also use WCF to talk XML-RPC.
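The C# flavor of the XML-RPC.NET trick looks like this: decorate an interface and let XmlRpcProxyGen do the rest. The endpoint URL and the Category struct members below are hypothetical stand-ins for whatever your blog's MetaWeblog endpoint actually returns:

using System;
using CookComputing.XmlRpc;

// A guess at the category struct; real members vary by blog engine.
public struct Category
{
    public string categoryId;
    public string title;
}

[XmlRpcUrl("http://myblog.example.com/xmlrpc.aspx")]
public interface IMetaWeblog : IXmlRpcProxy
{
    [XmlRpcMethod("metaWeblog.getCategories")]
    Category[] GetCategories(string blogid, string username, string password);
}

class XmlRpcDemo
{
    static void Main()
    {
        IMetaWeblog proxy = XmlRpcProxyGen.Create<IMetaWeblog>();
        Category[] categories = proxy.GetCategories("blogId", "username", "password");
        foreach (Category c in categories)
            Console.WriteLine(c.title);
    }
}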

Twitter

I've talked about Twitter before and they have a Twitter API that is at least an order of magnitude more important than their site. There is a pile of source out there to talk to Twitter.

Last year Alan Le blogged about his adventures in creating a library around Twitter's API, and Witty is an actively developed WPF C# application that fronts Twitter. You can browse their source and see their simple TwitterLib.

TwitterNet.cs is the meat of it and just builds up objects using XmlDocuments and does what I call "left hand/right hand" code. That's where you've got an object on the left and some other object/bag/pile o' data on the right and you spend a lot of lines just going "left side, right side, left side, right side."

For (trimmed) example:

public UserCollection GetFriends(int userId)
{
    UserCollection users = new UserCollection();

    // Twitter expects http://twitter.com/statuses/friends/12345.xml
    string requestURL = FriendsUrl + "/" + userId + Format;

    int friendsCount = 0;

    // Since the API docs state "Returns up to 100 of the authenticating user's friends", we need
    // to use the page param and to fetch ALL of the users friends. We can find out how many pages
    // we need by dividing the # of friends by 100 and rounding any remainder up.
    // merging the responses from each request may be tricky.
    if (currentLoggedInUser != null && currentLoggedInUser.Id == userId)
    {
        friendsCount = CurrentlyLoggedInUser.FollowingCount;
    }
    else
    {
        // need to make an extra call to twitter
        User user = GetUser(userId);
        friendsCount = user.FollowingCount;
    }

    int numberOfPagesToFetch = (friendsCount / 100) + 1;

    string pageRequestUrl = requestURL;

    for (int count = 1; count <= numberOfPagesToFetch; count++)
    {
        pageRequestUrl = requestURL + "?page=" + count;
        HttpWebRequest request = WebRequest.Create(pageRequestUrl) as HttpWebRequest;
        request.Credentials = new NetworkCredential(username, password);

        try
        {
            using (HttpWebResponse response = request.GetResponse() as HttpWebResponse)
            {
                StreamReader reader = new StreamReader(response.GetResponseStream());
                XmlDocument doc = new XmlDocument();
                doc.Load(reader);
                XmlNodeList nodes = doc.SelectNodes("/users/user");

                foreach (XmlNode node in nodes)
                {
                    User user = new User();
                    user.Id = int.Parse(node.SelectSingleNode("id").InnerText);
                    user.Name = node.SelectSingleNode("name").InnerText;
                    user.ScreenName = node.SelectSingleNode("screen_name").InnerText;
                    user.ImageUrl = node.SelectSingleNode("profile_image_url").InnerText;
                    user.SiteUrl = node.SelectSingleNode("url").InnerText;
                    user.Location = node.SelectSingleNode("location").InnerText;
                    user.Description = node.SelectSingleNode("description").InnerText;

                    users.Add(user);
                }
            }
        }
        catch (WebException webExcp)
        {
            // SNIPPED BY SCOTT
        }
    }
    return users;
}
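Rewriting just the mapping portion with LINQ to XML collapses all that left side/right side ceremony into a single query. A sketch, using the same element names as the Twitter response above:

using System;
using System.Linq;
using System.Xml.Linq;

static class TwitterMapping
{
    // 'xml' is one page of the /statuses/friends response.
    public static void PrintFriends(string xml)
    {
        XDocument doc = XDocument.Parse(xml);

        var users = from node in doc.Descendants("user")
                    select new
                    {
                        Id = (int)node.Element("id"),
                        Name = (string)node.Element("name"),
                        ScreenName = (string)node.Element("screen_name"),
                        ImageUrl = (string)node.Element("profile_image_url")
                    };

        foreach (var u in users)
            Console.WriteLine("{0} ({1})", u.Name, u.ScreenName);
    }
}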

So far, there's a .NET lib for every Web 2.0 application I've wanted to use. I even banged a .NET Client out for Wesabe last year then did it again in IronRuby.

Enjoy. Which (of the hundreds) did I miss?


7 Blogging Statistics Rules - There is Life After Page Views

March 26, 2008 Comment on this post [50] Posted in Musings

A lot of folks spend a lot of time analyzing blog traffic. Josh Bancroft wrote a very good article in January about "Site Statistics I Care About as a Blogger" where he talks about the various and sundry statistics that Google Analytics provides and how you should read them. Ultimately this all comes down to two things:

  • Do you care who reads your blog?

and if so

  • Will you change your behavior given statistics on who reads your blog?

I used to care deeply about my blog, the way one cares about tending a garden. I'd watch it every day and revel in each new visitor. Now, after almost 6 years of pretty active blogging, I think more about people than pageviews. You can't trust a referrer or a trackback.

Rule #1 of blogging stats: The only way to know if a human is reading your blog is if they are talking with you.

Given that realization, I look at my stats maybe twice a month, and I'm most interested in seeing what posts folks really liked that month. I used to (maybe 3 years ago) look at every referrer and stats daily, but then I realized that my personal litmus test for my blog's success or failure is comments and other folks' blog posts, and nothing else.

I feel like we (that means me and you, Dear Reader) have a little community here. When you comment, I am happy because I feel more connected to the conversation as this blog is my 3rd place. I blog to be social, not to have a soapbox. I'm even happier when the comments are better and more substantive than the post itself. I would take half the traffic and twice the comments any day. If you're a "lurker," why not join the conversation?

Anyway, some blogs use their stats as a measuring stick (to measure all sorts of things) and some keep them secret. I was thinking I should just publish mine occasionally, and perhaps others would do the same. You can't trust stats, usually, as one never knows how many bots are spidering their site. I know that Google Analytics and any analytics package worth its salt filters out spiders. DasBlog, for example, doesn't do this, so the statistics you'll get from DasBlog (and many other blogging engines) will be artificially inflated. The same thing happens if you just run a script over your web server logs looking for HTTP GETs.

Rule #2 of blogging stats: HTTP GETs don't equal warm bodies.

I was "tweeting" with Brendan Tomkins of CodeBetter about this and he thought it would foster a sense of openness and give everyone in our tiny slice of the blogosphere an idea of who's out here.

There's a little FeedBurner chicklet up there in my blog that shows a ballpark number of how many subscribers I have. Here's more on how FeedBurner comes up with that number. That number goes up and down from day to day by 10-20%, depending on such mundane things as whether your computer was on to make the request.

I have only had Google Analytics on since March 3rd, so I'm not sure how accurate this data is, but here are the stats since then. There seems to have been some kind of ramping-up process, so this is about a 2.5 to 3 week (not a full month) slice, as I'm not sure how to count the ramp-up days.

[Google Analytics: daily visits since March 3rd]

Notice the regular dips? Those are weekends. The peaks? Mondays. Folks love to read on Mondays.

Here's another rollup:

[Google Analytics: site usage summary]

Rule #3 of blogging stats: PageViews don't equal warm bodies.

See the difference between Visits and PageViews? You can't take a number like PageViews and correlate it directly to "# of humans," although you'll see that a lot when folks quote stats.

Rule #4 of blogging stats: You have a worldwide audience!

(Hi Sri Lanka!)

Folks come from all over!

[Google Analytics: visitors by country]

...using lots of different OS's...

[Google Analytics: visitors by operating system]

Rule #5 of blogging stats: If it can browse, someone will visit you with it. 

Not sure what to do with the 2,200 visits by 800x600 people. I have made an effort to make the site mobile friendly though.

[Google Analytics: visitors by screen resolution]

Rule #6 of blogging stats: People like what they like

This I thought was really interesting: the number of URLs (post/comment URIs) viewed vs. the number of views, and the top pages for this ~3-week period. The Programmer Themes Gallery is popular this month, as is the tools list and my Outlook GTD post. Also, I can see that folks do enjoy the Weekly Source Code, so I know I'll keep doing that. I can also see that search referrals sent 94,850 total visits via 64,239 keywords over this period.

It's funny: the posts that I like writing, the deep technical stuff, the programming languages stuff, it seems like no one cares about. I think this is the Digg influence. If you post a Gallery or a List or any post with a Prime Number and the word "Rules" in the title, you'll get traffic. If you post smart, compelling content, you need to be wicked smart before folks take note. That said, here's rule #6.5.

Rule #6.5 of blogging stats: Blog for you.

You can certainly use these statistics to make decisions on what to blog and only blog things that the largest number of people would like, but "meh." Would you really want to do that? I continue to blog about Baby Sign Language and Diabetes and I get no traffic for those topics. Ultimately, I blog for me, and that's why I keep this blog on my own server where the content is my own.

[Google Analytics: top content for the period]

I also use FeedBurner, which provides RSS-specific and site-specific stats, and the two sometimes differ. This might have to do with how many people browse with Javascript turned off (gasp!) or use an Ad Blocker like IE7Pro or AdBlock for Firefox. FeedBurner has an interesting view that breaks down the details of how many folks subscribe in what reader.

[FeedBurner: subscribers by feed reader]

Rule #7 from Mark Twain: There are three kinds of lies: lies, damned lies, and statistics.

Don't trust any of these values. If you've got an engaged audience, they'll comment, blog, talk, chat, twitter, email and generally engage in the conversation. All else is poo.

I've only been using Google Analytics for a few weeks, as you can see, but I think I'll install Microsoft adCenter's Analytics package side-by-side, do some comparisons, and see what kinds of stats I can get out of it.

As Josh so rightly said, and I'll steal borrow from him, if you ever want to flatter me, just subscribe to my feed (and leave comments!) 

Well, that's all I've got, so Dear Reader, Blog your Stats and let's learn from each other what works.


About Scott

Scott Hanselman is a former professor, former Chief Architect in finance, now speaker, consultant, father, diabetic, and Microsoft employee. He is a failed stand-up comic, a cornrower, and a book author.

facebook bluesky subscribe
About   Newsletter
Hosting By
Hosted on Linux using .NET in an Azure App Service

Disclaimer: The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.