When Linking, Use The Actual Link

posted on 03/06/06 at 11:56:41 pm by Joel Ross

Sounds obvious, right? It did to me too, but I've realized over the past few days that you should make sure you're linking directly to a post, not to a redirect URL.

So what, exactly, am I talking about? FeedBurner. I'm sure there are others, but FeedBurner is where I noticed it first. I linked to a post about ASP.NET picking up speed based on a post by O'Reilly. The original O'Reilly post picked up quite a bit of steam and ended up on most of the meme tracking sites, but despite the fact that 1) my site has been on those trackers before, and 2) I was pretty early in linking to it, my site showed up nowhere - not even on Technorati, even though my post was indexed by them.

So I dug deeper. It's not that I thought I should have been on the meme trackers, per se. I wanted to read others' takes on the post, so I thought I'd check Technorati. After reading through some of the results, I realized my site wasn't listed at all. That's when I realized that, while I had linked to his item, I copied the link from the title in FeedDemon. Because the link I copied came from a FeedBurner feed, it was pointing to feeds.feedburner.com/[FeedName]?m=[postnumber]. But the meme sites and Technorati were all tracking the actual URL, which is why my post remained in obscurity.

Yes, this is partly about traffic and ego. But part of it is just about being discovered. Lots of people are looking for content, and if there's someone looking for my content, I should do everything in my power to make that easier.

Categories: General


 

Tourneytopia's New Look

posted on 03/06/06 at 11:45:35 pm by Joel Ross

A couple of weeks ago, we decided we couldn't wait for the final logo to launch Tourneytopia.com, so we went with a nice little text-based header, with plans to switch once we had a real header. Well, today, our designer, who's been awesome for us thus far, delivered again, sending a few different sizes we can use. So, with that, I'm happy to show you guys our new logo:

Tourneytopia.com

I'm very happy with how it turned out! What do you think?


Categories: Develomatic


 

In Honor Of ESPN...

posted on 03/06/06 at 09:56:07 pm by Joel Ross

...this site will be called "RossCode The Blog" from now on.


Categories: Sports


 

Telling The Telcos To Shove It

posted on 03/06/06 at 09:40:35 pm by Joel Ross

So, the telcos are at it again. First, they said they wanted to charge companies like Google and Yahoo for the right to deliver content through their networks to people like you and me. Well, even a company like AT&T doesn't really want to get into a turf war with Microsoft and Google, so when they pushed back, guess what happened?

The telcos decided they'd rather be the big fish in the game and charge you and me. Surprise, surprise, right? I guess they figure we won't put up a fight, but they're mistaken. Didn't AOL already go through the whole "internet toll road" thing and decide that "all-you-can-eat" was a much better model? Just like everyone else does - even AT&T offers unlimited long distance packages - you'd think they'd understand this stuff by now.

Anyway, Andrew Connell is calling for us to stand up to the telcos and show them that we are united against this. He mentions a badge that we can all throw on our sites showing that we won't stand for this type of treatment. I agree!

Any designers out there want to create a nice image we can use? It would be nice to make it trackable - put it in a post, and point it to a central location, where we can trackback against it, and be able to use that as proof of the support behind the consumer movement.


Categories: General


 

AIM Opens Up

posted on 03/06/06 at 02:43:25 pm by Joel Ross

Back in 2001, a friend pointed out Trillian, a great IM client that allows you to connect to IRC (does anyone still use that?), ICQ, Yahoo, AIM, and MSN at the same time. Since I had 4 IM clients loaded at the time, it made perfect sense for me to use it.

But, for those first six to twelve months that I used it, there was no way I would recommend it to anyone else, unless they were technical in nature. Why? You had to upgrade it just about every day - all because AOL kept changing up the protocol that AIM used to communicate. And they did it specifically because of tools like Trillian.

If you had asked me then whether I thought I would ever see this announcement, I would have said "No." I'm not that excitable really, so just a simple no would be all you'd get. But here we are, in 2006, in a time when openness is the new "thing," and AOL is opening up their IM network. This means a company like Trillian can now go through a supported channel to integrate with AIM.

Once again, though, Google is forcing the revolution. They were the first company to offer a major chat client built on an open network (Jabber), and since then, IM walls have been coming down. MSN and Yahoo have promised integration, and AIM is now an open network. What is the world coming to? In case you were wondering, Google forced along another revolution when they offered a 1GB email account for free - try to find a web-based email client that offers you less than 5 MB now. Well, no, don't - I don't need it - but the point is that finding one is a lot tougher now than it was two years ago.


Categories: Software


 

Differing Opinions on Differencing Disks

posted on 03/06/06 at 12:13:25 am by Joel Ross

After reading my post on how I use virtual machines, Andrew Connell pointed me to an article he wrote about his extensive use of Virtual PC, including how he uses differencing disks to his advantage. He also says he doesn't see the huge differencing disks that I'd heard about in the past. I'm now rethinking my plan, and when I update my base image to have Visual Studio 2005, I may give it a try.

My biggest hang-up with using differencing disks wasn't the size - I really do find it hard to believe that a base disk plus two differencing disks would be larger than three separate disks - it was the patching. But whenever I need a new disk, I could always create a new differencing disk - containing just the patches and any updated software I want to install - on top of my existing base disk, and then use that new disk as my new base. Right now, my base disk does not have Visual Studio 2005 on it, and as soon as I have a new project, I'll probably give it a try.


Categories: Development


 

Nant Standards

posted on 03/05/06 at 01:00:00 am by Joel Ross

TheServerSide.NET has an older article that's a nice introduction to Nant, which, if you've read this blog for a while, you'll know I swear by. If you're not familiar with Nant, go read the article. It'll give you a good feel for what you can do with it.

But that's not the point of this post. I've posted enough Nant introduction articles. I'm still looking for the killer Nant article - one that gets into the details of how to do some fairly complicated things with it - but this one does have something I haven't seen anywhere else: standard target names. The article lists seven of them:

  1. Init - Use this to get everything set up. You know, initialize things. Create folders. Get the latest version from your source control software. Things like that.
  2. Build - The heart of a build, at least initially. My experience has been that early in a project, this is the most important part of your build, but as the project progresses, it becomes less important. Not because it really is less important, but because once you get the build part working, you tend to forget about it. Then as everyone gets used to it (especially if you use continuous integration), it's pretty solid, and people just assume that it works.
  3. Docs - What's documentation again? Just kidding. This would be the place to make your NDoc help files.
  4. Test - If you're sold on unit testing, then this would be the place to run those.
  5. Dist - Create all of your distributable files. Early in the project, this might not even be used. You have to build something before you can distribute it, so early on, this could be missing, or it could be pretty simple (zip up a few files). Later, as you get a feel for how you plan to deploy your project, this becomes more and more important (this is why the Build target loses importance over time). This is also the "show off" target, in my opinion. When someone asks what you're doing with Nant, this is the target you brag about. For example, on one project, this target does the following: clean up the source folders, get the latest version (or a labeled one), build in release mode, copy the files to a new location, get the correct config files, and zip it all up so it can be copied to the production server. On another, we built the software, obfuscated it, signed it, created an installer, zipped the files, and copied the zip and installer to a distribution folder, where they could be downloaded. Those are more exciting to talk about than calling the csc compiler.
  6. Clean - Seems pretty obvious. This cleans up all the directories that you're using - typically the output directories (bin folders, where files get copied to for distribution, etc.).
  7. All - This runs all of the tasks - Init, Clean, Build, Docs, Test, and then Dist. This ensures a clean build where, by the end, everything is ready to go.

Like I've said in the past, I do all of these things, but I've never known what to name them. I have my own naming standard, but it's changed over time, so if you look at my old build files, you won't pick out a pattern. Only recently have I really settled on a standard, but having seen this, it might be better to go with something like it, so that maybe, someday, if everyone does it, build files will become standard and you'll be able to read any one of them and know what's going on...
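To make that concrete, here's a minimal sketch of what a build file using those target names might look like. The project name, file names, and specific tasks are placeholders I'm making up for illustration - your real targets would do quite a bit more:

<?xml version="1.0"?>
<project name="MyProject" default="All">

  <property name="build.dir" value="build" />

  <target name="Init" description="Set things up: create folders, get the latest source.">
    <mkdir dir="${build.dir}" />
    <!-- source control get-latest would go here -->
  </target>

  <target name="Clean" description="Wipe out the output directories.">
    <delete dir="${build.dir}" failonerror="false" />
  </target>

  <target name="Build" description="Compile the solution.">
    <!-- for a VS.NET 2003 solution; use csc or an exec of your compiler if that fits better -->
    <solution solutionfile="MyProject.sln" configuration="release" />
  </target>

  <target name="Docs" description="Generate the NDoc help files.">
    <!-- NDoc task would go here -->
  </target>

  <target name="Test" description="Run the unit tests.">
    <!-- NUnit task would go here -->
  </target>

  <target name="Dist" description="Package everything up for deployment.">
    <zip zipfile="${build.dir}/MyProject.zip">
      <fileset basedir="${build.dir}">
        <include name="**/*" />
      </fileset>
    </zip>
  </target>

  <target name="All" depends="Init, Clean, Build, Docs, Test, Dist" />

</project>

The specific tasks aren't the point - the point is that anyone opening the file knows exactly which target to look at for each job.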

Categories: Development


 

Bracket Filling Logic

posted on 03/05/06 at 12:36:25 am by Joel Ross

Now that the Tourney Bracket Control is in the wild, let's talk about how a bracket should be laid out. This is one of the things we changed in the Tourney Bracket Control (TBC) 2.0 - we've abstracted away the complexities of how brackets should be created and allowed developers to focus on what they want to do with the data coming from the bracket. We're confident that this will make developing with the Tourney Bracket Control much simpler, and you'll be able to get a bracket up and running a LOT faster than before because you don't have to think about the details ahead of time. And in case you don't believe it, we went ahead and did just that. From our Quickstarts, we created a page where you can quickly add and remove teams from a bracket. The code to do this is quite simple, and just so you know it's not smoke and mirrors, here it is:

protected void AddButton_Click(object sender, EventArgs e)
{
    Bracket1.Competitors.Add(new BracketCompetitor());
}

protected void Page_PreRender(object sender, EventArgs e)
{
    if (Bracket1.Competitors.Count <= 2)
        RemoveButton.Enabled = false;
    else
        RemoveButton.Enabled = true;
}

protected void RemoveButton_Click(object sender, EventArgs e)
{
    Bracket1.Competitors.Remove(Bracket1.Competitors[Bracket1.Competitors.Count - 1]);
}

As you can see, it's dirt simple to do. The PreRender code isn't technically needed - it just ensures you don't ever drop below two competitors (and seriously, what good is a bracket for one team?). But since it's so simple to build, of course that means we're the ones worrying about the details. Now, when you're working with the TBC, you won't really need to know how we figure out the layout and how teams are assigned. You'll just need to know where the competitors are assigned, and go from there. But I'm sure some out there will be curious about how we're doing it under the hood.

And I'm here to help.

The simple case is pretty easy. This is when the number of teams is a power of two - 2, 4, 8, 16, 32, etc. Why is this the easy case? Well, the first round is always full. There aren't any byes. Things get a little tricky when you have byes in the first round. Let's say you have 13 teams. This means that three teams get a bye in the first round. So, where do you assign teams and where do you give byes? The easiest way would be to assign two teams to each of the first round games, and the bottom three would be byes, but that's not right. Why? Well, in the second round, you'd end up with two teams who got byes playing each other. Teams usually earn byes, which means that teams 1-3 would have earned the right to take a round off. Making team 1 and team 2 play in the second round would be unfair to both of them.

So, what's the proper way to create the match ups? Well, it turns out to be an iterative process. Instead of starting in the first round, you actually start with the final round and work backwards. The final game has two "feeding" games. Game 1 feeds Game 2 if the winner of Game 1 becomes one of the competitors in Game 2. Creating the feeding games is pretty straightforward. You don't have much of a choice: the next two match ups are the upper feeder and the lower feeder of the final. The next set is more complicated. You have two match ups that each have two feeders. Like I mentioned before, you can't just do it in order - if you only had two match ups left, you'd end up with the top match up having two feeders and the other having none - an unfair situation.

So, you number the open slots 1 through 4. Start with slot 1, then go to slot 4, back to slot 2, and finally slot 3. The next round, you have 8 feeder slots for 4 match ups. The process is the same, except now a pattern starts to develop. If you number the slots 1 through 8, the slot assignment order works out like this: 1, 8, 4, 5, 2, 7, 3, 6. The sum of each pair is one plus the number of slots. But the pattern is more than just summing to 9. It can also be continued for the next round. You get 16 slots, and the easiest way to determine the slot assignment order for the next round is to take the assignment order from the previous round and expand each element into a pair. Treat each element as one half of the pair, and remember that each pair adds up to 17 (16 slots + 1). So the order becomes: 1, 16 (17-1), 8, 9 (17-8), 4, 13 (17-4 - you get the idea, right?), 5, 12, 2, 15, 7, 10, 3, 14, 6, 11. Yes, these are the same seed pairings you'll see in the NCAA tourney.
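If you're curious how that looks in code, here's a minimal sketch - not the actual TBC source, just the pattern described above. It starts with a single slot and, each time the bracket doubles, expands every slot into a pair that sums to one more than the new slot count:

using System.Collections.Generic;

private static List<int> GetSlotAssignmentOrder(int slotCount)
{
    // slotCount must be a power of two (2, 4, 8, 16, ...).
    List<int> order = new List<int>();
    order.Add(1);

    // Each slot x in the previous round's order expands into the pair
    // (x, size + 1 - x), so every pair sums to size + 1.
    for (int size = 2; size <= slotCount; size *= 2)
    {
        List<int> next = new List<int>();
        foreach (int slot in order)
        {
            next.Add(slot);
            next.Add(size + 1 - slot);
        }
        order = next;
    }

    return order;
}

// GetSlotAssignmentOrder(16) returns 1, 16, 8, 9, 4, 13, 5, 12, 2, 15, 7, 10, 3, 14, 6, 11.

The method name and signature are just for illustration - the TBC handles all of this internally.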

The advantage of doing it this way is that you can stop at any point and have a valid bracket. If you only have thirteen teams for 16 slots, then slots 14, 6, and 11 are open. This makes the bye situation much more reasonable.

Now, from a Tourney Bracket Control perspective, we make things a little friendlier. Instead of having match up IDs (which are generated) that jump all over the place, we go through and re-assign them so they're in order. For a 64-team tourney, Round 1 will be MatchUp1 through MatchUp32, Round 2 will be MatchUp33 through MatchUp48, etc. Then we use the assignment order for the first round to determine where the team slots are, but teams are put into the slots in order as well - so when you bind up to the TBC, you can order your teams in such a way that you can determine the initial match ups fairly easily.

So, how does all this work in a double elimination bracket? Good try. I've got it worked out, but because we wanted to get it released, we pulled it out. We'll finish that up this summer, and then I'll talk about how double elimination brackets are created and teams are assigned.

Categories: Develomatic


 

Upgrading Large ASP.NET Projects

posted on 03/04/06 at 11:12:19 pm by Joel Ross

We made a decision on a project I've been on for over a year now to upgrade to ASP.NET 2.0. This is my first major upgrade project, and to be honest, it was fairly smooth.

Now that we're on .NET 2.0, we have a few issues, but we'll get to that. First, the upgrade process. We upgraded two projects that shared a library. We ran a test ahead of time - disconnected from source control - and ran into one immediate issue: the first project upgraded just fine, but as soon as we tried to open the second one, it bombed because the shared library project had already been upgraded.

Easy enough to fix. We actually have about five or six projects, and every one of them uses the same shared library. Temporarily, we took a compiled version of the shared library, checked it in, and then removed the project from all but one solution. We then modified each project that had a reference to it to reference the DLL rather than the project, and ensured everything built just fine. Then we checked it all in that way.

Now we were ready for the "real" conversion. We made one obvious mistake when we did it. Well, it was obvious afterwards, and I'm guessing there's some best practice document out there that says so, and I think I remember hearing something about it at Tech Ed, but we didn't listen. We upgraded while connected to source control - and not just that, but while files were checked in. That caused some issues for us. The biggest was that the project files were all modified on the machine we did the upgrade on, but were never checked out. Again, easy enough to fix. We just checked out the project files on the upgrade machine without overwriting the local files and then checked them in.

Looking back, there's one other thing I would have done differently, too. I would go through and delete from source control any file that was not part of our web project. In VS.NET 2003, with a project file, you could exclude files and still have them lying around the file system. But with VS 2005's model, every file is part of the project. By removing extraneous files ahead of time, the upgrade would have been smoother for us. We ended up with a few files that weren't upgraded and weren't intended to be part of the project, but because they were in Visual SourceSafe, they were now part of our project.

Which brings me to my biggest gripe so far with Visual Studio 2005: the website project in a team environment. I love that Visual Studio now handles renaming and deleting files and syncs that back to your source control provider, but if I delete a file, it's still on the other developers' machines. So when they go to check in, they could inadvertently re-add that file.

I know, I know. You're supposed to check what you're checking in (and - gasp - comment on what that check-in is for), but not everyone does that. I've known quite a few developers who right-click on the solution and go to Check-In (recursive). They assume that everything that's being checked in is a result of an action they've taken (they added the file, they checked it out, etc.) and not because of an action they didn't take (they didn't delete the file that someone else removed from the project). While you should know exactly what you are checking in, it's not really a bad assumption.

I know the Web Application Project model should take care of this, but it's still in beta. I'm not sure I'm ready to recommend it to clients until I've had a chance to play with it myself - a good case for using it over at Tourney Logic!

Have you upgraded a project yet? How did it go? Any major gotchas you ran into?


Categories: ASP.NET


 

Use C# To Write Javascript?

posted on 03/04/06 at 10:14:11 pm by Joel Ross

Ever since I started writing web apps, I've realized that there's a huge need to be able to write solid Javascript. Even when ASP.NET launched, there was still a huge need to know Javascript - despite the claims you heard. I'm hearing a lot of the same claims now that ASP.NET 2.0 is out - not quite as much, but they're still there, despite the proliferation of AJAX. Obviously, the need for Javascript is not going away.

Well, given that Javascript seems to be such a touchy language, and is dependent (to an extent) on the browser the user is using, why isn't there an extension to the .NET Framework that allows us to write our Javascript functions in a language familiar to us, such as C#? It seems to me that it wouldn't be that difficult to manage. The framework already does things like this right now - the Atlas framework handles the differences between the XmlHttp objects in IE and Firefox. Why can't this be extended to allow us to write our own "client-side" methods on the server - and have the framework translate them to Javascript for us?

Here's an example of what I'm talking about:

[ClientMethod]
public void ChangeText(string text)
{
    MyLabel.Text = text;
}

Then on a button:

MyButton.OnClientClick += new ClientMethodEventHandler(ChangeText(MyTextBox.Text));

Obviously, you would have MyTextBox on the page somewhere.

On the client side, this would all get translated. First, the method:

function ChangeText(text)
{
    document.getElementById('MyLabel').value = text;
}

This works in most cases, which is good, but what if MyLabel is in another control? By translating on the server side, the framework could resolve the right client ID for you, so if MyLabel were in a user control (MyUserControl), it would spit out this:

function ChangeText(text)
{
    document.getElementById('MyUserControl_MyLabel').value = text;
}

And yes, I know you should check to see if it can find the control first, but I'm just illustrating a point. As for the client click event, it would render a button like so:

<input type='button' id='MyButton' onClick="javascript:ChangeText(document.getElementById('MyTextBox').value);" value="Click Me" />

And of course, if MyTextBox is in MyUserControl, it would render as 'MyUserControl_MyTextBox'.

To me, this seems like a much easier way to write client code - for a couple of reasons. First, you're writing it in a language that you are more familiar with, so it'll be easier to write in the first place. Second, you're protected against change. When .NET 5.0 and IE 13 are using JoelScript as the scripting language, your code still works - because the framework generates the client script for you - in the best client script for the task!

Thoughts? Yes? No? Should I send this to the Framework team?


Categories: ASP.NET


 
