I finally succumbed to Apple's pleas to update Tiger on my powerbook to 10.4.4. Maven was dumping hotspot errors and Eclipse was misbehaving, so an update seemed in order. Well, when the system came up, my menu bar items (clock, battery status, wifi status, speaker volume, etc) were gone! The network settings were goofed up and I had this profound flash of regret that I hadn't done a backup before doing the update.
Thankfully, Mike Hoover and davidx (cohorts at Technorati) were on hand to assist and dug up the following factoid:
/System/Library/CoreServices/Search.bundle
sudo mv /System/Library/CoreServices/Search.bundle /var/tmp/
, then rebooted.

apple powerbook java eclipse maven macosx technorati
( Jan 27 2006, 11:39:48 AM PST ) Permalink

A lightweight build system should be able to run a project's test harness quickly so that developers can validate their work and promptly move on to the next thing. Each test should, in theory, stand alone and not require the outcome of prior tests. But if testing the application requires setting up a lot of data to run against, the theoretical can run into a fundamental conflict with the practical. How does it go? "The difference between theory and practice is different in theory and in practice."
Recently I've been developing a caching subsystem that should support fixed size LRU's, expiration and so forth. I'd rather re-use the data that I already have in the other test's data set -- there are existing tests that exercise the data writer and reader classes. For my cache manager class, I started off the testing with a simple test case that creates a synthetic entity, a mock object, and validates that it can store and fetch as well as store and lazily expire the object. Great, that was easy!
What about the case of putting a lot of objects in the cache and expiring the oldest entries? What about putting a lot of objects in the cache and fetching them while the expiration thread is concurrently removing expired entries? Testing the multi-threaded behavior is already a sufficient PITA; having to synthesize a legion of mock objects means more code to maintain. Elsewhere in the build system I have classes that the tests verify can access legions of objects, so why not use that? The best code is the code that you don't have to maintain.
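A cache manager like the one described (fixed-size LRU plus lazy expiration) can be sketched in a few lines on top of java.util.LinkedHashMap; the class and member names here are illustrative, not the actual Technorati code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal fixed-size LRU cache sketch. Entries carry an expiration time that
// is checked lazily on fetch, so a background reaper thread is optional.
public class LruCache<K, V> {
    private static class Entry<V> {
        final V value;
        final long expiresAt;
        Entry(V value, long ttlMillis) {
            this.value = value;
            this.expiresAt = System.currentTimeMillis() + ttlMillis;
        }
    }

    private final long ttlMillis;
    private final Map<K, Entry<V>> map;

    public LruCache(final int maxSize, long ttlMillis) {
        this.ttlMillis = ttlMillis;
        // accessOrder=true means iteration order is least-recently-used first,
        // and removeEldestEntry gives us the fixed-size eviction for free
        this.map = new LinkedHashMap<K, Entry<V>>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, Entry<V>> eldest) {
                return size() > maxSize;
            }
        };
    }

    public synchronized void put(K key, V value) {
        map.put(key, new Entry<V>(value, ttlMillis));
    }

    public synchronized V get(K key) {
        Entry<V> e = map.get(key);
        if (e == null) return null;
        if (System.currentTimeMillis() > e.expiresAt) {
            map.remove(key); // lazy expiration on access
            return null;
        }
        return e.value;
    }

    public synchronized int size() { return map.size(); }
}
```

The `synchronized` methods are the simplest thing that could possibly work; the concurrent fetch-while-expiring case the tests need to cover is exactly where this gets interesting.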
<sigh />
I want to be agile, I want to reuse and maintain less code and I want the test harness to run quickly. Is that too much to ask?
My take on this is that agile methodologies are composed of a set of practices and principles that promote (among other things) flexible, confident and collaborative development. Working in a small startup, as I do at Technorati, all three are vital to our technical execution. I have a dogma about confidence:
Lately I've been favoring maven for build management (complete with all of its project lifecycle goodies). Maven gives me less build code to maintain (less build.xml stuff). However, one thing that's really jumped in my way is that in the project.xml file, there's only one way and one place to define how to run the tests. This is a problem that highlights one of the key tensions with TDD: from a purist standpoint, that's correct; there should be one test harness that runs each test case in isolation of the rest. But in my experience, projects usually have different levels of capability and integration that require a choice, either:
I ended up writing an ant test runner that maven invokes after the database is setup. Each set of tests that transitions the data to a known state lays the groundwork for the next set of tests. Perhaps I'd feel differently about it if I had more success with DBUnit or had a mock-object generator that could materialize classes pre-populated with desired data states. In the meantime, my test harness runs three times faster and there's less build plumbing (which is code) to maintain than had I adhered to the TDD dogma.
ant maven tdd refactoring unit testing agile java technorati
( Jan 26 2006, 06:58:22 PM PST ) Permalink

To scale the SQL query load on a database, it's a common practice to do writes to the master but query replication slaves for reads. If you're not sure what that's about and you have a pressing need to scale your MySQL query load, then stop what you're doing and buy Jeremy Zawodny's book High Performance MySQL.
If you've used MySQL replication and written application code that dispatches INSERTs, UPDATEs and DELETEs to the master while sending SELECTs to the slaves (except for transactional operations, where those have to go to the master), you know how it can add another wrinkle of complexity. Well, apparently there's been a little help in the MySQL JDBC driver for a while and I'm just learning of it now. The ReplicationConnection class in the MySQL Connector/J jar (as of v3.1.7) provides the dispatching pretty transparently. When the state of the readOnly flag is flipped on the ReplicationConnection, it changes the connection accordingly. It will even load balance across multiple slaves. Where a normal JDBC connection to a MySQL database might look like this:
Class.forName("com.mysql.jdbc.Driver");
Connection conn = DriverManager.getConnection("jdbc:mysql://localhost/test", "scott", "tiger");

You'd connect with ReplicationDriver this way:
ReplicationDriver driver = new ReplicationDriver();
Connection conn = driver.connect("jdbc:mysql://master,slave1,slave2,slave3/test", props);
conn.setReadOnly(false);
// do stuff on the master
conn.setReadOnly(true);
// now do SELECTs on the slaves

and ReplicationDriver handles all of the magic of dispatching. The full deal is in the Connector/J docs; I was just pleased to finally find it!
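For contrast, the hand-rolled dispatch that ReplicationDriver makes unnecessary might be sketched like this; the class and method names are illustrative, not from any library:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Hand-rolled read/write splitting: writes (and transactions) go to the
// master, SELECTs round-robin across the slaves. This is the wrinkle of
// complexity that ReplicationDriver absorbs for you.
public class QueryDispatcher {
    private final String masterUrl;
    private final String[] slaveUrls;
    private int next = 0; // naive round-robin cursor

    public QueryDispatcher(String masterUrl, String[] slaveUrls) {
        this.masterUrl = masterUrl;
        this.slaveUrls = slaveUrls;
    }

    // pick the next slave URL in rotation
    public synchronized String nextSlaveUrl() {
        String url = slaveUrls[next % slaveUrls.length];
        next++;
        return url;
    }

    // INSERT/UPDATE/DELETE and transactional work
    public Connection getMaster() throws SQLException {
        return DriverManager.getConnection(masterUrl, "scott", "tiger");
    }

    // read-only SELECTs
    public Connection getSlave() throws SQLException {
        return DriverManager.getConnection(nextSlaveUrl(), "scott", "tiger");
    }
}
```

Every call site then has to know which kind of statement it's issuing, which is exactly the bookkeeping that flipping setReadOnly() on one connection replaces.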
I know of similar efforts in Perl like DBD::Multiplex and Class::DBI::Replication but I haven't had the time or opportunity to try them. Brad Fitzpatrick has discussed how LiveJournal handles connection management (there was a slide mentioning this at OSCON last August). LiveJournal definitely takes advantage of using MySQL as a distributed database but I haven't dug into LJ's code looking for it either. In the ebb and flow of my use of Perl, it is definitely ebbing these days.
mysql database replication jdbc perl DBI
( Jan 17 2006, 11:42:09 PM PST ) Permalink

There is widespread frustration with standards that try to boil the ocean of software problems that are out there to solve. Tim Bray has sound advice:
"If you're going to be designing a new XML language, first of all, consider not doing it."

In his discussion of Minimalism vs. Completeness he quotes Gall's Law:
"A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system."

The tendency to inflate standards is similar to software development featuritis. I'm oft heard to utter the refrain, "Let's practice getting above the atmosphere before shooting for the moon." The scope of what is "complete" is likely to change 20% along the way towards getting there. The basic idea is to aim for sufficiency, not completeness; simplicity and extensibility are usually divergent. Part of the engineering art is to find as much of both as possible.
On the flip side, where completeness is an explicit upfront goal, there are internal tensions there as well. Either building for as many of the anticipated needs as possible or a profound commitment to refactoring has to be reckoned with. The danger of only implementing the simplest thing without a commitment to refactoring is that expediency tends to lead people, particularly if they haven't solved that type of problem before, to do the easy but counter-productive thing: taking short cuts, cutting and pasting and hard coding random magic doodads. As long as there is a commitment to refactoring, software atrophy can be combatted. Reducing duplication, separating concerns and coding to interfaces enables software to grow without declining in comprehensibility. Throw in a little test-driven development and you've got a lot of the standard shtick for agility.
Even though there's a project at work that I've been working on mostly solo, it's built for agility. The build system is relatively minimal thanks to maven. The core APIs and service interfaces (which favor simplicity: REST) are unit tested and the whole thing is monitored under CruiseControl to keep it all honest. This actually saved us the other day when a collaborator needed additional data included in the API's return values. He did the simplest thing (good) but I promptly got an email from CruiseControl that the build was broken. I reviewed his check-in and refactored it by moving the code that was put in-line into its own method. I wrote a test for the method that fetches the additional data, and then wrote one for the original method's responses to include the additional data. The original method then acquired a flag to indicate whether the responses should be dressed up with this additional data; not all clients need it and it requires a round-trip to another data repository, so making it a parameter makes sense since the applications that don't need it are performance sensitive. Afterwards, the code enjoyed additional benefits in that the caching had granularity that matched the distribution of the data sources. Getting the next mail from CruiseControl that it was happy with the build was very gratifying. I need to test-infect my colleagues so they learn to enjoy the same Pavlovian response.
Anyway. I'm short on sleep and long on rambles this morning.
There are times when simple problems are mired in seemingly endless hand wringing and you have to stand up to shout JFDI. The Java software world, like RDF theorists and other parochial ivory tower clubs, seems to have a bad case of specificationitis. There are over 300 JSRs. Do we need all of those? On the other hand, great software is generally not created in the burst of a hackathon. There's no doubt that when a project has fallen into quicksand, getting all parties around a table to haul it out is an important way to clear the path. Rapid prototyping is often best accomplished in a focused push. I like prototyping to be used as a warm up exercise. If you can practice getting lift-off on a problem and you can attain high altitudes with some simple efforts, your likelihood of making it to the moon increases.
agile refactoring technorati maven unit testing
( Jan 10 2006, 07:59:45 AM PST ) Permalink

Looks like I better hasten my effort to upgrade to Roller 2.x. This (v1.1) installation hit an OutOfMemoryError a little while ago and crashed the JVM in all of its hotspot glory. I'm suspicious of the caching implementation in Roller (IIRC, it's OSCache). For a non-clustered installation, plain-old-filesystem caches JFW. For distributed caches, JFW applies to memcached. We've been using the Java clients (and Perl and Python) for memcached productively for a long time now. Interestingly, someone was inspired to write a Java port of the memcached server. Crazy! And I think to myself, what a wonderful world.
( Jan 09 2006, 10:20:03 PM PST ) Permalink

The levers and dials of character set encoding can be overwhelming; just looking at the matrix supported by J2SE 1.4.2 gives me vertigo. Java's encoding conversion support is simple enough, if not garrulous:
String iso88591String = request.getParameter("q");
// recover the bytes the container decoded as ISO-8859-1 and re-decode them as UTF-8
String utf8String = new String(iso88591String.getBytes("ISO-8859-1"), "UTF-8");

But what do you do if you don't know what encoding you're dealing with to begin with? It looks as though there are a couple of ways to do it:
String q_unknown_japanese = request.getParameter("q");
String q_unicode = new String(q_unknown_japanese.getBytes("ISO8859_1"), "JISAutoDetect");
Let's call the CGI specification what it is: a burned out and anemic teenager. While it seems kinda cool that Apache 2.2 is going to get mod_proxy_fcgi, I've long wondered about using AJP13 to interface with web application runtimes other than servlet containers.
Brian McCallister did a kick-butt cut-to-the-chase preso on Ruby on Rails at ApacheCon in San Diego. I can imagine why he's gung-ho to get FastCGI support up to date; it seems to be the way to run RoR. But since learning that AJP13 was going to be (and now is) built into Apache 2.2's mod_proxy framework, I've been thinking how much nicer it'd be for other application frameworks to also be able to run outside the HTTP request handling process/thread.
We have some services that run under mod_perl that I've been taking second (and third) looks at. Wouldn't it be nice to deploy that application independent of the HTTP server runtime as one can with a Java webapp? Essentially, when it's boiled down to bare metal, perhaps that's all FastCGI is, but it, it... it's CGI! Isn't it just setting/getting global environment variables? STDIN/STDOUT/STDERR? Isn't that so, well, 1994? Maybe I need to think about it some more but that was my take-away last time I built anything with FastCGI (admittedly, in the 1990's).
I found what looks like AJP13 protocol support for Perl. Even though I don't read Japanese I'll infer from the context that he was/is interested in the same thing. Though whenever I see "use threads" in Perl, I fear the worst. Anyway, the likelihood of me finding myself with the time on my hands to implement AJP13 in Ruby is low; first, I still need to learn Ruby enough to get crafty.
rubyonrails ruby java apache cgi fastcgi ajp13 perl mod_perl
( Jan 07 2006, 01:20:50 PM PST ) Permalink

No, not a typo. OSDL is something else. I'm interested in OSLD: open source language detection. I've used Language::Guess to detect languages in arbitrary text with Perl, and it works pretty well. But how are folks solving the problem in Java?
It looks like Oracle has language detection as part of their "Globalization Development Kit" ... but what about open source? Sadly, the Nutch Language Identifier Plugin only supports European languages, no CJK. What are the other options?
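Lacking an off-the-shelf Java option, the standard technique (as I understand it, the one Language::Guess uses under the hood) is character n-gram profiling: build a frequency profile of n-grams per language from sample text, then score unknown text against each profile. Here's a toy sketch of the idea; the class name and inline "training" strings are mine, not from any library:

```java
import java.util.HashMap;
import java.util.Map;

// Toy character-trigram language guesser. Real implementations use rank-order
// statistics over large training corpora; this just counts trigram overlap.
public class NgramGuesser {
    // count the n-grams occurring in a text
    public static Map<String, Integer> profile(String text, int n) {
        Map<String, Integer> counts = new HashMap<String, Integer>();
        for (int i = 0; i + n <= text.length(); i++) {
            String gram = text.substring(i, i + n);
            Integer c = counts.get(gram);
            counts.put(gram, c == null ? 1 : c + 1);
        }
        return counts;
    }

    // score = number of distinct n-grams the unknown text shares with a profile
    public static int score(Map<String, Integer> languageProfile, String text, int n) {
        int hits = 0;
        for (String gram : profile(text, n).keySet()) {
            if (languageProfile.containsKey(gram)) {
                hits++;
            }
        }
        return hits;
    }
}
```

Nothing here handles CJK's much larger character repertoires, where n-gram profiles need far more training data; presumably that's part of why the Nutch plugin stops at European languages.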
opensource open source i18n language java perl nutch oracle
( Jan 06 2006, 02:22:54 PM PST ) Permalink

I ran a test to prove to myself that for simple XML documents, the best way to parse them may be to skip capital-P Parsing altogether and just use a plain-old regular expression pattern match.
The XML format I wanted to test is the response from the Technorati /bloginfo API. I threw together a Perl based benchmark quickly enough and here are the results:
Benchmark: timing 10000 iterations of regexp, xpath...
    regexp:   0 wallclock secs (  0.13 usr +  0.00 sys =   0.13 CPU) @ 76923.08/s (n=10000)
              (warning: too few iterations for a reliable count)
     xpath: 137 wallclock secs (136.17 usr +  0.04 sys = 136.21 CPU) @ 73.42/s (n=10000)

... the regexp parse was three orders of magnitude faster than the XPath parse. I'm curious now what the comparison would be for Java's regexp support versus, say, Jaxen and JDOM (which is how I usually do XPath in Java). In my dabblings with timings, Java regexps are very fast. Apparently, Tim Bray found this as well.
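The Java side of that comparison might start with something like this; the class name is mine, but the pattern mirrors the Perl benchmark's regexp:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Extract the <author> element's contents with java.util.regex,
// the moral equivalent of Perl's m{<author>(.*)</author>}sm.
public class RegexpExtract {
    // DOTALL lets "." match newlines, like Perl's /s modifier
    private static final Pattern AUTHOR =
        Pattern.compile("<author>(.*)</author>", Pattern.DOTALL);

    public static String extractAuthor(String xml) {
        Matcher m = AUTHOR.matcher(xml);
        return m.find() ? m.group(1) : null;
    }
}
```

Compiling the Pattern once and reusing it is what makes Java regexps fast in a tight loop; a fair benchmark against Jaxen/JDOM would hoist the compile out of the timed section, just as the Perl version hoists the XML::Parser construction.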
Here's the Perl code:
#!/usr/bin/perl

use XML::XPath;
use XML::XPath::XMLParser;
use XML::Parser;
use Benchmark qw(:all);

my $X = new XML::Parser(ParseParamEnt => 0); # non-validating parsing, please

timethese(10000, { 'xpath' => \&xpath, 'regexp' => \&regexp });

sub xpath {
    my $b = getBlog();
    my $parser = XML::XPath::XMLParser->new(parser => $X);
    my $root_node = $parser->parse($b);
    my $xp = XML::XPath->new(context => $root_node);
    my $nodeset = $xp->find('/tapi/document/result/weblog/author');
    die if ! defined($nodeset);
}

sub regexp {
    my $b = getBlog();
    my ($author) = $b =~ m{<author>(.*)</author>}sm;
    die if ! defined($author);
}

sub getBlog {
    return q{<?xml version="1.0" encoding="utf-8"?>
<!-- generator="Technorati API version 1.0 /bloginfo" -->
<!DOCTYPE tapi PUBLIC "-//Technorati, Inc.//DTD TAPI 0.02//EN"
    "http://api.technorati.com/dtd/tapi-002.xml">
<tapi version="1.0">
  <document>
    <result>
      <url>http://www.arachna.com/roller/page/spidaman</url>
      <weblog>
        <name>What's That Noise?! [Ian Kallen's Weblog]</name>
        <url>http://www.arachna.com/roller/page/spidaman</url>
        <rssurl>http://www.arachna.com/roller/rss/spidaman</rssurl>
        <atomurl></atomurl>
        <inboundblogs>6</inboundblogs>
        <inboundlinks>8</inboundlinks>
        <lastupdate>2006-01-02 18:38:03</lastupdate>
        <lastupdate-unixtime>1136255883</lastupdate-unixtime>
        <created>2004-02-23 12:04:51</created>
        <created-unixtime>1077566691</created-unixtime>
        <rank>false</rank>
        <lat>0.0</lat>
        <lon>0.0</lon>
        <lang>26110</lang>
        <author>
          <username>spidaman</username>
          <firstname>Ian</firstname>
          <lastname>Kallen</lastname>
          <thumbnailpicture>http://static.technorati.com/progimages/photo.jpg?uid=11648</thumbnailpicture>
        </author>
      </weblog>
      <inboundblogs>6</inboundblogs>
      <inboundlinks>8</inboundlinks>
    </result>
  </document>
</tapi>
};
}
For some of the messaging infrastructure at Technorati where the messages are real simple name/value constructs, we've been passing on using XML at all. Using a designated-character-delimited format string (say, tabs) that can be rapidly transformed into a java.util.Map (or a Perl hash, a Python dictionary, yadda yadda yea) and passing messages that way buys a lot of cheap milage. We like cheap milage.
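That transformation is about as cheap as parsing gets. A minimal sketch in Java, assuming a message laid out as alternating name TAB value fields (the exact wire layout is my assumption):

```java
import java.util.HashMap;
import java.util.Map;

// Turn a tab-delimited name/value message straight into a java.util.Map.
// No parser, no DTD, no entity expansion: just split on the delimiter.
public class TabMessage {
    public static Map<String, String> parse(String message) {
        Map<String, String> fields = new HashMap<String, String>();
        String[] tokens = message.split("\t");
        // pair up alternating name/value tokens
        for (int i = 0; i + 1 < tokens.length; i += 2) {
            fields.put(tokens[i], tokens[i + 1]);
        }
        return fields;
    }
}
```

The obvious cost is that a value can never contain a tab; for simple name/value messages that restriction is easy to live with.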
xpath regexp perl java messaging technorati
( Jan 05 2006, 11:26:28 AM PST ) Permalink

Now that I'm messing around with a Roller implementation from within the last 7 months (migrated from Roller 0.98 to 1.1), I'm going to work on closing the gap to 2.0. Migrating all of my apps from an old (3.x) version of MySQL to 4.1.x wasn't too bad. But it appears that somewhere along the way to Roller 2.0, somewhere in the MySQL upgrade cycle perhaps, the post <-> category mappings got mangled and that was resulting in NPEs when the system tries to fetch the categories.
In the meantime, I implemented embedding cosmos links in my posts by patching WEB-INF/classes/weblog.vm (from the 1.1.2 release):
479,486c479
< #end
<
< #macro( showCosmosLink $entry )
< <a href="http://technorati.com/search/$absBaseURL/page/$userName/#formatDate($plainFormat $entry.PubTime )"><img
<   src="http://static.technorati.com/pix/icn-talkbubble.gif"
<   border="0"
<   title="Links to this Post" /></a>
< #end
---
> #end

In the velocity template, I just added:
#foreach( $entry in $entries )
    <a name="$utilities.encode($entry.anchor)" id="$utilities.encode($entry.anchor)"></a>
    <b>$entry.title</b>
    #showEntryText($entry)
    <span class="dateStamp">(#showTimestamp($entry.pubTime))</span>
    #showEntryPermalink( $entry )
    #showCosmosLink( $entry )
    #showCommentsPageLink( $entry )
    <br/>
    <br/>
#end

I think the POJOs and macros are different in 2.0 but I'll post a cosmos link update when I get there.
technorati roller velocity mysql
( Jan 04 2006, 07:29:26 AM PST ) Permalink