Saturday, November 22, 2003

Thick or thin

Danno Ferrin blogged on the overuse of thin clients when thick clients are sometimes more appropriate. He describes a number of useful criteria for thinking about which client style to use.

But he stops too soon. While he mentions at the end that WebStart is a new friend, he didn't connect this to thoughts on the scale of deployment. A lot of people use thin clients to avoid sneakernet install issues with thick clients, but in reality Java Web Start is solving this problem now for thick clients.
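
Deployment via Java Web Start amounts to publishing a small XML descriptor alongside the jars; the client downloads and caches them on first launch and picks up new versions automatically. A minimal sketch of a JNLP descriptor, with every name and URL hypothetical:

    <?xml version="1.0" encoding="utf-8"?>
    <!-- Hypothetical example: sneakernet-free deployment of a thick client. -->
    <jnlp spec="1.0+" codebase="http://www.example.com/console" href="console.jnlp">
      <information>
        <title>Trading Console</title>
        <vendor>Example Corp</vendor>
      </information>
      <resources>
        <j2se version="1.4+"/>
        <jar href="console.jar"/>
      </resources>
      <application-desc main-class="com.example.Console"/>
    </jnlp>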

But one of the larger problems in my mind, which Danno didn't mention, is UIs in which the displayed data is driven by events other than user actions. Stock trading consoles want to display dynamically updating quote information. Power grid management wants to display real-time loading, current, voltage, and frequency samples. Sure, you can solve these problems with auto-refresh, but that has its own cost in extra reads (solving for accuracy) or out-of-date data (solving for bandwidth). Not to mention that the data that actually changes is way, way smaller than the whole page being refreshed. Again, bandwidth.
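
To make the contrast concrete, here is a minimal sketch (1.4-era Java, all names hypothetical) of the push style a thick client can use: the feed delivers only the changed quote, and only the affected widget repaints. No timer, no page reload, no redundant reads.

    import javax.swing.JLabel;
    import javax.swing.JPanel;
    import javax.swing.SwingUtilities;

    // The feed thread calls this whenever a quote changes -- no polling.
    interface QuoteListener {
        void quoteChanged(String symbol, double price);
    }

    class TickerPanel extends JPanel implements QuoteListener {
        private final JLabel label = new JLabel("waiting...");

        TickerPanel() {
            add(label);
        }

        public void quoteChanged(final String symbol, final double price) {
            // Update just this widget, on the event dispatch thread.
            SwingUtilities.invokeLater(new Runnable() {
                public void run() {
                    label.setText(symbol + ": " + price);
                }
            });
        }
    }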

Another case was a web calendar I was looking at recently. Setting up a group for cross-referencing calendars to find availability required flipping through piles of screens: find a person's calendar on one screen, go to another to add it to the group, then back to the search to find the next person. It came to something like four screens per person added, each with HTTP page-loading times. Took forever. Totally crazy. Compare that to some fat clients doing similar things in one dialog.

Some thick clients are a joy to use, while their thin counterparts make me want to use another product.


[Comments from my previous blog]


1. a reader left...
Sunday, 23 November 2003 2:12 pm
 
Fat vs Thin is certainly a valid argument to have, but I think that most discussions rest on certain assumptions that don't have to hold true. The biggest of these is the assumption that every user action in a thin client requires an HTTP round-trip. Smart use of javascript and HTML makes it relatively easy to avoid these things - the web calendar you mention is a classic. The real argument there is, as Danno puts it, the amount of data coming down the pipe. In my spare time atm I'm working on a training diary aimed at rowers, available at http://training.coxless.com, which has a MS Money style interface to viewing a diary, and works (I think) very well.

Having said all that, there are plenty of times when a thick client works wonders - but the biggest argument from my point of view is the number of users. Not too long ago I finished a contract where a thick client was deployed to 60-70 users, and even at those numbers deployment was an issue (mind you, they refused to accept my advice regarding webstart).

Dmitri Colebatch
2. glen martin left...
Monday, 24 November 2003 10:18 am
 
Well, ok, there's javascript, but I think that is blending the two styles of interface. If there's behaviour running on the client, it can't precisely be called thin, can it?

Though I might be accused of splitting hairs here. Many seem to say thin when they really mean browser-based. The problem is, if javascript (or whatever javascript alternative you care to name) isn't present or is disabled, what happens? True thin clients wouldn't have client-side behaviours.

But I think your closing thought perfectly underscores my thinking: despite a well-known solution to the sneakernet problem, folks continue to choose thin over thick for reasons of installation, rather than for more credible ones.

One of those more credible reasons being the size of data in the pipe.

Given Java Web Start, folks have the opportunity to make choices for more relevant reasons, which should in theory result in greater diversity of interfaces out there.

Tuesday, November 11, 2003

Derivation vs. residual knowledge

I am not a lawyer, and these comments do not constitute legal opinion, my own, my employer's, or any others'.
 
This post is not directly related to, and makes no comment on, other timely happenings. I'm just following a train of thought.
 
I find myself wondering about the intersection of 'derived work' and 'residual knowledge'. SFAIK, on changing employment the future use of residual knowledge (eg. skills, not proprietary facts) is protected in at least some jurisdictions. LGPL source (to pick an example not precisely at random) is proprietary in the sense that there is an owner, and it is used under license.

But does 'derivation' require that the source be in front of you? Or that you are remembering the original source? Or thinking about it? Surely once you get to the point that you're remembering that laying out a solution using some pattern is useful, that is only residual knowledge, even if you conceptualised that pattern by looking at or hearing about some source.

Supposing residual knowledge rules could apply, and if only *some* jurisdictions protect residual knowledge, this could lead to a truly unpleasant situation in which *where* the code is typed becomes important. Blech!
The application of residual knowledge law in cases in which there isn't an employment contract is best left to those with more fortitude than myself.

Monday, September 15, 2003

Openness, public domain and SCO

Anne Thomas Manes was writing about various shades of gray in the openness of
Java and C#, and wrote [ed: this link is broken, Anne's old blog is apparently no more. Her new blog doesn't seem to have the old posts, reminiscent of my own blog migration pain.]

Public domain means open. It is the opposite of proprietary. Open source isn't nearly as open as public domain -- as illustrated by the SCO lawsuit regarding Linux. The fact that there is a license -- even an open source license -- means that someone owns the intellectual property in Linux. SCO is claiming that it owns some of that intellectual property, and it is demanding that companies pay for the right to use it.
While I think she has, here and elsewhere in her comment, accurately reflected the difference between public domain and proprietary, her comment on SCO, as I've taken it, isn't quite accurate.

I'm not a lawyer, don't work for SCO or IBM, and don't play any of these
roles on TV.


While public domain is the antithesis of ownership, I don't think it would in any way have shielded anyone from the SCO lawsuit. Taking the SCO complaint at face value, just for the sake of argument: had some company, "HAL" perhaps, released code it licensed from SCO into the public domain, that
would in no way have protected HAL from SCO's wrath.

Nor would those who picked up the supposed public domain code have been protected. If I put stolen property on my curbside with a 'free' sign, you who pick it up still have received stolen property. Likely you wouldn't be charged for this unwitting act, but I would. And if you wanted to keep the stolen property, you could then be charged some fee.


The point here is that SCO's claim has nothing to do with the property having shown up in open source - it is to do with the property having been used by HAL outside of contracted context. SCO's ownership is the same no matter whether the contested use is open source, closed source, private transaction, employee theft, whatever.

I'm still not a lawyer.


[Comments from my previous blog]

1. a reader left...
Tuesday, 16 September 2003 5:25 pm
A de jure standard would make a huge difference in the SCO lawsuit -- except that Linux isn't a de jure standard. A de jure standard is public domain by law. Once the intellectual property is defined as public domain by an international standards body, no one can claim ownership of the IP. That's why the ISO standardization is such an important factor in regards to C# and CLI. Linux may be open source, but it is not in the public domain. This is one of the risks associated with open source -- your open source provider supplies no guarantee of indemnification if someone comes along and sues you for violating their IP. Hence SCO can sue any Linux user. This isn't possible with C# and CLI.

Anne Thomas Manes [amanes@burtongroup.com]
2. glen martin left...
Tuesday, 16 September 2003 6:40 pm
A standards body can try to put a technology in the public domain, but if the body doesn't own the technology in question it isn't then public domain. While there may be a difference for the end-user, for HAL or the standards body there would be some difficulty.

I believe there may be a situation along these lines with respect to some of the MS contributions to standards bodies like ECMA and W3C. If only portions of the standards are donated, the public domain release by the standards body isn't all that effective in permitting free (that is, unconstrained) use.
3. a reader left...
Wednesday, 17 September 2003 7:16 am
But that's the major difference between an international standards body, such as ISO, and a vendor consortium, such as W3C. W3C has no right to put technology into the public domain. W3C doesn't own the IP in the W3C standards. The vendors and/or people that contribute to W3C maintain their IP rights. Hence the recent controversy about royalty free and reasonable and non-discriminatory (RAND) licensing practices.

But ISO is different. Any technology contributed to an ISO standard must be donated to the public domain, and once the technology has been standardized, ISO provides indemnification (protection against lawsuits).

But I agree with you regarding your caveat about what portion of the technology is in the public domain. That's why I've been careful to say that only C# and CLI are international standards -- but .NET is a proprietary framework built on the C#/CLI standard. And that's why I'm suggesting that we really need a non-proprietary framework built on C#/CLI that has no IP-encumbrances from Microsoft. Currently, Mono has cloned a number of Microsoft-owned .NET classes (ASP.NET, ADO.NET, SOAP, etc.). I'm suggesting that a new set of frameworks should be built on top of the Mono base that aren't based on Microsoft-proprietary IP.

Anne Thomas Manes [amanes@burtongroup.com]

Sunday, September 14, 2003

Quote of the week

Responding to Bush's request to add certain terrorism-related crimes to the list of those for which the death penalty is available, Deborah Pearlstein of Lawyers Committee for Human Rights said

"When you're dealing with an enemy that has made suicide attacks its weapon of choice, expanding the death penalty seems like a particularly counterproductive proposal."

McNealy on California employment expense

There's a million rules that make the cost of operating here just off the charts.
Well, I said my industry was perhaps an exception in that services are not delivered primarily locally.  As usual, my blog is about my opinions, not those of my company. And sometimes it isn't even my opinion, just a thought experiment. This is perhaps a topic of greater divergence than others.

Saturday, September 13, 2003

Business competitiveness

California seems to be moving towards mandatory health insurance for all employers with more than 20 employees, accompanied by the usual hand-wringing that California companies will see labour costs increase, hurting them competitively.

Continuing my anti-media rant, here are two aspects of this story I expected to see mentioned in the article, but didn't.


First is that California (and probably plenty of other places too) is undergoing a shift from manufacturing to service jobs. Service jobs by and large deliver services locally. So while some pizza chain whines about having to sell 150,000 more pizzas to cover the insurance cost, so will its competitors. Somehow I don't think they'll all be able to increase the number of pizzas they sell, but worry not: they can all raise the price of their pizzas by a buck or so and not have to worry overmuch that UPS will start bringing in out-of-state pizzas.


Actually, maybe if they all start covering health insurance, they can sell more pizzas in total. Eighty-odd years ago Henry Ford realised that paying his workers more would allow them to spend more, drive more money into the economy, and increase the number of cars he could sell. If pizza workers are spending less on health care, chances are they'll spend more on services (including pizzas, either first or second hand). Perhaps the pizza chain in question won't sell 150,000 more pizzas, but it will likely see some increase.


But anyway, there is little competitive challenge from increased labour costs in a largely service-based economy.


The second missing point is that research indicates that universal health care covers everyone for less total money than the current US healthcare system. This sounds insane, but the administration costs of the current US system are a relatively large fraction of the total bill. I mean costs associated with the multiplicity of insurance companies, variations between plans, doctors harassing insurers to get paid, etc ...

The $1,059 per capita spent on health care administration was more than three times the $307 per capita in paperwork costs under Canada’s national health insurance system ... [accounting for] at least 31 percent of total U.S. health spending in 1999 compared to 16.7 percent in Canada.

"Hundreds of billions are squandered each year on health care bureaucracy, more than enough to cover all of the uninsured, pay for full drug coverage for seniors, and upgrade coverage for the tens of millions who are under-insured"
I don't know ... it seems to me that the drive to required health coverage is a good thing. The current proposal falls short, though. To achieve the savings of the Canadian model, they'd have to extend health coverage to *everyone*, employed or not, along with rationalisations of plans and administration. Central administration saves effort for the administrator, and the doctor/hospital/pharmacy. The current California initiative increases coverage, but not enough to achieve the savings of universality.

All this having been said, in IT the services are in fact delivered remotely. My industry will be affected, even if the bulk of services is not. But I don't care, universal coverage still makes sense, and more sense than the current bills.  Of course, I'm an alien.


[Comments from my previous blog]

1. a reader left...
Wednesday, 25 February 2004 11:11 am
 
Are you talking about having health coverage like other countries do? That everyone has coverage, employed or not, for free. I like that idea. I just spoke with my Pennsylvania insurance company, and you wanna know how expensive health coverage is. That's why I'm on here now, looking for something cheaper.

joe [joe@aol.com]

Friday, August 29, 2003

Dying for crisp thinking

Are media correspondents idiots?

A report in the San Francisco Chronicle talks about the rate of homeless deaths in San Francisco and elsewhere.

Now I want to be absolutely clear: this blog isn't about the homeless. I have nothing but sympathy for the plight of many of them. Can't say all, because I don't think there is any group that is homogenous enough to make a blanket claim about. But many. Most.

This blog is about the media, who can't seem to figure out what the news is.

This article says there are 169 homeless deaths a year. How newsworthy is this? I don't know if this is a 50%, 5%, 0.5%, or whatever mortality rate. Ok, buried in the article is a broad estimate of the population, 8,000 - 15,000. So 169 is between 1.1 and 2.1%. Hmmm. That doesn't sound so bad. I mean, no, I wouldn't want them to die per se, but everyone does, and dying at a rate of around 2% of a population per year doesn't sound out of line.

The article also mentions that there are far fewer homeless deaths elsewhere, eg. 37 in Boston. But again, how many homeless are in Boston, where it is bloody cold for a good part of the year? Previous articles have discussed a migration of homeless to SF due to weather and liberal policies. I have no idea what the relative numbers of homeless are between SF and Boston.

That's the point of my rant. I don't know, and the article singularly failed to tell me.

The headline for this article should have been "Homeless die at 4 times the rate of the larger population" if they wanted it to be truly useful. Or whatever the real number is.

The peak of insanity here is mention of 8,000 deaths a year total in SF. Wow! There are fewer homeless deaths than non-homeless? I knew home ownership was stressful, but hadn't realised it increases mortality rate. Streets, here I come!

Flipping idiots.

Thursday, August 21, 2003

Offshore and education funding

I've tried to stay away from the offshore issue since my Aug 4 blog, but this article asks:
Dammit, weren't our kids supposed to bring home the bacon? ... expensively educated middle-class kids learn that their investment (and, in the US, this can be upwards of $120,000 per child) has gone offshore.
Um, no. Higher education costs a fortune, yes, but that is more about the higher cost structure in the US (real estate, health care, litigation, etc) than about any extra value returned. And kids enter university here in much worse shape than those from other countries because of chronic underfunding of grade and high school. Folks, the US is being out-invested in dollars adjusted to local economic conditions. Why would anyone think the knowledge (read: IT or programming) jobs should stay here?

The article goes on to wonder if this will be a political issue. No doubt it will, but I have no faith that the public and politicians will attack the problem at its root (funding and cost structure) when it will be easier to stump about protectionism and big bad corporations. Somehow I think adding value is a better product strategy than increasing cost.

IBM shedding PWC?

The Register reports that in the last quarter IBM has quietly shed half as many global services folk as it acquired through the PWC acquisition. No mention of other quarters. So, was the acquisition only about killing a competitor?

Monday, August 4, 2003

Does open source drive IT offshore?

Since I wrote on remoteness a few days ago, new articles and blogs keep popping up.

Chad Dickerson has just linked the moving of IT offshore with open source, in the August 4 issue of InfoWorld. To hear him tell it, a pro-open-source IT fellow was bemoaning the move of IT jobs offshore, and wondered where he could find a competitive US-staffed supplier.


This ties into Alan Williamson's well-read blog on the economics of open source, and the thread over at Simon Phipps' blog.


Chad ends up wondering to what extent IT's pursuit of lower overhead in decisions to use open source (eg. Linux) instead of a commercial server operating system is driving all the margin out of software, and driving our suppliers (or their employees) offshore.

"IT managers make these kinds of decisions every day to save money, but it’s the same basic line of reasoning that drives American IT jobs offshore. The cost of running a business should be as low as possible, and any reduction in IT costs (including labor) helps the bottom line."
Difficult questions. Open source solves some problems really well (read: development process), but in relying on open source for deployments do we run the risk of becoming {fishery,steel,...} workers?

And eventually, per the Dilbert cartoon, there is no place 'offshore' enough. I mean, what happens when you keep squeezing margin out? Where does the continuous search for cheaper cost structure drive you?



1. a reader left...
Tuesday, 5 August 2003 6:26 am
 
Well, the flaw in the logic is: why should IT companies not move offshore? If no open source competitors exist, they can reduce their costs anyway and become more competitive against their commercial non-OS competitors. Either way, IT jobs are moved offshore.

Stephan Schmidt
2. a reader left...
Tuesday, 5 August 2003 8:34 am
 
The majority of IT staff do not do software product development. Therefore, the deflationary effects of open source do not affect them. For the minority of software developers working in the software industry, its added pressure however the benefits of reuse allow for higher level of products to be develop faster.
Furthermore, software development is not analagous to marketing. Microsoft w/c makes over 32B a year only hires 50,000 employees worldwide. That's a drop in the bucket in the overall picture of employment.
At best Open source destroys markets, but in now way does it push a move to go offshore. If you don't have a market, doesn't matter how low cost your labor is.

Carlos E. Perez
3. a reader left...
Tuesday, 5 August 2003 8:38 am
 
Sorry about the typos... should have read:

The majority of IT staff do not do software product development. Therefore, the deflationary effects of open source do not affect them. For the minority of software developers working in the software industry, its added pressure however the benefits of reuse allow for higher level of products to be develop faster.
Furthermore, software development is not analagous to MANUFACTURING. Microsoft w/c makes over 32B a year only hires 50,000 employees worldwide. That's a drop in the bucket in the overall picture of employment.

At best Open source destroys markets, but in NO way does it push a move to go offshore. If you don't have a market, doesn't matter how low cost your labor is.

Carlos

Wednesday, July 23, 2003

Remoteness, productivity and web services

While my blogs here are mine, and not my employer's, and are always quite independent, this one is even more so. And I'm not even certain I have an opinion yet on this topic; I'm just kinda thinking out loud. Ok?

I was reading an article in the Economist, "The New Geography of the IT Industry" (for which I don't have a URL, sorry), talking about cost savings in moving tech workers offshore (eg. to India). The way the article lays it out, workers and their local infrastructure in India cost about 25% of those in Silicon Valley, and even after factoring in the additional remote infrastructure costs (telecoms and data connections) there is a saving of around 30% on loaded labour costs.

But there are additional costs that aren't addressed by this article, nor in any other source I've read.

A small additional cost is some amount of international travel, for the business folks bidding jobs and for the leads of the teams that eventually do the work.

But more significant is that while all discussions focus on labour and infrastructure costs (savings), little appears on productivity. Agreeing for the sake of argument that remote and local personnel are equally productive individually (since if they weren't, that would be a different sort of problem), what about the direct impact of remoteness on team productivity?

My own experience is mixed on the subject.  I've been involved in a handful of projects over the past 15 years in which teams were split between local and offshore. But in this very anecdotal sample size of 1, trends (if we can call them that) have emerged:

Offshore works well for extremely well defined projects or portions, where they can run full speed with few remote consultations.

Offshore works poorly when the project involves any notion of research, as in:
  • we're not precisely sure what the problem is; or
  • how it will be solved; or
  • whether changes for a customer will involve changes to design.
These cases share a requirement for strong team coordination and communication, and that is exactly what is complicated in all cases of remoteness, and more difficult still in cases of several-timezone-remoteness.
This leads to a few minor draft conclusions. One is that, absent unusually well developed communications skills or remote team paradigms, remote (and especially offshore) splits are not particularly appropriate for startups, or for new product teams in established companies.

Another is that remoteness should work better when teams adopt some sort of formal design methodology. This means nailing down interfaces between different subprojects, but it also means thinking through problems more completely early on in the project to codify the design so there is less need for churn later. This worked well in one of the remote projects I've been involved in.
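
By 'nailing down interfaces' I mean something as mundane as the following sketch (hypothetical names, obviously): a contract agreed early, against which the remote team can build and test without a cross-timezone consultation for every question.

    import java.util.Date;
    import java.util.List;

    // The agreed contract between the local and offshore subprojects.
    // Freezing this early substitutes for a lot of cross-timezone churn.
    public interface AvailabilityService {
        /** Free/busy blocks for one user over a date range. */
        List getFreeBusy(String userId, Date from, Date to);

        /** True if every listed user is free for the whole interval. */
        boolean allAvailable(List userIds, Date from, Date to);
    }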

It will be interesting to see, over the next few years, how web services will help or hinder this notion of formalism. On the one hand, they allow that nailing down of interfaces. On the other, the mindset surrounding web services is that the interfaces can and will change, but since they are self-describing, consumers of the interfaces can adapt more easily. So perhaps web services as a formalism is really about assisting in the decoupling that remoteness requires. Don't know. It'll be interesting to watch things evolve.

For completeness: just because my stream of thought got me thinking about web services and team productivity via the remote and offshore path (only because I started from the Economist article) doesn't mean remoteness or offshore are necessary to thinking about the two. Quite the opposite, actually. So on reading this, please don't think I'm slamming remote teams - I've worked with several, and those projects have at times been very successful. As have local projects.

I'm quite interested to hear other opinions on the productivity issue, and on whether web services will help this problem. Feel free to comment below, or send me a personal email to glen at glen-martin dot com.


[Comments from my previous blog]


1. a reader left...
Wednesday, 30 July 2003 6:26 pm
> Offshore works well for extremely well defined projects or portions, where they can run full speed with few remote consultations.
I'm thinking Y2K. Go make my 2 digit four digits.

> Offshore works poorly when the project involves any notion of research
Requirements such as go build me a booking or pricing engine are a little more complex and require considerable analysis and communication. I couldn't agree more with your statements.

Jason

Friday, July 18, 2003

Voter apathy

Well, here's a consequence of voter apathy that I certainly didn't expect.

People have been moaning for years that every election seems to have a lower turnout than the last, with judicious nodding at the supposed causes (both candidates are weenies, usually).


But how many non-voters would expect that to force a recall one needs only as many signatures as a fraction of the number who voted last?


In California, a recall is forced if the recall effort can claim only 12% of the number of gubernatorial voters in the last election. Whose bright idea was that? If 9% of the voters turn out for an election, around 1% of the electorate could force an expensive recall election process?


It is especially ironic that the main excuse for the Gray Davis recall campaign is the California budget shortfall ...


What bugs me about this is that if 95% of the electorate aren't angry with the incumbent, he can still be recalled. 


While there is some justification for allowing a small percentage of the voters who turn up to elect a governor (I mean, someone has to be governor, right?), it is not the case that a recall is necessary, so why accept small turnouts? Recall elections should probably be based on the number of eligible voters, not the number who turn out.  If 12% of eligible voters sign the petitions, there is a recall election.  If 51% of the eligible voters turn out and vote for a recall, you're gone.  If a majority of eligible voters either isn't upset enough to turn up to recall you or thinks you're ok, the recall fails.


That is, assuming a recall shouldn't require a 2/3 majority. I think one of Heinlein's characters said something like this: that it should take a 2/3 vote to pass a law, and 1/3 to rescind it. If a law doesn't seem a really good idea, why have it? And if on reflection it upsets even 1/3 of the population then it is probably not a really good idea.
 
[Comments from my previous blog]

1. Bill left...
Saturday, 12 August 2006 7:11 am
Try this on for size.
You might as well be an illegal alien if you do not vote.
There really is no voter apathy, just angry voters
who believe no one pays attention...

Whither reuse

Does anyone believe in reuse anymore?


I started thinking about it again after the third time I was asked about reuse by a C?O during a briefing, and realised that despite not having got there yet, we are actually getting closer.


For me, reuse began with procedural programming. First with C, then with OO, and now with Web services, reuse proponents have argued that now we're finally going to benefit from software reuse.  Procedural brought functional decomposition, but it turned out that the specific decompositions weren't always useful or long-lived.  Then OO tried to improve decompositions by tying them to real-world objects that seem to have more longevity than bald software artifacts. Finally, web service protocols create patterns for specific kinds of information transfer and for self-description of interfaces.


While these qualities of decomposition, modeling and standard interfaces/self-description are probably necessary to support effective reuse, I suspect that they aren't sufficient.  We're going to have to solve a few more problems.


One is what I call world view. If you and I each break down a problem into some set of domain objects (hopefully benefiting from OO techniques),  will we come up with the same decomposition?  And if we don't, and I implement part of the solution and you another, how shall we integrate the two?  We'll need glue of some form or other, to convert between these different views of reality. We'll have to convert parameters, tweak call semantics ...


A simple example is in purchase orders. If I decompose purchase orders into extended line items (with the line item quantities in the line item, not the PO), and you in non-extended (where the PO refers to a simple product and the PO itself has the count), any integration of our various work must map between
these semantic realities, making reuse incrementally more difficult.
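
A sketch of the two decompositions and the glue between them (hypothetical classes, pre-generics Java of the day):

    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.List;

    // My world view: quantities live on extended line items.
    class LineItem {
        String productId;
        int quantity;
    }

    class PurchaseOrder {
        List items = new ArrayList(); // of LineItem
    }

    // Your world view: one product per PO, with the count on the PO itself.
    class SimplePurchaseOrder {
        String productId;
        int quantity;
    }

    // The glue: mapping my view onto yours. Every such adapter is the
    // incremental cost that differing world views impose on reuse.
    class PurchaseOrderAdapter {
        static List toSimple(PurchaseOrder po) {
            List result = new ArrayList();
            for (Iterator i = po.items.iterator(); i.hasNext();) {
                LineItem item = (LineItem) i.next();
                SimplePurchaseOrder spo = new SimplePurchaseOrder();
                spo.productId = item.productId;
                spo.quantity = item.quantity;
                result.add(spo);
            }
            return result;
        }
    }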


I suggest that effective reuse and integration can only easily occur when the various authors agree on some fundamentals about the problem, like what set of domain objects and methods on those objects would be useful.


There is work happening for some well-known application domains, fostered by associations within those industries.  Right now we are seeing agreements on world view being developed for some specific cases of information transfer, including (at least) supply chain, financial instruments, and so forth.  Eventually there will be more.


Another requirement of reuse is effective directories.  People have been talking about this for some time, but I don't believe we've seen a good enough repository yet.  After all, a directory must permit searching on those agreed-upon domain objects we've just mentioned.  However, the current directories I've heard about don't do this - they are horizontal directories, designed by software geeks who haven't yet grokked the human issue here. IMO.


But the largest problem, which applies equally to web services specifically and to software reuse in general, seems to me to be a legal one.


Web services proponents describe a world in which consuming organisations look up a supplier of the information or service they want in a directory, tailor a call to the published interface, and extract the result from the described report format.  What they don't even attempt to solve is the
problem of the business relationship.


When a consumer wants to do business with a supplier in a business setting, their legal teams do a long dance that involves negotiating contracts, signatures, and so forth.  A similar dance will be required for effective reuse that spans organisations, because the reuser (reprogrammer?) will want indemnification from patent infringements and bugs (say) that the component author is responsible for.


So far I've seen no work in this area, to facilitate automated manufacture of business relationships.  And without this, I think reuse can only ever be a personal thing.

Tuesday, July 1, 2003

Junk spam, or: No thanks, my package is big enough

But my Inbox isn't.

Well, it seemed like a good idea. Email spam is like fax spam, so why not charge the email spammers $500 per incident just as we do for fax?

Pity that a committee of the California legislature killed a bill, SB 12, that would do just this.

So what else to try? I've been using spamassassin for some time, and while it filters out a bunch of crap I still need to spend more time than I care to in turfing junk email.

Spammers of course say this is a free speech issue, but I'm reminded of that quip about freedom of the press belonging to him who owns the press.

I'm starting to think I should declare my mail server private property and charge these jerks with trespassing.

Or just charging them. I heard of a technique that works with phone solicitors. Keep a record of all calls (call number display can help), and when a new solicitor calls, inform him that this is a business line and your business is evaluating telephone solicitation. On a subsequent call, just listen as a consumer would, write up a report, and send it to them with a bill for as much as you think your time is worth, perhaps $500. This has apparently succeeded in small claims court, a fact not lost on the solicitors, who view being informed that you will do this as somewhat more interesting than even the recent US federal Do Not Call list.

I wonder if this will work for email as well? It'd be a bit more work to track down the sender ... I might have to charge them more for my report.

Friday, June 20, 2003

Faulty patents or patents faulty?

Simon writes that open source has some trouble with unwitting infringement of patents, and (following the thread and replies in the comments) that while OSS is great at demonstrating prior art, it seems to fail at the usual commercial tactic of maintaining thickets of patents to cross-license when one needs someone else's patent.


This seems as good a time as any to wonder whether software patents make any sense. I mean, the whole notion of patents stems from an assumption that inventing is hard and expensive. This is true if one is inventing in physical or chemical processes, where the equipment needed to perform research is beyond the means of all but the well endowed.


It is less true in software. We used to joke that a good software engineer discards half a dozen good inventions before lunch each day, and the equipment requirements to invent in this space are essentially zero (at least for those in wealthier nations).


So, do patents actually serve us? Since they don't make up for any particular problem in development of software inventions, they tend to over-reward early participants in the field for writing down the obvious. I mean, the XOR cursor? Or the wireless modem? (Despite the notion of a phone modem and the notion of a wireless phone both being established, those bright minds at the patent office actually granted a patent on the trivially obvious combination!)


To my mind, patents in software add no value, and really only serve to subject software development to the less than scrupulous (lawyers, that is).


And the open source world seems tailor made to run into difficulty here, since (as others have noted) there is no self-policing (can't be because by and large developers have no clue what patents have been granted), no legal staff to produce defensive patents and maintain them for cross licensing, and limited resources to fight a legal battle with a large patent owner who might choose to litigate (prior art may be enough to have a case, but one needs legal resources to successfully defend).


It is starting to look to me as if the OSS world won't ever meet its real potential unless software patents go away. Since the non-OSS world has little incentive to change, it would seem to be up to OSS boosters to drive it.

Tuesday, June 17, 2003

Revisionist history

A Bush soundbite is receiving a fair amount of airtime today on PBS, in which he refers to a spate of revisionist history lately, and says that "one thing is clear, [Saddam Hussein] is no longer a threat to the free world."


Actually, I think what's clear is that Hussein is *not* a threat to the free world.  *No longer* presumes that he has been one, which has singularly failed to be proven.




1. a reader left...
Sunday, 23 November 2003 5:08 am
Right, he was only a threat to the non-free among his compatriots in the Arab League (think of his friends the Kuwaitis and Iranians), and of course to his own countrymen, but who cares about them? Vivre le freedom! At least it's not us, eh?
Ferdinand de la Cerveso de Gaulle von Schmittenstern