I did a Google search for 'tsunami disaster relief' and then had a look at the returned results, both the sponsored listings and the text matches. Every one I looked at (roughly the top dozen links returned) directed me to contribute to relief funds whose target beneficiaries were unspecified. Yes, they were all relief organisations, but every one of them had a charter far broader than the disaster whose relief I might want to contribute to.
This is disappointing. I don't donate to the United Way because I'm never sure where my donation will go, nor am I overly pleased with their overhead ratio (admittedly, I researched the overhead several years ago; things may have improved since). Instead I usually donate to single-purpose charities whose beneficiaries I specifically want to target.
There are arguments for another approach, of course. A problem inherent in donating to single-purpose charities is that high-profile causes (notably AIDS, cancer, heart disease) get the majority of donations, whereas any number of equally (or sometimes more) deserving charities are starved for attention and dollars.
Be that as it may, since I care about where my donation will be used, I wasn't able to develop a comfort level with the tsunami relief funds I found in a brief search.
It seems ironic that the organisations sponsoring ads on Google have had enough time to buy ad space for "tsunami", but not enough time, or perhaps inclination, to create specific funds that would solicit my interest. Instead they're happy to use public attention on the tragedy to replenish their general coffers. In a world where even a single drive-by shooting victim can have a charitable fund for their dependents organised through a church or bank, larger organisations should find creating such targeted funds a simple matter. That is, if they were interested in fulfilling the implied promise of collecting under the auspices of a specific named tragedy.
Friday, December 31, 2004
Tsunami warnings: how, precisely?
Webmink blogged on conspiracy theories surrounding the tsunami. A poster to his mailing list reported that the Thai government purportedly chose to not warn the public:
... without definitive proof of an imminent tsunami, the meteorological department dared not issue a national warning lest it be accused of spreading panic and hurting the tourism industry ...
We're all thinking about the tsunami and wondering what effect something like that would have on us. Many of my own thoughts are about what kind of warning system could have made a difference.
I have no faith that there is any way to adequately inform people of something like this. No doubt more could have been done in Thailand, Indonesia, India, Sri Lanka, etc. to warn the tourist and 'business' regions that might be affected, but how do you get a timely warning to residential (and in many cases low-income residential) areas? And somehow I imagine that warning all the rich tourists while leaving poor citizens to fend for themselves would go over ... oh, about as well as lavish research funding for AIDS but much less for TB, despite TB being more prevalent and cheaper to treat (well, historically).
Even in the 'western' world it isn't clear. I live in the San Francisco Bay Area in the US. There are a great many areas close to (or sometimes a little below) sea level, and I can't think how people there would be effectively warned. There are a whole bunch of office buildings - are authorities going to call them all? Thousands, of varying sizes? I don't imagine workers are watching the TV for some sort of news flash. Do the authorities have some sort of triage system, prioritizing calls based on organization size? I can see the next round of conspiracy theories already.
There are lots of beaches without lifeguards or any other sort of official presence. Coastal hiking trails. Marinas at which there might be any number of people in their boats. Wetlands parks. Low-elevation residential areas.
How is one to get a timely warning to any of these places?
[Comments from my previous blog]
Tuesday, December 21, 2004
Products and organizational culture
Is software development, software development? That is, is there a core competency in development that is unrelated to what is being developed?
Dare wrote:
One reaction which is obvious in hindsight is the assumption in this post that Microsoft shouldn't abide the fact that Apple is dominating a market it isn't directly engaged in. This is such a natural way of thinking for Microsoft people ("we should be number 1 in every software/hardware/technology related market") that it is often surprising for non-Microserfs when they first encounter the mentality.
This was in response to an open letter by Robert Scoble about getting Microsoft into the 'Pod business.
Interesting to me is whether it is reasonable to expect one organisation to excel at all kinds of software. I think different kinds of software demand a different mindset during development - differing priorities more than opposite goals.
As one example, think about games vs operating systems.
Game developers focus on new, flashy, aggressive, push-the-envelope use of hardware. In some cases they probably anticipate the next generation of hardware, or even the next generation of overclockers. Stability is less important than texture and grit and larger-than-life experience. And if you push a little too hard and it crashes from time to time, well, that's not critical - so long as you make the Christmas buying season.
Operating system developers have almost the opposite goals. Stability, on hardware old and new, is (or should be) job #1.
I could come up with more examples. You can too.
But if there are different goals, is there one organisational culture that excels at both? That encourages boundless envelope pushing on the one hand and utter stodginess on the other?
By analogy, can corporate accounting and outside sales report to the same organisation? Is it reasonable to expect that they should, and if they do, how well will it work?
Saturday, December 18, 2004
Are there two sides to every story?
I heard an interview with psychologist Drew Westen about the way we process discussion and debate, which I found fascinating. He was commenting on journalists presenting "both sides" "as if the midpoint of two biased views is somehow reality".
I've long been disturbed by the way reporters seem to work - I joke that if a reporter finds someone who believes the sun comes up in the west, the reporter will publish a story on the debated sunrise and present both sides fairly.
This isn't objective journalism. Paraphrasing Westen, it isn't objective to portray the undisputed or indisputable truth as mere opinion.
I've been chewing over some conservative Christian rhetoric recently, and perhaps this is a way to think about it.
On the night of the recent US election, ex-Bush speechwriter David Frum commented that religious conservatives only want respect for their views. This is a clever casting of the debate, because of course we all want to be reasonable. Reason is good. Bias is bad. But by awarding respect to views, are we agreeing that the views are respectable?
I'm not about to respect some of the views I hear, but I will respect the people who hold them, so long as their behaviours are respectable.
Mormons are taught that alcohol is bad, and they choose not to drink it. Fine. They don't tell me I can't drink. Fine. I respect them for having an opinion, expressing it, and living their lives consistently with regard to that opinion.
The problem I have with many religious conservatives, and Frum's casting of what they want, is that they don't merely want respect for their views. What they really want is to change my behaviour based on their views. They're changing laws to preclude actions inconsistent with their views.
And while I treat them with respect for having and living their views, I cannot respect their desire to force me to live their world view.
Nor am I able to respect the journalist's desire to afford objective treatment to such unreasonable expectation.
It's fine to report on differing views of when life begins. But reporting on a debate over whether abortion should be legal, as if such a debate were legitimate, is a cop-out of real journalism. Because a debate on abortion is really a debate on the ability of a group with one particular belief system to regulate the behaviour of another group with different beliefs, and that is exactly what a constitutional democracy is intended to protect against.
Friday, December 17, 2004
Ironic ads by Google
Google needs to start a new service: Ironic Ads by Google.
We've all seen the ads Google places on both its search results and content pages on the web. When I visit some web pages that have a deal with Google, keyword searches against the page content are used to select ads "the reader will potentially be interested in".
This seems to make sense. I go to a page talking about a product, and might get ads offering that product for sale.
Where this gets more amusing is when the content page (in Google parlance) is actually a negative reference. A bad-experience report about a consumer product. An argument against a public policy. Google's keyword search of the page identifies the same keywords, and then offers ads for the very product being criticised.
I got thinking about this today while reading MinkBlog's article on RFID and privacy, which I recommend, by the way. The article points out that if RFID tags aren't disabled when they leave the sales/delivery chain, they can go on being used to track individuals.
The irony is that in an article about the perils of RFID for privacy, the ads placed by Google are for ... you guessed it ... RFID providers.
I can't help it. It's funny. Google should figure out a way to detect negative pieces, and then offer this service deliberately.
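The joke hinges on a real gap between keyword matching and tone: the same keywords appear on a glowing review and a takedown piece. A minimal sketch of what "detect negative pieces" might mean, where the word lists and function names are invented for illustration (this is nothing like how AdSense actually works):

```python
# A toy sketch of "negative-piece detection" for ad placement -- purely
# hypothetical, and not a description of Google's actual ad matching.
NEGATIVE_WORDS = {"peril", "perils", "threat", "bad", "worst", "against"}

def tokens(text):
    """Lowercase the page text and strip simple punctuation."""
    return [w.strip(".,:;!?").lower() for w in text.split()]

def select_ads(page_text, ad_vocabulary):
    """Match ad keywords, but suppress ads on negatively toned pages."""
    words = tokens(page_text)
    if any(w in NEGATIVE_WORDS for w in words):
        return []  # a negative piece: placing product ads here is ironic
    return sorted(set(words) & ad_vocabulary)

# A critical article triggers no ads; a positive one still matches.
print(select_ads("The perils of RFID for privacy", {"rfid"}))   # []
print(select_ads("Buy RFID readers and tags today", {"rfid"}))  # ['rfid']
```

Even this crude filter shows why the problem is hard: a fixed word list can't tell criticism from a headline like "beat the threat of data loss", which is presumably why keyword matchers of the era didn't try.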
Friday, December 10, 2004
Lies, damned lies, and statistics: artists don't see file sharing as a threat
I enjoy reading Good Morning Silicon Valley, usually.
But a few days ago, in a comment regarding music file sharing, John quoted a survey:
Only 28 percent of the 2,755 musicians surveyed saw file-sharing as a big threat to creative industries.
What would be interesting to know is the relationship between income and the perception of whether file-sharing hurts. Or perhaps the relationship between whether one has been signed to a media label and one's opinion on whether file sharing hurts. The fact is, there are dramatic differences between subpopulations within this survey. And there is an admission that the survey was conducted by self-selection. How many big-name musicians do we imagine responding to such a survey? I suspect only a few, at best, and so the survey speaks only to the attitudes of the disaffected artist. To the extent it speaks for any.
I can imagine one's attitude towards the publicity offered by file sharing might depend strongly on whether one has a record company contract providing promotion. Given the problems in survey design evidenced here, I feel perfectly within my rights to entirely explain away the result as being related to whether the artist had professional promotion. Even more, I am interested to see that even 28% see file sharing as a threat, when I imagine a survey population that is strongly (or even entirely?) biased towards the disenfranchised. To my mind, this result is suggestive of the opposite conclusion to the one reported.
I simply cannot understand why anyone would conduct such a worthless piece of research, or publish the results when done. Or then report on it (John!). Apart from to rant at the insanity, as I am doing.
Ok, ok, hyperbole aside, I can imagine why someone might conduct and report on a bad survey - activism. But not why a reporter would then report on the survey, as opposed to reporting on the activist intent. Or simply ignore it.
I hesitated to blog this one, because my feelings about file-sharing technology are generally positive, and I didn't want my negative comments about the shoddy statistical methodology to be interpreted as disagreement with the point being made. I'm really only commenting here on the statistics.
Tuesday, November 23, 2004
Ok, hockey
Doc asks "remember hockey?"
Well, yes, I shouldn't ignore violence in hockey. But largely the thing some complain about in hockey isn't as big a deal for me.
"I was at a fight and a hockey game broke out." Sure. But largely these are nice clean fights between two people who want to fight. Some might argue that there is no room for fights, period. I'm a little ambivalent.
But it's the other chippy stuff that bugs me. Like I said, the spearing when the ref isn't looking. To my mind, the league or refs should review the video post-game for uncalled infractions, and assess penalties against the next game.
Hockey has had its share of ugliness, too. I mean apart from Garth Butcher. :)
But brawls have happened:
Canada and Russia emptied the benches for a bench-clearing brawl, and someone at the arena in Piestany (in then-Czechoslovakia) turned out the lights while punches were being thrown. The Russians and Canadians were booted out of the tournament, and Finland was awarded the gold medal.

And beyond fighting, there's the Bertuzzi affair. Todd pursued another player, punched him from behind and drove him to the ice, where he suffered further damage. Result: neck fracture, concussion and deep cuts to his face. But this wasn't a fight, and for those who argue against fighting in hockey using this as an excuse, well, it doesn't work.
This sort of thing is rare, and the NHL treated it seriously. Bertuzzi was handed an open-ended suspension, to last at least through the end of the season, including the playoffs. The suspension will be re-evaluated at the beginning of next season's training. Or it would have been, if hockey players weren't locked out.
The only other case I recall is Marty McSorley, who swung his stick like a bat, from behind, hitting Brashear in the back of the head. Brashear suffered a severe concussion which ended his season. McSorley also received a season suspension.
Happily, hockey has glass between the ice and the fans, so we won't see these events in the stands. Lately they've even added nets above the glass to make damn good and sure. Unlike basketball.
For all that people wring their hands over hockey being too violent, the NHL responds very appropriately to excessive violence. And to all you folks who think the NBA was too harsh: the Artest suspension (basketball) was also the right thing. I haven't yet heard of a penalty being levied against Clemson or the Gamecocks.
Monday, November 22, 2004
Sportsmanship
Where did the notion of sportsmanship go?
I've heard this thought a bunch over the past few days, the speakers complaining of some recent American pro-sports shenanigans. A basketball game turned into a brawl, with at least one player climbing into the stands to attack a fan. Police came onto the field to try to control a sideline-clearing melee at a college football game. I'm not going to mention baseball.
If events like these are the first indication you've had of lapses in sportsmanship, well, you're wrong. My concern, referred to obliquely a few days ago, is about an attitude many players and broadcasters have that "a rule isn't a rule if you get away with it." The football player who moves the ball after play has stopped. The hockey player who spears another when the ref isn't looking. The tennis players who go into professional histrionics at a supposed bad line call.
And the television announcers support it, with their jabber about how "he got away with one there," and "Look, they're trying to get the next play off before anyone can challenge the call."
I'm especially disappointed in the players who defended their behavior.
This is what I don't want my kids seeing. These are the messages I don't need them learning. Someone once said "I'd rather have my children watch a film of two people making love than two people trying to kill one another." I couldn't care less about sex on TV. Janet Jackson is irrelevant. If there is erosion of social fabric (a favorite topic of some), it stems from our kids learning that fairness and respect don't matter. And what's next? Hit-and-run accidents, drive-by shootings and Enron.
As for those basketball and football players over the past few days? Bar them from the sport for life. Once there are real consequences, you'll be amazed how quickly these prima donna attitudes change. Fines don't cut it. Maybe making them wear a patch on their uniforms for their next few games that says "I'm a bum" would help.
Better yet, decertify those two college football teams. The focus on sport at college made a certain sense when it was to teach sportsmanship. It is demonstrably failing to do that.
I just read now that in fact there were some game and season suspensions for some involved in the basketbrawl. Well and good. But a survey at espn.com shows that the vast majority of the public believe the season and 25-game suspensions were just right or too harsh, and that the 6-game suspension was just right. Folks, you just want a rematch, don't you?
Tuesday, November 16, 2004
Objectionable content
There's been a flurry of commentary in mailing lists this morning on proposed US legislation that would make technological means to skip TV commercials illegal. But skipping objectionable content like gory or explicit scenes would be ok. Wired published a story on this as well.
But what if the objectionable content is the commercial?
Case in point: football. The sports broadcast itself is fine, and I have only a few concerns about letting my young'uns watch that. But the ads the broadcasters insert frequently make me cringe - gun violence, explosions, etc. These ads are far from G-rated content, and I will either skip them or my family (and I) won't watch the broadcast at all.
[Comments from my previous blog]
1. Jay C left... If anyone has been noticing (try to miss it!) the advertisement for Comcast, where a guy goes berserk, and the whole bit erupts into really V I O L E N T mayhem, they shouldn't question the need to have a means to bypass that kind of CRAP! What kind of ad groups are producing this kind of stuff? And what kind of company management signs off on it?
Saturday, 10 February 2007 5:24 pm
Thursday, November 4, 2004
Escape planning
I don't know what to say about this one, but I thought it was hilarious. For those suffering PTES (Post-Traumatic Election Syndrome), here's a great '60s idea revisited.
[Comments from my previous blog]
1. a reader left... What an excellent idea! Although a quick step through the profiles, throws up a lesbian;
http://www.marryanamerican.ca/profiles/prof5.php
Some other great links:
http://idisk.mac.com/glwebb-public/new_map.jpg
But this one had me laughing the most (no idea if it's true or not)
http://chrisevans3d.com/files/iq.htm
Alan
Thursday, 4 November 2004 10:16 am
2. a reader left... Democrats might get some comfort from the fact that George W. Bush can't run next time and Republicans haven't won a presidential election without either a George Bush or Richard Nixon on the ticket since 1928. Of course Jeb's son is a George Bush as well.
I list the elections in my blog if you can't remember all the Republican victories since 1928
http://planetralph.blogspot.com
Ralph Galantine
Thursday, 11 November 2004 7:21 pm
Wednesday, November 3, 2004
Patenting global web-commerce?
I can understand a temptation to sue Dell (even a small possibility of winning or settling yields a better return than the lottery or playing the tables), but a patent on web-based international e-commerce?
I've ranted and railed about software patents - I guess I need to expand that to business method patents.
Society grants patents to companies to publish their ideas so others can leverage those ideas within still greater innovations. But if the idea is obvious, others will think of it without such publishing, and there is no value returned to society in exchange for the grant of exclusivity through a patent.
Hence such glorious patents as the XOR cursor, the ability to connect a modem to a cell-phone, and now the possibility of letting international customers use your web-commerce site. These are ***STUPID*** patents. We the people have got a bum deal from our government authorities who grant such travesties, because these patents do nothing except increase the cost of doing business. There is no social benefit.
Monday, October 25, 2004
Is Google just a bubble?
Some wonder at the growth of Google shares since their IPO. (Since that wondering began, the stock has continued to go up.) And I too think this looks a lot like the old bubble.
However, the explanation of why Google is overvalued doesn't hold water.
... if you were to apply traditional metrics to the company, you'd find that Google's revenue would need to grow 30 percent a year for the next 15 years to justify its current valuation. And the chances of that happening do seem slim, particularly since Google is so dependent on advertising. ...
I actually like traditional metrics, but this analysis fails because it assumes Google's lines of business are static. While an old auto company may have a pretty static business, Google is a relatively young company and the web is evolving fast.
Google revenue is growing now at about 10% a year with essentially one (plus or minus) line of business. The question folks ought to be asking is to what extent the Google management team can leverage their assets to bring new businesses to market, with brand new streams of revenue. If they can, 30% growth is easy (though perhaps not year-over-year for the next 15 years). Investors need to evaluate whether the current team succeeded because they had one really good idea, or because they are good at choosing ideas and executing. If the team isn't up to picking the right next idea, the question of whether their current line of business can support even the current stock price (let alone further growth) is easily answered.
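As a quick sanity check on the quoted 30%-for-15-years figure, the compounding works out like this (a toy calculation, not a valuation model):

```python
def compound(rate: float, years: int) -> float:
    """Total growth multiple after compounding `rate` annually for `years` years."""
    return (1.0 + rate) ** years

# The analyst's claim: 30% a year for 15 years.
print(round(compound(0.30, 15), 1))  # 51.2 -- i.e. revenue roughly 51x today's
```

In other words, the "traditional metrics" argument amounts to saying Google's revenue must grow roughly fifty-fold; whether new lines of business make that plausible is exactly the question.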
Sunday, October 24, 2004
Now an award-winning homebrewer
After brewing beer in my kitchen for several years, I won my first homebrew competition yesterday in the 11th annual Brew-Ha-Ha at our friends Jack & Mo's.
The names brewers come up with for their beers are frequently a lot of fun. We had a real "elections" theme to the titles yesterday, with Ballot Box Root Beer, Undecided Amber, and my own offering, False Victory.
As for the win, "Surprised" best conveys my feelings. Though perhaps it shouldn't, if one pays attention to recent political elections, where we've seen those staking out (or pretending to) the middle ground doing well. Just as in elections, my winning brew was somewhat lighter than my usual - a little less body, a little less hop. And that moderation seemed to work for many. Go figure.
Wednesday, October 13, 2004
Criminalize acts, not means
A number of years ago I saw a bumper sticker that read "Register Mongols, not crossbows". The point being, if crossbows are illegal, only the Mongols will have crossbows.
Fast forward.
Today I read that the U.S. Justice Department seems to be modeling a war on file sharing after the war on drugs, and has released a set of recommendations re. copyright bills that would criminalize passive file sharing on peer-to-peer networks.
What these lobbyist-trough-lickers don't or won't get is that P-to-P file sharing is a technology. It may be used for both legal and illegal acts. While VCRs and videocassettes were sometimes used for illegal piracy, they remain legal because recording for private home purposes is legal. While guns are sometimes (even frequently) used for illegal acts, they remain legal because they are also used for legal acts. The vast majority of people break the law in cars every day, yet cars too remain legal.
P-to-P file sharing is used for any number of legal purposes as well, including legal distribution of copyrighted software or media by the owners or licensees of the copyright involved.
But as Lessig says in another context, Ashcroft doesn't get it.
[Comments from my previous blog]
1. a reader left... A lot of different discussions on copyright/filesharing/fair use are raging right now. To be honest: most of the traffic on P2P networks is copyrighted material. We'll skip the copyright issue, that's another discussion altogether.
By banning the easiest way to share files there will be a massive move to other ways of 'sharing'. We've seen it happening with Napster. A lot of people want to 'share' files, and are willing to put in some effort to do so. Closing one road to their destination will only encourage them to find other ways.
The war on drugs has not been won, has it? The war on filesharing will probably end the same way.
Jeroen
Wednesday, 13 October 2004 12:37 pm
2. glen martin left... Yes, a lot of copyrighted material is illegally shared over P-to-P networks. If it gets harder, the folks doing this sharing will go further underground. Read: "Speakeasy"s during Prohibition. A criminal class is created. The folks who suffer are the innocent bystanders and the legitimate users, not the illegitimate users.
Thanks for your comment.
Wednesday, 13 October 2004 1:15 pm
Wednesday, October 6, 2004
Social cost of spam
Everyone complains about the direct costs of spam, like bandwidth, storage, lost time, etc. Well, except the spammers themselves.
I haven't seen much on the social cost.
I believe it was a character in a Heinlein book that complained "I wonder how much the accumulated idiots slow down society and social evolution". How much does spam interfere with culture?
An example: There was a media report today of a museum's emails being blocked by aggressive spam filters. The museum has the unfortunate name of "Horniman Museum".
No doubt my blog will now be blocked for having mentioned that.
Spammers have been deliberately misspelling words to get around spam filters - we've all seen that. I get literally hundreds of spam a day, and devote a small amount of time to filtering and another small amount of time to look into a few of the messages (purely to track the kinds of techniques they're using - honest). I've been wryly amused at their perseverance and their creativity. But their idiocy is interfering with legitimate and desirable cultural exchanges. It just isn't funny.
Not that I have a lot of patience for the filters, or filterers, either. Aggressive filters have problems with much simpler situations, like legitimate information on breast cancer, boys clubs, and so forth, that aggressive keyword matching may flag as too sensitive for broad public consumption.
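That failure mode is easy to demonstrate. Here's a minimal sketch of a substring blocklist filter; the blocklist entries are hypothetical, but the effect on the Horniman Museum's mail would be the same:

```python
def naive_keyword_filter(message: str, blocklist: set) -> bool:
    """Flag a message if any blocked string appears anywhere inside it."""
    text = message.lower()
    return any(word in text for word in blocklist)

# A hypothetical aggressive blocklist.
blocklist = {"breast", "horni"}

# Both of these perfectly legitimate messages get flagged:
print(naive_keyword_filter("New information on breast cancer screening", blocklist))  # True
print(naive_keyword_filter("Visit the Horniman Museum this weekend", blocklist))      # True
```

Real filters mitigate this with word boundaries, scoring, and context, but the museum story suggests the crude version is still in deployment.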
[Comments from my previous blog]
1. a reader left... With respect to spam, I have to say I'm not noticing it as a problem anymore. I am using SpamAssassin with the Bayesian filter on and it's working a treat. We have set up two folders in our IMAP server, _IsSpam and _IsNotSpam. When an email comes in that has slipped through, we simply drag it there so the filter can learn, once every 2hrs, what is a good/bad email.
Over a small period of time you find your spam folder catching more and more, and I can now proudly report that my 'Spam' folder catches around 350+ emails a day, with very few false +ve's. Those that are caught but shouldn't be, we simply drag out and put in the '_IsNotSpam' folder.
Having tried a lot of filter techniques, this is the only one that has worked every day.
Alan
Wednesday, 6 October 2004 12:17 pm
2. glen martin left... I too use SpamAssassin, and it keeps spam down to a dull roar. And running SpamAssassin on my local server, I can check for false positives periodically.
But not everyone uses SpamAssassin. Some folks' email is filtered at the provider, eg, AOL. And even SpamAssassin can fall in the trap of "words spammers use", which can include some popular misspellings. Such as, I suppose, "Horniman."
Wednesday, 6 October 2004 3:04 pm
3. a reader left...
adult-galleries [adult-galleries1161@yahoo.com]
Monday, 25 October 2004 3:44 pm
4. glen martin left... Oh dear, I'm hoping the above was a joke. How positively ironic.
Monday, 25 October 2004 4:01 pm
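The Bayesian learning Alan describes in the first comment boils down to something like the toy classifier below. SpamAssassin's real Bayes module is far more elaborate, and the training messages here are made up, but the drag-to-folder workflow maps directly onto the `train` calls:

```python
import math
from collections import Counter

class ToyBayesFilter:
    """Word-frequency spam scorer in the spirit of SpamAssassin's Bayes module."""

    def __init__(self):
        self.words = {"spam": Counter(), "ham": Counter()}
        self.total = {"spam": 0, "ham": 0}

    def train(self, label: str, message: str) -> None:
        # Dragging mail into _IsSpam / _IsNotSpam corresponds to calling this.
        tokens = message.lower().split()
        self.words[label].update(tokens)
        self.total[label] += len(tokens)

    def score(self, message: str) -> float:
        # Sum of log-likelihood ratios with add-one smoothing; > 0 leans spam.
        s = 0.0
        for t in message.lower().split():
            p_spam = (self.words["spam"][t] + 1) / (self.total["spam"] + 2)
            p_ham = (self.words["ham"][t] + 1) / (self.total["ham"] + 2)
            s += math.log(p_spam / p_ham)
        return s

f = ToyBayesFilter()
f.train("spam", "cheap pills buy now")
f.train("ham", "meeting notes for tuesday")
print(f.score("buy cheap pills") > 0)  # True
```

The appeal of this approach, as Alan notes, is that every correction makes the filter better: each misfiled message retrains the word counts.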
Wednesday, September 22, 2004
Circle of hell?
I heard a media report yesterday of a pair of comments by the 2 top candidates for US president.
Bush reportedly said: "The world is a better place for having Hussein in a cell. My opponent thinks he should still be in power."
Reportedly, what Kerry actually said was "There should be a circle of hell reserved for Hussein."
This is rather enlightening in either of two interpretations. Taken one way, Bush thinks of Iraq as a circle of hell. This is quite lucid of him, or at least, of Bush's own experience now with Iraq since he "won the war."
Taken another way, Bush views being in power as a circle of hell. I prefer this interpretation, and I trust, or dearly hope, he'll lose his sense of service to country again soon. God knows he's done it before.
Apologies for any minor inaccuracies in the quotes. I heard them on the radio, no doubt my memory is less than perfect, but I assert they are thematically as I heard them reported.
Friday, September 17, 2004
Codepages be damned
Re. yesterday's blog, ok, so this is worse than I thought. Or perhaps better, depending on the way you look at it.
I broke down and booted into Windoze to try to figure out what codepage to use on the mount command, and lo and behold, regedit claims Windoze to be using CP 437, which is what I'd configured as the default in the kernel. But mounting from kernel 2.6.8 still doesn't work either specifying no codepage on the command, or cp 437.
Possibility 1 appears to be charset, which is another parameter on the mount command line.
Possibility 2 appears to be that Windoze is lying about what codeset the fs is formatted to.
Possibility 3 appears to be that the new code is merely broken. I think I'd prefer this one, because while I'd owe some apologies re my uncharitable thoughts and comments, it is at least a reasonable mistake.
I think my next step is simply to back out the patch and produce a kernel-2.6.8--. Or perhaps check later kernels to see if someone further tweaked this area (but then those kernels might not have the various debian changes already incorporated into this one).
But coming back to "what is a user to do", I've gotta say this whole thing has been like a guessing game in that there isn't an apparent way for me to derive the parameter I need, and instead I'm left to guess what is expected. Ideally the mount error message would say "No, the drive is formatted to cp666 you moron! Use that!"
You: Pick a number between one and 100.
Me: Um, 42?
You: No! Wrong! Screwed! ....... Guess again!
... ad nauseam.
And I'm nauseated.
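For anyone else stuck in the same guessing game, the mount options in question look like this. The device path and codepage values below are guesses on my part, which is precisely the complaint:

```shell
# Each of these is a plausible mount line for a VFAT partition, and the
# error message gives no hint which (if any) matches the on-disk format.
mount -t vfat -o codepage=437 /dev/hda1 /mnt/win
mount -t vfat -o codepage=850 /dev/hda1 /mnt/win
mount -t vfat -o codepage=437,iocharset=iso8859-1 /dev/hda1 /mnt/win
```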
Thursday, September 16, 2004
Users' shoes
No fewer than three times over the past two days I've run into something utterly boneheaded from a usability standpoint.
Don't get me wrong, I largely agree with the decisions these developers had taken that led to my frustrations, but in each case the developers had solved the technical problem and left us poor users in the lurch.
Case in point, and not to pick on these folks specifically: Codepage handling in VFAT filesystem under linux in 2.6.8 kernel.
VFAT filesystems have codepages, and the codepage is apparently needed to know how to (for example) translate upper-case to lower-case in filenames. The mount command allows you to specify the codepage to use.
When I migrated to 2.6.8 kernel this morning, I was surprised to find that my mounts didn't work. Seems the default codepage now has to be specified as a real codepage number, and that if it doesn't match the reality of the partition then no mount will occur. Prior kernels somehow worked without anything being specified. At least for me.
Now don't get me wrong, I understand something of why they made this change, and even agree. If you are interested in the reasoning, see the description of the change in the 2.6.8 changelog (searching for NLS_DEFAULT should turn it up).
However, where these developers utterly failed is in not providing any comment on how to determine which codepage is in use on the drive! I've spent an hour or so in Google, searched for codepage in /usr/src/linux/Documentation, and even found a list of windows codepages, tried several (this, by the way, is what I call BFMI or Brute Force Massive Ignorance), all to no avail. The codepage is encoded in the filesystem somehow, tell me what the number is! And if for some reason that is impossible, then provide some description of how to figure it out manually.
I don't appear to be the only person to express this frustration.
Usability, dudes. Yes, you got the right technical solution, but what are users supposed to do with it?
[Comments from my previous blog]
1. Leena left... Thanks for providing such a useful information about shoes, i usually visit it to check some new stuff and discussion
Also check some kool designs here, found it useful http://www.insidethesports.com
Tuesday, 16 January 2007 5:20 am
2. glen martin left... You're kidding, right?
Thursday, 1 March 2007 6:12 pm
Thursday, September 9, 2004
Everyone is a manager
There's a joke that everyone in customer service at a bank is a VP.
Dan Steinberg wrote of trends toward a decreasing number of IT workers and an increasing proportion of managers.
One potential factor that I haven't heard anyone discuss is a change in focus from big projects to small. During the dot-com era we used to speak of "internet time": time to market was everything. Now the bubble has burst, but the pendulum hasn't swung all the way back to "quality before timeliness" as a driver.
Why should this matter? One feature of the manager/worker interaction is the need for managers to help resolve issues, focus resources on problems, that sort of thing.
Before the internet, projects were typically larger, and mostly internal to the organisation. Large projects with lots of introspection have fewer unplanned issues, at least in my experience, so there is just less for a manager to do. A project comes up, some folks are assigned, by and large you can sit down with the engineers, get them moving in the right direction, and sit back for a spell. There is more people supervision than problem resolution being done by management.
By contrast, in a dynamic environment with lots of small projects being formed and reformed, interesting and difficult questions come up more often. And the extension of these projects into the external (that is, outside the firewall) realm means that more of these issues are of potentially broad organisational import, and thus may require more management attention, more often. And so, more managers.
Note in all this, I've been using the word "manager" to mean "responsible person who can speak for the organisation". "Manager" is vague, which is part of the problem - we just don't have common language to differentiate between "people supervisor" and "responsible team-member". Or job titles. Hence the joke about banks and VPs. They have to be VPs because their word can come to bind the bank. Even if they supervise no-one.
Which leads some to imagine that there is a problem when the ratio of managers increases when such is a spurious result due more to language than to reality. Again, in my experience.
Saturday, September 4, 2004
Terrorists hate US freedom?
I don't believe it. Haven't believed it even once, despite the rhetorical political drivel one hears the press regurgitate.
On 9-11-1973, an American-supported military coup in Chile replaced an elected democracy with the military dictatorship of Pinochet. Don't take my word for the American involvement:
U.S. involvement, specifically of Richard Nixon and Henry "Dr. Death" Kissinger, in the coup is staggering. In the documentary La Ultima Batalla de Allende (The Last Battle of Allende), Edward Korry, the U.S. ambassador to Chile from 1967-71, states that the CIA spent $2.7 million in the 1970 presidential election to sway the vote away from Allende, and Senate hearings in 1975 revealed that the CIA received $11 million to "destabilize" his administration. - article
One might also review this.
I don't believe for even a moment that the coincidence of dates is accidental.
Perhaps I have no sense of history to have missed this (I claim in defense being neither American nor Chilean), but I must admit to a certain surprise that in all the coverage of the more recent 9-11, the media haven't drawn more attention to this parallel, and apparently prefer the Administration's other excuses for conflict.
Even more, I cannot comprehend a mainstream media, or the American public that continues to believe said media, that parrots the Administration's excuses for renewed conflict, despite the fact that those behaviours clearly create more terrorists and division, not fewer. In fact, Bush's actions don't meet even his own administration's reports of their own goals:
Strategy links means to ends, designing tactics capable of achieving goals. ... [National Security Advisor Condoleezza] Rice says the Bush administration’s strategy rests on three pillars: First, thwarting terrorists and rogue regimes; second, harmonizing relations among the great powers; third, nurturing prosperity and democracy across the globe. But the effort to crush terrorists and destroy rogue regimes through preemption, hegemony, and unilateralism shatters great power harmony and diverts resources and attention from the development agenda. An effective strategy cannot be sustained when the methods employed to erect one pillar drastically undermine the others. - article
This kind of disconnect is what I've referred to in other contexts as the intelligence test of enlightened self interest. Pursuing ends with means that fail, or cannot possibly manage, to achieve them ... well, Bush fails this test in spades. At least so far as professed goals of value to the American public. The desired ends of getting re-elected? Well, that is another story, isn't it?
Rice and common sense both compel me to believe that the only way to win the so-called war on terror -
Aside: what a joke that name is! That we're against something doesn't mean a war. Never mind that calling it a war makes this wartime, and the elected leader a war-time president. So I suppose I'm waiting for the upcoming war on internet porn, followed by wars on littering and free speech.
- is to inspire fewer terrorists, not more, and that this is achieved not through killing the friends and family of borderline terrorists, but by taking away some of their major dissatisfactions with US foreign policy. I don't, after all, hear of terrorist attacks against Switzerland, despite the Swiss also having great freedoms.
Disclaimer: I am displaced, and disenfranchised. I didn't vote for or against Bush, and cannot vote in the upcoming US election, nor any other, American or otherwise. All the above is therefore less a political statement than a bemused social commentary in which I have neither personal nor political stake.
[Comments from my previous blog]
1. DID YOU KNOW left... on 9-11 germans,poland 1930,if not poland than one of those nations in europe. if you run a computer search on every major world problem since the 1600, the number 9-11 will show up many times. the old world order and the new world order,like to use this date. check to see how many major and minor wars,also who has been kill,removed from office, all the leaders in this world,this date will showup all through history.
Sunday, 17 September 2006 9:38 am
I just about killed someone yesterday
Driving down Highway 101, in the car pool lane, at about 65, and changing lanes to the right after signalling, into a hole between two cars in that lane. After I started my lane change, a motorcycle passes me on my right between the lanes of traffic. Fast. If I was doing 60-65, he was easily clearing 75. I probably missed him by about 3 inches. If I had started my lane change even 2 seconds earlier, I wouldn't have missed him at all. Or perhaps more to the point, he wouldn't have missed me. I drive a honkin' big truck, it isn't as if he didn't see me. Or as if he might survive going under one of my wheels at highway speed.
Now, some may comment here that motorcycles shouldn't pass on the median or on the dotted line between lanes, but in fact this is legal. Legal when traffic is stopped, for the very simple reason that an air-cooled bike without a fan can't tolerate being stopped in traffic for very long. But here in California it seems that they will pass at any time, safe or not.
I hear of collisions involving motorcycles fairly frequently on the news, and I'm always sympathetic. I used to ride myself, for several years, summer and winter, in a frequently rainy climate. I have my collection of stories about bad car drivers that I barely escaped. Even bad municipal bus drivers that forced me to choose between dumping my bike and several less palatable options. So I have been truly sympathetic, as only one who has been there can be.
This particular occasion, I found myself worrying more at how I might have extracted my young children from the situation without them seeing the blood.
But this blog is really about and to that biker, not about me. If you're reading, thank you ever so much for leading me down this chain of thought, and taking away my well meant sympathy for rider victims of motorcycle collisions.
You jerk.
Monday, July 19, 2004
Programming in the box
Ok, my blog about turning parts of a system (EJB in this case) on and off is nagging at me.
A lot of us like to tinker. And we like to express that tinkering in the form of writing efficient systems. When I was going through my CS degree, I used to pride myself on efficiency. "I can write that in 20% less code and get 10% better performance than anyone in my class."
Obnoxious, eh?
Tinkering gets in the way of productivity and robustness. When Ford designs a new car, they don't usually design a new battery from scratch. Or go to a 16 or 24 volt electrical system (though to do so would definitely solve some sound system problems). They reuse what they have. There is infrastructure that supports 12 volt cars.
Not all car companies work (or have worked) this way. Volvo had the reputation for artisan attention to quality, where a small team built each car from beginning to end, then moved to the next car. Volvo lost.
A lot of folks are worried about the success of their company (read: keeping your job), or outsourcing (read: keeping your job). Productivity and "it works" are how other folks (e.g. car companies) facing the same pressures solved them. They build systems from components that are already designed, and use them as-is.
Dell is another example. Their model is to build systems out of industry-standard components. Unlike some larger companies that designed their own hard-drives, these folks used standard drives already on the shelf. Soooo, how many folks in HP's PC division are still there?
What all this is intended to add up to is: programming is in many respects no different from building other kinds of things. Writing from scratch, or tweaking and chopping and reforming existing components, isn't engineering. Engineering is about reusing existing components whose design properties are understood and proven, and changing only what needs to be changed to solve a problem, so you can predict the outcome (e.g. time to market, bug frequency, etc.). Repeatable process. The more variables you change, the less you can predict about what you'll get. And the less infrastructure you'll have around to support your product over the full lifecycle of its use.
Turning things off
Weigi wrote in a thread at TSS:
"J2EE is too complicated" won't be an excuse to not use J2EE any more because of the modular nature of JBoss's class app servers. Don't like Entity Beans? Simply delete the whole module from the app server!
(not meaning to pick on Weigi, others have and continue to say the same thing. His comment just came to my attention this morning.)
This misses the point, which is that a compatible server can run any J2EE application (where by "J2EE application" I mean one that depends on portable APIs and not the other stuff). If you disable entity beans, or servlets, or JNDI, or whatever, you don't have a compatible server any more.
If you're lucky, the container will still do the other things you want. But it might not. Who knows? This container configuration wasn't tested. If the container isn't doing other things properly, then your app (that depends on the container contract) may not be either. And we have all seen that application regression test suites are usually not up to the task of fully validating correct (continued) operation of large apps, so you may well not notice that the app isn't doing everything it is supposed to.
Another thing that bugs me about these comments is: why disable <foo> anyway? If you don't like knowing what day it is, do you disable your desktop calendar? Or better, the operating system APIs that let your app query the time? If you don't want to use something, don't use it. But don't go futzing with the operating system. Or the container, which is after all an operating system for J2EE apps.
Finally, I'm reminded of a blog I wrote in April, about programmers' decisions. Perhaps you don't like entity beans, but if so, you're like a carpenter who doesn't like a hammer (he looks pretty stupid trying to drive a nail). Sometimes entity beans aren't the right design choice, so don't use them then. But to say "I don't like them" and go on to rip them out of your system ... well, that's like saying I don't have a family, so I'm gonna rip the back seats out of my car to make more room for groceries.
There is a time when it makes sense to turn something off: when you're running a hardened system and haven't security validated some portion of the runtime. I'll note that in such an environment you've probably tested the bejezus out of the total system too, so the downsides of mucking with the OS are mitigated. These guys run a special operating system too.
[Comments from my previous blog]
1. a reader left... JBoss has always (at least as long as I can recall) advertised the ability to remove EJB support. See "Architecture Overview", item 2.
http://www.jboss.org/overview
I don't have an opinion as to whether or not this is a good idea.
Joe Cheng [code@joecheng.com]
Monday, 19 July 2004 8:12 am
2. a reader left... HOld on, this sounds like complete nonsense. When have you ever written an application and you were not sure whether it used EJB? Or not sure whether it uses JMS??
Foo Bar
Monday, 19 July 2004 7:21 pm
3. a reader left... One of the great powers of JBoss that many, many users take advantage of is the Microkernel to be able to take out services and build in their own services. J2EE is great, but it does not meet every application's needs. Some need less and some need more. JBoss facilitates this - of course to comply with the Sun J2EE Certification we ship with the whole thing...
Bob
Bob Bickel [bob.bickel@jboss.com]
Tuesday, 20 July 2004 6:57 am
4. glen martin left... Sure, some applications need more features and some less, or, in Foo Bar's terms, I know whether I need EJB or JMS.
The point I was trying to make is that needs change. Our applications live rather longer than we might have imagined, and get extended etc. An application that only does web pages today may develop need for messaging, so a JMS component that was disabled previously would have to be brought back. And in bringing it back, we can (subtly or less so) impact the behaviour of the system that has been there all along because of class conflicts or something.
A value I believe in is that extension should be transparent to the existing system. A number of talks I've done in the past have focused on this. One technique I like is to use async messaging for all inter-coarse-component communications, so I can extend through mere subscription to a message channel. I.e. ESB.
I'll say again, just because one application doesn't use JMS today doesn't mean that JMS should be user-unloaded. By user-unloading, the resulting combination of services is potentially untested. By contrast, an intelligent container might choose whether to load a component or not, and presumably would select between a number of tested runtime configurations, based on the needs of the application(s) currently running.
The result is similar - whether the user disables components or the container does, components get disabled. But in the case some are promoting, the running container config isn't thoroughly tested; in the other it is. One guess which is the better choice.
I'll come back to the example of operating systems. I have one kernel build and one set of drivers I load all the time; I don't change them day to day as my application needs or the network I'm on change. It is a black box. I treat it as one. It provides the services I need today. It will provide the services I need tomorrow without me futzing around with it.
Tuesday, 20 July 2004 9:35 am
Wednesday, July 14, 2004
Leaving Sun
I've been hesitating to blog on this, but I've had enough queries now that I suppose, just to retain my sanity, I should come out and say it: I've left Sun.
I joined Sun 4 years ago to be product manager for J2EE, which role I had for, oh, about one day, since a number of departures around the time I joined meant I needed to cover some other functions. Since then, I've had the good fortune to meet a lot of really interesting and intelligent people, both in and out of Sun, and work on some really cool things.
If I'm proud of anything, it is that while there I managed to change Sun's licensing model enough to permit open source development of compatible J2EE application servers. It was hard, and really rewarding, and through that process I learned way more about law than I ever imagined (or wished).
But four years is four years, and longer than I'd stayed anywhere for over a decade. Other opportunities called.
In many ways I'll miss Sun. It has been very cool and a great engine of professional growth for me. Maybe I'll go back sometime. I absolutely wish them well, both as a corporation and as a group of individuals I respect.
[Comments from my previous blog]
1. a reader left... Soo..which company are you going off to? Will you be working with J2EE/Java there also?
Mo
Wednesday, 14 July 2004 8:38 pm
Wednesday, June 16, 2004
Microsoft Antivirus product?
I read this morning that Microsoft is going to sell an Antivirus product.
The first thing it does is de-install Windows. (buh-dum pah!)
But seriously, I'm certainly not the first to note that this is just a bit like MS charging to fix critical bugs it left in the system. And Dvorak's note about an antivirus tool being a way for MS to get an online footprint on their customers' machines that they can touch weekly is cautionary, especially in context of MS' failed attempt to get in the middle of all transactions their customers conducted on the net (remember Passport?).
MS still wanna be Big Brother. At least the taxation part of that.
Monday, May 17, 2004
So much for kinder and gentler
MS says of their renewed foray into Search,
Microsoft efforts are so sweeping that painting its strategy as a simple matchup with Google is a "narrow, narrow way of looking at it."
This is classic MS. Not competitive, "But just wait, it's gonna be GREAT!" Freeze sales or adoption or whatever of competitors with "just around the corner" messaging. While they then take a few years to catch up.
To be fair, many companies use tactics such as this, speaking of their next product. And I use it sometimes myself - I'll talk about J2EE futures, but usually to make a point regarding forward compatibility of applications, which is a current 'feature'.
MS just take it that much farther, talking about 2 or 3 generations down the road as "coming soon".
For what it's worth, I think there is plenty of opportunity to improve search technology. There are a number of searches I'd like to be able to do that current providers simply don't offer. If MS want to deliver a better product, have at it. But their performance to date in this area is less than stellar, and the sheer hubris of their statement above given their track record ...
"Fool me once, shame on you. Fool me twice ..."
Saturday, May 15, 2004
Outsourcing and responsibility
Hewlett-Packard is settling a claim with the government of Canada. But what I find interesting about this is their comments that their own employees aren't responsible, but instead that one of their subcontractors hatched a scheme to defraud both parties.
And this matters?
Businesses must be cognizant of their responsibility to provide a contracted service. Customers don't, and can't, monitor a supplier's internal arrangements for servicing that contract. There are whole businesses (general contractors) whose sole job is to manage these sorts of complexities.
To their credit, HP seems to accept this notion in that they settled the claim.
But even raising this as an excuse is laughable and worrisome. Is HP somehow less responsible because it failed to scrutinise its suppliers?
Is your medical insurer or financial company less responsible because the leak of your personal data happened in India rather than in their own office? Or is a software vendor less responsible for a back-door because it was inserted by a contract programming house in India or Russia or other grossly-stereotyped labour market?
I wonder how long it will take for some enterprising lawyer to argue for punitive damages in a confidential disclosure case in an amount comparable to the labour savings the defendant enjoyed in outsourcing the work in the first place.
[Comments from my previous blog]
1. a reader left... Now that a shame less title to get high click rate.
Anonymous
Saturday, 15 May 2004 1:57 pm
Anonymous
2. glen martin left... If you say so. Personally, I thought and think it closely related to the content.
Sunday, 16 May 2004 4:41 am
3. john left... Outsourcing comes with a set of great responsibilities however some peoples presenting them not as article but as onion hoof.
Friday, 21 July 2006 10:53 pm :: http://www.evision.com.pk/outsourcing.ht
4. bedava oyunlar left... good works
Wednesday, 30 April 2008 6:05 am :: http://www.eglencek.com
5. BPO Manila left... Great post! I think that people are forgetting that outsourcing doesn't mean that you are handing over your entire company to an outsourcing partner. You are still the boss; you still need to constantly monitor the workings of the outsource company. As in all businesses, the leader or manager is still responsible if an employee does something wrong. The leader may not be directly to blame, but he has the responsibility of overseeing the entire operation, making sure that everything is going smoothly.
Wednesday, 2 June 2010 5:51 pm :: http://www.oneworldconnections.com
Wednesday, April 28, 2004
Language goals
What are language features for?
magpiebrain (love the name!) blogged:
Each language feature introduced that tries to enforce safety of some kind invariably introduces some reduction in the power of the language. Some of these tradeoffs’ seem acceptable.
Written as above, it would seem that the primary goal of a language feature was to restrict. My own view is a little different - the primary goal of language features is to focus. Focus attention on the critical parts of the problem domain being addressed by the language. Focus time on solving instances of those problem domains.
I have a photography hobby (that is sadly starved for time). When I'm composing images, I usually work in B&W and upside down, to turn off as best I can the parts of my brain that are hung up on naming objects and instead focus on their shapes and relationship to one another.
The same is true in programming as well, and my choice of language.
Prolog is interested in propositional logic, and so exposes Horn clauses as the way to utter propositional logic problems. (Whatever happened to Prolog? Does anyone still use it? I think I last wrote a Horn clause over 15 years ago.)
C was designed to solve operating system problems: communications, scheduling, isolation, etc. Pointers and pointer arithmetic allow easy iteration through lists of same-sized (ideally same-typed) things, like process info, thread info, malloc info. Buffer management is important for an operating system because it is tasked with marshalling the limited resources of the system - it needs to know how much memory is allocated, free, occupied by idle processes, etc.
The Java language and platform weren't really intended for this sort of thing. Business apps are more concerned with business and presentation logic. While it can be (and has been) said that pointers were removed for safety, I think safety is just gravy. Motivations aside, pointers simply aren't needed nor useful for biz apps and presentation logic. If they had been necessary, you can bet that necessity would have won out over safety.
Taken to ridiculous extremes to bludgeon the point into submission, all of these problems can be solved in assembly language, but we'd never have the time to do so. My attention span is shorter than the overhead of working in that language (or class of languages, I suppose) for anything but a tiny problem.
Positive value trumps negative value.
Where magpiebrain was really going was to dynamic typing, and he says:
These dynamic languages can be seen primarily as enabling languages - they make the assumption that developer actually know what they are doing.
Expressing 'dynamic' as enabling is good, I like that (per the above). ;) But dynamism by and large hasn't been useful to me in focusing my attention on the problem I'm solving.
And as for knowing what they're doing ...
A developer can know what he's doing, but developers frequently don't. Especially as their numbers, and the number of years a project stays in development and maintenance, increase. It is the institutional knowledge problem, or the buzzphrase we used to use, programming in the large. Teams aren't all that good at disseminating and maintaining knowledge.
Which means that so long as "knowing what they're doing" only requires local knowledge, fine. But few problems are that localised, at least in my experience.
(Sam, if there was a trackback link, I couldn't find it. Apologies)
[Comments from my previous blog]
1. a reader left... Yeah, I really need to display thr trackback URL - the RDF gets spit out so it should really autodiscover. Anyway, found this via bloglines.
When I was talking about language features in the first sentence, I stated features introduced to improve saftey, not all language features...
Sam Newman
Wednesday, 28 April 2004 9:29 am
2. Geoff Arnold left... In some sense, this is tautological: a feature introduced to enforce safety must do so by making some previouly legal construct illegal, or by constraining the effects of some language construct. The interesting question is whether the constraint is significant or not; are there circumstances under which I might reasonably have wished to express that which is now inexpressible? In the case of Java and pointers, for example, I think that I can state fairly comfortably that in a memory model where there are no guarantees about how structured entities are mapped into memory and no general way for me to discover the mapping, pointers are pretty much useless and I lose nothing if I'm restricted to references.
Visit me @ http://geoffarnold.com
Wednesday, 28 April 2004 9:45 pm
Sunday, April 25, 2004
Instant Solutions?
This morning I picked up the April issue of JDJ, and Joe Ottinger's editorial "Looking for Instant Solutions?" (which doesn't appear to be freely available, so no link, sorry) caught my eye.
His premise is that solutions aren't valuable in large part due to their inflexibility. The same for programmers. Both are stuck in the past, nailed up to prior experiences that adversely colour their reactions.
In part I like this notion, because it explains why some very bright developers insist repeatedly that Enterprise JavaBeans are the wrong solution - it is that they aren't evaluating in terms of the current application and current capabilities of EJB containers so much as they are evaluating in terms of their prior projects and frequently version 1.0 EJB containers.
But Joe goes a little too far with this thought, appearing (after skipping a couple of steps of his article) to recommend the use of meta-solutions over solutions. That is, he calls for a framework of solution creators, from which application-specific solutions can be constructed on the fly.
Perhaps this is an evolution thing. His thought reminds me of accounts of the Renaissance, in which there was a vast explosion of creativity and advancement in design, thought, and science. What there wasn't was much standardisation. And in the time between then and now, the processes of constructing things have evolved quite a bit.
The economics of software construction should be of major concern to developers today. Without efficient end-to-end software construction/deployment/maintenance/extension/etc, net return isn't there, and can't flow to the developers. Read: downward cost pressure, which leads to offshoring and other amusements.
The evolution of processes (in the previous sense of the Renaissance, construction, et al) has been driven by the same economics: these processes start off with artisans, and over time the grunt work is taken out (from one point of view), or the inefficiencies are removed (from another), or the room for creative craftsmanship is removed (from a third point of view). Building a car, I no longer design my fasteners one-off. Only the Pentagon gets away with $500 toilet seats.
So long as we're still designing and building screws, we can't focus our brain cycles on the car. And it is the car that makes a profit, and pays our salaries.
So while I can understand a desire to use meta-solutions, I don't believe they are efficient or that they help the economics of software development and really allow the field to advance. I certainly agree that a single solution doesn't apply in all situations, but the solution isn't to design the solutions one-off, but is instead to develop a more flexible solution or group of solutions that cover a range of our activities.
My big red tool chest has 4 sizes of Phillips screwdrivers, and 2 Robertson. And this is good. I don't need to craft screwdrivers on my own.
His premise is that solutions aren't valuable in large part due to their inflexibility. The same for programmers. Both are stuck in the past, nailed up to prior experiences that adversely colour their reactions.
In part I like this notion, because it explains why some very bright developers insist repeatedly that Enterprise JavaBeans are the wrong solution - they aren't evaluating in terms of the current application and current capabilities of EJB containers so much as they are evaluating in terms of their prior projects and, frequently, version 1.0 EJB containers.
But Joe goes a little too far with this thought, appearing (after skipping a couple of steps of his article) to recommend the use of meta-solutions over solutions. That is, he calls for a framework of solution creators, from which application-specific solutions can be constructed on the fly.
Perhaps this is an evolution thing. His thought reminds me of reports of the Renaissance, in which there was a vast explosion of creativity and advancement in design, thought, science. What there wasn't was much standardisation. And in the time between then and now, the processes of constructing things have evolved quite a bit.
The economics of software construction should be of major concern to developers today. Without efficient end-to-end software construction/deployment/maintenance/extension/etc, net return isn't there, and can't flow to the developers. Read: downward cost pressure, which leads to offshoring and other amusements.
The evolution of processes (in the previous sense of the Renaissance, construction, et al) has been driven by the same economics: these processes start off with artisans, and over time the grunt work is taken out (from one point of view) or the inefficiencies are removed (from another) or the room for creative craftsmanship is removed (from a third point of view). Building a car, I no longer design my fasteners one-off. Only the Pentagon gets away with $500 toilet seats.
So long as we're still designing and building screws, we can't focus our brain cycles on the car. And it is the car that makes a profit, and pays our salaries.
So while I can understand a desire to use meta-solutions, I don't believe they are efficient, that they help the economics of software development, or that they really allow the field to advance. I certainly agree that a single solution doesn't apply in all situations, but the answer isn't to design the solutions one-off; it is instead to develop a more flexible solution or group of solutions that cover a range of our activities.
My big red tool chest has 4 sizes of Phillips screwdrivers, and 2 Robertson. And this is good. I don't need to craft screwdrivers on my own.
Sunday, March 21, 2004
Re: Hindsight
Bill read my comments on the weakness of some treatments of XP and writes in response:
The suggested answer to the mentioned logging problem is to log asynchronously (which is a very good one, let's be clear on that). That's fine, but by the time you do design your way out for all functional and non-functional aspects of even a medium sized system, chances are you'll have bled the customer dry thanks to futureproofing and still they won't have what they need today. Assuming of course that your guesses worked out to be right.
This can be true, and in my experience it often is. Heck, I've been guilty of it. So I understand the problem.
But in between these two extremes is a middle ground of designing for flexibility. Building an async log for its own sake may well have been overkill for the immediate requirements, but the decoupling of log producer from log consumer, and use of a flexible transport, means that a wide variety of unseen futures are made simpler.
So my admittedly narrow reading of Glenn's entry is that he has a narrow reading of what XP is offering. ... there's a whole school of thought within XP of programming towards patterns (known solutions), that has not been mentioned or critiqued here.
That's fair. Certainly my comments were a narrow treatment. I too think there is a fair amount to learn from XP, and some XP thought applied with discretion very much helps some projects. It is just that I cringe when I read "develop the minimum" regurgitated without the surrounding thought, as I happened to do yesterday morning. The logging example was straight from that source. But perhaps I was guilty of the opposite sin. Mea culpa.
Saturday, March 20, 2004
Foresight
It has become fashionable in some circles these days (notably, eXtreme Programming and Agile Development) to say "code the absolute minimum for the problem (you know you have)."
Problem is, because these same folks often forego a lot of early design work (which they refer to as the Paralysis by Overanalysis antipattern), they often don't know what problem they are trying to solve. Or not completely, or not accurately or something. The solution they have for this is to "refactor often." Well, duh. This reminds me of the Monty Python "ant counter" sketch: "What do you feed them on?" "Nothing." "Then what do they live on?" "They don't, they die." "They die?" "Well, of course if you don't feed them."
I am forced to consider Pressman's research here. For those who may forget, Pressman looked at the cost to fix defects at different points in an application lifecycle, and recorded exponential cost growth as one moves into later phases of a waterfall process. If I remember the numbers correctly, setting the scale at 1 unit of cost for a problem found during initial coding, problems found in design (before coding) cost perhaps .75, problems found in unit test perhaps 1.25, in integration 2 or so, and so on.
How is it that skipping the initial design will reduce costs? It won't, because refactoring costs. In fact, for me the only result that seems likely is that for any project too complex to keep all in one's head at once, eXtreme programming will increase costs by shifting defect correction to the right on Pressman's curve.
An overlapping group of folks as recommend XP also suggest that <some complex technology> is a sledgehammer and shouldn't be used for small problems.
But what is a small problem, especially if one doesn't know the requirements (well) and hasn't done (much, or any) design? Moreover, how many of the applications we deploy today will never be extended? And if your application will be extended, just how will you go about this?
An example that is used to illustrate a 'flyweight' problem that shouldn't be solved with a complex solution is a logging facility. Instead there are a number of solutions out there that take a variety of approaches: one such is to write a log class that takes a couple of parameters (severity and message, for example) and writes them to a flat file or through JDBC to a database. Cool. Simple problem, simple solution.
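As a sketch of that simple solution (the class name, file format, and tab-separated layout are my illustrative assumptions, not from any particular library):

```java
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

// A deliberately minimal logger: two fields, one flat file.
public class SimpleLog {
    private final String path;

    public SimpleLog(String path) {
        this.path = path;
    }

    public void log(String severity, String message) {
        // Open-append-close on every call: simple, but every writer
        // contends for the same file, and the fields are flattened
        // into text that downstream consumers must re-parse.
        try (PrintWriter out = new PrintWriter(new FileWriter(path, true))) {
            out.println(severity + "\t" + message);
        } catch (IOException e) {
            // Silently dropping the entry is part of the "simple" charm.
        }
    }
}
```

Note the two assumptions baked in from day one: exactly two fields, and a locally writable file.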
So roll forward: your application has been deployed with its simple logging solution. Now someone else builds another application that wants to talk to yours, and wants to add entries to the same log, preferably in-order. There are a few ways to accomplish this. One is to have the different applications each open the file whenever they want to add something, which assumes the filesystem is local to both applications. Or, if you used JDBC, that the database server is accessible, etc. Another solution is to add some sort of remote protocol (perhaps a web service interface) to the first application, to allow the second to send a log event. That's rather a bunch of code to add to that first application. Of course, now that there is a second application, you're going to have to think about what information to log - do you want to capture the application name as well, perhaps? So you find yourself now writing 3 fields instead of the 2 you wrote previously, updating all your old code to call the new log method signature, and perhaps converting your old log files.
More time goes by, and you've noticed that there is a recurring event that shows up in your log that requires special processing whenever it occurs. Perhaps a cracker is targeting your server and you want to know when it is happening, so whenever the log gets this event you want an email sent to your cellphone. If you've been using a log file, you could at this point write another program that watches the tail of that file, parses the text, and if it matches a pattern, composes and sends an email. This is an expensive solution because it is a separate process, and it is parsing information that was composed out of separate fields previously. And again, it assumes the file is local and available.
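The heart of that watcher process would be something like the following (the regex is a placeholder of mine, and the file-tailing and email-sending steps are elided; the point is the re-parsing of text that began life as structured fields):

```java
import java.util.regex.Pattern;

// Sketch of the log-watching monitor: match each new log line back
// against a pattern that was, in effect, destroyed when the fields
// were flattened into text by the logger.
public class LogWatcher {
    private static final Pattern ATTACK =
        Pattern.compile("SEVERE\\t.*failed login.*");

    public static boolean needsAlert(String logLine) {
        return ATTACK.matcher(logLine).matches();
    }
}
```

Every change to the log format now silently breaks this regex, which is exactly the kind of coupling the flat file was supposed to avoid.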
The point of all this is that even something as simple as a log facility may wind up having very different requirements than those it started with. In fact, even the assumption that the log only gets written, and that if it is processed at all it is processed later, in batch, probably by a human, is suspect. And with every change in requirements came some refactoring, or rewriting, and a whole bunch of new code.
Returning to our ant counter, "I don't understand." "You let them die, then you buy new ones."
As if! Like we can toss old applications when requirements change! Those old COBOL applications are still hanging around, 30 years later!
By contrast, choosing a better architected solution from the beginning requires some up-front work, but can reduce the overall complexity as time goes by. My own favorite solution for logging is publishing messages to a publish/subscribe message bus. The messages retain structure (that is, they stay as a set of attribute/value pairs), and they can be reused by new applications (for example, our monitoring function that sends an email), sometimes with less new code and without any change to the originator or any other participant in the log. And the scalability as the number of processes filing log entries increases is very good, without those logging being blocked by others' locks on the log file or database.
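A sketch of that structured approach (the in-process subscriber list here is a stand-in of mine for a real messaging system such as JMS, which would also supply the distribution and scaling):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Log entries stay as attribute/value maps; consumers subscribe to
// the bus rather than parse a file.
public class LogBus {
    public interface Subscriber {
        void onEntry(Map<String, String> entry);
    }

    private final List<Subscriber> subscribers = new ArrayList<>();

    public void subscribe(Subscriber s) {
        subscribers.add(s);
    }

    public void publish(Map<String, String> entry) {
        // New consumers (file writer, email monitor, metrics) are
        // added here without touching any publisher, and a publisher
        // can add new attributes without breaking old consumers.
        for (Subscriber s : subscribers) {
            s.onEntry(entry);
        }
    }
}
```

The email monitor from the earlier scenario becomes just another subscriber inspecting `entry.get("severity")`, with no text parsing and no change to the applications doing the logging.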
To my mind, this is true agility - easier extensibility and stable functions (small changes in inputs mean small changes in outputs).
Oh, and if you develop the more extensible logging structure once, you can more easily reuse it for your next application - the logging code for the next app just formats its attribute/value pairs into a message, perhaps adds a new message channel to the messaging system, and adds a filter on the message bus to listen for those new messages and save the contents. Not hard, and distribution and scaling came for free on this second use.
The real problem seems to me to be that XP is going out of its way to downplay foresight and planning. In a world in which applications frequently, if not invariably, are extended to do more than originally intended, solving for the minimum is frequently not a good choice.
XP may have been a good solution to the bubble economy and 'internet time', but I think 'internet time' has turned out to be the wrong problem to solve. Adding real value seems to be coming back into fashion.
[Comments from my previous blog]
1. a reader left...
Where XP excels is when the solution is not clear. This happens frequently (because problems that have clear solutions very often have already been solved -- your logging is a case in point). Just as you suggest, there are massive benefits to be gained from correctly designing up front. This is the lure. But no matter how well you model the current requirements, if either you or your users don't understand the problem completely, your model will be wrong. The more up-front design you did, the more that costs.
So, where the problem is understood, use patterns, designs, libraries etc. that have probably already been made to solve it. Where the problem (or solution) requires a few iterations to solve, build only what you need right now.
fletch
Sunday, 21 March 2004 2:47 pm
Saturday, February 14, 2004
Uncivil Union
Sometimes situations are too political to have a solution.
For those who have been luckier than me and missed hearing incessantly about this saga, the Mass. Supreme Court ruled that the State had no ability to restrict marriage to one man and one woman. As I understand it, being neither from Mass. nor American, the reasoning behind this had to do with the setting up of a second-class civil union status for other forms of marriage being unacceptable under the state constitution.
So the whining and bickering and spitting and rock-throwing continues. And the weeping, hair-pulling and gnashing of teeth on TV.
The religious folks feel quite put upon, after all. Ignore for the moment that the number of folks getting married is dropping like a stone. I can't be bothered to find actual links, but I hear that in the US the rate has dropped to 50% or so. In Canada somewhat lower. And in Scandinavia the rate of those living together actually being married is reputedly down around 20%. So marriage is under threat - not the threat that some seem to imagine these days, but the threat of disinterest in the institution of marriage or the institution of the Church.
But the religious right are so busy being belligerent about the whole thing they'll never notice the real enemy.
Neither will they nor the politicians come to the obvious solution: take civil benefits away from those entering into non-civil unions.
Wouldn't that be easy? This whole problem occurs because the State conveys rights and benefits to those entering into what is essentially a private relationship (marriage) conveyed by a private individual (priest). If the State only conveyed those rights and benefits to those entering into a civil union, then fine, straights could go down to the courthouse to enter into the civil union, and then wander over to the church to come clean with God. Non-straights could go down to the courthouse to enter into the civil union, and later go have whatever non-traditional ceremony they might want. And they can do so without reflecting anything at all on the sanctity of the private ceremony the church-goers believe is their duty to God.
And we all can stop worrying about people's private acts and private lives.
Well, except Michael Jackson's. *sigh*
For the record, I'm married, though I didn't get married until the US Immigration and Naturalisation Service (INS) required it of me. Happily, my partner was and is of the gender they appreciate. That is, not mine. Het privilege indeed.
[Comments from my previous blog]
Friday, February 13, 2004
Apples and Oranges
Those who have had the fortune (I waffle between good- and mis-) to work with me know I don't always take well to being asked the same stupid question more than twice. So in the hope of heading off a whole lot more unhappiness, I want to put this one to bed.
Why do people insist on asking whether J2EE technology performs better than .Net? Or scales better, or has better uptime, or whatever?
All these are useful measures of a product. And if there is one thing that Microsoft gets right in their marketing FUD against J2EE it is that ...
MS: "J2EE is not a product."
Me: Well Duh! It is a standard.
Not that Microsoft seems to know much about standards sometimes. So I'm a little unclear as to the point they are trying to get across with their 'not a product' criticism. There are over 25 products that implement that standard compared to what, 1 .Net? But I digress.
Performance and RAS (Reliability, Availability, Scalability) are great ways to measure and compare products, depending on your needs. And some product implementations of the J2EE standard will be optimised for different kinds of uses, and have RAS or performance or footprint differences that may help or hinder your desired deployment.
And that's what's so meaningless about making these sorts of comparisons between J2EE and .Net this way. The scalability of an application server intended and optimised for small-footprint embeddable use has nothing to do with J2EE; it is a product feature.
The Honda Accord is much more nimble than a tractor, and keeps the rain off better than a motorcycle. Visit your local Honda dealer today.
Simon Phipps