Wednesday, June 16, 2004

Microsoft Antivirus product?

I read this morning that Microsoft is going to sell an Antivirus product.

The first thing it does is de-install Windows. (buh-dum pah!)

But seriously, I'm certainly not the first to note that this is just a bit like MS charging to fix critical bugs it left in the system. And Dvorak's note that an antivirus tool is a way for MS to get an online footprint on their customers' machines that they can touch weekly is cautionary, especially in the context of MS' failed attempt to get in the middle of all transactions their customers conducted on the net (remember Passport?).

MS still wanna be Big Brother. At least the taxation part of that.

Monday, May 17, 2004

So much for kinder and gentler

MS says of their renewed foray into Search,
Microsoft efforts are so sweeping that painting its strategy as a simple matchup with Google is a "narrow, narrow way of looking at it."
This is classic MS. Not competitive? "But just wait, it's gonna be GREAT!" Freeze sales or adoption of competitors with "just around the corner" messaging, while they take a few years to catch up.

To be fair, many companies use tactics such as this, speaking of their next product. And I use it sometimes myself - I'll talk about J2EE futures, but usually to make a point regarding forward compatibility of applications, which is a current 'feature'.

MS just take it that much farther, talking about 2 or 3 generations down the road as "coming soon".

For what it's worth, I think there is plenty of opportunity to improve search technology. There are a number of searches I'd like to be able to do that current providers simply don't offer. If MS want to deliver a better product, have at it. But their performance to date in this area is less than stellar, and the sheer hubris of their statement above given their track record ...

"Fool me once, shame on you. Fool me twice ..."

Saturday, May 15, 2004

Outsourcing and responsibility

Hewlett-Packard is settling a claim with the government of Canada. But what I find interesting about this is their comments that their own employees aren't responsible, but instead that one of their subcontractors hatched a scheme to defraud both parties.

And this matters?

Businesses must be cognizant of their responsibility to provide a contracted service. Customers don't, and can't, monitor a supplier's internal arrangements for servicing that contract. There are whole businesses (general contractors) whose sole job is to manage these sorts of complexities.

To their credit, HP seems to accept this notion in that they settled the claim.


But even raising this as an excuse is laughable and worrisome. Is HP somehow less responsible because it failed to scrutinise its suppliers?

Is your medical insurer or financial company less responsible because the leak of your personal data happened in India rather than in their own office? Or is a software vendor less responsible for a back-door because it was inserted by a contract programming house in India or Russia or other grossly-stereotyped labour market?

I wonder how long it will take for some enterprising lawyer to argue for punitive damages in a confidential disclosure case in an amount comparable to the labour savings the defendant enjoyed in outsourcing the work in the first place.


[Comments from my previous blog]


1. a reader left...
Saturday, 15 May 2004 1:57 pm
 
Now that's a shameless title to get a high click rate.
Anonymous
2. glen martin left...
Sunday, 16 May 2004 4:41 am
 
If you say so. Personally, I thought and think it closely related to the content.


5. BPO Manila left...
Wednesday, 2 June 2010 5:51 pm :: http://www.oneworldconnections.com
 
Great post! I think that people are forgetting that outsourcing doesn't mean that you are handing over your entire company to an outsourcing partner. You are still the boss; you still need to constantly monitor the workings of the outsource company. As in all businesses, the leader or manager is still responsible if an employee does something wrong. The leader may not be directly to blame, but he has the responsibility of overseeing the entire operation, making sure that everything is going smoothly.

Wednesday, April 28, 2004

Language goals

What are language features for?

magpiebrain (love the name!) blogged:
Each language feature introduced that tries to enforce safety of some kind invariably introduces some reduction in the power of the language. Some of these tradeoffs seem acceptable.
Written as above, it would seem that the primary goal of a language feature was to restrict. My own view is a little different - the primary goal of language features is to focus. Focus attention on the critical parts of the problem domain being addressed by the language. Focus time on solving instances of those problem domains.

I have a photography hobby (that is sadly starved for time). When I'm composing images, I usually work in B&W and upside down, to turn off as best I can the parts of my brain that are hung up on naming objects and instead focus on their shapes and relationship to one another.

The same is true in programming as well, and my choice of language.

Prolog is interested in predicate logic, and so exposes Horn clauses as the way to state logic problems. (Whatever happened to Prolog? Does anyone still use it? I think I last wrote a Horn clause over 15 years ago.)

C was designed to solve operating system problems: communications, scheduling, isolation, etc. Pointers and pointer arithmetic allow easy iteration through lists of same-sized (ideally same-typed) things, like process info, thread info, malloc info. Buffer management is important for an operating system because it is tasked with marshalling the limited resources of the system - it needs to know how much memory is allocated, free, occupied by idle processes, etc.

The Java language and platform weren't really intended for this sort of thing. Business apps are more concerned with business and presentation logic. While it can be (and has been) said that pointers were removed for safety, I think safety is just gravy. Motivations aside, pointers simply aren't needed or useful for biz apps and presentation logic. If they had been necessary, you can bet that necessity would have won out over safety.

Taken to ridiculous extremes to bludgeon the point into submission, all of these problems can be solved in assembly language, but we'd never have the time to do so. My attention span is shorter than the overhead of working in that language (or class of languages, I suppose) for anything but a tiny problem.

Positive value trumps negative value.

Where magpiebrain was really going was to dynamic typing, and he says:
These dynamic languages can be seen primarily as enabling languages - they make the assumption that developer actually know what they are doing.
Expressing 'dynamic' as enabling is good, I like that (per the above). ;) But dynamism by and large hasn't been useful to me in focusing my attention on the problem I'm solving.

And as for knowing what they're doing ...

A developer can know what he's doing, but developers frequently don't - especially as their numbers grow, and as the number of years a project stays in development and maintenance increases. It is the institutional knowledge problem, or the buzzphrase we used to use, programming in the large. Teams aren't all that good at disseminating and maintaining knowledge.

Which means that so long as "knowing what they're doing" only requires local knowledge, fine.  But few problems are that localised, at least in my experience.

(Sam, if there was a trackback link, I couldn't find it. Apologies)


[Comments from my previous blog]


1. a reader left...
Wednesday, 28 April 2004 9:29 am
 
Yeah, I really need to display the trackback URL - the RDF gets spit out so it should really autodiscover. Anyway, found this via bloglines.

When I was talking about language features in the first sentence, I stated features introduced to improve safety, not all language features...

Sam Newman
2. Geoff Arnold left...
Wednesday, 28 April 2004 9:45 pm
 
In some sense, this is tautological: a feature introduced to enforce safety must do so by making some previously legal construct illegal, or by constraining the effects of some language construct. The interesting question is whether the constraint is significant or not; are there circumstances under which I might reasonably have wished to express that which is now inexpressible? In the case of Java and pointers, for example, I think that I can state fairly comfortably that in a memory model where there are no guarantees about how structured entities are mapped into memory and no general way for me to discover the mapping, pointers are pretty much useless and I lose nothing if I'm restricted to references.

Visit me @ http://geoffarnold.com

Sunday, April 25, 2004

Instant Solutions?

This morning I picked up the April issue of JDJ, and Joe Ottinger's editorial "Looking for Instant Solutions?"  (which doesn't appear to be freely available, so no link, sorry) caught my eye.

His premise is that solutions aren't valuable in large part due to their inflexibility. The same for programmers. Both are stuck in the past, nailed up to prior experiences that adversely colour their reactions.

In part I like this notion, because it explains why some very bright developers insist repeatedly that Enterprise JavaBeans are the wrong solution - it is that they aren't evaluating in terms of the current application and current capabilities of EJB containers so much as they are evaluating in terms of their prior projects and frequently version 1.0 EJB containers.

But Joe goes a little too far with this thought, appearing (after skipping a couple of steps of his article) to recommend the use of meta-solutions over solutions. That is, he calls for a framework of solution creators, from which application-specific solutions can be constructed on the fly.

Perhaps this is an evolution thing. His thought reminds me of accounts of the Renaissance, in which there was a vast explosion of creativity and advancement in design, thought, science. What there wasn't was much standardisation. And in the time between then and now, the processes of constructing things have evolved quite a bit.

The economics of software construction should be of major concern to developers today. Without efficient end-to-end software construction/deployment/maintenance/extension/etc, net return isn't there, and can't flow to the developers. Read: downward cost pressure, which leads to offshoring and other amusements.

The evolution of processes (in the previous sense of the Renaissance, construction, et al) has been driven by the same economics: these processes start off with artisans, and over time the grunt work is taken out (from one point of view), or the inefficiencies are removed (from another), or the room for creative craftsmanship is removed (from a third). Building a car, I no longer design my fasteners one-off. Only the Pentagon gets away with $500 toilet seats.

So long as we're still designing and building screws, we can't focus our brain cycles on the car. And it is the car that makes a profit, and pays our salaries.

So while I can understand a desire to use meta-solutions, I don't believe they are efficient or that they help the economics of software development and really allow the field to advance. I certainly agree that a single solution doesn't apply in all situations, but the solution isn't to design the solutions one-off, but is instead to develop a more flexible solution or group of solutions that cover a range of our activities.

My big red tool chest has 4 sizes of Phillips screwdrivers, and 2 Robertson. And this is good. I don't need to craft screwdrivers on my own.

Sunday, March 21, 2004

Re: Hindsight

Bill read my comments on the weakness of some treatments of XP and writes in response:
The suggested answer to the mentioned logging problem is to log asynchronously (which is a very good one, let's be clear on that). That's fine, but by the time you do design your way out for all functional and non-functional aspects of even a medium sized system, chances are you'll have bled the customer dry thanks to futureproofing and still they won't have what they need today. Assuming of course that your guesses worked out to be right.
This can be true, and in my experience it often is. Heck, I've been guilty of it. So I understand the problem.
But in between these two extremes is a middle ground of designing for flexibility. Building an async log for its own sake may well have been overkill for the immediate requirements, but the decoupling of log producer from log consumer, and use of a flexible transport, means that a wide variety of unseen futures are made simpler.
So my admittedly narrow reading of Glenn's entry is that he has a narrow reading of what XP is offering. ... there's a whole school of thought within XP of programming towards patterns (known solutions), that has not been mentioned or critiqued here.
That's fair. Certainly my comments were a narrow treatment. I too think there is a fair amount to learn from XP, and some XP thought applied with discretion very much helps some projects. It is just that I cringe when I read "develop the minimum" regurgitated without the surrounding thought, as I happened to do yesterday morning. The logging example was straight from that source. But perhaps I was guilty of the opposite sin. Mea culpa.

Saturday, March 20, 2004

Foresight

It has become fashionable in some circles these days (notably, eXtreme Programming and Agile Development) to say "code the absolute minimum for the problem (you know you have)."

Problem is, because these same folks often forego a lot of early design work (which they refer to as the Paralysis by Overanalysis antipattern), they often don't know what problem they are trying to solve. Or not completely, or not accurately or something. The solution they have for this is to "refactor often." Well, duh. This reminds me of the Monty Python "ant counter" sketch: "What do you feed them on?" "Nothing." "Then what do they live on?" "They don't, they die." "They die?" "Well, of course if you don't feed them."

I am forced to consider Pressman's research here. For those who may forget, Pressman looked at the cost to fix defects at different points in an application lifecycle, and recorded exponential cost growth as one moves into later phases of a waterfall process. If I remember the numbers correctly: setting the scale at 1 unit of cost for a defect found during initial coding, a problem found in design (before coding) cost about 0.75, one found in unit test perhaps 1.25, in integration 2 or so. Etc.

How is it that skipping the initial design will reduce costs? It won't, because refactoring costs. In fact, for me the only result that seems likely is that for any project too complex to keep all in one's head at once, eXtreme programming will increase costs by shifting defect correction to the right on Pressman's curve.

An overlapping group of folks as recommend XP also suggest that <some complex technology> is a sledgehammer and shouldn't be used for small problems.

But what is a small problem, especially if one doesn't know the requirements (well) and hasn't done (much, or any) design?  Moreover, how many applications we deploy today will never be extended? And if your application will be extended, just how will you go about this?

An example that is used to illustrate a 'flyweight' problem that shouldn't be solved with a complex solution is a logging facility. Instead there are a number of solutions out there that take a variety of approaches: one such is to write a log class that takes a couple of parameters (severity and message, for example) and writes them to a flat file or through JDBC to a database. Cool. Simple problem, simple solution.
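As a concrete sketch of that simple solution - the class name and tab-separated file format here are just illustrative, not from any particular library:

```java
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

// A minimal sketch of the "simple problem, simple solution" log class:
// two parameters, appended as one line to a flat file.
public class SimpleLog {
    private final String path;

    public SimpleLog(String path) {
        this.path = path;
    }

    public synchronized void log(String severity, String message) throws IOException {
        // Open in append mode so successive calls accumulate entries.
        try (PrintWriter out = new PrintWriter(new FileWriter(path, true))) {
            out.println(severity + "\t" + message);
        }
    }
}
```

Twenty lines, no dependencies - which is exactly why it looks so attractive at the start.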

So roll forward: your application has been deployed with its simple logging solution. Now someone else builds another application that wants to talk to yours, and wants to add entries to the same log, preferably in order. There are a few ways to accomplish this. One is to have the different applications each open the file whenever they want to add something, which assumes the filesystem is local to both applications. Or, if you used JDBC, that the database server is accessible, etc. Another solution is to add some sort of remote protocol (perhaps a web service interface) to the first application, to allow the second to send a log event. That's rather a bunch of code to add to that first application. Of course, now that there is a second application, you're going to have to think about what information to log - do you want to capture the application name as well, perhaps? So you find yourself now writing 3 fields instead of the 2 you wrote previously, updating all your old code to call the new log method signature, and perhaps converting your old log files.

More time goes by, and you've noticed that there is a recurring event in your log that requires special processing whenever it occurs. Perhaps a cracker is targeting your server and you want to know when it is happening, so whenever the log gets this event you want an email sent to your cellphone. If you've been using a log file, you could at this point write another program that watches the tail of that file, parses the text, and if a line matches a pattern, composes and sends an email. This is an expensive solution because it is a separate process, and it is parsing information that was probably composed out of separate fields previously. And again, it assumes the file is local and available.
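A sketch of that watcher, to show the re-parsing it has to do. The class name and pattern are illustrative, and the actual tail-polling and email step are left out:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

// The separate watcher process: re-parse each flat-file line and flag
// the ones that match an alert pattern. A real watcher would poll the
// tail of the file and, on a match, compose and send the email.
public class LogWatcher {
    private final Pattern alertPattern;

    public LogWatcher(String regex) {
        this.alertPattern = Pattern.compile(regex);
    }

    // Return the lines that would trigger an alert.
    public List<String> scan(Path logFile) throws IOException {
        List<String> hits = new ArrayList<>();
        for (String line : Files.readAllLines(logFile)) {
            if (alertPattern.matcher(line).find()) {
                hits.add(line);
            }
        }
        return hits;
    }
}
```

Note that the structure the producer had (severity, application, message) has to be recovered here by regex - it was thrown away when the fields were flattened into a line of text.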

The point of all this is that even something as simple as a log facility may wind up having very different requirements than those it started with. In fact even the assumption that the log only gets written, and that if it is processed at all, it is processed later, in batch, probably by a human, is suspect. And with every change in requirements came some refactoring, or rewriting, and a whole bunch of new code.

Returning to our ant counter, "I don't understand." "You let them die, then you buy new ones."

As if! Like we can toss old applications when requirements change! Those old COBOL applications are still hanging around, 30 years later!

By contrast, choosing a better-architected solution from the beginning increases some up-front work, but can reduce the overall complexity as time goes by.  My own favorite solution for logging is publishing messages to a publish/subscribe message bus.  The messages retain structure (that is, they stay as a set of attribute/value pairs), and they can be reused by new applications (for example, our monitoring function that sends an email), sometimes with less new code and without any change to the originator or any other participant in the log. And scalability as the number of processes filing log entries increases is very good, without writers being blocked by others' locks on the log file or database.
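A minimal in-process sketch of that decoupling. In a real deployment the bus would be a message broker (a JMS provider, say); the LogBus class and the field names here are illustrative only:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Producers publish structured attribute/value events; any number of
// consumers (a file writer, the monitor that mails alerts, some future
// application) subscribe without the producer knowing who is listening.
public class LogBus {
    private final List<Consumer<Map<String, String>>> subscribers = new ArrayList<>();

    public void subscribe(Consumer<Map<String, String>> subscriber) {
        subscribers.add(subscriber);
    }

    public void publish(Map<String, String> event) {
        // Each subscriber sees the event with its structure intact -
        // no flattening to text, no re-parsing downstream.
        for (Consumer<Map<String, String>> s : subscribers) {
            s.accept(event);
        }
    }
}
```

The email monitor then becomes just another subscriber that filters on, say, the "severity" attribute - no change to any producer, and no tail-watching or re-parsing.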

To my mind, this is true agility - easier extensibility and stable functions (small changes in inputs mean small changes in outputs).

Oh, and if you develop the more extensible logging structure once, you can more easily reuse it for your next application - the code to add logging to the next app just formats that app's attribute/value pairs into a message, perhaps adds a new message channel to the messaging system, and adds some filter on the message bus to listen for those new messages and save the contents. Not hard, and distribution and scaling come for free on this second use.

The real problem seems to me to be that XP is going out of its way to downplay foresight and planning. In a world in which applications frequently, if not invariably, are extended to do more than originally intended, solving for the minimum is frequently not a good choice.

XP may have been a good solution to the bubble economy and 'internet time', but I think 'internet time' has turned out to be the wrong problem to solve.  Adding real value seems to be coming back into fashion.


[Comments from my previous blog]


1. a reader left...
Sunday, 21 March 2004 2:47 pm
Where XP excels is when the solution is not clear. This happens frequently (because problems that have clear solutions very often have already been solved -- your logging as a case in point).

Just as you suggest, there are massive benefits to be gained from correctly designing up front. This is the lure. But no matter how well you model the current requirements, if either you or your users don't understand the problem completely, your model will be wrong. The more up-front design you did, the more that costs.

So, where the problem is understood, use patterns, designs, libraries etc. that have probably already been made to solve it. Where the problem (or solution) requires a few iterations to solve, build only what you need right now.

fletch