It's 1994, St Petersburg, the offices of software development company AO Saturn. A 24-year-old programmer, Vladimir Levin, allegedly masterminds a virtual bank job, netting a cool $10 million from Citibank. (Hacking underground lore still claims that another Russian hacker, known only as Megazoid, performed the hack, armed with a $10 computer, modem and vodka.)

The bank eventually reports the infringement to the police, and Levin is arrested a year later at Heathrow. Citibank recovers most of its money, but the cost to the bank's reputation is incalculable. Other banks learn this hard lesson, and reporting cyber crime becomes taboo – unless conviction is guaranteed.

Although most would rather sweep the issue under the rug, we know security infringements cost even more today than they did in 1994. According to the latest Computer Security Institute/FBI survey, the loss of proprietary information cost respondents more than $70 million in total in the past year. Coming in second for costly security headaches were the so-called “denial of service” (DoS) attacks, at $65.6 million for the year – down ten-fold from 2002. Among the 233 survey respondents that could put a monetary value on their losses, a good half-billion dollars was reported lost.

Where are the hackers?

But the most worrying stat is that only 33 percent of respondents reported incidents to the authorities. According to the findings, 53 percent of those who answered a question about why they didn't report incidents claimed they didn't know they could. So while, on the one hand, lax security is a costly risk for a business or government agency, on the other, knowledge about security is low – even in the US.

So how does South Africa match up to the US? Not too well, according to former hacker Kriek “Kokey” Jooste. Says he: “[The level of IT security in corporate South Africa is] poor. I think it reflects the level of IT skill available. England is slightly better since economic conditions allow English companies to spend more on security than South African companies can. England itself is still behind a lot of other countries, including the US, Germany and Finland.”

Total onslaught

Security expert (and hacker in the original MIT sense of the word) Andrew G Thomas of Hobbs & Associates recently scanned the co.za domain space to test South Africa's commercial website security.

“In excess of ten percent of the 11 000 active name servers running at that time were vulnerable to various attacks,” says Thomas. One well-known hack method “could have, at the time of this audit, taken out in excess of 30 percent of the primary websites hosted in the co.za domain space”.
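
Fingerprinting is typically the first step in a scan like this. As a minimal sketch – in Python, with a placeholder server address, and not a claim about the tooling Thomas actually used – one classic probe is a CHAOS-class TXT query for "version.bind", which older BIND name servers answer with their exact version string, ready to be matched against known-vulnerable releases:

import socket
import struct

def build_query(name: str, qtype: int, qclass: int) -> bytes:
    # DNS header: fixed ID, recursion-desired flag, one question.
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, qclass)

def version_bind(server: str, timeout: float = 2.0) -> bytes:
    # TXT (type 16) query in the CHAOS class (3) for "version.bind".
    # Older BIND servers reply with their version string in the answer;
    # parsing the raw reply is left out of this sketch.
    query = build_query("version.bind", 16, 3)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(query, (server, 53))
        reply, _ = sock.recvfrom(512)
    finally:
        sock.close()
    return reply

if __name__ == "__main__":
    # "ns.example.co.za" is a placeholder, not a server from the audit.
    print(version_bind("ns.example.co.za"))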

Jooste, who has worked both in South Africa and abroad as a security expert and was once notorious for being the country's top hacker, adds: “While the majority have 'wised up' to previous security threats and are protecting against them, new threats have appeared which haven't been dealt with yet. Security is constantly changing and people need to keep up with the change.

“IT security in South Africa can be improved by reducing the time it takes to adapt to new threats. This doesn't mean spending on new technologies, but rather spending on keeping the necessary skills up to date.”

Companies perceive security as a complex and expensive animal. It has to keep out multiple types of attack, at multiple entry points and from multiple sources.

Hackers, by contrast, have the freedom of low barriers to entry (a PC with an Internet connection), of selecting targets at random and of spending weeks or months attacking a single target. Defence against this anonymous and all-encompassing threat is going to be pretty intricate.

Not so, maintains Dr Andrew Hutchinson, a member of the T-Systems solutions crafting team. “The best approach is keeping it simple,” he says. “One of the enemies of security is complexity.”

Hutchinson goes as far as to propose Linux as a possible security solution – not because it is free, but because it is easily configurable. “If you can keep the modules clear and defined, you can make a better solution. If you look at something like Linux, you can take out the modules you don't want to install. It's much simpler in Linux to leave out a particular service, like the mail server.”
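
The payoff of that modularity is easy to verify. As a rough sketch in Python – reading Linux's /proc/net/tcp, with the port 25 remark below purely an example – an administrator can list which TCP ports a machine actually listens on and confirm that an omitted service really is absent:

def listening_tcp_ports(path: str = "/proc/net/tcp") -> list[int]:
    # Each row of /proc/net/tcp describes one socket; the "st" column
    # holds the state, and 0A means TCP LISTEN. The local address is
    # hex "ip:port", so the port is parsed from the second field.
    ports = set()
    with open(path) as fh:
        next(fh)  # skip the header row
        for line in fh:
            fields = line.split()
            local_address, state = fields[1], fields[3]
            if state == "0A":
                ports.add(int(local_address.split(":")[1], 16))
    return sorted(ports)

if __name__ == "__main__":
    # Port 25 in this list would suggest the mail server was installed after all.
    print("Listening TCP ports:", listening_tcp_ports())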

Mike van den Bergh, MD of Gateway Communications, agrees. “The simplest systems provide the best security. When a system is unusable, people start to look for ways around it to make it useable. You've got to have something that is intuitive and built-in automatically.”

Get some culture

But while simple is best, don't confuse a simple system with a simple solution. Security often falls down because corporate entities don't see it in a holistic way. Security isn't about a “point solution” like a firewall or a virus scanner. It should permeate a company's culture.

Thomas believes small, medium and micro enterprises (SMMEs) are more vulnerable than corporates to the lure of point solutions.

“At an abstract level, larger corporations have a significantly greater awareness of the value of information security practices and skills, brought to them and sold to them by both the media and by the larger audit firms and their related consulting arms,” he says.

“The SMME market by comparison shows greater exposure to information security risks and here information security management is often performed poorly, if at all. Additionally, SMMEs do not have the advantage of being able to afford the level of staff specialisation that larger companies have, instead relying on general staff or outsiders. They are particularly vulnerable to the sales pitches of unscrupulous or ignorant vendors that advocate a particular product or style of solution, as opposed to an integrated and holistic risk management approach.”

Jooste agrees: “Technology vendors love to tell people that they can give you a complete solution in a box. In most situations even the simplest of technologies are more than adequate, if configured and managed right. Apart from having the right skilled people in place, in-house or outsourced, companies need to ensure that everyone partakes in the IT security function of a company, through education and awareness.

“I think that, coming from a holistic point of view, security is everybody's responsibility, because your security is only as strong as the weakest link. Companies need to also look at user education,” says Van den Bergh.

Ultimately people are the weakest link.

In Gates we trust

Microsoft's trustworthy computing initiative is getting up steam. But will it have the desired effect in the market?

“Trustworthy computing” is the latest bit of jargon buzzing around the security community, and it is widening the rift between open source protagonists and what they dub The Man – otherwise known as Wintel (the Microsoft/Intel partnership).

The term originated among Microsoft execs, catching public attention with Bill Gates's memo in January 2002, announcing: “We must lead the industry to a whole new level of trustworthiness in computing.”

Since the fateful day on which Microsoft halted all development to send its staff on secure development training, the phrase has come to represent Microsoft's “secure by design, default and deployment” slogan. It's become more than just a marketing placebo.

The ultimate goal is to get to what the industry calls a trusted computing platform – a combination of hardware and software guaranteed to be secure, and for which Microsoft's alliance with Intel will be very valuable.

John Pescatore, an analyst at research company Gartner, expects trusted computing platforms to start becoming a standard PC feature around 2005, but believes they will only reach critical mass in about 2008. The trusted platform will also start appearing in cell phones and PDAs around 2008.

Good news, bad news

Pescatore believes the business impact of this Wintel initiative will be the introduction of “new business models enabled by digital rights management, safer use of public computers for employee remote access and stronger intellectual property protection”.

It sounds like pretty good news for the end-user. Early trustworthy products leave little doubt that they're more secure than their predecessors. So why are open source pundits up in arms?

Firstly, Gates's statement in his memo that “no trustworthy computing platform exists today” was a bit of a slap in the face for alternative operating systems – open source included. Whether Linux and its relatives are trustworthy is debatable and depends on one's definition of “trustworthy”, but few doubt that Linux has trounced Redmond on security, and fewer still will dispute that it has won the perception battle.

Linux distributions usually include heavy-duty security tools, and deployments are by habit more secure. Linux vendors tend to warn users that their installation choices could create security risks.

Linux supporters also argue that open source is the ultimate in secure design. Thousands of skilled users can assure themselves – and others – of the robustness of Linux`s security, and even choose to improve the source code themselves.

Civil rights lawyers, privacy pundits, open source programmers and free speech fans also harbour justified fears about hardware-enforced digital rights management, as proposed by the trustworthy computing paradigm. It could limit the utility of what used to be personal, all-purpose computers, and would implement copyright and patents in a manner substantially removed from their original legal basis, prohibiting fair use and sidestepping statutory expiry provisions.

But as much as Linux users might protest Microsoft's claim to being the pioneer of trustworthy computing, such sour grapes do not affect the validity of Microsoft's security initiative in principle.

What might have a greater effect is Microsoft's own users' reaction to trustworthy computing. While some will no doubt embrace the change with open arms (and maybe a few comments of “about time”), two opposite ends of the user scale could be negatively affected by the development.

The first is the “power user”, whose primary concern with trustworthy computing to date has been the automatic installation of patches – small applications that Microsoft sends out to users regularly (a little too regularly, some argue) to fix specific problems with the operating system. The patches, however, have developed a reputation for causing more problems than they remedy, making systems administrators unwilling to install a patch until it has been comprehensively tested in their operating environment.

Kicking back

The second type of user Microsoft might alienate is on the opposite end of the scale – the non-technical user. Microsoft has made its name by offering operating systems that work out of the box. But, by its nature, trustworthy computing means most services don't work out of the box. Users will have to install and configure every service they want to use, which is not an ideal situation for the unqualified.

Whether these two extremes will kick back against the changes or accept them as the price of higher security remains to be seen. It will be a good test of whether the “trust” in trustworthy computing is taken to heart – not only by Redmond, but by Microsoft's customers too.

Microsoft's turning point

Windows 2003 is no doubt the most secure operating system from Microsoft to date, but it still needs some TLC.

Let's face it. Microsoft has a pretty bad track record for security. Gartner puts security (and lack thereof) as one of the three factors driving what it calls “the anti-Microsoft movement”.

“Many governments are unhappy with its aggressive strategies (which have garnered significant antitrust investigations and actions) and a less-than-perfect record in software quality, security and privacy. The Microsoft juggernaut has, at times at least until recently, seemed oblivious to the growing antipathy shown by some of its previously loyal customers – especially in countries outside North America,” says Gartner bluntly.

Partly, of course, Microsoft is the biggest target for the simple reason that it makes by far the most common desktop operating system in use.

But even so, its security record is about to change. At least that's what Microsoft tells us. Its first product to roll out under the trustworthy computing banner is Windows 2003 Server, and Microsoft is hoping that this product will convince concerned customers eyeing Linux that all is well (and safe) in Redmond.

“Secure by design, secure by default, and secure by deployment” is the rallying cry, heard from Seattle to Sunninghill. From what we can tell from the outside, Microsoft has taken this motto to heart. So far, Standard Bank, Ster Kinekor and the Professional Provident Society (PPS) have migrated to the new product and, according to Danny Naidoo, director of Microsoft SA's .Net and developer group, they are “thrilled” – in the positive sense of the word.

Password: Password

Naidoo takes us through some of the new security features of Windows 2003: “The attack surface is reduced with fewer services running by default. For example, you need to explicitly run IIS (Microsoft's web server and a common point of attack) on the server. Administrators can't create an administration account with the password 'password', or with a blank password.”
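
The rule Naidoo describes is easy to picture in code. The Python sketch below is purely illustrative – the banned list and function are hypothetical stand-ins, not Microsoft's implementation – but it captures the idea of refusing blank or trivially guessable administrator passwords:

# An illustrative banned list; Windows 2003's real policy is not public in
# this form and certainly covers far more than these examples.
BANNED_PASSWORDS = {"", "password", "money", "admin"}

def acceptable_admin_password(candidate: str) -> bool:
    # Reject blank passwords and anything on the banned shortlist.
    return candidate.strip() != "" and candidate.lower() not in BANNED_PASSWORDS

if __name__ == "__main__":
    for pw in ("password", "", "money", "xK9#mQ2v"):
        verdict = "accepted" if acceptable_admin_password(pw) else "rejected"
        print(repr(pw), verdict)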

Microsoft has also committed itself to fixing bugs and making patches available to users faster than before. It went through every line of code looking for vulnerabilities. It has built in a raft of open- and closed-standard authentication protocols. It offers encrypted file systems, smart card support, wireless LAN security and a built-in sandbox for applications running on the common language runtime platform. Compared with previous versions of Windows, Microsoft has made a momentous effort to live up to the trustworthy computing cry.

But before you toss out your firewalls and sell your anti-virus licences, there is some bad news. First and foremost, Microsoft is only providing companies with the tools to make a more secure system.

“We see security as a shared responsibility between ourselves and our customers,” says Naidoo. “We will need the customers to act and use the products the way they are designed to be used.”

This is what Microsoft calls “secure by deployment”. If customers don't implement patches and fixes, leave unused and unsecured ports open, or use “money” as their administrator password, there isn't much hope for them.

Microsoft hasn't made any claims that its platform is “bug-free” or “unhackable”. (Oracle went down that road and its unhackable platform was duly hacked.)

And a complaint lodged with the Advertising Standards Authority by a concerned South African journalist shows that it won't do to claim that Microsoft will make hackers extinct.

In fact, a couple of security holes in the new OS have already been discovered and posted to NTBugTraq, a security watchdog community that alerts Microsoft, customers and hackers alike to potential security holes.

Although we have not had the opportunity to review the product ourselves, we did spot a worrying glitch on a local site running Windows 2003 recently. The site, hosting a large local retailer, defaulted to showing the ASP.Net source code whenever it had an error on a page. This was more likely a configuration fault than a Microsoft bug, but it does show that a Windows 2003 deployment is only as secure as its configuration.
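
A fault like that can be spotted from the outside. As a hedged sketch – the URL is a placeholder and the marker strings are illustrative guesses at what a default ASP.Net error screen contains – one could fetch a page known to fail and scan the response for tell-tale text:

import urllib.error
import urllib.request

# Strings typical of an ASP.Net error page that exposes server-side detail;
# treat them as heuristics, not a definitive signature.
MARKERS = ("Server Error in", "Source Error:", "<%@ Page")

def leaks_server_detail(url: str) -> bool:
    # Error pages usually come back as HTTP 500, so read the body from
    # the HTTPError as well as from a normal response.
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            body = resp.read().decode("utf-8", errors="replace")
    except urllib.error.HTTPError as err:
        body = err.read().decode("utf-8", errors="replace")
    return any(marker in body for marker in MARKERS)

if __name__ == "__main__":
    # Placeholder address – probe only sites you are authorised to test.
    print(leaks_server_detail("http://www.example.co.za/broken.aspx"))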

Some believe that Microsoft is making its products too secure, compromising too much functionality and ease-of-use.

“All organisations seem to be becoming more aware of the importance of releasing software that is more, rather than less, secure by default. Regardless, there is and will for the foreseeable future be a constant tension between ease of use and security, with end users traditionally preferring technologies that work 'out-of-the-box',” notes Andrew G Thomas of Hobbs & Associates.

“I think Microsoft will be pressurised to remove the security and make it optional again,” opines Mike van den Bergh, MD of Gateway Communications. “Until legislation for seatbelts was introduced, the flashing lights and beeps to encourage you to wear the seatbelt didn't help. I think they may have to backtrack on trustworthy computing, but I'd be happy if I'm wrong.”

Continues Van den Bergh: “You can make people aware of security, and people must understand the consequences of ignoring secure practices – that's where it comes down to education.”

Education is one facet of trustworthy computing on which Microsoft can truly be commended. At the moment, the focus is on third party developers and administrators, but Naidoo says the material includes formulating security processes and procedures for staff, and communicating these throughout the company.

“It's not just about product,” says Naidoo. “We need to create a trust between us and our customers and us and our partners so that they see us as trustworthy enough to do business with. We are also changing our culture – we're transforming an organisation that's 55 000 people strong. You can't do that overnight. I think this programme will run over a decade or so.”

Described famously by Microsoft's Steve Ballmer as a cancer, open source software is challenging strongly in the low to mid-tier server market and beginning to make its presence felt, albeit lightly, on the desktop. Adoption at the high end still appears slow, however, as corporate users worry about issues concerning support, accountability and availability.

Critically, however, open source has been recognised by the traditional hardware vendors, most notably and vociferously IBM, as well as by many influential application vendors.

Anton de Wet of local Linux solutions provider Obsidian Systems believes the influence of IBM throwing its considerable weight (and around $1 billion) behind open source in general, and Linux in particular, cannot be overstated; it has lent the platform growing credibility as a workable alternative. He says big strides have been made in the high-end space in the US, but uptake in the South African market is far slower.

“Over the last couple of months we've seen some movement from the technologists at the bigger vendors, but the local market still lacks the evidence of a major implementation on large, mission critical systems. Once someone has taken the plunge we will see Linux take off in this sector.”

Stephen Owens of enterprise solutions provider Epi-Use Systems agrees that Linux has the potential to make a huge impact on the high-end market.

“IT managers feel more comfortable because Linux has established a very good track record in terms of stability and security. All the high-end server type applications have long been capable of running on Linux,” he says. “Now that the major vendors are embracing it, it will become more mainstream.”

Linux unlimited

Inus Gouws, a consultant at Computer Associates (CA) Africa, is also convinced Linux has moved beyond its “cheaper option” status. “Linux is not limited at all. The only limit that Linux has is resources. Yes, it runs on lower specification hardware, but when you consider it has the capability of running on clustered environments, the whole picture changes.

“The bigger the resource pool the more capable it becomes. You can now have a distributed environment with mainframe availability and reliability. This is very good news for the mainframe crowd as operating system and hardware changes also affect them.

“Indeed,” adds Gouws, “some organisations leverage the power of Linux to run their high-end servers, again lowering the costs. These servers are usually clustered environments and have hardware specifications second to none, hosting applications that range from web servers to mission-critical medical information centres and online e-business transactional applications.”

Thomas Black of the Shuttleworth Foundation believes the advance of Linux in the server market is not limited to low-powered boxes; it is increasingly taking over from high-cost utility servers. Linux has experienced most of its growth in Unix territory, he explains, because of the similarities between these operating systems.

Battle for the desktop

“Whereas Unix was previously restricted to expensive, high-end hardware, Linux introduced Unix-like power and functionality to low cost systems. Linux, in turn, has seen a steady migration from lesser hardware to high-end systems, allowing a single operating system to be run across the spectrum of available hardware, thereby placing less emphasis on the hardware itself,” Black says.

“Linux facilitates clustering and grid computing for lower spec hardware to achieve the same results as a high-end mainframe, but at a lower cost.”

Open source software's future on the desktop is, perhaps, less clear but always the cause of heated debate – much of which is driven, rather unproductively, by anti-Microsoft sentiment.

While it can be argued with some validity that open source desktop options offer much of the basic functionality of the ubiquitous range of Windows offerings, it is hard to imagine them posing a serious challenge to Microsoft's dominance for some time.

No new tricks to learn

Black believes there is still a long way to go before open source becomes competitive on the desktop, particularly in the home user environment. He points out, however, that there has been strong development over the past two years and that current options can be effectively rolled out in mass deployments that are centrally managed. This means companies can consider alternatives to Windows for their fleets of PCs – which will familiarise employees with open source.

Obsidian's De Wet agrees and, while he concedes that open source operating systems are not necessarily better than Windows as yet, is confident they will get there in the next couple of years. (That's assuming Microsoft stands still, eh Anton? Just kidding.)

IBM South Africa's Aubrey Malabie is more confident: “With current versions of Linux, there is not even a paradigm shift for users moving from another desktop to a Linux variant. It's not as if users have to re-learn skills or change the way they work.

“There are differences, though, between what most users are used to today and what a Linux desktop offers. The biggest difference is that with a Linux desktop you scale up, and users can realistically expect more from their systems in comparison with what they use today,” he says.

The question of licensing is important, but not as important as many open source disciples would have us believe. Research house Gartner estimates the licensing of software accounts for just eight percent of total cost of ownership.

However, says Anton van der Berg of Linux proponent Bisart, for small companies faced with the spectre of having to license illegal software quickly, or face the wrath of the Business Software Alliance, it becomes an issue.

“Our experience shows that if you look at the average small South African business, running around seven PCs, perhaps only three of these will be fully licensed. This is where Linux becomes an option,” he says.

Adds Obsidian's De Wet: “Any switch to Linux should be slow and measured. Licences usually come up for renewal in a three-year cycle, so companies that are up to date and wish to make the change for other reasons would be advised to use the time to plan ahead for the migration.

“I would recommend beginning with OpenOffice as a pilot and then, if that is found to be acceptable, moving on to a full Linux implementation,” he says.

Despite (or perhaps because of) the hype and publicity generated by open source software, a number of myths have arisen, both positive and negative, that require examination. The most widespread, not surprisingly, centres on cost savings, and has largely been perpetuated by the perception that open source software is free as in gratis, as opposed to free as in free to adapt and distribute, and free from lock-in to proprietary standards.

A bogus argument

While it is true that much of the open source software available is cheaper than proprietary alternatives, credible open source proponents have moved beyond citing this as a reason for adoption.

Comments Epi-Use's Owens: “Cheap is a bogus argument. Instead, the real benefits of open source – the ability to spread the adoption of open standards, the robustness and inherent inter-operability of the software, and the availability of hundreds of thousands of people in the market to test it – are attracting the interest of companies.”

Shuttleworth Foundation's Black agrees: “Price is getting less and less important. Now, more emphasis is being placed on the freedoms – not being locked into a particular product, the ability to adapt software as your needs change.”

One query often raised concerns the availability and quality of support for open source software. While it is true that companies using the truly free distributions will have to rely on the open source community for support, Gartner stresses this is not necessarily a bad thing.

“Enterprises in some more-remote geographies point out that open source support can be better than what they've been paying vendors for. However, enterprises that require professional support for their client OS will need to pay for it. These costs may work out to be less than the cost of a Windows licence and support, but they need to be understood, and not assumed to be zero,” the research house says.

Owens concedes it is probably fair to say there is not as much support for open source as there could be, but points out that in most countries there are a number of companies that offer support services.

“Globally, the big vendors like IBM, Oracle and HP all offer Linux support as part of their overall offerings,” he adds.

“When you start talking about the lesser-known open source products, then you can argue there's less support. Conversely, the argument can be made that the technology is so open that you require less support.

“If something goes wrong you have full access to the source code and the availability of the community that developed it. It does, however, remain one of the challenges to open source adoption, if only from a perception point of view,” Owens says.

Another myth, common to users of desktop applications, is that extensive retraining is not necessary. Not so, says Obsidian's De Wet, who adds that the misconception is also prevalent among Windows power users.

So much for the myths, but what other, real, factors should proponents of open source software consider when they try to persuade companies to come on board?

Mark Rotter, principal analyst for software, IT services, telecoms and networking at the BMI-TechKnowledge Group, believes it is essential they understand what their enterprise customers feel is important about software.

Beyond bogus

Most companies, he says, are firstly concerned with the financial benefits to be gained from implementing new software, be this in new income or improved cost and process efficiencies.

Other factors to bear in mind include business benefits in terms of help with day-to-day business challenges, usability of the technology, and the introduction of predictability into environments plagued by human error.

Rotter believes South African open source software vendors have not yet found a sound business model, but adds that the introduction of web services should see more rapid adoption, driven by Linux, over the next three years.

And where's the money to be made? In the short term, says Rotter, the main areas will be web content management, basic Linux implementations and support, and some consulting work.

So, the penguin marches on, now with the support of many of the major vendors. Will it eventually become dominant? Probably, but that's still quite a way off, particularly on the desktop. There are still a number of advantages that lie with proprietary software and, perhaps, it's appropriate to give the final word to Microsoft SA's Danny Naidoo.

“Our software gives the customer several value benefits, such as our industry-leading R&D investment, market leadership, reliability, accountability and commitment to improving our service capabilities through an ever widening and improving partner base.”

Government – the new disciple of open source

It is perhaps ironic in the light of the open source software movement's “anti-establishment” roots that governments around the world, particularly in developing countries, are fast becoming fervent converts. The South African government is no exception.

Open source and proprietary software have co-existed in government IT infrastructures for many years, but the new millennium has seen more and more countries adopting measured strategies that will free them from their reliance on commercial software vendors.

At first glance, this can be explained by a desire to save costs and, in the developing world, foreign currency; but there are wider considerations. Research group Gartner has identified a number of issues behind this public sector flirtation with open source software:

* A reaction to the cost implications of new, fixed-term software licence fees introduced by several large commercial software vendors;

* Significant lobbying activities by commercial vendors that support open source software as a business strategy;

* Anti-trust cases that have raised the profile of Microsoft as the software industry's most dominant vendor;

* The realisation by several governments that technology expenditures have not benefited local players, but rather foreign, mostly US-based, vendors;

* Heavy investments in e-government made without ascertaining their sustainability over time;

* The “perceived” savings, ease of implementation and flexibility that many governments associate with open source software;

* The widening of choice in “good enough”, supported open source products.

All this is true enough, but, argues Epi-Use's Stephen Owens, governments of countries like South Africa, Peru and Brazil have taken the open source debate to a higher level. “Open source is viewed as a way to solve social and socio-economic problems and these countries have adopted a more philosophical approach.

“In Peru, for instance, they've taken a constitutional standpoint on open source. They're trying to ensure that information and data is available for the future and, therefore, believe they can't be locked into proprietary software.

“Also, this information has to be available to their citizens, who have to be able to access the data and be protected from any malicious intent. They have to have access to the source code to ensure there aren't any 'bugging devices' in their software. Out of this flows the demand for free software.”

National interest

Gartner confirms that much of the proposed preferential legislation for open source software is fuelled by long-term strategic objectives, often expressed in terms of “national interest”.

“Open source software is initially seen as a shortcut to technological independence in terms of satisfying internal technology needs with local skills and resources, while at the same time building a basis for future service and product exports,” the research house says.

“For some of the emergent economies in Latin America or Africa, the ability to introduce IT more widely – in schools, businesses or the public sector – is limited, to a large extent, by up-front software costs. A preferential attitude toward open source software is justified in terms of narrowing the 'digital divide'.”

This certainly appears to be reflected in the South African government's approach to open source. According to Minister of Public Service and Administration Geraldine Fraser-Moleketi, developing countries like South Africa spend billions on software licences – billions of dollars in valuable foreign exchange that, she believes, could be used to build houses, roads, hospitals and schools.

“Not only will we save taxpayers' money directly but, because government is the country's largest IT user, its adoption of open source is expected to act as a stimulus for adoption in other sectors.

“Open source has the potential to improve the cost and speed of service delivery and thereby efficiency in the public service. It can also have a positive impact on quality. Existing open source software can be obtained at low expense and then redistributed widely without further payment for licences,” adds Fraser-Moleketi.

“This creates a potential for significant cost saving. Furthermore, because different vendors all have access to the source code, they can compete to sell their support services, exercising downward pressure on prices.”

State Information Technology Agency (Sita) group CIO Mojalefa Moseki claims government has already made significant savings on licensing, software procurement, support and upgrades through the use of open source.

Government, he adds, has also benefited from increased levels of security and improved response times. “Because the software is supported internally, software errors and support calls are responded to more quickly. We believe open source software is as good as, if not better than, commercially available software. In many cases, it is more stable and more reliable,” Moseki says.

It is clear the expectation is that the absence of up-front licence fees and the availability of community-based support can lead to lower costs. However, Gartner warns that while open source software has some obvious acquisition cost advantages, adopters would be wise to investigate the longer-term total cost of ownership.

“Additional outlays for maintenance and support may negate any licensing cost savings,” the research house says.

Nhlanhla Mabaso of the CSIR's Open Source Resource Centre believes proprietary vendors have unwittingly popularised open source software.

“There's certainly more to open source than licence fees: we have to focus on the total cost of investment. This must include the benefits of investing in the development of our people and our economy as opposed to a strict financial cost approach,” he says.

Mabaso stresses that the South African government's approach to open source is not prescriptive. Rather, he says, it is aimed at eliminating discrimination and levelling the playing fields.

Prescription and ignorance

“There are usually two factors that limit choice – prescription and ignorance. In South Africa people were failing to exercise their right of choice because they were both ill informed and misinformed. You must remember it is often easier to opt for a well-established foreign vendor`s solution than expend the effort investigating less publicised yet viable alternatives,” he says.

CSIR CEO Sibusiso Sibisi confirms that government should not be construed as campaigning against proprietary offerings. “Government needs to investigate open source software as an alternative. In some cases proprietary software may be preferred, and in other cases open source software.

“Government also needs to encourage open source software development activities. It must not enter into a debate taking entrenched positions. But we do object strongly to people who offer proprietary solutions and criticise attempts to implement open source software solutions,” he says.

Epi-Use's Owens welcomes government's 'middle of the road' approach. “A government strategy like this stimulates the economic environment in that far more companies can now become players in the field. Black empowerment companies in particular stand to benefit greatly,” he says.

There can be little doubt that government and the public sector are an ideal breeding ground for the expansion of open source software, but it must be remembered that the caveats that apply to its adoption in the private sector are equally relevant.

An animal of a different kind

Uninitiated adopters of open source software, seeking to be free of the licensing burdens imposed by the proprietary vendors, confront an animal of an altogether different kind – the GNU GPL.

Licences associated with conventional proprietary software are relatively easily explained – despite their length and complexity. They're there to protect the developers' intellectual property, investment in R&D, market share and, to a lesser extent, ability to generate revenue.

Licensing requirements of the open source movement, on the other hand, are driven by altogether more altruistic motives.

In order to understand this, it's worth taking a step back in history to 1984, when Richard Stallman, a researcher at the MIT AI Lab, started the GNU Project – the name a self-consciously recursive acronym for “GNU's Not Unix”.

The thinking behind the GNU Project, and that of its umbrella body the Free Software Foundation, is simple – a belief that software source code is essential to advancing the discipline of computer science and, in order to encourage innovation rather than market domination, should be free.

Stallman was not naïve enough, however, to dismiss the threat of companies snapping up the code for profit, and instituted the GNU General Public Licence (GPL) to prevent this.

The GPL is designed to enshrine the freedom to distribute copies of free software, open up the source code to those who want it and allow the adaptation of the software, or the use of pieces of it in new free programs.

It replaces the standard copyright agreement with what Stallman dubbed “copyleft”, an idealistic scheme that, while not prohibiting the sale of software, is aimed at preventing monopolism.

Obsidian Systems' Anton de Wet explains: “The GPL can be described as a 'viral licence' in that it ensures that everything you do has to be available to the community at large. It is one of the main reasons behind the rapid development of the open source movement.”

The GPL is not the only licensing model covering open source software. According to research house Gartner, it is now generally recognised that a valid open source licence has to comply with the definition laid down by Eric Raymond's more utilitarian Open Source Initiative (OSI) (see www.opensource.org) in that it:

* Allows free redistribution

* Provides access to the source code

* Allows modifications and the creation of “derived works”

* Protects the integrity of the author's source code

* Does not discriminate against persons, groups or fields of endeavour

* Applies automatically and without a signature

* Does not “contaminate” other software.

“In commercial licence agreements, the enterprise acquires only the right to use the software; it does not take intellectual property ownership from the software vendor. Likewise, with an open source licence, intellectual property ownership stays with the original holder,” Gartner says.

The success of the GPL has made it the dominant open source licence model, but some – the Berkeley Software Distribution (BSD) licence for instance – are even more liberal.

Under the BSD-type licence, users basically can do what they want with the code as long as they credit the original developer. Groups like the Apache Software Foundation, custodian of the eponymous web server, believe this is the best type of licence for those looking to further extend an existing commercial project.

Mix and match

Under the BSD licence, a company can mix and match the software with its existing proprietary code, only releasing what it feels will further the development of its own goals.

Gartner believes that enterprises must understand this crucial difference if they need to go beyond simply using open source software or making modifications for internal use.

“This is particularly important for software vendors planning to extend open source products,” the research house says.

Who you gonna call?

Advocates of proprietary software are often quick to point out that an open source licence offers no protection in terms of warranties to the user. This is generally true and, in theory, makes it impossible for a company to sue for damages when the software breaks down.

Gartner, however, points out that this should be seen in perspective. “Typically, the warranties in commercial software only guarantee that the software will 'perform substantially in conformance with documentation', with no express fitness for any purpose, and that 'reasonable efforts' have been made to ensure it does not contain viruses.

“These scant protections are also often limited to 90 days after receipt of the software. In many cases, the warranty has expired before the enterprises have even thought about deploying the software. Finally, the remedy, if warranty is breached, is usually limited to recovery of the licence fee.”

(It's worth noting that proprietary vendor indemnity against third party copyright claims and the like can generally be viewed in a similar light.)

While the freeing up of source code to allow adaptation and free distribution of software remains anathema to many proprietary vendors, there appears to be some recognition of the value that can be derived from open source innovation.

Sun Microsystems' Community Source Licence and Microsoft's Shared Source initiative fall into this ambit, but they cannot be described as open source licences. While they do allow some access to the source code, there is no free redistribution, use is restricted and no modifications or derived works are allowed.

Gartner stresses, however, that this does not mean that they are better or worse than open source licences in general – just that they are not the same thing.

The issues behind software copyrights, patents and licensing continue to be a source of heated debate between proprietary vendors and the open source community. Both sides' arguments carry valid points, and potential customers of either would be well advised to study them and identify which better suits their individual needs.
