Advantages And Disadvantages Of Using Tax Software

Each year millions of Americans have their taxes prepared by a professional tax preparer. Having a tax return professionally prepared reduces the likelihood of errors being reported on a tax return; however, professional tax preparation is often expensive. For this reason a large number of individuals are making the decision to file their own federal and state tax returns. While it is possible to file tax returns the traditional way with paper tax forms, many taxpayers now rely on tax preparation software to quickly and accurately prepare and file their taxes. Tax software programs have increased in popularity over the past few years; however, like many other software programs, they have advantages and disadvantages.
Before learning about the different advantages and disadvantages of tax software, it is important that taxpayers learn about the different types of software programs that are available. Popular tax software programs include TurboTax, TaxCut, TaxAct, and more. Each of these programs is likely to be offered in multiple versions. Many tax software programs come in a standard version, a deluxe version, or a premium version. Each brand may include different features under each version; however, many of the programs operate in the same way. Standard or basic versions are likely to include only federal income tax return forms. Deluxe and premium versions are likely to include both federal and state income tax forms. Premium versions are likely to include additional help in finding tax credits and deductions.
One of the main advantages of using a tax software program is that it is fairly easy and quick to use. Tax software programs usually work step-by-step; therefore, many individuals can complete a tax return faster than on traditional paper, often in less than half the time. Many taxpayers who use tax software prefer the versions that offer both state and federal tax forms. The majority of software programs will transfer the information from a federal return over to a state tax return. This not only saves time, but it also helps ensure that the information found on a state tax return is accurate.
Another advantage of using a tax preparation software program is that it costs less than hiring the services of a tax professional. Tax preparation fees generally depend on where the taxes are being prepared, how many tax forms need to be filled out, and how complicated they are. The majority of individuals end up paying one hundred dollars or more to have their taxes professionally prepared. The price of a tax preparation software program can range from free all the way up to sixty dollars or more.
In the past few years e-filing has become popular. E-filing allows a tax return to be received and processed more quickly, which often results in taxpayers getting their tax refunds sooner. Even though e-filing has dramatically increased in popularity, there are still a number of individuals who do not feel comfortable e-filing their taxes. These taxpayers are often worried about their personal information being transmitted over the internet. All tax software programs give users the option of e-filing their federal and state tax forms or printing them out.
While e-filing tax returns may be convenient there are many tax software programs that charge an additional e-filing fee. Taxpayers are encouraged to fully read the box of a tax software program or read the description of the software program online. It is not uncommon for many taxpayers to not realize that they will be charged an additional fee for e-filing. There are some tax preparation software programs that only mention the e-filing fee in the fine print of their product description. Even with the additional fee it is still likely that the majority of tax software programs are cheaper than having a tax return professionally filed. In addition to e-filing fees, taxpayers are encouraged to be on the lookout for any other hidden fees because there are likely to be some with many tax software programs.
With many tax software programs guaranteeing their work it is evident that tax software programs are easy to use and accurate. With mathematical checks and easy print offs for personal records it is obvious that there are many advantages to using a tax software program. Taxpayers are encouraged to weigh the above mentioned advantages and disadvantages of tax software programs and then make an informed decision on how their tax returns should be prepared and filed.

Gray Rollins is a featured writer for the http://TaxHelpDirectory.com. To learn more about tax software, visit http://www.taxhelpdirectory.com/taxsoftware/ and to learn more about accounting software, visit http://www.taxhelpdirectory.com/software/.


If You are Already Blogging, Money May be Just a Click Away

If you already spend a fair amount of time blogging, money may come to you literally as soon as you ask for it. Once you have an established blog with a regular readership, it is easy to turn a profit through advertising. By hosting sponsored links or banners, you can see income from your hobby almost overnight. Even if you did not start your blog intending to turn a profit, making supplementary income from your blog may be easier than you think.

Of course, even for people who have spent months or years blogging, money from advertising revenue may not add up to a large sum. The amount of money that you can make as a blogger depends on a lot of different factors, but perhaps the most important element of the equation is the topic of your blog. If your blog is on a subject that appeals to a demographic that advertisers have a strong desire to reach, you will be more likely to be able to turn a large profit on your blog than if your blog is on a fairly obscure subject that does not draw the kind of audience that advertisers need to appeal to. Of course, the only way to find out where you fall on this spectrum is to try hosting some ads. If you are already blogging, you have nothing to lose. 


Vanishing virtual worlds?

Five years ago everyone was talking about virtual worlds, such as Second Life, and how they could be used in business. Henry Tucker MBCS asks if they are still there.
Back in 2006 the media was awash with articles about virtual online worlds such as Second Life. Companies were opening up offices there and people started, it seemed, to make real money online. Platforms such as these looked as though they would be the model for future business, then the hype died down and people started to pull out.
So is the Second Life model dead, does anyone use it anymore or is it still going?
Ian Hughes, better known by his online moniker ePredator, is the Chair of the BCS Animation and Games Development Specialist Group and a self-proclaimed metaverse evangelist.
He started out coding in his bedroom and then went to work for IBM dealing, mostly, with emerging technology. It was during this time that he says he wanted to see how he could introduce game technology into corporate life.
‘I became very aware that the way that we communicate in games, and what you get to know about someone in games, was a lot richer than our standard emails and telephone calls. And as we were increasingly doing instant messaging in corporate worlds it didn’t seem that big a leap to be using virtual environments, although not necessarily games,’ he said.
In about 2000, Ian tried to persuade people in his team to do things with virtual worlds and game technology. He and his team started to see how to work with one another and communicate across their offices using these environments.
He said that sometimes it was as simple as having a game of Quake to get to know people. Then, in about 2006, he turned himself into a metaverse evangelist at IBM, which was the start of the next phase of virtual worlds when Second Life was popular.
‘When we started to look at Second Life we said “this isn’t a game. It is game technology, it lets you design, share and immerse in things as well as talk and do more than just PowerPoint in a space with other people at distance.” I spent several years in IBM helping people with worlds such as these, sometimes with Second Life, often it wasn’t.’

Games industry

‘These virtual worlds hadn’t spawned out of the game industry but they were based on game technology. Despite this the game industry was ignoring everything that happened with virtual worlds and corporates because that’s not their normal business. Yet I was thinking this is all the same stuff.
If you take what the games industry does and what traditional IT does, they are not that different. In fact they are identical, yet they don’t talk to one another. This technology, as well as being useful, is good for bringing the technologies together.’
Part of the reason for the perceived decline in virtual worlds is down to the usual adoption curve for a new technology. When it was new a lot of people were talking about it, so the media paid attention. Second Life still has a passionate set of users; it doesn’t have as many users as Facebook, which is always the comparison, but it is nine years old.
‘It seems to be getting more attention again,’ says Ian. ‘A new generation of people are discovering that just doing things on Facebook isn’t that interesting and can be irritating. They want more engagement and they want to try some other stuff.
‘At the same time Facebook has created CloudParty, which is a 3D virtual world. Alongside this is Open Sim, which is an open source virtual environment. This started off as a sort of clone of Second Life except you run your own servers or you can pay someone to run one for you. That is getting used by some universities that need private virtual environments to rehearse certain medical scenarios.
‘There are lot of things happening and a lot of people doing stuff, but because it’s not high-end marketing stuff and because companies are nervous about saying that they are anything to do with it, working in virtual worlds is an uphill struggle because even gamers don’t like virtual worlds. So there’s a large virtual community and we’re still here.’


Spiling the beans

Just before the launch of its latest game, Henry Tucker MBCS spoke to Peter Hofstede, from Spil Games, about the challenges of mobile game development.
Spil Games are based in Amsterdam and have a history of making Flash games.
The teams work on games for a worldwide audience and according to Peter Hofstede, who manages the process of thinking up new games, ‘if you work for Spil for a number of years you get to know the word for game in a number of different languages.’
In the past year though the company has changed its focus towards mobile gaming and it now does full-on app development.
He says that this wasn’t that easy a process as it meant a major change in terms of the skills his developers needed.
The studio now uses the Unity game engine, which according to Peter is a very different toolkit from what they had used before. And with 80 per cent of its projects now made using the engine, development times have stretched: whereas a Flash game took between three and four months, a bigger-budget mobile product can take as much as a year.
Another issue they have with mobile is that they can’t port the games between the various platforms but have to start from scratch each time. At best, Peter says, they can reuse some of the assets, but you generally have to rethink the control scheme at the very least. This has the knock-on effect of increasing the production costs, plus the projects are getting more complicated.
‘We have worked on quite deep Flash games, which sometimes took a year to produce. But in general production costs have increased ten-fold. So one of the effects is that we are focusing on doing a lot less and really only taking our best IP and not building a really wide portfolio of games,’ he added.

Free-to-play

An issue with mobile games is that they don’t sell for much compared to the boxed products you get for the big consoles. This then has an effect on the way that the games are developed. With some apps there is a move to a free-to-play approach, where the game is free but you buy things in-game. At Spil they consider this, but it depends on what the game is.
‘Our research has shown parents are willing to pay for a good product. If there is a good trial version they are quite keen to invest in the product. For others it makes sense to go with a free-to-play mechanic, but it needs to fit with the IP.’
As to how much development costs, that’s still shifting, says Peter. ‘We’ve been working on a number of games and Sara has been our biggest project. We know that we’re going to have quite a lot of cost after the launch as well; if we do well, we will probably have to spend more after the launch than before.
‘This is because we are shifting towards games as a service and that means we are building updates for after the launch. We also need to adapt to what we see in the game. If we see things aren’t interesting we swap them out and also if we see some traction we are going to invest more. We hook the games up to an analytics back end so that we can see where people fall off, are they stuck and so on.’
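The ‘where people fall off’ analysis Peter describes is, at its core, a funnel computation over per-step player counts. A minimal sketch of the idea, with hypothetical step names and numbers rather than anything from Spil’s actual back end:

```python
# Sketch of the funnel analysis an analytics back end enables: count how
# many players reach each step, then compute where the drop-off happens.
# Step names and counts below are illustrative only.

def drop_off_rates(funnel_counts):
    """Given ordered (step, players) pairs, return the fraction of players
    lost at each subsequent step."""
    rates = []
    for (step, n), (next_step, next_n) in zip(funnel_counts, funnel_counts[1:]):
        rates.append((next_step, 1 - next_n / n))
    return rates

funnel = [("install", 1000), ("tutorial", 800), ("level_2", 400), ("purchase", 40)]
for step, rate in drop_off_rates(funnel):
    print(f"{step}: {rate:.0%} of players lost")
```

Seeing, say, half the players disappear at one level is the signal to swap that content out or invest where there is traction, as described above.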
When asked if he thought there is a sweet spot for games he said that deep down it’s about psychology and different audiences have different drivers.

Boys will be boys

‘We know that boys are into extreme stuff where they can experiment, see how far they can go. For girls it’s different, with things like roleplay, playing out future careers. Creativity is very important, and there are themes that work: animals, beauty, their future self. It starts from that.
‘Then there are game mechanics. It needs to be rewarding, needs to be a challenge. There’s this concept of the killer game loop, which is you need to have a small gameplay cycle that you can finish in two minutes.
The player completes one cycle, then they can go in and do another or they can step out. You need to be able to dip in and out. The guys from NaturalMotion Games are talking about the Starbucks queue test. You need to be able to finish one loop whilst you are waiting for your coffee. You do something then you get a reward, a positive feedback loop.’
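The killer game loop described above can be sketched as a simple time-budget calculation. The two-minute cycle comes from the interview; the one-reward-per-loop rule and all the numbers are illustrative assumptions, not details of any Spil title:

```python
# Minimal sketch of the "killer game loop": a short, self-contained cycle
# that always ends in a reward, which the player can repeat or leave.
# The one-reward-per-loop rule is an illustrative assumption.

CYCLE_SECONDS = 120  # the two-minute gameplay cycle mentioned above

def play_session(cycles_wanted, seconds_available):
    """Return (cycles completed, rewards earned) for a session of a given
    length -- the player can dip in and out between loops."""
    completed = min(cycles_wanted, seconds_available // CYCLE_SECONDS)
    rewards = completed  # one reward per finished loop: the feedback loop
    return completed, rewards

# The Starbucks queue test: five minutes of waiting fits two full loops.
print(play_session(cycles_wanted=10, seconds_available=300))  # (2, 2)
```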


To be(spoke) or not to be...

When it comes to choosing software for your business, at some point you’re probably going to have to decide between an off-the-shelf package and having some bespoke software created. Carey Hiles, Development Manager at Box UK, asks how you decide the best path for your situation.

Functions and compromise

In many cases - you want to fulfil a generic business function such as payroll or OCR or anti-virus, for example - there’s already an app (or thousands of apps) for that. But often these involve compromise.
Your chosen off-the-shelf payroll system may not be able to cope with the complexities of the multi-tiered, service length multiplied by previous quarter’s performance divided by peer review percentage algorithm that you decided was the fairest way to apportion bonuses oh so many years ago. You may be able to force your processes to fit around the tools you’ve chosen, and often this will be a compromise that you can live with.

Delivering user benefits

But what about when you start thinking about investing in a piece of software that needs to deliver real user benefits? Whether these users are your customers, to whom you want to give a great experience, or your staff, whom you want to make more efficient, in these cases it’s not going to benefit you to shape them around the software.
So it’s at this point you’ll want to start doing things like finding out requirements, and when you start going down the requirements route you’re most likely getting into building bespoke software. You may find something already on the market that ticks most of the boxes, but there’ll almost certainly be some knocking off of edges, whether through incremental compromises or large ticket prices.

Bespoke software partners

If you choose a partner that’s able to invest in getting to know your organisation you can take advantage of their experience in terms of design and domain knowledge. They should take responsibility for understanding your unique business needs, taking your requirements and translating these into a specific piece of software for you, so you know that what gets delivered will solve your problems and maximise user benefits, which will hopefully turn into company benefits.

Of course, this approach is likely to have greater initial costs than off-the-shelf, but you’ve got to remember that with these kinds of existing products you’re usually paying for lots and lots of features you don’t need, support areas you’ll never use, and the profits of a producer who may have a “timeboxed” profit window.

Commercial off-the-shelf products are built to maximise their profitability before competition overtakes them, meaning that their focus isn’t always on long-term scalability or reacting to changing market conditions.
While a bespoke software partner is also going to be profit-driven (unless they’re exceptionally altruistic), in these situations profit is derived from the longevity of the relationship. At Box UK our Agile way of working supports this - the way we work means that even if your requirements change during the build, they can be addressed and built in to the production schedule. This helps minimise waste, and we won’t slip because we’ve gone too far down a cul-de-sac.

Retaining control

Closely related to these considerations is that of control. If you consider the lifecycle of a software product, first there is the seed which responds to a market demand.
Then follows market research and other feasibility exercises, before the solution begins to be planned and designed (hopefully using an iterative development methodology that reduces time to market through a minimum viable product that can be improved and extended with further releases). Then what? IPO? Acquisition? Decommission?

It may seem like years away, but software that is static is heading towards joining all decommissioned software on the scrapheap. To be valuable to businesses, suppliers need to adapt their solutions to a changing world, as clients become thinner, primary screens become smaller, user input becomes touch, consumers become more sophisticated, etc.

Of course, it would be generalising to say that all off-the-shelf products are static – look at the version histories of Microsoft Word, Adobe Photoshop or Symantec Norton Anti-Virus, which are continuously changing and reacting.

But when was a feature you, or your users, requested implemented in any of these products? (Note to advertising agency Crispin Porter and Bogusky: Windows 7 was their idea? I don’t believe you.)

If you can think of a request that was implemented then it’s probably either a massive coincidence, or I’ll wager that a different compromise was absorbed simultaneously. And what if your provider decides to change the direction of the product, or remove support for your version? Depending on how important a role that piece of software plays in your organisation, it could cause massive problems.

If you go bespoke you’re in charge of the roadmap - you make decisions about what goes on there, how it should be extended or updated, and when and if it should be deprecated. What you’ve got is a base to start with that you can build on - and you can do that very cost-effectively if you plan it in time.

At Box UK we build products. And we write bespoke software. And we create bespoke functionality on top of products. Are we conflicted? Not at all. The right solution may be on the shelves of Best Buy or it may be in the head of our team. It depends on what function the solution needs to fulfil, how far you’re prepared to compromise, and how integral it is to your business processes.

If you are going to go with something that’s pre-written, make sure that you work with a supplier that gives you a direct line into that roadmap. You don’t want to be the millionth customer (there are no prizes), but being customer number 10 is a good place to be, because you’ve got a product that works but also have a voice and are big enough to effect change.

If you want a solution that delivers value, rather than simply enabling a process, a dedicated software partner can deliver this. However, choosing the correct company is still a fundamental element of success - they need to have the experience, commitment and approach that best fits with your organisation and be able to sustain a long-term relationship.


A unique platform for desktop video applications

Desktop video conferencing has always been a case of two halves. For every benefit to intra-company communication, there seems to be a technical limitation threatening the stability of the network. With this fact firmly in mind Ben Burns, Solutions Lead, EMEA at MASERGY, examines the benefits of SVC-Optimised QoS.
Issues like packet loss have also long plagued video conferencing deployed over non-ideal networks. That is why IT teams seek alternative means of deploying intra-corporate video.
One common counter-solution to network congestion is adding more bandwidth. This risks increasing the volume and management complexity of network traffic without addressing the causes of congestion. A second method is to deploy scalable video coding (SVC) technology to combat latency and packet loss.

What is SVC?

Put simply, SVC is a layered video codec extension of the established H.264 standard. SVC transmits both a base video layer and one or more enhancement layers to improve video quality up through high definition. This layering technique allows the video stream to be maintained despite a certain percentage of packet loss, thereby all but eliminating artefacts and pixelation.
What this means is that SVC allows multiple clients with different capabilities to receive the same video signal without the need for further encoding. From a single SVC-encoded stream, receivers with various capabilities can each decode the layers necessary to produce an image from standard definition through to high definition. This greatly reduces encoding latency and the overall computing power that is required.
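The layer-selection idea can be illustrated with a small sketch. The layer names and cumulative bitrates below are hypothetical, chosen for illustration rather than taken from the H.264/SVC specification:

```python
# Illustrative sketch of SVC layering: one encoded stream carries a base
# layer plus enhancement layers, and each receiver decodes only as many
# layers as its link allows. Layer names and bitrates are hypothetical.

LAYERS = [  # (layer name, cumulative kbit/s needed to decode up to it)
    ("base / SD", 400),
    ("enhancement 1 / 720p", 1200),
    ("enhancement 2 / 1080p", 3000),
]

def decodable_quality(available_kbps):
    """Pick the highest quality whose cumulative bitrate fits the link."""
    quality = None
    for name, kbps in LAYERS:
        if available_kbps >= kbps:
            quality = name
    return quality  # None means even the base layer does not fit

print(decodable_quality(1500))  # prints "enhancement 1 / 720p"
```

As the prose above notes, this is also why quality degrades gracefully: when congestion squeezes the link, a receiver simply drops back to fewer layers instead of stalling the call.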
For enterprises looking to deploy desktop VC on a wide scale, this offers an interesting option. Users can experience high quality calls without the need to pay private network prices. Unlike traditional HD video calls, SVC does not experience significant lag. Instead the definition of the call constantly varies in quality, fluctuating between high and standard definitions depending on network congestion.

Managing an SVC-based desktop video implementation

While it is easy to see the raw benefits of SVC technology, deploying video en masse can still prove difficult. A carefully considered method is required to ensure access to critical business applications is not inhibited.
This can create an inverted video QoS paradigm where any sizable desktop video deployment, if implemented over standard WAN, can overwhelm the network. So, instead of protecting the video application from other corporate traffic, the opposite occurs; other corporate applications need to be protected from the video application.  

A unique approach

A native MPLS network for intra-company data communications with Intelligent QoS allows for the prioritisation of essential voice and video applications and prevents the inverted QoS problem from occurring.
In the past, the emphasis has typically been on the guarantee that critical applications should perform while the remaining traffic should fall into an unmarked or best effort queue. While internet traffic is not marked, companies still rely on consistent access to the public Internet for daily operations. Avoiding this dilemma can be challenging and requires a specific solution.
With most WANs, the majority of traffic is marked as undifferentiated data and their type of service (ToS) precedence value is considered to be 0. Commonly traffic of this type is referred to as best effort and not given special treatment over the WAN. However, that doesn’t mean it should be ignored. Much of the best effort traffic is still of tremendous value to enterprises and it often requires a minimally acceptable level of throughput and performance.
By utilising a limited plane class of service (CoS) that deliberately de-prioritises traffic below best effort, users can maintain bandwidth-hungry video flows without the fear of overloading the network. A limited plane that serves at a lower priority than normal data is also ideal for data back-up or packet-loss-tolerant applications such as SVC desktop video.
Its inherent resilience in the presence of packet loss means SVC-based video can cope with the variable network conditions it encounters on the limited plane of service.
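In practice, this kind of below-best-effort treatment is applied by marking packets with a DSCP value such as CS1, conventionally used as the "scavenger" class. A minimal sketch of marking a video socket this way, assuming a Linux-style sockets API and a network whose QoS policy actually honours the marking:

```python
# Hedged sketch: marking a socket's traffic for below-best-effort handling.
# DSCP CS1 is the conventional "scavenger" class; whether routers honour it
# depends entirely on the WAN's QoS policy, so treat this as illustrative.
import socket

DSCP_CS1 = 8              # class selector 1, the scavenger class
TOS_BYTE = DSCP_CS1 << 2  # DSCP occupies the top six bits of the ToS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_BYTE)

# Video datagrams sent on this socket now carry the scavenger marking.
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # prints 32
sock.close()
```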

Recognising the Future

Even with the above addressed, issues of security, scalability and performance can still remain. With a native IP multi-protocol label switching (MPLS) network, enterprises can operate desktop video at varied security levels - running one VLAN publicly and another privately, for example.
Furthermore it is possible to use a Limited Plane on both VLANs to provide users with higher security requirements the option to use a private network. This prevents exposing video communications to the public Internet and the potential security risks that come with it. In essence, it removes the need to encrypt intra-company calls and as a result reduces network workload and management.
Using a Limited Plane lets users select which SVC-based calls can run at a higher QoS level. Intelligent service control with QoS-enabled service scalability is perfect for adjusting bandwidth at a moment’s notice. It can dynamically allocate bandwidth between public and private services based on the traffic’s QoS tagging.
Facilitating the adoption of a new communication technology while protecting higher priority applications on the corporate network is always going to prove difficult, but a solid CoS enables enterprises to manage and protect their network while looking to the future.
The advantages are clear. Possessing the flexibility to run public or private networks on a single circuit with built-in security for private intra-company communications is a powerful tool. The ability for VC applications to perform without affecting the network’s performance is crucial for business. Ensuring that you can offer staff high-definition video capabilities without a disruption to your network is only possible with advanced QoS capabilities.
Understanding this need and implementing a solid plan is vital to the future of your network. Only when this is understood does SVC yield its full potential. By combining intelligent network management with SVC technology, you can successfully address the issues that have long prevented large scale VC rollouts.


Making software licensing work for you

With software licences and maintenance consuming almost one third of overall IT budgets, software is a key company asset, and not actively managing its usage diminishes its value to the business. Patrick Gunn, Flexera Software, discusses the top 10 software licensing mistakes enterprises make and explains how to work around them.
Increasingly today, there is awareness of the risks associated with unmanaged assets, but there is a distinct lack of importance assigned to software licence management and optimisation. Enterprises commonly make the following mistakes when dealing with their software estates:
  1. Making ad hoc purchases - Allowing employees to make ad hoc purchases and not controlling authorised purchases is a common occurrence. Enterprises often buy licences as needed in a piecemeal fashion, rather than under a volume purchase agreement, which can be much more cost effective.
  2. Not tracking installation and use - By tracking installation of software and its usage, enterprises may be able to substantially reduce ongoing maintenance payments – either because the applications are not being used or because they are no longer supported by the vendor.
  3. No central repository - A central repository for keeping proof of software licences so that they are easily accessible for review allows enterprises to quickly comply with vendor audit requests, saving time and money.
  4. Not tracking renewal dates - Not keeping track of software licence agreements and renewal dates makes enterprises vulnerable to lapses in software assurance or other maintenance programmes, which can prove costly for enterprises. Some vendors may demand that enterprises just pay maintenance retrospectively to the renewal date, whilst others could make organisations re-purchase the licences. 
  5. No communication between departments - IT operations must work with procurement to ensure that software is installed and used in accordance with the respective licence agreements to avoid software compliance issues. This is often not the case.
  6. Not purchasing maintenance at the right time - The right time to purchase maintenance is when enterprises are looking to be part of an upgrade. For example, Adobe has a couple of new software releases planned this year. If enterprises buy maintenance before the release is announced, the price will be significantly lower and they will become automatically eligible for that product upgrade. 
  7. Not ascertaining strategic requirements - Ordering licences without determining what the enterprise truly requires over the longer term could be an expensive mistake. For example, an enterprise might need just Microsoft Exchange and Windows Client Access Licences (CALs) now, but in six to twelve months’ time decide that it actually needs to deploy SharePoint. In the context of this example, a CORE CAL would be the better option – offering all three applications in a bundle - as it will save the enterprise money in the long run.
  8. Assuming licensing rules don’t change - Licensing rules change frequently and enterprises need to stay on top of all the vendor rules and regulations. Not doing so can result in enterprises being out of compliance, which could be a costly oversight if audited by software vendors. This situation is now being further exacerbated with the proliferation of virtualisation technologies and cloud computing.
  9. Not applying the product use rights - Product use rights define how software licences can be consumed. They include upgrade, downgrade, second use, virtual machine use and multiple version rights. Accurately applying them can drastically reduce licence consumption and hence reduce the need to buy more licences.
  10. Not automating enterprise licence optimisation - An optimised licence environment cannot be achieved without an automated solution. Enterprise licence optimisation solutions, also known as next generation software asset management tools, enable enterprises to collect all the necessary data - from asset inventory to purchase orders and organisational data - and apply licence entitlement rules to generate the necessary reports to effectively manage software licences.
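The core reconciliation that such tools automate - comparing what is installed against what is entitled - can be sketched in a few lines. The product names and counts below are invented for illustration:

```python
# Minimal sketch of licence reconciliation: installed counts (from asset
# inventory) vs purchased entitlements (from purchase orders). A negative
# surplus is a compliance shortfall; a positive one is shelfware.
# Product names and counts are hypothetical.

def reconcile(installed, entitled):
    """Return {product: surplus}, where surplus = entitled - installed."""
    products = set(installed) | set(entitled)
    return {p: entitled.get(p, 0) - installed.get(p, 0) for p in products}

installed = {"OfficeSuite": 120, "CADPro": 35}
entitled  = {"OfficeSuite": 100, "CADPro": 50, "DBServer": 10}
position = reconcile(installed, entitled)
# OfficeSuite is 20 licences short; 15 CADPro and 10 DBServer seats sit unused.
```

Real optimisation tools layer product use rights (upgrade, downgrade, second use and so on) on top of this raw count before reporting a compliance position.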
Enterprise licence optimisation is not just about an improved vendor licence compliance position, it is also about taking a strategic approach to understanding the software needs of enterprises so that the software deployed contributes to their efficiency and effectiveness whilst maximising the return on investment and reducing costs.
Interestingly, according to the InformationWeek Analytics Outlook 2010 survey, demand is on the rise for new IT projects to help automate and improve business processes. Over 50 per cent of respondents reported that IT demand is expected to be higher this year than in 2009.
This trend is likely to be representative of Europe as well. Enterprise licence management is one area where automation can potentially reduce overall IT costs by five to ten per cent annually. Enterprises should investigate.


Transforming legacy Lotus Notes apps

With more than 10 million custom enterprise applications built on Lotus Notes and Domino since the early 1990s, there are around 50,000 organisations around the world that may need to modernise for a web- and cloud-enabled IT landscape. Andreas Richter, Director Marketing Europe at GBS, explains how.
Lotus Notes and Domino are mainly used in banking, telecoms, aerospace, electronics, insurance, consumer products, pharmaceuticals and government. Many of these Notes-based business applications have become mission critical to the organisations running them.
Recent surveys among Lotus Notes users and companies showed a desire to be less dependent on the Notes Client, saving costs for licensing and desktop support, especially for users who only needed periodic access to a business application.
Others were looking to web-enable their applications to meet the ever-increasing end-user demand for access to business applications via smartphones, tablets and other mobile devices; web-enabling such applications was seen as a stepping stone to the future.
In addition, the ability to leverage software-as-a-service style hosting, deployment and management of business applications was seen as a key approach to providing anywhere, anytime access to regionally, nationally and globally dispersed user groups. Based on these results, modernisation is essential for future business, but companies do not know where to start or how to get there within their current budgets.
Applications can be updated and transformed manually, but this is time-consuming and expensive. On average, transforming a business-critical application manually is estimated at £20,000 per app, while transforming a standard application (standard applications represent 50-60 per cent of all customer applications) is estimated at £3,000 per app.
Based on these estimates, migrating a full platform of an enterprise with 10,000 applications would cost tens of millions of pounds and would take over a decade to complete. Furthermore, companies want to have as little disruption to their users as possible while migrating their applications.
This includes not only minimising the burden on users to assess or reproduce application business logic to support application re-engineering, but also ensuring that the modernised and web-enabled applications closely match the originals on a like-for-like basis, so as not to overwhelm users with a new interface and to eliminate the need for additional, cost-intensive training.
Facing all these challenges, the desire to leave the Notes Domino Client altogether and move to other platforms became more and more attractive.

A way forward

However, thanks to the web application framework XPages and solutions built for the platform, modernising and web-enabling your applications is only a few steps away. This approach allows automatic transformation of applications into a web 2.0 style application.
The principle behind this involves transforming the original application into XPages, SSJS and Java, converting the UI to deliver a modern user experience as well as carrying over the existing business logic. The resulting application template contains both the original Notes design elements and the new XPages design elements; the conversion works only on the application template, leaving your data and security intact and untouched.
By automatically converting 75 - 90 per cent of an existing application’s design to an XPages design, transforming an application now only takes a few days. XPages allows you to leverage new technology and features that will drastically reduce IT costs and improve the way that users work.
You have the choice to run transformed applications with the rich client, a browser or both, making it easier to deploy those applications and even adopt self-service and private cloud delivery models. But the best part is that using one of the new solutions to automatically convert your apps saves you up to 90 per cent of the cost of rewriting the applications manually.
You will continue to leverage your prior investments in Lotus if you upgrade your portfolio to the newest release of Domino, and you gain the benefit of modernised applications that scale to users’ demands without having to invest in costly redevelopment.


Software piracy: Like drowning puppies, apparently

The Business Software Alliance makes a case for why software piracy is a major blight on the UK economy. Do I buy it? Not from one of those people wandering around car parks selling DVDs, I don't.
Software piracy is against the law, and it's morally wrong too. If you're a BCS member then you should make sure that you are beyond reproach in your software licence asset management. In short, I'm on board with the stated objective of the Business Software Alliance (BSA) by default. That said, I suffered a serious allergic reaction after a hype overdose on Wednesday night at a small reception for the BSA at the Palace of Westminster.
I have two issues. The first is that the BSA's figures on the impact of piracy on the economy seem a tad overblown. The second is that they were presenting on the evils of organised counterfeiting, but I suspect they are really interested in legal action against small businesses with disorganised software asset management. These ambiguities are critical if you are looking to enlist the support of legislators, which is what the BSA was aiming to do with this meeting.
A big part of the BSA presentation was the many billions of pounds, and 13,000 jobs, that the UK economy is losing to software piracy. This is based on a study conducted with IDC; IDC, being a reputable research organisation, has also published the methodology. Broadly speaking, they take the total hardware units shipped and multiply that by an estimated average software load. That gives them the total software deployed. They also know the total sales of licensed software. Take one from the other, and you have the total amount of pirated software, according to them. There is a paragraph on how they account for open source software, freeware and shareware, but I'm blowed if I can work out what it means. Is it me, or is that methodology... well... something...? To quote The Economist on the subject (sorry, link is subscription only):
The association's figures rely on sample data that may not be representative, assumptions about the average amount of software on PCs and, for some countries, guesses rather than hard data. Moreover, the figures are presented in an exaggerated way by the BSA and International Data Corporation (IDC), a research firm that conducts the study.
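The arithmetic the study describes is simple enough to sketch, which is partly the point: the result hinges entirely on the assumed average software load. A minimal illustration, with made-up figures rather than the study's actual numbers:

```python
# A minimal sketch of the BSA/IDC-style arithmetic described above.
# All figures are invented for illustration, not the study's actual numbers.

def estimated_pirated_units(hw_units_shipped, avg_software_load, licensed_units_sold):
    """Total software deployed minus licensed sales = presumed pirated units."""
    total_deployed = hw_units_shipped * avg_software_load
    return total_deployed - licensed_units_sold

# 1,000,000 PCs shipped, an assumed 8 applications per PC, 6.5m licences sold:
pirated = estimated_pirated_units(1_000_000, 8, 6_500_000)
print(pirated)  # 1500000
```

Drop the assumed load from 8 to 7 applications per PC and the estimate falls by a million units, which is why the sampling and the assumptions matter so much.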
My other issue was that most of the meeting focused on the impact of large scale counterfeiting operations - packaging and selling software in an organised way; a criminal enterprise. A gentleman from Trading Standards gave a good presentation on the impact of counterfeiters and the work Trading Standards are doing in response. Clearly they need more support in what they are doing, and any law abiding citizen (and any MP) would no doubt agree.
Not much to do with shoddy asset management though...
Of course, dodgy software is probably loaded to the gunwales with trojans and viruses, and all your data will fly into the wide blue yonder if you use it. Moreover, if you use it at home then you will doubtless lose all your treasured family photos. Think of the children.
Again, not much to do with what seems to be the BSA's primary activity. That seems to revolve around organisations using more copies of software than they are licensed for, either deliberately or accidentally. This article suggests 80 per cent of their settlement cases are due to negligence. Action most usually starts when the BSA is tipped off by disgruntled staff (who can get a substantial reward), as I understand it. The BSA then heaves into view with a team of lawyers, threatening small and medium-sized businesses with court action if they can't produce licence documentation for everything they are using. A proportionate action, of course, that will no doubt benefit the UK economy.
The BSA are asking legislators for a number of things, but the one that caught my eye was stronger intellectual property damages law to act as a deterrent. Their presentation to the MPs gave some startlingly high (and worryingly derived) figures for loss to the economy, and focused on criminal gangs and counterfeiting. If indeed their aim is to take legal action for their own financial gain against companies that may well be trying to obey the law, it is disingenuous to say the least to make a misleading 'public interest' argument.
Again let me reiterate that I'm not in favour of software piracy (the Federation Against Software Theft was set up by the BCS, and takes a very different approach), or bashing intellectual property holders for exercising their legal rights. I'm just interested in making sure that MPs are not presented with misleading information. Their job is hard enough as it is.
Am I being unnecessarily pedantic? What is the BCS's responsibility in these circumstances? Looking forward to hearing what you think!


Three’s a crowd

Unusually, three new games consoles were announced at this year’s CES in Las Vegas in January. The trouble is, Henry Tucker MBCS doesn’t think we need any of them.
When it comes to new gaming hardware, things aren’t usually announced at CES. So it was somewhat of a surprise that this year three new consoles saw the light of day (well the artificial light of the Las Vegas Conference Center, that is).
The smallest surprise came from Valve with the unveiling of its new platform that everyone, up until now, has called the Steam Box, but is now called Piston. This has been much mooted and discussed in more than 1,000 online forums for a while.
The idea is that Valve (the people behind games such as Half-Life 2 and CounterStrike) will bring its hugely popular and not that expensive games to the TV, where they have, up until now, been PC only games.
The Piston will do more than play games, though; as with other consoles, it will support Netflix, for example. And whereas consoles have one specification that can’t be modified, the Piston will be modular, so that gamers can add more memory and swap out the motherboard. In fact the Piston is just one possible Steam Box, as other manufacturers are likely to come out with other designs. Talk about muddying the waters.
Straight off this sounds like a bad idea. One of the reasons for consoles’ popularity is their uniformity: you know exactly how a game will play, which you don’t on a PC, where every single one is different.
The other announcements were the OUYA (a crowdfunded Android-based games console) and Project Shield from computer graphics manufacturer NVidia.
The latter was the big shock as no one seems to have known it was in development. In fact the developers themselves see it as something of an experiment. Unlike the other two it is a handheld console, a market that is not quite as healthy as it once was.
Nintendo dropped the price of the 3DS and Sony is struggling to sell the PS Vita, so I can’t see why anyone would want to bring in another platform, though Project Shield does promise to play Android games so, in one way, it is only new hardware, not an entirely new platform.
OUYA is a slightly different take on the standard console. Again it uses the Android OS, but is designed to connect up to your TV and do all the multimedia stuff the other consoles do.
Although I can see why each of the consoles has been developed, to a degree, a large part of me does wonder why at the same time.
The Android platform is ideal for gaming, but I don’t think many people will want to use it in their living rooms to play games on. In fact the only way I think it will take off is if Google was to launch its own Android set top box that could play games and do the streaming TV / film bit as well. Now there’s an idea! They could call it the Google (not goggle) Box.


Videogame trend: Skylanders Champions Real World Augmentation Over Photorealism

Video-games offer ever more realistic visuals and sound, and with the PS4 just announced and the next Xbox not far away they will be getting closer to photorealism than ever.
Titles like Beyond: Two Souls on the PS3 make no bones about the fact they aim to offer cinematic experiences in both storytelling and graphics.
However, while the video-game industry at large declines, a set of games that offer realism in an entirely different way has managed growth.
This new genre was created by the Skylanders franchise and uses RFID technology in a USB ‘Portal’ peripheral to wirelessly connect to toy figures and grant access to in-game characters and content.
The interaction is two-way. While the toy is used to unlock these in-game characters, the game wirelessly saves progress, upgrades and customisations back to the chip in the toy figure. This means that when you use the toy with a different game (even on a different console platform) the player’s unique character is instantly available.
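As a rough illustration of that round trip, the sketch below packs character progress into a small fixed-size record of the kind that fits on an inexpensive RFID/NFC tag. The field names, sizes and layout are assumptions for illustration, not Activision's actual format.

```python
# Hypothetical sketch of the two-way toy/game data flow: character progress
# serialised into a compact byte record that a small RFID/NFC tag could hold.
# The fields and layout are illustrative assumptions, not the real format.

import struct

def pack_character(char_id: int, level: int, gold: int, upgrades: int) -> bytes:
    """Pack progress into a fixed 8-byte little-endian record."""
    return struct.pack("<HBBI", char_id, level, upgrades, gold)

def unpack_character(payload: bytes) -> dict:
    """Read the record back, e.g. on a different console platform."""
    char_id, level, upgrades, gold = struct.unpack("<HBBI", payload)
    return {"char_id": char_id, "level": level, "upgrades": upgrades, "gold": gold}

# Game A writes progress to the toy; Game B reads the same character back.
tag = pack_character(char_id=42, level=15, gold=1200, upgrades=3)
assert unpack_character(tag)["level"] == 15
```

Because the record is a platform-neutral byte layout rather than a console save file, the same toy can move between consoles and still carry the player's character.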
Such has been the success of the Skylanders franchise that other publishers have been keen to match Activision’s approach. Earlier this year Disney Interactive announced its version of this toy-meets-game genre with Disney Infinity. As CEO John Blackburn explains, the game leverages the various characters in a “Play Box” mode that lets players create their own game challenges.
Keen not to be caught on the back foot, soon after Activision announced Skylanders Swap Force. This latest iteration of the Skylanders franchise offers new toys that separate at the waist and can be mixed and matched to create custom combinations - both in the real world and in the game.
Guha Bala, CEO of Vicarious Visions (the developer behind the game), talked about the new features and why the game requires players to replace the USB peripheral used in the first two games with a new Starter Pack. The new Swap Force characters’ technology has led to a redesign of the peripheral that reads the figures, to accommodate their mix-and-match function.
There are also other developers with similar offerings, as well as a slew of upcoming games. Most interesting of these is the iPad Cars 2 AppMates app, which uses a toy car not only to access in-game characters but also to control them. Players place the car on the surface of the tablet and drive it around much as they would a Matchbox car on the living-room floor.
In a similar way to how the Wii created a family friendly market that other platform holders still court to this day (Xbox 360 with Kinect and PS3 with Move), Skylanders has proven that this toy-meets-game genre is hugely attractive to families.
The challenge for Activision, as well as other developers entering this space, is walking the tightrope between profitability and value for money. In Skylanders there are some tensions, like the reposed figures that come out each year and the Soul Gem in-game adverts for different toys, and in Disney Infinity there are the Power discs that have to be purchased in foil packs like collectible cards.
The successes in this space so far have resulted not only from clever technology and heavy marketing, but from genuine creativity in design and attention to detail. Compare the Skylanders or Disney Infinity figures to any other collectable toy line and they really do look quite impressive - both in the number of paint passes and the variety of materials used.
Also, both Activision and Disney Interactive have invested heavily in proprietary development tools to streamline the creation of these games and allow for quick iterations and a yearly development cycle.
This year, Activision took this a step further by introducing a second developer to its Skylanders franchise - much like it does with Infinity Ward and Treyarch sharing development on its Call of Duty franchise.
Time will tell how long this genre can continue before family gamers tire of the idea. Currently there seem to be plenty of new ideas to keep interest levels high.


Is UX your Achilles heel?

Today, user experience (UX) has become ‘the new black’ in product and application development with many organisations embracing a ‘build and prototype’ culture says Zahid Jiwa, VP UK&I, OutSystems.
This is quite a sea-change from as little as five years ago, when UX wasn’t really on anyone’s radar apart from that of renowned innovators like Steve Jobs. He was always obsessed with the customer experience and very much in the ‘nitty gritty’ of product design.
I remember when I did my degree in Computer Science 15 years ago - we literally had one UX module in the whole course curriculum. Astounding, isn’t it? Look at any computer science course today and UX is everywhere, in every single module, and completely integrated into the coursework syllabus.
So what does this mean for any developer with more than 10 years’ experience? I imagine that this new UX culture is causing some anxiety. Today you can’t be a developer without having UX experience, but if you’re an older generation developer, you won’t have been taught about UX. This will be your Achilles heel. And believe me, you can’t just acquire these skills overnight.
UX is an unusual combination of both art and science. It requires a cross-functional capability and new ways of thinking - a developer that appreciates good design, and a designer that understands application development. Engineers need to work in computing languages, but think in human experience.
This combined talent doesn’t naturally sit together. Sure, the new kids on the block might have both skills ingrained because they’ve learnt this from the outset; the more seasoned developers, not so much.
So why is it so important to incorporate UX into the development process? From my perspective applications fail not because they are functionally poor but because they have been adopted poorly. You can have a great, functionally rich product, but if the users don’t adopt it, the product is obsolete and serves no purpose. We are also a generation that wants instant gratification - plug and play technology.
Think about it - when you use any application, if it takes more than a few milliseconds to respond then you’ll view that as a bad user experience. The best products are those that have simplicity designed into them. Look at Apple, Google, Twitter, Fly.com - all these brands are incredibly user friendly and simple to use.
But to be successful, UX can’t be an afterthought or a bolt on, it has to be at the forefront of your development strategy. It needs to be in the culture of the business and the development teams.
Good UX is ultimately about understanding your users’ experience. And I don’t mean asking them to get involved in the features and technical spec. This is more about engagement, observing, asking questions and looking to understand user needs. To use a car analogy, it’s about the driving experience, the look and feel, the design spec, it’s not about what’s under the bonnet.
Live testing is a great usability barometer. Developers watching their first live test often find the experience a revelation. They instantly get that (a) regardless of what they thought before, all users are not just like them and (b) people have a much harder time figuring out how to use products than we think.
So it becomes clear that to design something that people want to use (without getting frustrated) you need to invest a lot more care, thought and testing into the design process.
Ultimately, I view UX like a scientific experiment. Success depends on four key components:
  1. You need to start with your research - for example, conduct a UX audit that captures the sequence of interactions your customers need to take to complete basic tasks with key products. Then involve the users along the way, using live testing to really understand their requirements - you’ll be surprised at how fast applications fail in the hands of your users - so test, test, and test again.
  2. Your UX approach requires a methodology with change at the heart of the product development and deployment process. You need a development environment that will allow you to rapidly accommodate change, one that is flexible and can work within the ebb and flow of the UX process.
  3. Most importantly you need plug and play technology that allows you to seamlessly integrate best of breed design and best of breed development into your systems, which is where we can help.
  4. And finally, you also need to develop more of a risk-taking culture that encourages your development team to explore and experiment.
While I understand the anxiety of developers who perhaps haven’t had UX ingrained into their culture, I hope I have demonstrated that there are tools, support and ways to overcome this.
Let’s face it, the consequence of not having UX as a central part of your development strategy can be catastrophic: building a product that never gets adopted. So developers, don’t let UX be your Achilles heel; embrace it, understand it and adopt it.
