30 May 2011
Last updated at 12:07
Lockheed Martin makes F-16 fighter jets
US defence firm Lockheed Martin says it has come under a significant cyber-attack, which took place last week.
Few details were available, but Lockheed said its security team had detected the threat quickly and ensured that none of its programmes had been compromised.
The Pentagon said it is working to establish the extent of the breach.
Lockheed makes fighter jets, warships and multi-billion dollar weapons systems sold worldwide.
Lt Col April Cunningham, speaking for the US defence department, said the impact on the Pentagon was “minimal and we don’t expect any adverse effect”.
Lockheed Martin said in a statement that it detected the attack on 21 May “almost immediately” and took counter-measures.
As a result, the company said, “our systems remain secure; no customer, program or employee personal data has been compromised”.
However, it is still working to restore employee access several days after the attack.
Lockheed Martin is the world’s biggest aerospace company and makes F-16, F-22 and F-35 fighter jets as well as warships.
Four years ago, the firm apparently tightened security after officials revealed hackers had breached Lockheed’s high-tech Joint Strike Fighter programme.
Josh Shaul, chief technology officer at New York-based database security company Application Security, said other defence contractors will now be assessing their own measures.
“I guarantee you every major defence contractor is on double alert this weekend, watching what’s going on and making sure they’re not the next to fall victim.”
In Australia, too, the government has warned companies to be extra vigilant about offshore cyber-attacks.
The advice comes after the outgoing head of the country’s biggest oil and gas company said attacks were coming “from everywhere”.
“It comes from eastern Europe; it comes from Russia. Just don’t pick on the Chinese; it’s everywhere,” Don Voelte, chief executive of Woodside Petroleum, was quoted as saying.
Australian Attorney-General Robert McClelland singled out resource companies as being targeted.
“There is no doubt that cyber-security threats are becoming worse,” he told Reuters.
“Without talking about specific incidents, there have been a number of reports concerning our resource companies.”
27 May 2011
Last updated at 10:56
The National Museum of Computing has finished restoring a Tunny machine – a key part of Allied code-cracking during World War II.
Tunny machines helped to unscramble Allied interceptions of the encrypted orders Hitler sent to his generals.
The rebuild was completed even though almost no circuit diagrams or parts of the original machines survived.
Intelligence gathered via code-cracking at Bletchley underpinned the success of Allied operations to end WWII.
Restoration work on Tunny at the museum in Bletchley was re-started in 2005 by a team led by computer conservationists John Pether and John Whetter.
Mr Pether said the lack of source material made the rebuild challenging.
“As far as I know there were no original circuit diagrams left,” he said. “All we had was a few circuit elements drawn up from memory by engineers who worked on the original.”
The trickiest part of the rebuild, he said, was getting the six timing circuits of the machine working in unison.
The Tunny machines, like the Colossus computers they worked alongside, were dismantled and recycled for spare parts after World War II.
The first Tunny machine was built in 1942, based on the work of mathematician Bill Tutte, who drew up plans for it after analysing intercepted encrypted radio signals Hitler was sending to the Nazi high command.
These orders were encrypted before being transmitted by a machine known as a Lorenz SZ42 enciphering machine.
Bill Tutte’s work effectively reverse-engineered the workings of the SZ42 – even though he had never seen it.
Tunny worked alongside the early Colossus computer, which calculated the settings of an SZ42 used to scramble a particular message. These settings were reproduced on Tunny, the enciphered message was fed in, and the decrypted text was printed out.
By the end of WWII there were 12-15 Tunny machines in use, and the information they revealed about Nazi battle plans aided the Russians during the Battle of Kursk and helped to ensure the success of D-Day.
“We have a great deal of admiration for Bill Tutte and those original engineers,” said John Whetter.
“There were no standard drawings they could put together,” he said. “It was all original thought and it was incredible what they achieved.”
One reason the restoration project has succeeded, said Mr Whetter, was that the machines were built by the Post Office’s research lab at Dollis Hill.
The parts used were those typically found in telephone exchanges, he said.
“Those parts were in use from the 1920s to the 1980s when they were replaced by computer-controlled exchanges,” he said.
Former BT engineers and workers involved with The National Museum of Computing have managed to secure lots of these spare parts to help with restoration projects, said Mr Whetter.
The next restoration project being contemplated is that of the Heath Robinson machines, which were used to find SZ42 settings before the creation of Colossus.
That, said Mr Whetter, might be even more of a challenge.
“We have even less information about that than we had on Tunny,” he said.
Article source: http://www.bbc.co.uk/go/rss/int/news/-/news/technology-13566878
We’ve had another good week with lots of new business coming in and some new projects launched. We’ve taken over a Joomla site that had been hacked, rebuilt the database and relaunched it on our own servers. The site was more than likely hacked due to poor password security: if your website is called www.mybusinessname.com, don’t set your password to businessname123; it’s just an easy target!
We’ve signed ourselves up to some new clustered hosting this week which means we can offer a more robust, faster hosting service. Look out for more details of this.
Our search engine optimisation services are continuing to produce some great results for our clients. If you’re looking to improve the Google rankings of your site then get in touch and let us show you how we can make a difference to your listings.
We’ve launched a new site for Charlcombe Homes which can be found at www.charlcombehomes.co.uk so why not go take a look!
26 May 2011
Last updated at 07:47
On average, mobile download speeds were found to be quickest on the O2 network
Mobile broadband provided by O2 loads webpages faster than any other UK network, research by Ofcom has found.
The regulator carried out 4.2 million speed tests across the country.
It found the average download speed across all networks was 1.5 megabits per second (Mbps), rising to 2.1Mbps in better coverage areas.
The report said speed varied greatly depending on location, and that consumers should check coverage before signing up to tariffs.
Orange fared worst in the research with its average download speeds slower than any other network.
T-Mobile also came out slower than Vodafone, 3 and O2.
O2’s chief technology officer Derek McManus said: “Our customers are seeing the benefit from the huge investment we have made in our network. We always aim to deliver the best network experience for our customers and these results are another indicator that we are doing just that.”
Everything Everywhere – the name given to the partnership between T-Mobile and Orange – declined to comment on Ofcom’s findings.
The report, carried out in conjunction with monitoring specialists Epitiro, ran from September to December last year and dealt with datacards and dongles, but not smartphones.
Ofcom said it hopes to run tests on smartphones soon.
As well as achieving success in the download speed tests, O2 also recorded a lower average latency than 3, Orange and Vodafone.
Latency is the time it takes for a data packet to travel from a user’s PC to a third-party server and back again.
Ofcom chief executive Ed Richards said: “This research gives consumers a clearer picture of the performance of mobile broadband dongles and datacards as consumers use these services to complement fixed-line services or sometimes as their principal means of accessing online services.”
Consumer research showed that 17% of UK homes are now using mobile broadband to access the internet.
Of these, 7% use it as their only means of getting online – a four percentage point rise since 2009.
The research discovered the average download speed for consumers was 1.5 Mbps, which produced an average load time of 8.5 seconds for a “basic” webpage.
This compared to an average of 6.2Mbps for fixed-line broadband, Ofcom found.
However, in areas with good 3G coverage, Ofcom found the average mobile speed rose to 2.1Mbps, dropping to 1.7Mbps at the peak time of 8-9pm.
On the whole, urban areas performed better than rural areas due to better 3G availability.
The report noted that coverage in cities was highly variable “with no guarantee of good performance” in city centre locations.
Hamish Macleod, chairman of the Mobile Broadband Group, told the BBC that he felt the report painted an unfair picture of mobile broadband by comparing it with fixed-line speeds.
“We recognise this is a useful exercise for Ofcom to do.
“Where I am at issue with Ofcom is the way they have made headline comparisons between fixed broadband and mobile broadband just by using averages.
“It’s clear from the research that mobile broadband is a good service, that individual customers can either use it as a complement to fixed broadband or alternatively as a reliable stand alone service.”
But not everyone agreed that mobile broadband is a viable alternative to fixed line services.
Charlie Ponsonby, chief executive of comparison service SimplifyDigital, said: “The Ofcom report confirms what our customers tell us every day – that mobile broadband is no great substitute for home broadband. It is on average about three times slower than a standard home broadband connection and often offers very limited data usage, relative to a home broadband connection.”
Mobile broadband speeds will remain well below fixed broadband speeds until the next generation of mobile coverage – 4G – is rolled out across the UK, a process expected to begin in 2013.
Everything Everywhere will start the first public trial of 4G in September this year.
Consultation has begun on how 4G spectrum will be allocated to operators, with an auction due to open early next year.
Article source: http://www.bbc.co.uk/go/rss/int/news/-/news/technology-13544197
26 May 2011
Last updated at 16:24
Many people turned to Twitter to report problems logging into the net-based call service
Skype has moved quickly to fix problems that hit users around the world.
Many people began reporting problems making calls via the net-based phone system earlier in the day.
The problem did not seem confined to one group, with users on machines running Windows, OS X and Linux all reporting trouble.
Skype issued advice about how to get its service going, while it worked on a permanent fix.
Messages about problems getting Skype to start began appearing on social networking sites such as Twitter soon after the company sent out a software update.
The update made it impossible for many people to sign in and make calls.
Skype posted an update about the outage to its blog, saying a “small number” of people had been affected and detailing how to get the service running again.
Skype said the problem predominantly affected Windows users, but it also posted advice for OS X and Linux users. All the solutions revolved around the deletion of a file called “shared.xml”.
It also said it had identified the problem and would issue a fix “in the next few hours”.
The large number of people turning to the Skype.com website for advice and information also briefly knocked that offline.
The outage comes two weeks after Microsoft confirmed that it was paying $8.5bn (£5.2bn) for the firm.
The swift response contrasts with Skype’s handling of the problems that plagued it in December 2010, which left the service offline for almost two days.
An investigation showed that a software bug and overloaded servers were responsible for that incident.
Article source: http://www.bbc.co.uk/go/rss/int/news/-/news/technology-13565864
26 May 2011
Last updated at 11:37
The fake security software has caught out many Apple computer owners
Apple is releasing a security update that removes the fake antivirus software that has caught out thousands of Mac users.
Once installed, the fake MacDefender, MacProtector and MacSecurity programs pretend to scan a machine and then ask for cash to fix non-existent problems.
The gang behind the programs used search sites to help catch people out.
The clean-up comes as the creators of the fake programs release a version that is harder to avoid.
In a message posted to its support forums, Apple has warned users about the fake security software, also known as scareware.
It said a phishing scam had targeted Mac users by redirecting them to sites warning that their machines were infected with viruses.
Apple said it would release an OS X update soon to find and remove MacDefender and its variants. The message also gave advice on how to remove the software for users who had already fallen victim.
MacDefender and its variants are thought to have caught some people out because the default security settings on the Safari browser allow the software to download and queue itself for installation.
Those who install it can end up paying more than $70 (£43) to remove the non-existent viruses the scareware claims to have found.
As Apple was releasing its fix for MacDefender, the gang behind it had started distributing a new version.
Like older versions, the new one – called MacGuard – is being spread by tying it to popular phrases typed into search engines.
MacGuard also gets round one of the factors that limited the spread of MacDefender, as it no longer needs a user’s permission to be installed.
Security firm Intego issued a warning about the variant and said those who use the Safari browser should disable a setting that lets “safe” files be installed automatically.
Article source: http://www.bbc.co.uk/go/rss/int/news/-/news/technology-13560137
There are 22 million small and medium businesses in the world, and the last thing that Intel wants them to do is say to hell with servers and move all their workloads and data to public clouds.
So Intel has come up with a clever scheme that will let service and application providers cloudwash their traditional SMB setup – i.e. move it to the cloud without really moving it to the cloud.
The AppUp Small Business Service – which Intel rolled out on Tuesday with its server, service provider, and software partners – runs atop the company’s Hybrid Cloud platform. It tries to give everyone the best of the on-premises and cloud worlds by mixing them together. Perhaps more importantly, the AppUp service will allow server makers, hosting companies, and application makers to share in the wealth that can be mined from SMBs that are sick of managing applications and servers as well as paying for iron and perpetual software licenses every couple of years.
Luckily for Intel, most SMB shops are not quite ready to move their data and applications to a public cloud, especially after Amazon’s multi-day outage in its Virginia data center last month and the many public security breaches and hack attacks reported in the press on a daily basis.
That said, SMBs don’t want to compute like it’s 1999, either. They like the idea of pay-per-use software pricing but, according to Boyd Davis, vice president of marketing at Intel’s Data Center Group, service providers and their application provider partners do not have a consistent means of metering software usage and packaging up applications for easy consumption by SMBs.
“We encourage people to install hardware and packaged applications,” says Davis. “That is a perfectly valid thing to do. But not everyone has access to capital.”
Intel could wait around for someone to create such a framework to allow software providers and SPs to work together to dispatch server applications down to SMB sites, where companies could run the applications locally but have someone else manage and monitor the systems. Instead, Intel created the Hybrid Cloud, an online catalog and brokering system for cloudy applications that service providers and application makers can use to push applications down over the Internet to virtualized servers that run under the desks and in the closets of SMBs – right where SMBs like to keep their data.
There’s nothing all that special about the servers that work in conjunction with the Hybrid Cloud application and service-brokering service and the AppUp software catalog, except that they need to be certified to support the Intel software stack, and they need to have certain chip and chipset features to work.
Sorry, AMD, but you are not invited to the AppUp party.
Required features include Intel’s Trusted Execution Technology (TXT), which is embedded in the “Westmere” and “Sandy Bridge” Xeon processors and used to ensure secure downloading and execution of applications that are packaged up in virtual machines. Also required is Active Management Technology (AMT), which allows for servers to be remotely administered no matter how badly operating systems or hypervisors are behaving. You need the Virtualization Technology (VT) extensions to support hypervisors, of course.
For the moment, Intel is using the XenServer hypervisor from Citrix Systems and Xen images to package up applications; it chose Xen first because it was able to modify kernel-mode drivers in the hypervisor.
But Intel says that it will be hypervisor-agnostic in the future. It stands to reason that the freebie ESXi hypervisor from VMware will make it onto the servers used in conjunction with the AppUp service at some point, and ditto for Microsoft’s Hyper-V and Red Hat’s KVM.
At the moment, Intel is offering a single-socket Xeon 3460 whitebox server with 16GB of memory, dual Gigabit Ethernet ports, and up to six 1TB disks in a RAID 5 array as the application server in the AppUp service. Lenovo has also certified two of its ThinkServer TS200v configurations – an entry machine with a Xeon 3450, 4GB of memory, and a single 500GB disk, and a standard machine with a Xeon 3460, 8GB or 16GB of memory, one 128GB solid state drive for the OS and two 1TB disks for data – for use in the AppUp service as well.
Intel also says that Lenovo will put a two-socket option into the field, and other vendors including Acer and NEC have agreed to participate, as well. HP, Dell, IBM, and Fujitsu were probably invited to the AppUp party, but as of yet have made no public commitments.
Software appliances can be fired up with Microsoft’s Small Business Server 2008 and Windows Server 2008, and there are a bunch of applications in the AppUp catalog to run on top of the hypervisor and in a guest VM, including firewall, antivirus, backup, disaster recovery, VoIP and PBX telephony, and selected application software such as Intuit’s QuickBooks and Microsoft’s SharePoint and Exchange Server. Intel is obviously keen on building out the AppUp catalog.
Here’s how the AppUp service works. You go to a service provider and ask to sign up for AppUp. Depending on the application stack, the service provider sizes up a machine and gives you a three-year lease on the box with a monthly payment. The SP sets up the apps on the box, ships it to you, and uses the Intel Hybrid Cloud to monitor and manage those apps remotely.
Intel’s cloud broker actually collects money from the service provider for the applications, and then pays suppliers for the software licenses and for the server that gets plunked down into your site. The idea is to bill for software as it is used on a monthly basis. If you uninstall software, you aren’t billed for it the following month, and as you add new software to an existing server, the VMs are passed down from the Hybrid Cloud running in Intel’s data center to the hypervisor running on your server.
Everybody gets a piece of the action, and Intel gets to keep selling Xeon processors to SMBs.
The AppUp service is being launched in North America and India right now, and will be expanded to other countries as service providers and application providers help Intel work out the kinks. ®
26 May 2011
Last updated at 06:30
Shiro Kondo is the president and chief executive of the Ricoh Group
Japanese office equipment maker Ricoh has said it plans to cut its global workforce by about 10,000 people in order to reduce costs.
The company currently has about 110,000 employees around the world.
Shares in Ricoh surged more than 7% on the news.
The company, which makes copiers and cameras, was hit by the global financial crisis and is struggling to recover.
“We have become a big company and need to re-engineer our corporate structure throughout to become more muscular,” said Shiro Kondo, president and chief executive.
“We have done very little pruning of unprofitable businesses, and we need to pull out of some.”
Ricoh, which is based in Tokyo, expects the job cuts will cost around 60bn yen ($733m; £449m) over two years.
But the measures are expected to boost operating profit by 140bn yen within three years.
The company has been hurt by a stronger yen.
The earthquake and tsunami that hit Japan on 11 March damaged some of Ricoh’s facilities.
Japanese companies are finding it hard to compete against lower-priced rivals from South Korea and China.
In April, Panasonic, Japan’s top consumer electronics maker, announced 17,000 job cuts around the world, also in an effort to reduce costs.
Panasonic expects to have a workforce of 350,000 people after wide-ranging reforms ending in March 2013.
It said operations at factories hit by the recent Japanese earthquake were recovering steadily, but disruptions in its supply chain were still affecting output.
Article source: http://www.bbc.co.uk/go/rss/int/news/-/news/13555788
26 May 2011
Last updated at 00:01
The EU and UN have frozen assets of the Libyan leader Col Gaddafi and members of his family
Some of the biggest and best-known financial institutions in the world held billions of dollars of Libyan state funds, a leaked report has revealed.
Principal among them were HSBC, Royal Bank of Scotland, Goldman Sachs, JP Morgan Chase, Nomura and Societe Generale, Global Witness said.
The banks refused to say whether they held, or are still holding, the funds.
All the assets have now been frozen by the European Union and United Nations.
The document, dated June 2010, showed that HSBC held $292.7m (£179.9m) in 10 cash accounts, with a similar amount invested in a hedge fund, while Goldman Sachs had $43m in three accounts.
Almost $4bn was held in investment funds and structured products, with Societe Generale alone holding $1bn.
Japanese bank Nomura and Bank of New York also held $500m each.
A much larger proportion of the Libyan Investment Authority’s assets – $19bn in total – was held by Libyan and Middle Eastern banks, the document revealed.
It also showed that the Libyan Investment Authority (LIA) holds billions of dollars in shares in global corporations such as General Electric, BP, Vivendi and Deutsche Telekom.
It had already been widely reported that the fund held stakes in UK publishing group Pearson, Italy’s Unicredit bank and industrial group Finmeccanica, as well as Canadian oil exploration group Verenex.
“It is completely absurd that HSBC and Goldman Sachs can hide behind customer confidentiality in a case like this,” said Charmian Gooch, director of campaigning group Global Witness.
“These are state accounts, so the customer is effectively the Libyan people and these banks are withholding vital information from them.”
Established in 2006, the LIA holds about $70bn of assets and is the 13th largest sovereign wealth fund in the world, according to the Sovereign Wealth Fund Institute.
The fund, built on Libya’s oil wealth, scores two out of 10 on the institute’s transparency ranking.
Earlier this month, the EU extended its economic sanctions against Libya to include the LIA and the country’s central bank.
It had already frozen assets of Libyan leader Muammar Gaddafi and some members of his family.
Article source: http://www.bbc.co.uk/go/rss/int/news/-/news/business-13553259
24 May 2011
Last updated at 11:11
April’s borrowing figure was higher than analysts expected
The UK saw its worst April on record for public sector net borrowing last month as tax receipts fell, the Office for National Statistics said.
Public borrowing, excluding financial interventions such as bank bail-outs, hit £10bn, compared with £7.3bn the previous year.
The ONS said tax receipts in April last year were boosted by a one-off bank payroll tax which raised £3.5bn.
April’s figure was higher than many analysts’ expectations of about £6.5bn.
Economists said the figures were surprisingly disappointing.
“The public finances have got off to a pretty bad start this year,” said Hetal Mehta, at Daiwa Capital Markets. She warned that the position could worsen if economic growth was weaker than expected.
Samuel Tombs, at Capital Economics, said he believed the government would struggle to meet its borrowing forecasts this year.
On the face of it, these are embarrassing figures for a government embarking on a deficit reduction programme: it has begun the financial year with record borrowing, at nearly £10bn in April, up from just over £7bn a year earlier. But the Treasury argues that whereas the April 2010 figures benefited from the one-off bank bonus tax payment, the new bank levy is spread more evenly across the year. And there was brighter news for George Osborne, with last year’s borrowing total now revised down by almost £2bn.
However, Mr Tombs added: “Nonetheless, these are just one set of figures and the trend in borrowing should improve as more of the spending cuts kick in later this year.”
There was some good news for the government as borrowing figures for the year to March 2011 were revised downwards to £139.4bn, from £141.1bn.
The revision was mainly due to the tax take being boosted by the rise in VAT from 17.5% to 20%, the ONS said.
But the higher-than-expected borrowing in April pushed the government’s debt to a record £910.1bn, or 60.1% of GDP.
A spokesman for the Treasury said: “One-off factors affected borrowing, but it is clear from the downward revision to last year’s borrowing figures that the government’s deficit reduction strategy is making headway in dealing with our unsustainable deficit.”
Government spending in April was 5% higher than a year ago at £54.1bn.
This was mainly caused by a 26% rise in interest payments to £1bn as the government services its debts.
David Kern, chief economist at the British Chambers of Commerce, said it was clear that the government’s plan to reduce the deficit by more than £20bn over the year was proving difficult.
But he said the government must press on with its plans. “The fragility of the economic recovery is creating a difficult backdrop, but the government must not deviate from its strategy to restore stability in the UK’s public finances,” he said.
“Businesses support the measures being taken to reduce the deficit, and the emphasis should be on spending cuts rather than tax increases,” Mr Kern said.
Article source: http://www.bbc.co.uk/go/rss/int/news/-/news/business-13519792