
Thursday, November 29, 2012

Adobe could unveil Retina version of Photoshop CS6 on Dec. 11


Adobe has a few tidbits in store for its Create Now Live event on December 11. Could a peek at the Retina version of Photoshop be one of them?

Adobe Systems is hosting a free online event on December 11 where it may reveal the new Retina edition of its flagship Photoshop program.

One of the topics of the Create Now Live event invites participants to "See what's next in Adobe Photoshop." And a YouTube video promoting the Photoshop presentation appears to show someone using the software on a Retina Display MacBook Pro.

That video clip has led Japanese blog site Macotakara and others to speculate that Adobe will show off a new update of Photoshop CS6 designed to support the high-resolution display on the 13-inch and 15-inch Retina MacBook Pros.

This past August, Adobe announced that it would update Photoshop and other Creative Suite 6 products to support Apple's Retina Display. The company promised the update to Photoshop would arrive this fall. And since fall ends December 20, it seems a fair bet that the Retina version of the famed photo editor will be part of the event's agenda.

CNET contacted Adobe for comment and will update the story if we receive any information.

Create Now Live was originally scheduled for December 5 but was pushed back to December 11.

The event will run from 10:00 a.m. to 1:30 p.m. PT, and will be live-streamed on the Adobe Creative Cloud Facebook page.

Anyone who wants to attend the virtual event can sign up at Adobe's registration page. Beyond presentations on Photoshop and Adobe Creative Cloud, the event will kick off with a keynote speech by Jeffrey Veen, vice president of Adobe products, and close out with a Q&A session.

Source: http://cnet.co/TqWXvy

Apple zaps Thunderbolt glitches with firmware update


The small update fixes communications problems with some Thunderbolt devices.

Apple has released a new update for 2012 MacBook Pro systems that fixes problems with the handling of bus-powered Thunderbolt devices.

Thunderbolt is the next-generation I/O technology that Apple is implementing in its Mac systems, which allows very high-bandwidth communication between devices, and also allows for expansion of the PCIe (PCI Express) bus as well as carrying the DisplayPort signal for external monitors.

As Thunderbolt is relatively new, some bugs are bound to crop up in various implementations, and with the MacBook Pro systems produced in mid-2012 it's been found that some bus-powered Thunderbolt devices may not work properly.

To tackle this, Apple has issued a small firmware update that should allow bus-powered devices to communicate properly with these systems. The update is a 442KB download that should be available through the Mac App Store's Software Update service (available by selecting Software Update in the Apple menu). Alternatively, you can download the updater from Apple's Firmware Update v1.1 Web page and install it manually.

The update requires OS X 10.7.4 or later and will restart the computer during installation; be sure the system is connected to a reliable power source and that the installation process is not interrupted.

While Software Update should determine whether your system qualifies for the update, you can also check manually: choose "About This Mac" from the Apple menu, then click "More Info..." to see your Mac's model. If "Mid 2012" is listed under the type of Mac it is, then the update applies to your system. Alternatively, you can download the updater and run it; if your system needs the update, the installation will proceed, and if not, the installer will alert you that your system does not qualify.
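If you're comfortable in Terminal, the same checks can be done from the command line; the following is a minimal sketch using standard OS X tools (the model identifier reported will vary by machine):

sw_vers -productVersion                                        # should report 10.7.4 or later
system_profiler SPHardwareDataType | grep "Model Identifier"   # e.g. MacBookPro9,1, MacBookPro9,2 or MacBookPro10,1 for Mid 2012 models

If the OS version and a Mid 2012 model identifier both check out, the firmware update should apply to your system.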

Saturday, November 24, 2012

Google Launches Groups Migration API To Help Businesses Move Their Shared Mailboxes To Google Apps


Many businesses use shared mailboxes, public folders and discussion databases that, over the years, accumulate a lot of institutional knowledge. Most cloud-based email services don’t offer this feature, though, making it hard for some companies to move away from their legacy systems. But Google’s new Google Apps Groups Migration API now allows developers to create tools to move shared emails from any data source to their internal Google Groups discussion archives.

Setting up this migration process is likely a bit too involved for a small business without in-house developers, but it is very flexible and, as Google notes, “provides a simple and easy way to ‘tag’ the migrated emails into manageable groups that can be easily accessed by users with group membership.”

The Migration API is limited to 10 queries per second per account and half a million API requests per day. The maximum size of a single email, including attachments, is 16MB.
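Under the hood, a migration tool simply uploads each legacy message to the target group's archive as a standard RFC 822 file. As a rough sketch -- assuming the API's media-upload endpoint, a group address of shared-mailbox@example.com, and a valid OAuth 2.0 access token in $ACCESS_TOKEN (all placeholders) -- a single message could be pushed like this:

curl -X POST \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "Content-Type: message/rfc822" \
  --data-binary @archived-message.eml \
  "https://www.googleapis.com/upload/groups/v1/groups/shared-mailbox@example.com/archive?uploadType=media"

A real tool would loop over the legacy mailbox, throttle itself to stay under the 10-queries-per-second limit, and keep each message, attachments included, below the 16MB cap.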

This new API is mostly a complement to the existing Google Apps Provisioning API, which helps businesses create, retrieve and update their users’ accounts on the service, as well as the Google Apps Groups Settings API. The Provisioning API also includes a number of methods to work with Google Groups, but doesn’t currently feature any tools for migrating existing emails and accounts to the service.

Source: http://tcrn.ch/TconVC

No-Sweat Windows 8 Networking


You can share pictures, music, videos, documents and printers on your home or small business network by setting up a HomeGroup. You can select the libraries -- e.g., My Pictures or My Documents -- that you want to share, and you can restrict certain folders or files to prevent sharing. You can set up a password to prevent others from changing the share settings you designate.

Easy sharing and streaming of media and other assets among computers at home or in the workplace is a good reason to get networked. Other benefits include sharing Internet connections.

If you've been using email to pass files around among family members or a small business, or are simply taking the Windows 8 plunge, Microsoft has never made networking easier -- particularly if you're migrating from Windows 7 or setting up an all-Windows 8 network.

If you're already sharing an Internet connection, much of the work is completed for you.

Step 1: Connect Your Windows 8 PCs to the Internet

Connect each PC either wirelessly or with a network cable, depending on the capabilities of the router. Wired connections are more robust.

Open the Settings charm on the Windows 8 PC by moving your mouse pointer to the lower right corner of the desktop and clicking on the "Settings" cogwheel-like icon. Then click on the wireless network icon -- it looks like a set of mobile phone signal bars -- or the wired network icon that looks like a small monitor screen.

Choose the network available and click "Connect." Enter the appropriate network security key if prompted.

Tip: Inspect your Internet connection device to determine the capabilities. A router will have a number of physical ports that can be used for a wired connection; an antenna is a dead giveaway that there are wireless capabilities. Some wireless routers don't have external antennas. If there's only one port, it's a modem, not a router, and you'll need to buy a router.

Step 2: Turn Sharing On

Turn network sharing on by right-clicking on the wireless or wired network icon from the previous step. Then click "Turn sharing on or off" and choose "Yes, turn on sharing and connect to devices."

Perform this step on each of the computers to be networked.

Tip: Since sharing exposes files and other assets, be sure you trust the other network users.

Step 3: Verify the PCs Are Added

Sign on to any of the PCs and point to the bottom right corner of the Start page, as before. Then enter the search term "Network" in the Search text box.

Click on the Network icon that will appear, and you'll see the PCs that have been shared.

Step 4: Join or Create a HomeGroup

Windows 8 automatically creates a HomeGroup, which is a set of PCs that are allowed to share files and printers.

Choose "Change PC Settings" from the cog-like "Settings" charm you used earlier and then click on HomeGroup. Then create or join a HomeGroup. You can obtain any existing passwords from the same settings location on any existing PCs that are already part of the HomeGroup you want to join.

Tip: To obtain an existing HomeGroup password from a Windows 7 machine, go to the Control Panel on that PC and choose "Network and Internet." Then choose HomeGroup and select "View or print the HomeGroup password."

Enter that password on the new Windows 8 machine to join the new machine to the legacy Windows 7 HomeGroup.

Step 5: Choose the Libraries and Devices to Share

Select the libraries, folders, and devices you want to share when you join or create the HomeGroup, and perform this step on each of the computers you want to network.

Tip: Windows RT, the tablet version of Windows 8, won't allow you to create a HomeGroup -- you can only join one.

Step 6: Tweak the HomeGroup Settings

Choose and change the permissions on libraries and devices you want to share or not share by returning to the HomeGroup PC Settings area on the Windows 8 PC you want to tweak -- or in the case of a Windows 7 PC, the HomeGroup options area of the Network and Internet section of the Control Panel.

Tip: Use the search text box function from the earlier step to look for individual files or folders you want to share. Choose the asset, open the file location, and then click the "Share" tab to make permissions changes.

Source: http://bit.ly/UrJYsn

New internet name objections filed by government panel


More than 250 "early" objections to proposed new internet address endings have been filed by a panel representing about 50 of the world's governments.

The list includes references to groups of people or locations including .roma, .islam, .patagonia, .africa and .zulu.

There are also concerns about proposed suffixes covering broad sectors such as .casino, .charity, .health, .insurance and .search.

The list was submitted ahead of a planned rollout next year.

The panel - known as the Government Advisory Committee (Gac) - has published its "early warning" list on the web to give applicants a chance to address its concerns or choose to withdraw their submission and reclaim 80% of their $185,000 (£116,300) application fee.

Gac will then decide, at a meeting in Beijing in April, which of the suffixes warrant formal complaints if its concerns remain unresolved.

Each warning on the list makes reference to the country which filed the objection. A suffix was only added to the register if no other members of Gac objected to its inclusion.

Anti anti-virus
The organisations and suffixes referred to on the list included:


  • Amazon for its applications for .app, .book, .movie, .game and .mail among others.
  • Google for .search, .cloud and .gmbh (a reference to a type of limited German company).
  • Johnson & Johnson for .baby.
  • L'Oreal for .beauty, .hair, .makeup, .salon and .skin.
  • The Weather Channel for .weather.
  • Symantec for .antivirus.
  • eHow publisher, Demand Media, for .army, .airforce, .engineer and .green.

Despite the large number of objections, Icann (Internet Corporation for Assigned Names and Numbers) - the internet name regulator in charge of the rollout - has indicated that it still believed it would be able to release the first suffixes for use by May 2013.

The organisation does not have to comply with the governments' wishes, but must provide "well-reasoned arguments" if it decides to deny a request to reject an application.

Competition concerns
A range of reasons were given for the early warnings.

France raised concerns about seven organisations that had applied for .hotel or .hotels on the grounds that it believed that if the suffix was introduced it should be reserved for hotel businesses.

"The guarantee of a clear information of the customer on hotel accommodation services is the best way to promote the tourism industry," it said. "Behind the term hotel as a generic denomination, any customer in the world must have the guarantee that will be directly connected to a hotel."

It is feasible that some of the applicants might be able to give this guarantee, allowing the ultimate owner of the suffix to profit by charging individual businesses to register their hotel websites.

But other issues may be harder to resolve.

For example, Australia objected to Amazon's application for the Japanese-language suffix meaning "fashion" on the grounds it might give the firm an unfair advantage.

"Restricting common generic strings for the exclusive use of a single entity could have unintended consequences, including a negative impact on competition," the country's government wrote.

Religions and reputations
An objection by the United Arab Emirates to Asia Green IT System's application for .islam may also be impossible to reconcile.

"It is unacceptable for a private entity to have control over religious terms such as Islam without significant support and affiliation with the community it's targeting," it said.

"The application lacks any sort of protection to ensure that the use of the domain names registered under the applied for new gTLD (generic top-level domain) are in line with Islam principles, pillars, views, beliefs and law."

Other problems stemmed from more commercial concerns.

For example, Samoa is opposed to three applications for the suffix .website on the grounds that it is too similar to the .ws suffix it already controls, which provides the South Pacific country with revenue.

Corporate reputations emerged as another sticking point.

Australia has challenged applications for .gripe, .sucks and .wtf on the basis they had "overtly negative or critical connotations," which might make businesses feel they have to pay to register their brands alongside the suffix to prevent anyone else from doing so.

The only address that the UK filed an objection to was .rugby.

It objected to two applicants which it said did "not represent the global community of rugby players, supporters and stakeholders".

The UK suggested the proposals be rejected in favour of a third submission for the suffix from the International Rugby Board.

Source: http://bbc.in/SRB1c0

Intel's Two-Pronged Evolution


Intel's new Itanium 9500 and Xeon Phi coprocessors are impressive, evolutionary steps for the company -- and they cast considerable light on how Intel plans to keep delivering innovative solutions for both its core and emerging markets.

It's hard to think of an IT vendor with a stronger leadership position than Intel, but the company is having trouble shaking off the perception that it is on the ropes or headed for disaster in some of its core markets.

On one hand, Intel's mission-critical Itanium platform suffered when Oracle and HP publicly butted heads in an altercation that ended up in court. On the other, the use of graphics processors in high-performance computing and supercomputing applications has caused some to doubt Intel's future in those markets.

Both of these issues are reflected in the company's new Itanium 9500 and Xeon Phi announcements, though the light each casts on Intel is significantly different.

Partner Problems

Regarding Itanium, Intel was stuck between the rock and hard place of two significant partners -- HP and Oracle -- when the latter claimed the platform was headed toward demise. Both HP -- by far, the largest producer of IA-64 systems -- and Intel denied this vociferously. In fact, Oracle's claims contradicted numerous Itanium roadmaps and publicly stated strategies.

However, Oracle refused to back down and said it would not develop future versions of its core database products for the platform. HP fought back, noting an agreement Oracle signed after hiring its former CEO, Mark Hurd, and in August the judge overseeing the case issued a ruling supporting HP. Though Oracle said it will appeal, it also resumed Itanium development and support.

So where do things stand today? Along with providing a significant performance boost over previous-generation processors -- a point that will please loyal IA-64 customers and OEMs -- Intel's new Itanium 9500 is also likely to bolster HP's claims against Oracle and the case for the platform's health and well-being. That isn't just because of the Itanium 9500's capabilities, which are formidable, but also due to Intel's new Modular Development Model, which aims to create common technologies for both the Itanium and Xeon E7 families.

That will certainly add to the mission-critical capabilities of Xeon E7 and it should also take a significant bite out of the cost of developing future Itanium solutions. In the end, not only does Intel's Itanium 9500 deliver the goods for today's enterprises; it also represents a significant advance in Itanium's long term prospects.

The High End

A completely different corner of the IT market -- HPC and supercomputing -- is at the heart of Intel's new Xeon Phi coprocessors. The latest top-ranked system on the Top500.org list, Titan at the DOE's Oak Ridge National Laboratory, is based on AMD Opteron CPUs and NVIDIA GPUs. It is certainly the most prominent example of the trend toward using GPUs for parallel processing chores, but other systems using similar technologies are also cropping up.

A curious thing about supercomputing is that while these systems deliver eye-popping performance and bragging rights for owners and vendors, their immediate effect on mainstream computing is less significant. Sure, such technologies eventually find their way into commercial systems, but those are typically not high-margin products for OEMs and many customers -- particularly those in the public sector, such as universities -- aren't exactly rolling in dough.

In fact, maximally leveraging existing resources, including the knowledge and training of programmers, technicians and managers, is crucial for keeping these facilities up, running and solvent.

That's a key point related to the Xeon Phi coprocessors, the first commercial iteration of Intel's longstanding MIC development effort. Not only do the new Xeon Phi solutions deliver impressive parallel processing capabilities, they do so in a notably efficient power envelope.

Power Saver

The Intel Xeon and Xeon Phi-based Beacon supercomputer at the University of Tennessee is the most energy-efficient system on the latest Top500.org list. More importantly, however, Xeon Phi supports programming models and tools that are common in Intel Xeon-based HPC and supercomputing systems.

That will be welcome news to the owners of high-end Xeon-based systems, which constitute more than 75 percent of the current Top500.org list, but the effect should also ripple downstream into the commercial HPC and workstation markets, benefiting end users, their vendors and developers.

Overall, Intel's new Itanium 9500 and Xeon Phi coprocessors are impressive, evolutionary steps for the company and myriad current customers. However, the new processors also cast considerable light on how Intel will succeed in developing and delivering innovative solutions for core existing and emerging new markets.

Source: http://bit.ly/S21Agw

Mozilla quietly ceases Firefox 64-bit development


Mozilla's engineering manager has requested that developers stop work on Windows 64-bit builds of Firefox.

Mozilla engineering manager Benjamin Smedberg has asked developers to stop nightly builds for Firefox versions optimized to run on 64-bit versions of Windows.

Smedberg proposed the move in a developer thread titled "Turning off win64 builds," posted to the mozilla.dev.planning discussion board on Google Groups.

Calling 64-bit Firefox a "constant source of misunderstanding and frustration," the engineer wrote that the builds often crash, that many plugins are not available in 64-bit versions, and that hangs are more common because the 64-bit builds lack the code needed for plugins to function correctly. In addition, Smedberg argues that this makes users feel "second class," and that crash reports from the 32-bit and 64-bit versions are difficult for the stability team to tell apart.

Users can still run 32-bit Firefox on 64-bit Windows.

Although originally willing to shelve the idea for a time if it proved controversial, Smedberg later, well, shelved that idea:

Thank you to everyone who participated in this thread. Given the existing information, I have decided to proceed with disabling windows 64-bit nightly and hourly builds. Please let us consider this discussion closed unless there is critical new information which needs to be presented.

The engineer then posted a thread titled "Disable windows 64 builds" on Bugzilla, asking developers to "stop building windows [sic] 64 builds and tests." These include the order to stop building Windows 64-bit nightly builds and repatriate existing Windows 64-bit nightly users onto Windows 32-bit builds using a custom update.

Heading off further argument -- even though one participant suggested that 50 percent of nightly testers were using the 64-bit builds, perhaps because an official 64-bit version of Firefox for Windows has never been released -- Smedberg said the bug was "not the place to argue about this decision, which has already been made."

Source: http://cnet.co/T6YHtm

Wednesday, November 21, 2012

Apple may name its next version of OS X 'Lynx'


Apple will continue its catty theme of OS X names by referring to OS X 10.9 as Lynx, claims a new report.

Mac OS X 10.9 could be dubbed "Lynx," says blog site AppleScoop.

The rumor sounds plausible. It would continue Apple's trend of naming its OS after ferocious felines. Over the past few years, OS X has leaped from Leopard to Snow Leopard to Lion and then to Mountain Lion.

However, the intel is decidedly second-hand.

The information comes from a "reliable source" who claims to have talked to someone inside Apple. The person reportedly saw some internal papers that indicated Apple was finalizing the name of OS X 10.9. But the source couldn't say when Apple would actually decide on the name or reveal it to the public.

Lynx, however, is on the list of available names for OS X. Way back in 2003, Apple trademarked several related names, including Lynx, Cougar, Leopard, and Tiger, MacRumors reported at the time. The company has already bagged Leopard and Tiger, so both Lynx and Cougar are still available.

A name can sometimes prove tricky, even one that's trademarked. Apple was sued by computer retailer Tiger Direct in 2005 over use of the name Tiger. But the judge eventually found in favor of Apple.

AppleScoop suggests that Apple may unveil its name for OS X 10.9 at next year's Worldwide Developers Conference in June.

Little is known about the next flavor of OS X at this point, though Apple may import a couple of features from iOS. A report out today from 9to5Mac says that OS X 10.9 will include the voice assistant Siri and support for Apple Maps. Based on the increased pace of the last two updates, the new OS could debut next year.

Mac Mini users unable to install OS X 10.8.2


The special 10.8.2 updater for 2012 Mac Mini systems is currently not available from the Mac App Store.

If you own one of Apple's latest 2012 Mac Mini systems and are attempting to update it to the latest version of OS X, you may find the update is not available via the App Store's Software Update service. Furthermore, if you download the manual updater for 10.8.2 and attempt to install it, you will see an error that states: "Error: OS X Update can't be installed on this disk. This volume does not meet the requirements for this update."

The reason for this error is that the 2012 Mac Mini requires a special version of the 10.8.2 update, which Apple pulled from the App Store last Friday and has yet to reissue.

Some have speculated that Apple pulled the update to make way for the upcoming 10.8.3 release, and Apple has indeed removed prior versions of OS X, such as Lion, from the App Store after Mountain Lion was released. That explanation is very unlikely here, however, as 10.8.3 is not yet available and is still being tested. Furthermore, Apple still maintains update packages so older versions of OS X can be brought up to their latest releases.

Apple has not issued an explanation as to why the update was pulled, but the more likely cause is a glitch in the updater or a problem with the App Store, and the update will probably be reissued once the problem is fixed. Until then, affected users will be at a small disadvantage: they will not be able to install other recent software updates that require OS X 10.8.2, such as those for iPhoto and Aperture.

If you have a 2012 Mac Mini system and the OS X 10.8.2 update is not showing up in your Software Update list, then for now the only option is to wait. Do not try to install the manual delta or combo updaters that are available online, as these will not work and will instead result in an error. Hopefully Apple will address the problem ASAP and allow Mac Mini users to get the latest version of OS X.

Source: http://cnet.co/Q5Da7i

How to restart a FileVault-protected Mac remotely


If necessary, you can restart a FileVault-enabled Mac and have it automatically unlock the volume and load the operating system.

OS X's encryption service, FileVault, originally stored users' home folder contents in encrypted disk images. Starting with OS X Lion, FileVault uses Apple's CoreStorage volume manager to encrypt the entire disk. With CoreStorage, the OS sets up a small hidden partition containing a preboot welcome screen that looks like the standard OS X log-in window and lists the user accounts authorized to unlock the volume; authenticating there unlocks the volume, loads the system, and automatically logs in to the account chosen on the preboot screen.

Unfortunately, while FileVault is more secure and offers a relatively seamless experience when you're sitting at your computer, its preboot authentication requirement does pose a bit of a problem for those who access their systems remotely, such as through Screen Sharing (using Back To My Mac) or through SSH and other remote-access technologies.

If you make a configuration change and need to restart the system, the computer will require preboot authentication before the system and any remote-access services load. In effect, this creates a bit of a hurdle for those who wish to keep their systems secure with FileVault but who also want to be able to restart their systems remotely.

Luckily, Apple does provide a way to restart a FileVault-encrypted system and have it boot back to a working state. To do this, open the Terminal and run the following command:

sudo fdesetup authrestart

This command will ask for the current user's password or the recovery key for the FileVault volume, and then store the current user's credentials so when the system is restarted the computer can use these credentials to unlock the volume at the preboot screen. This means when the system reboots it will automatically unlock the volume so the OS will load, dropping you at the standard log-in window so you can log in to the user account of your choice.

This approach to restarting a system is useful if you have made manual changes to a FileVault-protected system, but also if the system has software updates available for it that are automatically installed. While the App Store or Software Update service will prompt you to restart the system, avoiding these prompts and using the above command will apply the updates and restart the system to a usable state for remote access.
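For example, a remote update session over SSH might boil down to just two commands; this is a minimal sketch that assumes an administrator account and updates that need nothing more than a restart to complete:

sudo softwareupdate --install --all   # install all available updates without rebooting yet
sudo fdesetup authrestart             # restart once, skipping the FileVault preboot screen

After the restart the machine comes back up to the regular log-in window, so remote-access services such as SSH and Screen Sharing are reachable again without anyone touching the keyboard.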

In addition to aiding in remote management of a system, this command can be used locally to restart a system without needing to manage the preboot authentication screen again. If you are configuring updates on a local server and simply need to restart it to a working state, then you can issue this command and move on to other tasks instead of having to wait for it to restart and then manually unlock the encrypted boot drive.

This command does require administrative access to run, and you need to know either the password of a FileVault-enabled user account (likely the same admin account) or the recovery key for the FileVault volume that is displayed for you when you enable FileVault. These credentials are stored in memory for the restart process, but are then cleared when the system boots. As a result, while some may have concerns about such commands providing a means around the system's standard security measures, the command should maintain the same security requirements for FileVault.



Monday, November 19, 2012

New malware variant recognizes Windows 8, uses Google Docs as a proxy to phone home


Windows 8 may block most malware out of the box, but there is still malware out there that thwarts Microsoft’s latest and greatest. A new Trojan variant, detected as Backdoor.Makadocs and spread via RTF and Microsoft Word documents detected as Trojan.Dropper, has been discovered that not only adds code to target Windows 8 and Windows Server 2012, but also uses Google Docs as a proxy server to phone home to its Command & Control (C&C) server.

Symantec believes the threat has been updated by the malware author to include the Windows 8 and Windows Server 2012 references, but doesn’t do anything specific for them (yet). This is no surprise: the two operating systems were released less than a month ago but of course they are already popular, and cybercriminals are acting fast.

Yet the more interesting part is the Google Docs addition. Backdoor.Makadocs gathers information from the compromised computer (such as host name and OS type) and then receives and executes commands from a C&C server to do further damage.

In order to do so, the malware authors have decided to leverage Google Docs to ensure crystal clear communications. As Google Docs becomes more and more popular, and as businesses continue to accept it and allow the service through their firewalls, this method is a clever move.

The reason this works is that Google Docs includes a “viewer” function that retrieves the resource at another URL and displays it, allowing the user to view a variety of file types in the browser. In violation of Google’s policies, Backdoor.Makadocs uses this function to access its C&C server, likely in the hope of preventing the link to the C&C from being discovered, since Google Docs encrypts its connections over HTTPS.
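To illustrate the mechanism (with a placeholder URL, not the malware's real traffic), the viewer is normally invoked with a request along these lines; Google's servers fetch the target resource and return the rendered result over an encrypted connection, so the client never contacts the target directly:

curl -s "https://docs.google.com/viewer?url=http://example.com/report.pdf&embedded=true"

From the point of view of a network monitor, all that is visible is an HTTPS session with docs.google.com, which is exactly what makes the trick attractive to the malware's authors.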

Symantec says “It is possible for Google to prevent this connection by using a firewall.” Since the document does not leverage vulnerabilities to function (it relies on social engineering tactics instead) it’s unlikely Google will be able to do much beyond participating in a game of cat and mouse with the malware authors.

Nevertheless, we have contacted Google and Microsoft about this issue. We will update this article if and when we hear back.

Update at 4:30PM EST: “Using any Google product to conduct this kind of activity is a violation of our product policies,” a Google spokesperson said in a statement. “We investigate and take action when we become aware of abuse.”

Source: http://tnw.co/S5ATHu

Microsoft Offers Guide to Adapting Your Site for IE 10



Microsoft’s Windows Phone 8 offers much better HTML5 support than its predecessors thanks to the Internet Explorer 10 web browser.

Unfortunately, if you’ve been building WebKit-centric sites, IE 10 users won’t be able to properly view your site, which is why Microsoft has published a guide to adapting your WebKit-optimized site for Internet Explorer 10.

If you’ve been following CSS best practices, using prefixes for all major browsers along with the unprefixed properties in your code, then there’s not much to be learned from Microsoft’s guide (though there are a couple of differences in touch APIs that are worth looking over).

But if you’ve been targeting WebKit alone, Microsoft’s guide will get your sites working in IE 10, WebKit, and other browsers that have dropped prefixes for standardized CSS properties.

Sadly, even some of the largest sites on the web are coding exclusively for WebKit browsers like Chrome, Safari and Mobile Safari. The problem is bad enough that Microsoft, Mozilla and Opera are planning to add support for some -webkit prefixed CSS properties.

In other words, because web developers are using only the -webkit prefix, other browsers must either add support for -webkit or risk being seen as less capable even when they aren’t. So far Microsoft hasn’t followed through and actually added -webkit support to any version of IE 10, but Opera has added it to its desktop and mobile browsers.

Microsoft’s guide to making sites work in IE 10 for Windows Phone 8 also covers device detection (though it cautions that feature detection is the better way to go) and how to make sure you trigger standards mode in your testing environment, since IE 10 defaults to backward-compatibility mode when used on local intranets.

For more details on how to make sure your site works well in IE 10 for Windows Phone 8, head on over to the Windows Phone Developer Blog (and be sure to read through the comments for a couple more tips).


Wednesday, November 14, 2012

Storage buying guide


Looking for new storage devices and need some tips? You're in the right place.

Information runs the computing world, and handling it is crucial. So it's important that you select the best storage device to not only keep your data, but also distribute it. In this guide, I'll explain the basics of storage and list the features that you should consider when shopping. But if you're ready to leave for the store right now, here are my top picks.

Hard-core users hoping to get the most out of a home storage solution should consider a network-attached storage (NAS) server from Synology, such as the DS1511+, DS412+, or the DS213air. All three offer superfast speeds, a vast number of features, and state-of-the-art user interfaces, and are more than worth the investment.

Alternatively, if you want to make your computer faster, a solid-state drive (SSD) such as the Samsung 830, the Intel 520, or the OCZ Vertex 4 will significantly boost speeds of your current hard-drive-based system.

Do you have information that you just can't afford to lose? Then I'd recommend the heavyweight, disaster-proof IoSafe Solo G3. You can't damage this drive even if you try. However, if you just want to casually extend your laptop's storage space, a nice and affordable portable drive, such as the Seagate Backup Plus, the WD My Passport, or the Buffalo MiniStation Thunderbolt will do the trick.

Now if you want to know more about storage, I invite you to read on. On the whole, there are three main points you should consider when making your list: performance, capacity, and data safety. I'll explain them briefly here. And after you're finished, check out this related post for an even deeper dive into the world of storage.

Performance
Storage performance refers to the speed at which data transfers within a device or from one device to another. Currently, the speed of a single consumer-grade internal drive is defined by the Serial ATA (SATA) interface standard, which determines how fast internal drives connect to a host (such as a personal computer or a server), or to one another. There are three generations of SATA, with the latest, SATA 3, capping at 6Gbps (or some 770MBps). The earlier SATA 1 and SATA 2 versions cap data speeds at 1.5Gbps and 3Gbps, respectively.

So what do those data speeds mean in the real world? Well, consider that at top speed, an SATA 3 drive can transfer a CD's worth of data (about 700MB) in less than one second. The actual speed may be slower due to mechanical limitations and overheads, but you get the idea of what's possible. Solid-state drives (SSDs), on the other hand, offer speeds much closer to the top speed of the SATA standard. Most existing internal drives and host devices (such as computers) now support SATA 3, and are backward-compatible with previous revisions of SATA.
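The back-of-the-envelope arithmetic is straightforward. Treating SATA 3's 6Gbps as roughly 750MBps of raw bandwidth (6,000 megabits divided by 8 bits per byte), a 700MB disc image takes:

echo "scale=2; 700 / (6000 / 8)" | bc   # prints .93 -- just under one second

Real-world transfers are slower once protocol overhead and the drive's own mechanics are factored in, but the calculation shows where the "less than one second" figure comes from.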

Since internal drives are used in most other types of storage devices, including external drives and network storage, the SATA standard is the common denominator of storage performance. In other words, a single-volume storage solution -- one that has only one internal drive on the inside -- can be as fast as 6Gbps. In multiple-volume solutions, there are techniques that aggregate the speed of each individual drive into a faster combined data speed, but I'll discuss that in more detail below.

Capacity
Capacity is the amount of data that a storage device can handle. Generally, we measure the total capacity of a drive or a storage solution in gigabytes (GB). On average, one GB can hold about 500 iPhone 4 photos, or about 200 iTunes digital songs.

Currently, the highest-capacity 3.5-inch (desktop) internal hard drive can hold up to 4 terabytes (TB) or 4,000GB. On laptops, the top hard drive has up to 2TB of space and solid-state drive (SSD) can store up to 512GB before getting too expensive to be practical.

While a single-volume storage solution's capacity will max out at some point, there are techniques to combine several drives together to offer dozens of TB, and even more. I'll discuss that in more detail below.

Data safety
Data safety depends on the durability of the drive. And for single drives, you also have to consider both the drive's quality and how you'll use it.

Generally, hard drives are more susceptible to shocks, vibration, heat, and moisture than SSDs. For your desktop, durability isn't a big issue since you won't move your computer very often. For a laptop, however, I'd recommend an SSD or a hard drive that's designed to withstand shocks and sudden movements.

For portable drives, you can opt for a product that comes with layers of physical protection, such as the LaCie Rugged Thunderbolt, the IoSafe Rugged Portable, or the Silicon Power Armor A80. These drives are generally great for those working in rough environments.

But even when you've chosen the optimal drive for your needs, don't forget to use backup, redundancy, or both. Even the best drive is not designed to last forever, and there's no guarantee against failure, loss, or theft.

The easiest way to back up your drive is to regularly put copies of data on multiple storage devices. Many external drives come with automatic backup software for Windows. Mac users, on the other hand, can take advantage of Apple's Time Machine feature. Note that all external drives work with both Windows and Macs, as long as they are formatted in the right file system: NTFS for Windows or HFS+ for Macs. The reformatting takes just a few seconds. For those on a budget, here's the list of the top five budget portable drives.
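On a Mac, for instance, reformatting an external drive for use with Time Machine takes a couple of Terminal commands (Disk Utility does the same job graphically). This is only a sketch -- disk2 is a placeholder, and erasing the wrong disk destroys its contents, so confirm the identifier with the first command before running the second:

diskutil list                             # find the external drive's identifier (disk2 below is an example)
diskutil eraseDisk JHFS+ "Backup" disk2   # reformat it as journaled HFS+ with the name "Backup"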

Yet, this process isn't foolproof. Besides taking time, backing up your drive can leave small windows in which data may be lost. That's why for professional and real-time data protection, you should consider redundancy.

RAID
The most common solution for data redundancy is RAID, which stands for redundant array of independent disks. RAID requires that you use two internal drives or more, but depending on the setup, a RAID configuration can offer faster speeds, more storage space, or both. Just note that you'll need to use drives of the same capacity. Here are the three most common RAID setups.

RAID 1
Also called mirroring, RAID 1 requires at least two internal drives. In this setup, data writes identically to both drives simultaneously, resulting in a mirrored set. What's more, a RAID 1 setup continues to operate safely even if only one drive is functioning (thus allowing you to replace a failed drive on the fly). The drawback of RAID 1 is that no matter how many drives you use, you get the capacity of only one. RAID 1 also suffers from slower write speeds.

RAID 0
Like RAID 1, RAID 0 requires at least two internal drives. Unlike RAID 1, however, RAID 0 combines the capacity of each drive into a single volume while delivering maximum bandwidth. The only catch is that if one drive dies, you lose the information on all of them. So while more drives in a RAID 0 setup mean higher bandwidth and capacity, there's also a greater risk of data loss. Generally, RAID 0 is used mostly for dual-drive storage solutions. And should you choose RAID 0, backup is a must.
For a storage device that uses four internal drives, you can use a RAID 10 setup, which is the combination of RAID 1 and RAID 0, for both performance and data safety.

RAID 5
This setup requires at least three internal drives and distributes data, along with parity information, across all of them. A single drive failure won't result in the loss of any data, but performance will suffer until you replace the broken device. Still, because it balances storage space (you lose the capacity of only one drive in the RAID), performance, and data safety, RAID 5 is the preferred setup.

Most RAID-capable storage devices come with the RAID setup preconfigured; you don't need to configure the RAID setup yourself.
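For the rare case where you do assemble an array yourself -- say, from two matching bare drives attached to a Mac -- a mirrored (RAID 1) set can be created from Terminal. This is a sketch only: disk2 and disk3 are placeholders, and everything on both drives is erased:

diskutil appleRAID create mirror SafeData JHFS+ disk2 disk3   # build a RAID 1 set named "SafeData"

The resulting set mounts as a single volume; if one member later fails, the data stays available until you swap in a replacement drive.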

Now that you've learned how to balance performance, capacity, and data safety, let's consider the three main types of storage devices: internal drives, external drives, and network-attached storage (NAS) servers.

Internal drives
Though they all use the same SATA interface standard, the performance of internal drives can vary sharply. Generally, hard drives are much slower than SSDs, but SSDs are much more expensive than hard drives, gigabyte for gigabyte.

That said, if you're looking to upgrade your system's main drive (the one that hosts the operating system), it's best to get an SSD. An SSD with a capacity of 256GB or less (currently some $200 or less) is enough for a host drive. You can always add more storage via an external drive or, in the case of a desktop, another regular secondary hard drive.

Though not all SSDs offer the same performance, the differences are minimal. To make choosing easier, here's the list of the current best five internal drives.

External drives
External storage devices are basically one or more internal drives put together inside an enclosure and connected to a computer using a peripheral connection.

There are four main peripheral connection types: USB, Thunderbolt, FireWire, and eSATA. Most, if not all, new external drives now use just USB 3.0 or Thunderbolt or both. There are good reasons why.

USB 3.0 caps at 5Gbps and is backward-compatible with USB 2.0. Thunderbolt caps at 10Gbps, and you can daisy-chain up to six Thunderbolt drives without degrading the bandwidth. Thunderbolt also allows for RAID when you connect multiple single-volume drives of the same capacity. Note that more computers support USB 3.0 than Thunderbolt, especially among Windows machines. All existing computers support USB 2.0, which also works with USB 3.0 drives (though at USB 2.0 speeds).

Generally, speed is not the most important factor for non-Thunderbolt external drives. That may seem counterintuitive, but the reason is that the USB 3.0 connectivity standard, which is the fastest among all non-Thunderbolt solutions, is slower than the speed of SATA 3 internal drives.

Capacity, however, is a bigger issue. USB external drives are the most affordable external storage devices on the market and they come with a wide range of capacities to fit your budget. Make sure you get a drive that offers at least the same capacity as your computer. Check out the list of best five external drives for more information.

Currently, Thunderbolt storage devices are more popular with Macs and, unlike other external drives, deliver very fast performance. However, they are significantly more expensive than USB 3.0 drives, with prices fluctuating a great deal depending on the number of internal drives used. Here are the top five Thunderbolt drives on the market.

Network-attached storage (NAS) devices
A NAS device (aka NAS server) is very similar to an external drive. Yet, instead of connecting to a computer directly, it connects to a network via a network cable (or Wi-Fi) and offers storage space to the entire network at the same time.

As you might imagine, NAS servers are ideal for sharing a large amount of data between computers. Beyond storage, NAS servers offer many more features, including (but not limited to) streaming digital content to network players, downloading files on their own, backing up files from networked computers, and sharing data over the Internet.

If you're in the market for a NAS server, note that its data rate is capped by the Gigabit network connection, which tops out at about 125MBps in theory -- far less than the speed of the internal drives themselves. For that reason, you should focus on the capacities of the internal drives used. Also, it's a good idea to get hard drives that use less energy and are designed to run 24-7, since NAS servers are generally left on all the time. Go to the list of best five NAS servers to see my top picks.

Source: http://cnet.co/QH9Zc1

WD ships 802.11ac My Net router and media bridge


WD announces the availability of its 802.11ac router and media bridge.

WD today announced the availability of its 802.11ac Wi-Fi products, including a new router and a media bridge.

WD, one of the largest hard-drive makers in the world, jumped into home networking just recently with the My Net router family, which includes the already reviewed My Net N900 HD and My Net N900 Central. The two new devices announced today complete the company's Wi-Fi portfolio by adding support for the latest 802.11ac standard.

The two new products are an 802.11ac router, the My Net AC1300 HD Dual-Band router, and an 802.11ac-compatible media bridge, the My Net AC Bridge. The media bridge lets users take advantage of 802.11ac now, since there are currently no hardware clients that support the new Wi-Fi standard. It can add up to four Ethernet-ready devices, such as game consoles, computers, and printers, to the Wi-Fi network created by the My Net AC1300 router or any other compatible router.

Similar to other 802.11ac routers on the market, such as the Asus RT-AC66u or the Netgear R6300, the new My Net AC1300 is a true dual-band router and supports all existing Wi-Fi devices. When coupled with 802.11ac clients, it can offer wireless speeds of up to 1,300Mbps (or 1.3Gbps). When working with the popular wireless-N clients, it offers Wi-Fi speeds of up to 450Mbps on each of its 5GHz and 2.4GHz frequency bands simultaneously.

WD says the new router also supports a customized QoS feature called FastTrack, available in select previous My Net routers, which automatically prioritizes traffic for entertainment services such as Netflix, YouTube, and online gaming. It's also a Gigabit router, providing fast networking for wired clients, and offers many other features for home users.

The My Net AC1300 Router and My Net AC Bridge are available now and are slated to cost $190 and $150, respectively.

Source: http://cnet.co/UpVk08

Adobe Hacker Says He Used SQL Injection To Grab Database Of 150,000 User Accounts


Exposed passwords were MD5-hashed and 'easy to crack' via free cracking tools, he says.

Adobe today confirmed that one of its databases has been breached by a hacker and that it had temporarily taken offline the affected Connectusers.com website.
The attacker who claimed responsibility for the attack, meanwhile, told Dark Reading that he used a SQL injection exploit in the breach.

Adobe's confirmation of the breach came in response to a Pastebin post yesterday by the self-proclaimed Egyptian hacker who goes by "ViruS_HimA." He says he hacked into an Adobe server and dumped a database of 150,000 emails and passwords of Adobe customers and partners; affected accounts include Adobe employees, U.S. military users including U.S. Air Force users, and users from Google, NASA, universities, and other companies.

The hacker, who also goes by Adam Hima, told Dark Reading that the server he attacked was the Connectusers.com Web server, and that he exploited a SQL injection flaw to execute the attack. "It was an SQL Injection vulnerability -- somehow I was able to dump the database in less requests than normal people do," he says.

Users' passwords for the Adobe Connect users site were hashed with MD5, he says, which made them "easy to crack" with freely available tools. And Adobe wasn't using Web application firewalls (WAFs) on the servers, he notes.
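The weakness is easy to demonstrate. MD5 is deterministic, and unsalted hashes are identical for identical passwords, so a leaked digest can simply be looked up in precomputed tables; the password below is made up:

echo -n "P@ssw0rd1" | openssl md5   # every account using this password stores the exact same digest

Because there is no per-user salt and MD5 is extremely fast to compute, freely available cracking tools can test enormous numbers of candidate passwords against a leaked digest in very little time.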

"I just want to be clear that I'm not going against Adobe or any other company. I just want to see the biggest vendors safer than this," he told Dark Reading. "Every day we see attacks targeting big companies using Exploits in Adobe, Microsoft, etc. So why don't such companies take the right security procedures to protect them customers and even themselves?"

The hacker leaked only some of the affected emails -- including addresses from adobe.com, .mil, and .gov domains -- in a screenshot in his Pastebin post, where he noted that he leaked the data because Adobe was slow to respond to vulnerability disclosures and fixes.

"Adobe is a very big company but they don't really take care of them security issues, When someone report vulnerability to them, It take 5-7 days for the notification that they've received your report!!" he wrote. "It even takes 3-4 months to patch the vulnerabilities!"

Adobe didn't provide details of how the breach occurred. Guillaume Privat, director of Adobe Connect, in a blog post this afternoon said Adobe took the Connectusers.com forum website offline last night and is working on getting passwords reset for the affected accounts, including contacting the users. Connect is Adobe's Web conferencing, presentation, online training, and desktop-sharing service. Only the user forum was affected.

"Adobe is currently investigating reports of a compromise of a Connectusers.com forum database. These reports first started circulating late during the day on Tuesday, November 13, 2012. At this point of our investigation, it appears that the Connectusers.com forum site was compromised by an unauthorized third party. It does not appear that any other Adobe services, including the Adobe Connect conferencing service itself, were impacted," Privat said.

This is the second public breach of the software firm this year. In October, Adobe revealed that an internal server with access to its digital certificate code-signing infrastructure was hacked by "sophisticated threat actors."

The attackers had created at least two malicious files that they digitally signed with a valid Adobe digital certificate. Adobe revoked the certificate and issued updates for its software signed by it, including Windows-based apps and Adobe AIR.

Tal Beery, a security researcher at Imperva, analyzed the data dump in the Connectusers Pastebin post. He found that the list appears to be valid and that the hacked database is relatively old. "I have analyzed some of the leaked data and compared some names in that leaked files against linkedin.com and found out they did work for Adobe but no longer employed there," he says. "The list include both Adobe and other companies email, which suggests that this may be a customer related" database, he says.

The Adobe hacker Hima, meanwhile, warned in his post that his next leak would be for Yahoo.

Source: http://bit.ly/ZvHVtq

New WiFi protocol boosts congested wireless network throughput by 700%


Engineers at NC State University (NCSU) have discovered a way of boosting the throughput of busy WiFi networks by up to 700%. Perhaps most importantly, the breakthrough is purely software-based, meaning it could be rolled out to existing WiFi networks relatively easily — instantly improving the throughput and latency of the network.

As wireless networking becomes ever more prevalent, you may have noticed that your home network is much faster than the WiFi network at the airport or a busy conference center. The primary reason for this is that a WiFi access point, along with every device connected to it, operates on the same wireless channel. A channel is basically a single-lane road, a lot like an electrical (copper wire) bus. Each channel, depending on the wireless technology being used, has a maximum bandwidth (say, 100 megabits per second), with that bandwidth being distributed between all connected devices.

At home, you might have exclusive use of that road, meaning you can drive as fast as you like and suck up every last megabit — but at a busy conference center, you are fighting tens or hundreds of people for space. In such a situation, your bandwidth allocation rapidly dwindles and your latency quickly climbs. This single-channel problem is also compounded by the fact that the road isn’t just one-way; the access point also needs to send data back to every connected device.

In short, WiFi networks have good throughput and low latency up until a point — and then they very quickly degrade into a horrible mess where no one can use the network properly. If the channel becomes congested enough that the access point can no longer send out data, then the show’s over, basically.

To solve this problem, NC State University has devised a scheme called WiFox. In essence, WiFox is some software that runs on a WiFi access point (i.e. it’s part of the firmware) and keeps track of the congestion level. If WiFox detects a backlog of data due to congestion, it kicks in and enables high-priority mode. In this mode, the access point gains complete control of the wireless network channel, allowing it to clear its backlog of data. Then, with the backlog clear, the network returns to normal.

We don’t have the exact details of the WiFox scheme/protocol (it’s being presented at the ACM CoNEXT conference in December), but presumably it switches between normal and high priority states very rapidly. If we use the single-lane road analogy, WiFox is basically playing the role of a traffic policeman — allowing data to flow freely in one direction for a while, and then reversing the flow. Presumably the trick is designing an algorithm that is capable of detecting congestion very rapidly, and designing a traffic cop that switches priority for exactly the right amount of time to clear the backlog.

All told, the NCSU researchers report that their testbed — a single WiFi access point with 45 connected devices — experienced a 700% increase in throughput. Exact figures aren’t given, but if we’re talking about a modern 802.11n network, we’re probably looking at a jump from around 1Mbps to around 7Mbps. Furthermore, latency is also decreased by 30-40%. There is no word when WiFox might be deployed commercially, but considering it could be rolled out as a firmware update it will hopefully be rather soon.

Source: http://bit.ly/SVUVTR

Tuesday, November 13, 2012

E-mailed malware disguised as group coupon offers on the rise


Spammers take advantage of the rising popularity of e-mailed advertisements by mimicking them and attaching viruses.

Be sure to double check that Groupon you received in your e-mail -- spammers are using the popularity of e-mailed advertisements for group discount deals to send more malware.

Malware distributed through fake e-mail advertisements and notifications is on the rise, according to a study released today by security firm Kaspersky Lab.

"They are primarily doing so by sending out malicious e-mails designed to look like official notifications. Kaspersky Lab is seeing more and more malicious spam designed to look like coupon service notifications," the report said.
The firm said it also noted this kind of coupon spam in its spring report but has found that the trend is increasing. Instead of attaching viruses as files to these types of e-mails, spammers are now adding malicious links. Ads mimicking Groupon seem to be the most prevalent, the firm said.

"Kaspersky Lab experts expected to see the appearance of this type of spam since coupons are very popular among Internet users and they trust coupon services," the study said. "An e-mail from a coupon service is an ideal disguise for malicious users."

In July, the firm found an e-mail that looked like a notification for a new promotion for Groupon, complete with links to the Groupon Web site. It included an attached ZIP file named Gift coupon.exe. This executable file, a file that can run an application on your computer, contained a Trojan malware program.

Kaspersky Lab is now starting to see that many of these fake advertisements no longer have attachments -- they have malicious links instead.

To avoid being duped, users should remember that coupon services never include attachments in their e-mails, and users should double check if a seemingly legitimate e-mail is actually from the service it is claiming to be. You can check this by looking at the sender name, or hovering your mouse over the links to get a preview of what URLs they're linking to.
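If you have saved a suspicious message to disk, you can also list its actual link targets instead of trusting the displayed text; a quick sketch (the filename is hypothetical):

grep -Eo 'href="[^"]*"' suspicious-coupon.eml | sort -u   # list every link target in the message

If the visible text says groupon.com but the href values point somewhere else entirely, treat the message as malicious.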

Other types of popular spammy e-mail disguised as notifications included fake letters from hosting services, banking systems, social networks, and online stores, as well as fake hotel confirmations. General spam currently makes up 71.5 percent of e-mails, with messages carrying malicious attachments, like the fake Groupon ones, accounting for 3.9 percent, according to Kaspersky Lab.

Thanks to the election, spammers also favored using President Barack Obama's name in e-mails, along with that of his wife, Michelle Obama. The First Lady's name was used to add a presidential twist to the classic Nigerian scam: the e-mailer claimed to be Michelle Obama sitting on a pile of cash at the White House and promised recipients millions of dollars if they would just reply with their addresses, telephone numbers, and $240.

To see more examples of scams, click here. It's a real link to the report, but you might want to hover your mouse over it just in case.

Source: http://cnet.co/W6vhMR

WD® Offers Mac Users USB 3.0 Connectivity With New My Book® Studio™ External Hard Drive


4 TB Capacity; Premium Aluminum Enclosure and Hardware-Based Encryption That Protects Against Unauthorized Access to Valuable Content

IRVINE, Calif., Nov. 13, 2012 /PRNewswire/ --  WD®, a Western Digital company (NASDAQ: WDC) and a world leader in external storage and connected life solutions, today announced a new version of the My Book® Studio™ external hard drive. USB 3.0 connectivity now extends to this family of My Book Studio hard drives, providing data transfer speeds up to three times faster than USB 2.0. WD has also added a massive 4 TB capacity in a single-drive configuration, making the drive well suited to backing up large amounts of digital content. The My Book Studio drive is designed with a premium aluminum enclosure and will be available in 1 TB, 2 TB, 3 TB and 4 TB capacities.

The My Book Studio drive's features and benefits, including seamless integration with Apple® Time Machine for protecting and backing up valuable professional and personal content, have made it a favorite among creative professionals and Mac enthusiasts. The My Book Studio external hard drive includes WD Security™, which allows users to password-protect the drive, along with 256-bit hardware-based encryption for added security against unauthorized access to the drive and its contents.

"WD's My Book Studio with USB 3.0 delivers extreme transfer speeds while maintaining the standard USB and FireWire connections computer users prefer," said Jim Welsh, executive vice president and general manager of WD's branded and CE products. "Its large capacity, combined with a premium aluminum enclosure, hardware-based encryption, and compatibility with Apple Time Machine, provide Mac users with a fast, secure and complete system for preserving their valuable content."

Pricing and Availability
The My Book Studio external hard drive comes with a 3-year limited warranty and is available on the WD store at www.wdstore.com and at select retailers and distributors. MSRP for the My Book Studio 1 TB is $159.99 USD; the 2 TB is $189.99 USD; the 3 TB is $239.99 USD; and the 4 TB is $299.99 USD. WD will continue to offer its My Book Studio drive with USB 2.0 and FireWire® 800 connectivity for legacy systems.

About WD
WD, a Western Digital company, is a long-time innovator and storage industry leader. As a storage technology pacesetter, the company produces reliable, high-performance hard disk drives and solid state drives. These drives are deployed by OEMs and integrators in desktop and mobile computers, enterprise computing systems, embedded systems and consumer electronics applications, as well as by the company in providing its own storage products. WD's leading storage devices and systems, networking products, media players and software solutions empower people around the world to easily save, store, protect, share and experience their content on multiple devices. WD was established in 1970 and is headquartered in Irvine, California. For more information, please visit the company's website at www.wd.com.

Western Digital Corp. (NASDAQ: WDC), Irvine, Calif., is a global provider of products and services that empower people to create, manage, experience and preserve digital content. Its companies design and manufacture storage devices, networking equipment and home entertainment products under the WD, HGST and G-Technology brands. Visit the Investor section of the company's website (www.westerndigital.com) to access a variety of financial and investor information.

Western Digital, WD, the WD logo, and My Book are registered trademarks in the U.S. and other countries; My Book Studio is a trademark of Western Digital Technologies, Inc. Other marks may be mentioned herein that belong to other companies. All other brand and product names mentioned herein are the property of their respective companies. As used for storage capacity, one terabyte (TB) = one trillion bytes. Total accessible capacity varies depending on operating environment.

Thursday, November 8, 2012

Intel to slip future Xeon E7s, Itaniums into common socket


Chipzilla debuts Poulson, plots convergence for Kittson.

They may be coming a little bit later than expected, but the next generation of Intel's Itanium server processors, code-named "Poulson" and sold under the Itanium 9500 brand, is out. Intel has also finally disclosed its plans to more fully converge the Itanium and Xeon server platforms, giving Itanium a more secure footing in the data center.

The installed bases of HP-UX, NonStop, and OpenVMS users can breathe a sigh of relief if they were hitting a performance ceiling, and so can other server makers such as Bull, NEC, and Fujitsu that have proprietary operating systems that also run on Itanium iron.

The bigger sigh of relief is that Intel is converging the Xeon and Itanium processor and system designs such that future Xeon E7 chips and "Kittson" Itanium processors will share common elements – and, more importantly, share common sockets.

This is something that Intel has been promising for years, and something that HP – the dominant seller of Itanium-based systems – has been craving, as evidenced by its Project Kinetic. Convergence was the plan of record for HP in June 2010 – nine months ahead of the Oracle claim that Itanium was going the way of all flesh – and HP wanted to converge its ProLiant and Integrity server platforms, which used x86 and Itanium processors, respectively. A common socket helps that effort in a big way.

The new Itanium 9500 chips arrive in the wake of Oracle being told in August by Judge James Kleinberg of the Santa Clara County Superior Court that it was under a contractual obligation to continue writing and supporting software for Itanium platforms; the following month Oracle did indeed hit reboot on software development for the Itanium platform.

Oracle has said it will appeal Judge Kleinberg's ruling, and a jury trial will determine what damages HP might be due if and when the lawsuits between the two companies ever get that far.

As the Itanium 9500s come out on Thursday, the Oracle 11g database is certified to run on HP-UX 11i v3, which runs on systems using either "Tukwila" Itanium 9300s or Poulson Itanium 9500s. The future Oracle 12c database, previewed at the OpenWorld event in October, will be available on HP-UX running on either Itanium processor concurrent with Oracle's support on IBM's AIX running on its Power processors.

Poulson not exactly a surprise
Over the past year and a half, Intel has let out a lot of data about the Poulson Itaniums, as much to show off the core-out processor design (with a high-speed ring interconnect similar to the one used in the Xeon E5 family of chips) as to demonstrate to companies still using Itanium-based processors that it is still doing real engineering work on the line, not just shrinking the chip as its fab processes progress down the Moore's Law curve.

The big core dump on the eight-core Poulson Itaniums came ahead of the IEEE's International Solid-State Circuits Conference in February 2011, when Intel's chip techs discussed the basic design of the chip. At the ISSCC event itself, Intel divulged a whole lot more about the Poulson design. Intel then bragged at the Hot Chips 23 conference in August 2011 about the much-improved core design in the Poulson Itaniums compared to their Tukwila predecessors.

This summer, Intel accidentally published the Itanium 9500 Series Reference manual, outing the name of the Poulson family and the four SKUs it would have. This document was quickly pulled off the web, and only had a few more details about the chip, but it is back live on the intertubes now (PDF) if you really want to read it.

Here's the cut and dry of it. The Poulson chips have a new core and microarchitecture that takes only 89 million transistors per core, compared with 109 million for the Tukwilas. The Poulson chip has twice as many cores and a twelve-wide instruction issue with twice the instruction throughput, and thanks to a shrink to 32-nanometer processes, Poulson can crank the clocks up as high as 2.53GHz, 46 per cent higher than the fastest Tukwila.

Add it all up, and Intel says socket-for-socket you can get something on the order of 2 to 2.4 times the performance based on its lab tests, with an 8 per cent lower thermal design point (TDP). And thanks to microarchitecture changes that gate power to the cores, caches, and other elements of the chip, the idle power of the Poulson chip is 80 per cent lower than that of the Tukwilas.

Incidentally, those performance figures above are on code that was compiled for the six-wide issue instruction pipeline used in the Tukwilas. If you recompile those applications to take advantage of the twelve-wide pipeline, you would see even greater performance leaps.

That's what Rory McInerney, vice president of Intel's Architecture Group and director of its Server Development Group, said during a launch event in San Francisco on Thursday. McInerney also walked through some of the more important features in the new Itanium 9500s, and talked about the converged platform that Intel was cooking up.

The Instruction Replay Technology, detailed in August 2011, is a big one, moving a function that was formerly done in firmware into the chip itself. IRT allows the Itanium processor to automatically retry instructions instead of just failing and therefore crashing an application, or possibly the whole system.

Cache Safe Technology, which allows a Xeon chip to block out bad areas in the memory hierarchy so they are not used, has been ported into the Poulson chips – and is an example of the kind of convergence that Intel will accelerate in the future between Xeons and Itaniums.

The Poulson chips also have QuickPath Interconnect (QPI) buses, which are used to link multiple processors to each other and to other parts of the system, that run at 6.4GT/sec, 33 per cent faster than what was on the Tukwilas. The sockets for the Tukwila chips, which support the new Poulsons, already had this speed bump designed into them.

There are four different Itanium 9500s: two have eight cores, and two have four cores. They have varying amounts of L3 cache memory, different clock speeds, and are aimed at different customers.

The top-end Itanium 9560 is the full-on chip, with all eight cores a-blazin' at 2.53GHz, and 32MB of L3 cache feeding those cores. This is the chip that is labeled the "performance" model in the Intel specs.

The Itanium 9550 has only four cores and drops down to 2.4GHz, but is characterized in the Intel docs as giving the best performance per core, presumably because the kinds of workloads that run on Itanium chips are cache-sensitive, and doubling the L3 cache available per core (8MB instead of 4MB on the Itanium 9560) more than makes up for the lower clock speed. (It could be that the Intel docs have it backwards, but this is what they say.) The peculiar thing is that both the 9560 and 9550 processors have a TDP of 170 watts.

The eight-core Itanium 9540 yields the best price/performance, while the Itanium 9520 is the value chip, with only four cores running at just 1.73GHz. That is the top-end speed of the Tukwila chip, by the way, and the 9520 fits into a 130 watt TDP instead of the 185 watts of the equivalent Tukwila part. And at $1,350, this chip is about a third the price of the Itanium 9350 in the Tukwila line.

That said, Intel is not giving all that performance away for free: per-socket prices go up, 9500 SKU compared to the equivalent 9300 SKU. But on a raw aggregate clock basis (the number of cores times the clock speed), the Poulsons are roughly 45 to 65 per cent cheaper. And remember, those Poulson clocks can do more work, too.
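As a back-of-the-envelope illustration of that "raw aggregate clock" comparison, here is a minimal Python sketch using the one SKU pair whose figures appear above: the four-core, 1.73GHz Itanium 9520 at $1,350 and the four-core, 1.73GHz Itanium 9350, whose list price is only inferred here (roughly $4,000) from the "about a third the price" remark, so treat that number as an approximation. Dollars per aggregate gigahertz also ignores the per-clock performance gains just mentioned, so this understates the real improvement.

```python
# Specs from the article; the 9350 price is an approximation derived from the
# "about a third the price" comparison, not a published list price.
chips = {
    "Itanium 9520 (Poulson)": {"cores": 4, "ghz": 1.73, "price_usd": 1350},
    "Itanium 9350 (Tukwila)": {"cores": 4, "ghz": 1.73, "price_usd": 4000},  # approx.
}

for name, c in chips.items():
    aggregate_ghz = c["cores"] * c["ghz"]            # "raw aggregate clock"
    per_ghz = c["price_usd"] / aggregate_ghz
    print(f"{name}: {aggregate_ghz:.2f} aggregate GHz, about ${per_ghz:,.0f} per GHz")
```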

While the new Itanium 9500s support Turbo Boost, it's a slightly different variant than we are used to with prior Itanium generations. Here is how an Intel spokesperson described it to El Reg: "Itanium 9500 continues to support Intel Turbo Boost Technology, but in a different derivation. Itanium 9500 features a concept called sustained boost, which dynamically allocates power to the area where power is needed most, to deliver the maximum overall performance under a given power envelope. Itanium 9500 can therefore always run at the maximum Intel Turbo Boost Technology frequency for all cores at all times."

We'll figure out how that works some other time.

The Itanium-Xeon mashup
Intel has been chatting and sometimes whispering about the convergence of the high-end Xeon and Itanium processors since AMD launched the original "SledgeHammer" Opterons back in April 2003. Chipzilla has already been moving in the direction of convergence with the Xeon 7500/E7 processors on the x86 side, and with the Itanium 9300s on the Itanium side. But now Intel is going a bit further.

McInerney did not release any precise details, but the concept is simple enough: the current Itanium and Xeon E7 processors already share the same "Boxboro" chipset and the same main memory architecture and buffer chips.

In a future generation of Xeon E7 processors timed to coincide with the future Kittson Itaniums, Intel will move to a common processor socket and common packaging for the chips, and will go even further and use common elements on the chip. McInerney said it could involve sharing memory and I/O controllers or other reliability features, for instance, and reminded everyone that the precise plan has not yet been set.

"It is a sustainable path forward," explained McInerney, and it will not in any way involve converging the Xeon and Itanium instruction sets. "We want to invest where we need to to drive the instruction performance of Itanium and leverage as much of the volume economics of the Xeon as we can."

IBM has faced the same issue with its Power and System z mainframe engines, and several generations ago started using common transistor blocks on both types of processors while retaining their very different instruction sets and architectures. The gap between Xeon and Itanium is not as large as between a Power machine and a System z mainframe, so Intel's job should be quite a bit easier.

So should HP's job of converging its ProLiant and Integrity systems over the long haul, an effort that is all wrapped up in its "Project Odyssey" plan, announced a year ago.

Intel is making no commitments, but if El Reg had to guess, the socket convergence will appear with the "Haswell-EX" processor due several years hence. There is an outside possibility that the "Ivy Bridge-EX" Xeon E7 processor due next year will sport a new socket, and if it does the convergence could happen a little earlier. We'll have to wait and see.

Beyond Kittson, Intel is making no commitments except to say that the common-platform approach allows Intel and its partners to continue to invest in Itanium technology. The Itanium is "on a two to three year timeline" now, McInerney explained, and that puts whatever might be after Kittson out in the second half of the decade.

Just like Intel is not talking about Xeons that far out, it's not going to talk about Itaniums that far out.

Source: http://bit.ly/UyxyiO

LaCie’s New Waterproof USB Key Is Smaller Than Your House Key


The problem with little USB jump drives is that no matter how many you own, you've never got one when you need it. You could've sworn you tossed one in your bag, but when it's time to pass files around, it's not there. That's why we're tempted to keep LaCie's new PetiteKey dangling from our keychains.
The PetiteKey comes in 8GB, 16GB, and 32GB capacities for $15, $23, and $40. It's waterproof down to 100 meters, scratch-resistant, and tough. There's a two-year warranty, should you want to test that toughness out. But best of all, it's much tinier than its cute-but-not-as-useful ancestor, the iamaKey: the little USB drive weighs 0.25 ounces and measures just 1.5 inches long. In other words, it's smaller and lighter than most actual keys, which means it's actually convenient enough to carry with you everywhere.

Twitter: Oops, we reset passwords we didn't need to

Many users received an unusual-sounding email from Twitter explaining that it had reset their passwords after a suspected security breach. Now the company says it went too far -- but doesn't explain the breach.

Now that Twitter has reported a security breach and reset the passwords of many users, the company has issued a statement explaining why it did what it did:

We're committed to keeping Twitter a safe and open community. As part of that commitment, in instances when we believe an account may have been compromised, we reset the password and send an email letting the account owner know this has happened along with information about creating a new password. This is a routine part of our processes to protect our users.

In this case, we unintentionally reset passwords of a larger number of accounts, beyond those that we believed to have been compromised. We apologize for any inconvenience or confusion this may have caused.

In other words, while some individuals' accounts have been accessed and used to send spam, Twitter did not suffer any large-scale hack.

Source: http://bit.ly/TcaV55

Wednesday, November 7, 2012

LaCie ships plug-and-play CloudBox NAS server


LaCie announces the availability of the CloudBox NAS server that it first introduced more than a year ago.

LaCie today announced the availability of its latest network attached storage (NAS) server, the CloudBox.

Originally, the storage vendor announced this device more than a year ago, suggesting it'd be a unique network storage server that offers 100GB of local storage and a matching 100GB of online storage, with the two being synced over the Internet. The final device shipping today, however, appears to be quite traditional.

The CloudBox is now available in either 1TB, 2TB, or 3TB capacities and doesn't seem to come with any online storage. Instead it works by itself as a personal cloud server for the owner, similar to the Cloud Station feature of a Synology NAS server, such as the DS412+. The CloudBox is a single-volume storage device, meaning that it doesn't have any RAID capability for data-safety redundancy.

LaCie said that the NAS server will have a very simple two-step setup process that allows users to "simply plug the LaCie CloudBox into a power outlet and connect the Ethernet cable into a router," and the rest will be done without software installation. If this is true, it'd make it the most user-friendly NAS server to date.

In addition, the CloudBox offers file sharing and backup capability for both Mac and Windows computers within the local network. Remote users can also access files on the server via smartphones, tablets, or laptops.

Like most storage devices from LaCie, the CloudBox is designed by Neil Poulton and has the signature squarish look. It's available now, with the 1TB version costing $120.

Source: http://cnet.co/PBcZ8d

Using existing Windows apps remotely on Surface RT


Can you run existing Windows apps on Surface RT and other Windows RT devices? If you have the right back-end infrastructure and licenses, Remote Desktop may provide a way.

Among the top free apps listed in Microsoft's new Windows Store right now are IHeartRadio, Newegg, Kindle, and... Remote Desktop.

Remote Desktop may not be sexy, but it does allow Windows 8 and Windows RT users to connect to a remote Windows PC and access resources from it. And on the Windows RT front, given the platform's restriction against running almost any existing Win32 applications (other than Office 2013 Home & Student, Internet Explorer 10, and some other Microsoft-developed utilities), Remote Desktop sounds -- at least in theory -- like a great way for users to continue to use apps they already have on new hardware like the Microsoft Surface RT.

As usual with most things Microsoft, the reality is a lot more complicated. In order to use Remote Desktop and Remote Desktop Services (the renamed "Terminal Services") product, users need certain back-end infrastructure and licenses.

Here's what a Microsoft representative told me regarding use of Remote Desktop and RDS by Windows RT users:

Q: Will Windows RT users, including Surface RT users, be able to run legacy/Win32 apps using Remote Desktop and RDS? If yes, how?

A: The best way is through the Remote Desktop Client, which is available through the Windows Store. When this is combined with Windows Server 2012 or Windows Server 2008 R2, users of Windows RT devices (including Surface RT) can access data and easily launch apps -- including legacy/Win32 apps -- one of two ways:

* RemoteApp: RemoteApp grants access to line-of-business applications and data, allowing a user to easily launch apps from Windows RT devices and securely access corporate data while avoiding storing it on a consumer device, ensuring compliance requirements of the business.

* VDI (Virtual Desktop Infrastructure): This option offers a richer user experience than the above. When using Windows Server 2012 for your VDI infrastructure, additional benefits including WAN enablement, USB redirection, and better multitouch support can be achieved. This is the best value for VDI and offers a more simplified administration for IT, with single console and full PowerShell support.

Q: How is this licensed/what are the limitations?

A: Appropriate server and client licenses are required for both of the above scenarios. To access a Remote Desktop Server for VDI or RemoteApp, you would need an RDS CAL (Client Access License) and a Windows Server CAL.

Q: What's the deal with VDA (Virtual Desktop Access) license and CDL (Companion Device License)? It seems folks using iPads and Android devices to access Windows desktops remotely also need these. Do Windows RT users need to purchase these additional two licenses in order to use Remote Desktop?

A: RT devices don't need a VDA or CDL license when used as a secondary device for accessing remote desktops. The user still needs an RDS CAL/VDA license for their primary corporate desktop.

Q: In the description of the Remote Desktop app in the Windows Store, Microsoft mentions that Remote Desktop allows users to "experience rich interactivity with RemoteFX in a Remote Desktop client." What does this really mean?

A: RemoteFX is the new branding for the user experience. Previously it referenced just the vGPU for VDI, but now the term also includes the WAN experience, USB redirection, etc. There's not a GPU or Hyper-V requirement to take advantage of the improvements. And it's not limited to just Windows 8 -- a recent update provides the RemoteFX functionality to Win7 desktops as well.

Q: Anything else that might help users trying to wade through these licensing details?

A: Here are a couple of additional documents that might help:
* Windows Server RDS Volume Licensing Brief
* VDI/RemoteApp licensing information

OK. My brain hurts now. Anyone out there who has used Remote Desktop and RDS to gain access to their "legacy" apps on Windows RT hardware? Feel free to chime in with any additional caveats and lessons learned. And anyone with other questions on this, send them in through the comments and we can all try to wade through this together....

Also: If you're still wondering at this point what does and doesn't work on Windows RT, Microsoft's Windows RT "disclaimer statement" is worth a read.

Source: http://cnet.co/Sr3sh2

Tuesday, November 6, 2012

Intel intros high-endurance solid-state drive


Intel says its upcoming DC S3700 solid-state drive has a capacity of up to 800GB and a much longer life span than usual.

Intel on Monday introduced its latest solid-state drive, the DC S3700, which offers up to 800GB of storage space and will become available in early 2013.

The chip maker says the drive incorporates a new technology called High Endurance Technology (HET) that leads to much better endurance. Basically, the new drive can handle 10 full drive writes per day and still last a full 5-year life span for its lowest capacities. For the highest, 800GB capacity, this translates into a life span of almost 200 years should you choose to use it for video editing constantly.

The reason endurance is important in SSDs is that flash memory generally suffers from limited program/erase (PE) cycles, meaning each of the memory cells can be written on only so many times before it becomes unreliable.

In general, this is not a big problem since most drives will still last much longer than needed before their PE cycles run out. Nonetheless, Intel's HET further increases this PE limit.

According to Intel, the upcoming DC S3700 SSD offers random write performance of up to 36,000 IOPS and random read performance of up to 75,000 IOPS.

The drive will be available in the traditional 2.5-inch design in 100GB, 200GB, 400GB, and 800GB capacities, which are estimated to cost $235, $470, $940, and $1,880, respectively. There are also 200GB and 400GB capacities that come in the new 1.8-inch design for ultraportable devices, priced at $495 and $965 respectively.

Source: http://bit.ly/RghzXR

Monday, November 5, 2012

Spiceworks Woos IT Admins With Free Everything


Spiceworks makes software and online services for use inside businesses, but it’s not like most business-software outfits. The Austin, Texas-based company gives its software away for free — all of it.

Yes, many other companies offer free software and services. But they typically make their money by selling some sort of “premium” tool alongside the free stuff — a version that includes a bunch of things you can’t get for free. Spiceworks, however, doesn’t do premium versions. All of the company’s software and services are designed to make money from advertising.

Up until now, the company has focused on offering an online forum for IT professionals and tools for managing IT assets and infrastructure, but on Monday, it launched a new service called Spiceworks Cloud Program, which will let businesses manage cloud services such as Google Apps and Dropbox. This means it’s now taking on a wide range of larger and more established companies, including some of its own advertisers: VMware and SolarWinds.

Spiceworks was founded in 2006 by four former employees of Motive — an Austin-based broadband and data service management company that was acquired by Alcatel-Lucent in 2008 — and they brought experience from a wide range of other tech outfits as well, including Apple, NeXT, Tivoli, and 3M. According to co-founder and vice president of marketing and business development Jay Hallberg, the new company’s mission arose from conversations he and the other founders had with various system administrators and other IT pros.

“Really, we were just shocked by how few tools they had to do their jobs,” Hallberg says. “And what software they had, candidly, sucked.”

Hallberg says that when he asked system admins what software they loved, he got blank stares. “One guy said he loved playing Doom while he ran defrag,” he says. But when Hallberg asked what software people hated, they went on and on.

With Spiceworks, Hallberg and team decided to build an IT management tool that was actually easy to use. “We wanted to make the iTunes of IT management,” he says. The original product offered an inventory tool and a hosted application for help desk workers, but the company now wants to make it possible to manage, not just monitor, your IT infrastructure from within Spiceworks.

It made its first foray into cloud services in May when it released a tool that enabled system admins to determine which cloud services were being used within a company by monitoring network traffic. If a department was sharing files through Dropbox, for example, IT could easily find out.

The new Cloud Program expands on this service by adding tools for monitoring, managing and deploying cloud services. The first release focuses on four types of cloud service: hosted e-mail, online backup, file sharing and cloud servers such as Amazon EC2. Microsoft Office 365, Google Apps, and Rackspace’s cloud and e-mail services have all been integrated into Spiceworks already. The company plans to expand into other types of cloud service, such as CRM, over the next year.

Hallberg says Spiceworks will continue to expand its offerings with the goal of becoming a true “single pane of glass” for IT admins. Mobile device management will be another area of expansion, as might purchasing and vendor relationship management.

Will this create tension between Spiceworks and advertisers such as VMware? VMware launched IT management and cloud service account management tools as part of its vCenter Operations Management Suite and VMware IT Business Management Suite products earlier this year, but Hallberg says this sort of thing has never been an issue. He says Spiceworks is vastly different from the competition in two distinct ways: it’s free, and it oversees a community of 2 million IT professionals.

“What Spiceworks did was connect IT pros who were working alone or in small teams to let them be part of a larger community,” he says. They have come for the free tools, but what may keep them around is the chance to connect with other people who share their concerns — even if they wind up buying a competing solution from one of Spiceworks’ advertisers.

Source: http://bit.ly/RBUkZP

EMC offers a somewhat more Hurricane Sandy proof array


Data can be in two places at the same time.

EMC's array-virtualising and federating VPLEX product now supports vMotion workload flow between active-active data centres up to 200km (roughly 124 miles) apart, twice as far as before.

VPLEX is a box, running the Geosynchrony OS, that sits in front of an array and communicates with another VPLEX box sitting in front of a second array, enabling the two to federate and work together. Customers can access a single version of data at the two locations. They can also move applications, data and vSphere virtual machines non-disruptively between the two arrays. On top of that, VPLEX can also virtualise third-party arrays.

The two arrays can be in the same data centre, and until now, could be in different ones up to 100km apart when in an active-active relationship. However, the farther apart two data centres are, the better placed they are for disaster recovery. VPLEX Metro now supports two data centres up to 200km apart with a 10ms round-trip latency. EMC says that this ensures "that updated VMware vSphere vMotion data can be in two places at the same time during a move — with twice the distance as before."
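The 200km figure and the 10ms round-trip cap hang together because of simple physics: light in optical fibre travels at roughly two-thirds the speed of light in a vacuum, so the raw propagation delay for a 200km round trip is only around 2ms, leaving the rest of the latency budget for switching, the arrays and protocol overhead. Here is a minimal sketch of that arithmetic; the two-thirds figure is a common rule of thumb, not an EMC specification.

```python
C_KM_PER_MS = 299_792.458 / 1000   # speed of light in a vacuum, km per millisecond
FIBER_FRACTION = 2 / 3             # rule of thumb: light in fibre travels at ~2/3 c

def fibre_round_trip_ms(distance_km):
    """Raw propagation delay for a round trip over fibre, ignoring equipment delay."""
    one_way_ms = distance_km / (C_KM_PER_MS * FIBER_FRACTION)
    return 2 * one_way_ms

print(f"{fibre_round_trip_ms(200):.1f} ms")   # ~2.0 ms of the 10 ms budget is pure propagation
```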

EMC has also devised the VPLEX Metro Express Edition, which combines VPLEX with RecoverPoint continuous data protection software, and says this "enables mission-critical availability between two data centres".

A third part of EMC's announcement is that VPLEX and RecoverPoint can now automatically load-balance WAN links to improve network availability and utilisation, "through support for EMC RAPIDPath technology, which transforms VPLEX and RecoverPoint WAN connectivity from active-passive to active-active." It means that the network connectivity between two VPLEX locations can continue across a link failure. In that situation VPLEX will route traffic over a second link without breaking network connectivity or requiring a WAN failover process to take place.
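To make the active-passive versus active-active distinction concrete, here is a toy Python sketch of the general idea, not EMC's RAPIDPath implementation: traffic is spread across every healthy link, and when one link is marked down the remaining links simply keep carrying traffic, with no separate failover step. All class and link names are invented for illustration.

```python
import itertools

class ActiveActiveWan:
    """Toy model: use all healthy links at once, instead of one active link
    plus a standby that only takes over after an explicit failover."""

    def __init__(self, links):
        self.links = {name: True for name in links}    # link name -> healthy?

    def mark_down(self, name):
        self.links[name] = False

    def send(self, packets):
        healthy = [name for name, ok in self.links.items() if ok]
        if not healthy:
            raise RuntimeError("no WAN links available")
        # Round-robin packets across every healthy link.
        for packet, link in zip(packets, itertools.cycle(healthy)):
            print(f"{packet} -> {link}")

wan = ActiveActiveWan(["link-a", "link-b"])
wan.send(["p1", "p2", "p3", "p4"])   # traffic flows over both links
wan.mark_down("link-b")
wan.send(["p5", "p6"])               # traffic simply continues over link-a
```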

EMC implies that the two VPLEX boxes can be in a hybrid cloud, with one in an enterprise data centre and the other in a public cloud data centre. This could be a chicken-and-egg scenario though. Public cloud service providers won't take up VPLEX to provide active failover to the public cloud from private cloud data centres using VPLEX until they see lots of private cloud data centres using VPLEX, and vice versa.

It might be the case that someone in EMC is thinking of Hopkinton itself providing a public cloud data centre, probably with partners, to provide a VPLEX-front-ended public cloud data centre facility to be the active:active fall-back for private cloud data centres using VPLEX.

Source: http://bit.ly/REy0gv