You are currently browsing the archives for February 2017.

Parallel and Concurrent Programming in Haskell

  • Posted on February 14, 2017 at 1:35 pm

As one of the developers of the Glasgow Haskell Compiler (GHC) for almost 15 years, I have seen Haskell grow from a niche research language into a rich and thriving ecosystem. I spent a lot of that time working on GHC’s support for parallelism and concurrency. One of the first things I did to GHC in 1997 was to rewrite its runtime system, and a key decision we made at that time was to build concurrency right into the core of the system rather than making it an optional extra or an add-on library. I like to think this decision was founded upon shrewd foresight, but in reality it had as much to do with the fact that we found a way to reduce the overhead of concurrency to near zero (previously it had been on the order of 2%; we’ve always been performance-obsessed). Nevertheless, having concurrency be non-optional meant that it was always a first-class part of the implementation, and I’m sure that this decision was instrumental in bringing about GHC’s solid and lightning-fast concurrency support.
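To give a concrete flavour of what first-class concurrency support means in practice, here is a minimal sketch (not an excerpt from the book) using two of the basic building blocks that GHC provides: forkIO to create a lightweight thread, and an MVar to hand a result back to the main thread.

    import Control.Concurrent (forkIO, threadDelay)
    import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

    -- Fork a lightweight GHC thread that does some work and reports back
    -- through an MVar; the main thread blocks until the result arrives.
    main :: IO ()
    main = do
      result <- newEmptyMVar
      _ <- forkIO $ do
        threadDelay 100000                 -- pretend to do 0.1 s of work
        putMVar result "hello from a lightweight thread"
      takeMVar result >>= putStrLn

Nothing special is needed to compile this; concurrency is simply part of the runtime system, which is exactly the point of the design decision described above.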

Haskell has a long tradition of being associated with parallelism. To name just a few of the projects, there was the pH variant of Haskell derived from the Id language, which was designed for parallelism, the GUM system for running parallel Haskell programs on multiple machines in a cluster, and the GRiP system: a complete computer architecture designed for running parallel functional programs. All of these happened well before the current multicore revolution, and the problem was that this was the time when Moore’s law was still giving us ever-faster computers. Parallelism was difficult to achieve, and didn’t seem worth the effort when ordinary computers were getting exponentially faster.

Around 2004, we decided to build a parallel implementation of the GHC runtime system for running on shared memory multiprocessors, something that had not been done before. This was just before the multicore revolution. Multiprocessor machines were fairly common, but multicores were still around the corner. Again, I’d like to think the decision to tackle parallelism at this point was enlightened foresight, but it had more to do with the fact that building a shared-memory parallel implementation was an interesting research problem and sounded like fun. Haskell’s purity was essential—it meant that we could avoid some of the overheads of locking in the runtime system and garbage collector, which in turn meant that we could reduce the overhead of using parallelism to a low-single-digit percentage. Nevertheless, it took more research, a rewrite of the scheduler, and a new parallel garbage collector before the implementation was really usable and able to speed up a wide range of programs. The paper I presented at the International Conference on Functional Programming (ICFP) in 2009 marked the turning point from an interesting prototype into a usable tool.
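The payoff of that work is that pure code can be evaluated in parallel with almost no ceremony. As a small standalone sketch (again, not an example taken from the book), the rpar and rseq combinators from the parallel package evaluate two independent expressions at the same time; compile with -threaded and run with +RTS -N to actually use multiple cores.

    import Control.Parallel.Strategies (runEval, rpar, rseq)

    -- A deliberately naive Fibonacci, just to have work worth parallelising.
    fib :: Integer -> Integer
    fib n | n < 2     = n
          | otherwise = fib (n - 1) + fib (n - 2)

    -- rpar sparks the first computation to run in parallel, rseq evaluates
    -- the second in the current thread, and the final rseq waits for the
    -- spark to finish before both results are used.
    main :: IO ()
    main = do
      let (x, y) = runEval $ do
            a <- rpar (fib 32)
            b <- rseq (fib 32)
            _ <- rseq a
            return (a, b)
      print (x + y)

Because fib is pure, the runtime is free to evaluate the two calls on different cores without any locking in the program itself, which is precisely the property that made Haskell's purity so valuable for this project.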

All of this research and implementation was great fun, but good-quality resources for teaching programmers how to use parallelism and concurrency in Haskell were conspicuously absent. Over the last couple of years, I was fortunate to have had the opportunity to teach two summer school courses on parallel and concurrent programming in Haskell: one at the Central European Functional Programming (CEFP) 2011 summer school in Budapest, and the other the CEA/EDF/INRIA 2012 Summer School at Cadarache in the south of France. In preparing the materials for these courses, I had an excuse to write some in-depth tutorial matter for the first time, and to start collecting good illustrative examples. After the 2012 summer school I had about 100 pages of tutorial, and thanks to prodding from one or two people (see the Acknowledgments), I decided to turn it into a book. At the time, I thought I was about 50% done, but in fact it was probably closer to 25%. There’s a lot to say! I hope you enjoy the results.

Audience

You will need a working knowledge of Haskell, which is not covered in this book. For that, a good place to start is an introductory book such as Real World Haskell (O’Reilly), Programming in Haskell (Cambridge University Press), Learn You a Haskell for Great Good! (No Starch Press), or Haskell: The Craft of Functional Programming (Addison-Wesley).

How to Read This Book

The main goal of the book is to get you programming competently with Parallel and Concurrent Haskell. However, as you probably know by now, learning about programming is not something you can do by reading a book alone. This is why the book is deliberately practical: There are lots of examples that you can run, play with, and extend. Some of the chapters have suggestions for exercises you can try out to get familiar with the topics covered in that chapter, and I strongly recommend that you either try a few of these, or code up some of your own ideas.

As we explore the topics in the book, I won’t shy away from pointing out pitfalls and parts of the system that aren’t perfect. Haskell has been evolving for over 20 years but is moving faster today than at any point in the past. So we’ll encounter inconsistencies and parts that are less polished than others. Some of the topics covered by the book are very recent developments: Chapters 4, 5, 6, and 14 cover frameworks that were developed in the last few years.

The book consists of two mostly independent parts: Part I and Part II. You should feel free to start with either part, or to flip between them (i.e., read them concurrently!). There is only one dependency between the two parts: Chapter 13 will make more sense if you have read Part I first, and in particular before reading “The ParIO monad”, you should have read Chapter 4.

While the two parts are mostly independent from each other, the chapters should be read sequentially within each part. This isn’t a reference book; it contains running examples and themes that are developed across multiple chapters.

Princeton Presents a Duo of 17-Inch White and Black Monitors with 4 Backlights That Cut Power Consumption by as Much as 25%

  • Posted on February 14, 2017 at 4:15 am

LG Electronics Japan will likely soon expand its line of ultra-wide LCD monitors, with a new duo of flagship models set to arrive in the near future.
Both 29-inch ultra-wide LCD monitors share a cinematic 21:9 aspect ratio and come with a height-adjustable stand. The LG 29EA73-P measures 699.7 × 387.0 × 197.2 mm and weighs 5.9 kg, while the LG 29EB73-P measures 699.7 × 395.3 × 225.0 mm and weighs 6.9 kg.
Despite their different sizes and weights, both LG monitors offer the same 2560 × 1080 pixel resolution with a 16.7-million-color display. They also share the same set of input terminals: DVI-D (Dual-Link), HDMI × 2 (MHL × 1), DisplayPort, a USB hub (upstream × 1, USB 3.0 downstream × 2, USB 2.0 × 1), and audio input.
Besides MHL compatibility, the two models have the same built-in 7 W + 7 W stereo speakers and a 4-Screen Split function that can divide the screen into four segments.
As for availability, the LG 29EA73-P will be released in early August 2013, and the LG 29EB73-P in early September.

DataCore Software Builds on Software-Defined Storage Momentum and Names Paul Murphy as Vice President of Worldwide Marketing

  • Posted on February 14, 2017 at 1:00 am

DataCore Software, the premier provider of storage virtualization software, today announced the appointment of Paul Murphy as vice president of worldwide marketing. Murphy will oversee DataCore’s demand generation, inside sales and strategic marketing efforts needed to expand and accelerate the company’s growth and presence in the storage and virtualization sectors. He brings to DataCore a proven track record and a deep understanding of virtualization, storage technologies and the pivotal forces impacting customers in today’s ‘software-defined’ world. Murphy will drive the company’s marketing organization and programs to fuel sales for DataCore’s acclaimed storage virtualization software solution, SANsymphony™-V.

“Our software solutions have been successfully deployed at thousands of sites around the world and now our priority is to reach out to a broader range of organizations that don’t yet realize the economic and productivity benefits they can achieve through the adoption of storage virtualization and SANsymphony-V,” said DataCore Software’s Chief Operating Officer, Steve Houck. “Murphy brings to the company a fresh strategic marketing perspective, the ability to simplify our messaging, new ways to energize our outbound marketing activities and the drive to expand our visibility and brand recognition around the world.”

With nearly 15 years of experience in the technology industry, Murphy possesses a diverse range of skills in areas including engineering, services, sales and marketing, which will be instrumental in overseeing DataCore’s marketing activities around the globe. He was previously Director Americas SMB Sales and Worldwide Channel Development Manager at VMware, where he developed go-to-market strategies and oversaw both direct and inside channel sales teams in both domestic and international markets.

Prior to that, Murphy was senior product marketing manager at NetApp, focusing on backup and recovery solutions and their Virtual Tape Library product line. In this role, Murphy led business development activities, sales training, compensation programs and joint-marketing campaigns. An excellent communicator, he has been a keynote speaker at numerous industry events, trade shows, end-user seminars, sales training events, partner/reseller events and webcasts. Before moving into sales and marketing, Murphy had a successful career in engineering.

“The timing is perfect. DataCore has just updated its SANsymphony-V storage virtualization platform and it is well positioned to take advantage of the paradigm shift and acceptance of software-defined storage infrastructures,” said Murphy. “After doing the market research and getting feedback from numerous customers, it is clear to me that there is a large degree of pent-up customer demand. Needless to say, I’m eager to spread the word on DataCore’s value proposition and make a difference in this exciting and critical role.”

About DataCore Software

DataCore Software develops storage virtualization software leveraged in virtual and physical IT environments to obtain high availability, fast performance and maximum utilization from storage. DataCore’s SANsymphony-V storage hypervisor is a comprehensive yet hardware-independent solution that fundamentally changes the economics of provisioning, replicating and protecting storage for large enterprises and small to midsize businesses. For additional information, visit the DataCore website or call (877) 780-5111.

DataCore, the DataCore logo and SANsymphony are trademarks or registered trademarks of DataCore Software Corporation. Other DataCore product or service names or logos referenced herein are trademarks of DataCore Software Corporation. All other products, services and company names mentioned herein may be trademarks of their respective owners.

Opera Next 16 hints at new features

  • Posted on February 13, 2017 at 11:39 pm

Norwegian browser developer Opera Software has confirmed the switch of its browser development to a rapid release cycle with the launch of Opera Next 16. The new version number comes less than a month after Opera 15 FINAL was released, which saw Opera switch from its own proprietary Presto web engine to the Blink engine used by Google Chrome.

As with all rapid release cycle updates, there are no major overhauls to be found in Opera Next 16, although a number of interesting new features have been showcased as the next iteration starts its journey towards final release.

Opera 16 — which is based on Chromium 29, the engine that powers Chrome 29 (currently in beta) — comes with support for the W3C Geolocation API, a form auto-filler tool and opera:flags, a shortcut to settings that allows adventurous users to play with experimental features.

Users will also find a new setting under Browser > Start Page called “Preload Discover contents”, which allows users to switch this feature off.

Platform-specific updates include support for Jump Lists in Windows 7 and 8, plus the addition of Presentation mode to the Mac platform.

In addition to these existing features, Opera has revealed the next set of features it’s working on, with the promise that early versions of these will be rolled out into the Opera Next build over the next few weeks. These include proper bookmarks support, synchronization via Opera Link, improved tab handling and themes.

Opera Next 16 is considered “alpha” software, which is why — like Firefox Aurora — it’s designed to run alongside an existing stable build of Opera, allowing users to experiment with new features without affecting their day-to-day browsing. Updates are frequent as bugs are discovered and fixed, but users should not attempt to rely on Opera Next as their primary browser, hence the separate installation.

Galaxy Note III: the First “Gadget” to Use 3 GB of RAM?

  • Posted on February 13, 2017 at 5:34 pm

News about Samsung’s Galaxy Note III phablet has re-emerged, this time in the form of the hardware specifications it will use.

According to a leak reported by the tech site SlashGear on Friday (07/05/2013), one of Samsung’s most popular device lines will use a large amount of RAM: 3 GB.

If true, the Galaxy Note III will be the first mobile device equipped with 3 GB of RAM.

For the record, the premium smartphones and phablets on the market lately generally use 2 GB of RAM.

The Galaxy Note III’s screen allegedly measures 5.99 inches, with Full HD resolution (1920 x 1080) and a Super AMOLED panel. That is about half an inch larger than its predecessor, the Galaxy Note II, which has a 5.5-inch display; the original Galaxy Note, launched in 2011, carried a 5.3-inch screen.

The Galaxy Note III’s body is also allegedly a bit slimmer than its predecessor’s. Where the Galaxy Note II weighs 182 grams, the Galaxy Note III is a little lighter at 180 grams, with a thickness of 8 mm.

At the time of its release, the Galaxy Note III will reportedly run the latest Android operating system, 4.3 Jelly Bean. It will also support 4G LTE-Advanced network technology.

Just like the Galaxy S4, the Galaxy Note III will be launched to market in two versions. In some markets the device will be armed with Qualcomm’s quad-core Snapdragon 800, while other markets will get Samsung’s own Exynos 5 “Octa” SoC.

As before, Samsung is rumored to be introducing the new Galaxy Note model at this year’s IFA, to be held in Berlin, Germany, in September 2013.

Seven signs of dysfunctional engineering teams

  • Posted on February 13, 2017 at 3:48 am

I’ve been listening to the audiobook of Heart of Darkness this week, read by Kenneth Branagh. It’s fantastic. It also reminds me of some jobs I’ve had in the past.

There’s a great passage in which Marlow requires rivets to repair a ship, but finds that none are available. This, in spite of the fact that the camp he left further upriver is drowning in them. That felt familiar. There’s also a famous passage involving a French warship that’s blindly firing its cannons into the jungles of Africa in hopes of hitting a native camp situated within. I’ve had that job as well. Hopefully I can help you avoid getting yourself into those situations.

There are several really good lists of common traits seen in well-functioning engineering organizations. Most recently, there’s Pamela Fox’s list of What to look for in a software engineering culture. More famous, but somewhat dated at this point, is Joel Spolsky’s Joel Test. I want to talk about signs of teams that you should avoid.

This list is partially inspired by Ralph Peters’ Spotting the Losers: Seven Signs of Non-Competitive States. Of course, such a list is useless if you can’t apply it at the crucial point, when you’re interviewing. I’ve tried to include questions to ask and clues to look for that reveal dysfunction that is deeply baked into an engineering culture.

Preference for process over tools. As engineering teams grow, there are many approaches to coordinating people’s work. Most of them are some combination of process and tools. Git is a tool that enables multiple people to work on the same code base efficiently (most of the time). A team may also design a process around Git — avoiding the use of remote branches, only pushing code that’s ready to deploy to the master branch, or requiring people to use local branches for all of their development. Healthy teams generally try to address their scaling problems with tools, not additional process. Processes are hard to turn into habits, hard to teach to new team members, and often evolve too slowly to keep pace with changing circumstances. Ask your interviewers what their release cycle is like. Ask them how many standing meetings they attend. Look at the company’s job listings: are they hiring a scrum master?

Excessive deference to the leader or worse, founder. Does the group rely on one person to make all of the decisions? Are people afraid to change code the founder wrote? Has the company seen a lot of turnover among the engineering leader’s direct reports? Ask your interviewers how often the company’s coding conventions change. Ask them how much code in the code base has never been rewritten. Ask them what the process is for proposing a change to the technology stack. I have a friend who worked at a growing company where nobody was allowed to introduce coding conventions or libraries that the founding VP of Engineering didn’t understand, even though he hardly wrote any code any more.

Unwillingness to confront technical debt. Do you want to walk into a situation where the team struggles to make progress because they’re coding around all of the hacks they haven’t had time to address? Worse, does the team see you as the person who’s going to clean up all of the messes they’ve been leaving behind? You need to find out whether the team cares about building a sustainable code base. Ask the team how they manage their backlog of bugs. Ask them to tell you about something they’d love to automate if they had time. Is it something that any sensible person would have automated years ago? That’s a bad sign.

Not invented this week syndrome. We talk a lot about “not invented here” syndrome and how it affects the competitiveness of companies. I also worry about companies that lurch from one new technology to the next. Teams should make deliberate decisions about their stack, with an eye on the long term. More importantly, any such decisions should be made in a collaborative fashion, with both developer productivity and operability in mind. Finding out about this is easy. Everybody loves to talk about the latest thing they’re working with.

Disinterest in sustaining a Just Culture. What’s Just Culture? This post by my colleague John Allspaw on blameless post mortems describes it pretty well. Maybe you want to work at a company where people get fired on the spot for screwing up, or yelled at when things go wrong, but I don’t. How do you find out whether a company is like that? Ask about recent outages and gauge whether the person you ask is willing to talk about them openly. Do the people you talk to seem ashamed of their mistakes?

Monoculture. Diversity counts. Gender diversity is really important, but it’s not the only kind of diversity that matters. There’s ethnic diversity, there’s age diversity, and there’s simply the matter of people acting differently, or dressing differently. How homogenous is the group you’ve met? Do they all remind you of you? That’s almost certainly a serious danger sign. You may think it sounds like fun to work with a group of people who you’d happily have as roommates, but monocultures do a great job of masking other types of dysfunction.

Lack of a service-oriented mindset. The biggest professional mistakes I ever made were the result of failing to see that my job was ultimately to serve other people. I was obsessed with building what I thought was great software, and failed to see that what I should have been doing was paying attention to what other people needed from me in order to succeed in their jobs. You can almost never fail when you look for opportunities to be of service and avail yourself of them. Be on the lookout for companies where people get ahead by looking out for themselves. Don’t take those jobs.

There are a lot of ways that a team’s culture can be screwed up, but those are my top seven.

Microsoft readies IE 11 for Windows 7, too

  • Posted on February 12, 2017 at 10:36 am

Browser aficionados and haters alike will be overjoyed that Microsoft is keeping its promise to keep the new Internet Explorer up-to-date on Windows 7 as well as Windows 8.

The Internet Explorer 11 Developer’s Preview for Windows 7, which debuted on Thursday, introduces most of the new features and functionality of the Windows 8.1 default browser.

Roger Capriotti, Microsoft’s marketing director for Internet Explorer, said that, like IE 10 for Windows 7, IE 11 won’t have the modern interface, but it will have performance benefits.

“We’re faster than the folks at Chrome or the folks at Firefox,” he said. “We’ve got better CPU [processing times], better usage, and better load time overall.”

In addition to the usual pitch of improved overall performance, the specifics of the Windows 7 IE 11 Developer’s Preview include updated standards support and overhauled developer tools.

Repeating a talking point that’s been at the forefront of Microsoft’s campaign to revitalize its previously-moribund browser, Capriotti said that the team building IE wants developers “to spend more time developing and less time on standards.”

Most of the backend changes to IE 11 for Windows 7 are shared with the Windows 8.1 version. These include several firsts that result in faster site loading, according to Microsoft.

The browser is the first to implement the W3C Resource Priorities standard, which lets developers tell the browser which parts of a page to load first; the first to render text on the graphics processing unit (GPU), to more directly accelerate page loading; and the first to natively decode JPG images in real time on the GPU, which reduces overall battery drain as well as speeding up site loading.

IE 11 also includes support for WebGL, and it supports the security- and speed-focused SPDY protocol, which originated at Google. These are notable because both previously had been sniffed at by Microsoft. The ECMAScript 6 standard scripting language is supported, too, as is improved “just-in-time” compilation in the browser’s JavaScript engine, Chakra.

And to get developers to stand up and take notice, the Microsoft team has overhauled its “F12” developer tools. There’s a new memory management tool that shows memory spikes and other problems in cleanly designed, real-time charts.

This interface responsiveness report is good news for developers using IE11, but most people will notice changes under the hood: WebGL and SPDY support, for better graphics and faster page-loading, respectively.


A new emulation tool allows the developer to mimic how their site will look on screen sizes for different devices. It also has a geolocation tool for region-specific debugging.

The new User Interface Responsiveness tool isn’t working in the current developer’s preview, but Capriotti says it will ship in the final version. It uses more graphics than before to show developers how a site is behaving, with color-coded problem areas and detailed, real-time charts.

To sweeten the deal, the company has revamped its modern.IE debugging website, offered a 25 percent discount on Parallels Desktop 8 for Mac, and made virtual machines available for Internet Explorer 11.

Basically, Internet Explorer 11 is a highly competitive browser. Or at least, that’s Microsoft’s plan. But the competition updates on a six-week release cycle, and Capriotti wouldn’t say when IE 11 would be ready for the public. If it follows the release pattern of IE 10 for Windows 7, it will be around four months after IE 11 arrives with Windows 8.1.

Besides swimming upstream against years of negative marks for IE, Microsoft struggles with rapid standards implementation. Part of that is market dynamics. According to NetApplications’ market share statistics, IE still has more people using it than Chrome and Firefox combined, and making changes on a six-week release cycle as Google and Mozilla do would likely anger customers.

However, that means that those browsers, which are developed in a far more open manner than IE, are able to implement new technologies and standards much more rapidly than IE. Capriotti wouldn’t confirm any IE development on the latest in browser tech such as ASM.js, which Firefox has and Chrome is looking at adopting; or WebRTC, an HTML5-based real-time communications protocol that eliminates the need for plug-ins like Skype.

“There’s this tension of what rapid release should be,” he said. “Is it 6 weeks? Six months? We don’t think it should be six years,” he joked.

The truth is that all of Microsoft’s Windows divisions are moving towards updating more rapidly than in the past, as Steve Ballmer said at the recent Build conference, and that includes Internet Explorer.

But while Microsoft is concerned with keeping Internet Explorer up with the Joneses, the Joneses — mostly Chrome and even Firefox a bit as it moves into the mobile OS world — are looking beyond the traditional markets.

HP Releases Desktop PCs Built to Military Standards

  • Posted on February 11, 2017 at 12:53 pm

Global desktop PC sales continue to slow. However, this has not stopped Hewlett-Packard (HP) from continuing to release new desktop PC product lines.

Not doing things by halves, HP released four desktop PC series at once at an event in Jakarta on Wednesday (07/24/2013): the EliteOne 800 G1, EliteDesk 800 G1, ProOne 600 G1, and ProDesk 600 G1. All four are aimed at the business segment.

Positioned as business devices, they are equipped with a variety of features not found in consumer PCs.

For example, all four come with a security feature called HP Client Security. Using it, users can protect the devices at every layer, including hardware, software, and BIOS.

Another selling point is that the products have passed military-standard testing. According to Ricky Handrian, MDM Business Desktop PC at HP Indonesia, the standards cover durability, shock, and temperature.

The devices have to withstand loads of up to 75 kg placed on top of them. In addition, products built to military standards must withstand shock, such as being carried in a vehicle over a long distance.

“Devices that pass this certification should still be able to operate even after being carried in a truck and subjected to shock for, say, 1,000 miles,” said Ricky.

The HP EliteOne 800 G1 is the all-in-one PC version of the EliteDesk 800 G1. The product is available in both touchscreen and non-touch versions.

Buyers can choose the product’s specifications. For the EliteOne 800 G1, the available options include up to a 4th-generation Intel Core “Haswell” processor, Windows 8 Pro, up to 16 GB of RAM, up to a 1 TB HDD, and a 23-inch LED screen.

Meanwhile, the desktop version of this device, the EliteDesk 800 G1, comes with slightly higher specifications: a 4th-generation Intel Core “Haswell” processor, Windows 8 Pro, 32 GB of RAM, and up to a 2 TB HDD.

Following the same concept as the EliteBook 800, the HP ProOne 600 G1 is the all-in-one PC version of the HP ProDesk 600 G1. In terms of specifications, these two products offer the same choices as the Elite 800 series.

So what distinguishes the two series? According to Ricky, they support different processor platforms: the 800 series uses Intel’s Q87 platform, while the Pro 600 series uses the Q85.

“The ProDesk 600 series is a downgraded version of the EliteBook 800. The ProDesk 600 is certainly cheaper than the EliteBook 800. Let’s just say the ProDesk 600 is the sister of the EliteBook 800,” added Ricky.

Google Maps Can Detect Traffic Accidents

  • Posted on February 9, 2017 at 8:40 am

Jakarta – Google has just updated the Google Maps application with new features, including reports of traffic accidents and a number of new ways to access various facilities.
Google Maps with accident information can be used on Android- and iOS-based devices. Overall, the application displays maps along with reliable navigation and traffic information.
Warnings about accidents will appear on the map alongside traffic flow and road construction information. The app also recommends alternative routes so that users do not get stuck in traffic around the accident site.
Last June, Google bought Waze, the maker of a popular app that reports traffic conditions, but Google has not confirmed whether Waze’s data feeds into this application.
The new application also lets users find out whether a place they are interested in is worth visiting. Its “Explore” feature displays cards with greetings such as “enjoy a meal” and “good night.”
Through these features, users can get a variety of detailed information about a place. There is also a rating system that lets users see how a place has been rated.
Google Maps with navigation had previously been released for the Android and iOS platforms. Google Maps product manager Nobuhiro Makida said the application’s standout features are My Location, search, and directions.
“Through My Location, users can see where they are on the map, even if the device does not have GPS,” said Makida.
Next is local search, for finding businesses by category, while directions provide the best route to a destination, whether the user is driving, walking, or taking public transportation.
The feature can indicate the distance, direction, and travel time to the destination, and Google Maps Navigation can be run via voice commands.

SkyDrive in Windows 8.1: Download Files Without the Internet

  • Posted on February 9, 2017 at 12:00 am

Washington – After recently releasing a preview version of Windows 8.1, Microsoft said that the final version will be released in August 2013. Windows 8.1 users will soon be able to access files on Windows’ cloud-based storage service, SkyDrive, without having to connect to the Internet.
Microsoft announced that SkyDrive will gain support for offline access. Through the SkyDrive service, users will be able to choose which files can be accessed without an Internet connection; those files are then downloaded to the user’s device automatically.
Files that can be accessed offline will be easy to identify when the user opens SkyDrive. In addition, Windows 8.1 users can also save files to SkyDrive while offline, and they will be uploaded as soon as the device reconnects to the Internet.
Tami Reller, Chief Financial Officer of the company’s Windows division, said Windows 8.1 will be completed in August 2013. Reller did not say when users will be able to install the Windows 8.1 update, but she did show off several new features and functions in Windows 8.1.
Windows 8.1 users will be able to search for music integrated with Xbox Music and share web pages into the Xbox Music application to create playlists. Another feature introduced in Windows 8.1 is Miracast, which streams HD video and audio over Wi-Fi to other displays, such as a TV. There are many other improvements in Windows 8.1 as well.