
In my blog Use Data To Support Arguments, Not Arguments To Support Data, I articulated how better-informed decisions are typically made and the role that business intelligence (BI) should play. Shortly after I wrote the blog, I experienced a real-life event that clearly illustrates three main phases of “data-entangled decisions.”

 

Since my family likes to take a day off from cooking on Fridays, we recently visited the deli of our favorite organic grocery store. At the take-out bar, I noticed an unusually long line of people under a large sign reading, “In-House Made Wing Buckets. All You Can Fill. On Sale for $4.99, Regular $9.99.” Well, I love wings and couldn’t resist the temptation to get a few.

 

The opportunity was to add wings (one of my favorite appetizers) to my dinner. But instead of using the special wings bucket, I chose the regular salad bar container, which was priced at $8.99 per pound regardless of the contents. I reasoned that the regular container was an easier-to-use option (shaped like a plate) and a cheaper option (since I was buying only a few wings). My assumptions about the best container to use led to a split-second decision—I “blinked” instead of “thinking twice.”

 

Interestingly, a nice employee saw me putting the wings in the regular container and approached me. Wary of my reaction, he politely reminded me of the sale and pointed out that I might pay more if I used the regular container, because the wing bucket had a fixed price (managed risk).

 

Although at first this sounded reasonable, when I asked if it would weigh enough to result in a higher cost, he took it to one of the scales behind the counter and discovered it was less than half a pound. This entire ordeal took less than 30 seconds and now I had the information I needed to make a better-informed decision.

 

This clinched it, because now two factors were in my favor. I knew that a half pound of the $8.99, regular-priced option was less than the $4.99, fixed-price bucket option. And I knew that they would deduct the weight of the regular deli container at the register, resulting in an even lower price. I ended up paying $4.02.

 

This everyday event provides a good story for demonstrating the three phases as they relate to the business of better-informed decisions and the role of BI—or data in general.

 

Phase 1: Reaction

When the business opportunity (the wing purchase) presented itself, I made some assumptions with limited data and formed my preliminary conclusion. If it weren't for the store employee, I would have proceeded to the cash register ignorant of the relevant data. Sometimes in business, we do precisely the same thing: we fail to validate our initial assumptions, or we make a decision based on our preliminary conclusions.

 

Phase 2: Validation

By weighing the container, I was able to obtain additional data and validate my assumptions quickly enough to still take advantage of the opportunity—exactly what BI is supposed to do. With data, I was able to conclude with a high degree of confidence that I had chosen the right approach and mitigated the risk. This is also typical of how BI can shed more light on many business decisions.

 

Phase 3: Execution

I made my decision by taking into account reliable data to support my argument, not arguments to support data. I was able to do this because I (as the decision maker) had an interest in relying on data and the data I needed was available to me in an objective form (use of the scale). This allowed me to eliminate any false personal judgments (like my initial assumptions or the employee’s recommendation).

  • From the beginning, I could have disregarded the employee’s warning or simply not cared much about the final price. If that had been my attitude, then no data or BI tool would have made a difference in my final decision. And I might have been wrong.
  • On the other hand, if I had listened to the initial argument by that nice employee without backing it up with data, I would have been equally wrong. I would have made a bad decision based on what appeared to be a reasonable argument that was actually flawed.
  • When I insisted on asking the question that would validate the employee’s argument, I took a step that is the business equivalent of insisting on more data because we may not have enough to make a decision.
  • By resorting to an objective and reliable method (using the scale), I was able to remove personal judgments.

 

In 20/20 Hindsight

Now, I realize that business decisions are never this simple. Organizations' risk is likely measured in the millions of dollars, not cents. And sometimes we don't have the luxury of finding objective tools (such as the scale) in time to support our decision making. However, I believe that many business decisions mirror the same sequence.

 

Consider the implications if this were a business decision that resulted in an error of $100 in the wrong direction. Now simply assume that these types of less-informed or uninformed decisions were made once a week throughout the year by 1,000 employees. The impact would exceed $5 million.
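To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. It uses only the numbers from the example above (a $100 error, made once a week, by 1,000 employees); the 52-week year is the one added assumption, and the function is an illustration rather than a real cost model.

    # Back-of-the-envelope cost of repeated, less-informed decisions.
    # Inputs come from the example above; 52 weeks/year is an added assumption.
    def annual_impact(cost_per_error, errors_per_week, employees):
        """Annual cost of a recurring decision error across an organization."""
        return cost_per_error * errors_per_week * 52 * employees

    print(f"${annual_impact(100, 1, 1000):,}")  # prints $5,200,000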

 

Hence, the cost to our organization increases as:

  • The cost of the error rises
  • Errors are made more frequently
  • The number of employees making the error grows

 

Bottom Line

Better-informed decisions start and end with leadership that is keen to promote the culture of data-driven decision making. BI, if designed and implemented effectively, can be the framework that enables organizations of all sizes to drive growth and profitability.

 

What other obstacles do you face in making better-informed decisions?

 

Connect with me on Twitter (@KaanTurnali) and LinkedIn.

 

This story originally appeared on the SAP Analytics Blog.

The new Google Tone extension is simple and elegant.  On one machine, the browser can generate audio tones which browsers on other machines will listen to and then open a website.  Brilliant.  No need to be connected to the same network, spell out a long URL to your neighbor, or cut/paste a web address into a text message for everyone to join.  But it has some serious potential risks.


Imagine being on an audio bridge, in a coffee shop, or in a crowded space with bored people on their phones, tablets, or laptops.  One compromised system may be able to propagate and infect others on different networks, effectively jumping the proverbial 'air gap'.  Malware could leverage the Tone extension to introduce a series of audible instructions which, if enabled on targeted devices, would direct everyone to automatically open a malicious website, download malware, or be spammed with phishing messages.

 

Will such tones eventually be embedded in emails, documents, and texts?  A Tone icon takes less space than a URL.  It is convenient but obfuscates the destination, which may be a phishing site or dangerous location.  Tone could also be used to share files (an early usage for the Google team).  Therefore it could also share malware without the need for devices to be on the same networks.  This bypasses a number of standard security controls.  

 

On the less malicious side, but still annoying, what about walking by a billboard and having a tone open advertisements and marketing pages in your browser?  The same could happen as you shop in a store, with tones pushing sales, products, and coupons.  Will this open a new can of undesired marketing pushing into our lives?

 

That said, I must admit I like the technology.  It has obviously useful functions, fills a need, and shows Google's innovation in making technology a facilitator of information sharing.  But we do need controls to protect against unintended and undesired usages, as well as security to protect against equally impressive malicious innovations.  My advice: use with care.  Enterprises should probably not enable it just yet, until the dust settles.  I, for one, will be watching how creative attackers wield this functionality and how long it takes for security companies to respond to this new type of threat.

 

Twitter: @Matt_Rosenquist

IT Peer Network: My Previous Posts

LinkedIn: http://linkedin.com/in/matthewrosenquist

Be afraid. Seriously. Ransomware is growing up fast, causing painful disruptions across the Internet, and it will get much worse in 2015.


Ransomware is the criminal activity of taking hostage a victim's important digital files and demanding a ransom payment to return access to the rightful owner. In most cases the files are never removed, simply encrypted in place with a very strong digital lock, denying access to the user. If you want the key to restore access to precious family photos, financial documents, or business files, you must pay.


An entertaining and enlightening opinion-editorial piece in The New York Times highlighted how an everyday citizen was impacted, the difficulties in paying the ransom, and how professional the attackers' support structure has become.

 

Everyone is at risk. Recently, several law enforcement agencies and city governments were impacted, some of which paid the attackers for their "Decrypt Service." This form of digital extortion has been around for some time, but until recently it has not been much of a concern.  It is now rapidly gaining in popularity as it proves an effective way of fleecing money from victims both large and small.

 

With success comes the motivation to continue and improve. Malware writers are investing in new capabilities, such as Elliptic Curve Cryptography for more robust locks, using the TOR network for covert communications, including customer support features to help victims pay with crypto-currency, and expanding the technology to target more than just static files.

 

Attackers are showing how smart, strategic, and dedicated they are. They are working hard to bypass evolving security controls and processes. It is a race. Host-based security is working to better identify malware as it lands on the device, but a new variant, Fessleak, bypasses the need to install files on disk by delivering malicious code directly into system memory. TorrentLocker has adapted to avoid spam filters on email systems.  OphionLocker sneaks past web-browsing controls by using malicious advertising networks to infect unsuspecting surfers.

 

One of the most disturbing advances is the newcomer RansomWeb's ability to target databases and backups. This opens up an entirely new market for attackers. Web databases have traditionally been safe from these attacks due to the technical complexity of encrypting an active database and the likelihood of good backups that could be used in the event of an infection. RansomWeb, and the future generations that use its methods, will target more businesses.  Every person and company on the web could come across these dastardly traps and should be worried.


Cybersecurity Predictions

 

In this year's Top 10 Cybersecurity Predictions, I forecast the growth of ransomware and a shift toward attacks becoming more personal. The short-term outlook definitely leans toward the attackers. In 2015 we will see the likes of CryptoWall, CoinVault, CryptoLocker, RansomWeb, OphionLocker, Fessleak, TeslaCrypt, TorrentLocker, Cryptobit, and others continue to evolve and succeed at victimizing users across the globe.  It will take the very best security minds and a depth of capabilities working together to stunt the growth of ransomware.


Security organizations will eventually get the upper hand, but it will take time, innovation, and a coordinated effort. Until then, do the best you can in the face of this threat. Be careful and follow the top practices to protect from ransomware:


  1. A layered defense (host, network, web, email, etc.) to block malware delivery
  2. Savvy web browsing and email practices to reduce the inadvertent risk of infection
  3. Be prepared to immediately disconnect from the network if you suspect malware has begun encrypting files
  4. Healthy, regular backups in case you become a victim and must recover

 

Alternatively, if you choose not to take protective measures, I recommend becoming familiar with cryptocurrency transfers and stress management meditation techniques.

 

Twitter: @Matt_Rosenquist

IT Peer Network: My Previous Posts

LinkedIn: http://linkedin.com/in/matthewrosenquist

We're in the midst of an explosion in data volumes and computing requirements, and even faster growth is on the way. Analysts have estimated that there will be as many as 50 billion intelligent machines and sensors connected to the Internet by 2020, increasing global data center traffic by nearly 300 percent.

 

The good news is that cloud computing offers a way to handle the growth without overwhelming IT budgets. Hybrid clouds allow enterprises of all sizes to combine on-premise private clouds with public cloud resources to utilize the efficiency and flexibility of automated IT service delivery and resource orchestration.

 

One of the fastest-growing private and hybrid cloud architectures is based on OpenStack, an open source, modular software architecture backed by a broad ecosystem of contributors and vendors. It provides a path forward for organizations that prefer the flexibility, choice, and economic models of an open source-based solution.

 

OpenStack, now celebrating its fifth birthday, is backed by a thriving community of more than 18,500 members from 463 companies, and more than 1,300 active contributors. It has been deployed by more than 1,200 businesses and organizations and has demonstrated the value it can provide in today's modern data centers.

 

One of the hallmark practices of the OpenStack community is a twice-yearly event, the OpenStack Summit, where the community shares information and best practices for deploying and managing OpenStack-based clouds. This week, May 18-22, approximately 6,000 cloud architects, developers, and technology leaders are meeting in Vancouver to hear and learn firsthand from their peers and from leading OpenStack developers and experts.

 

As cloud computing becomes the dominant paradigm for IT service delivery, OpenStack offers a unique value proposition: a flexible, open, and affordable cloud platform that is supported by a robust ecosystem. Whether you run an enterprise data center, a public cloud, or a telecommunications network, OpenStack on Intel architecture can help you achieve the efficiencies of a hybrid cloud without limiting your future options.

 

For more information, read the Intel OpenStack white paper: "An Open, Trusted Platform for your Private Cloud"

In the previous post focused on The Digitization of the Utility Industry Part I, I mentioned Metcalfe's Law...

 

"Metcalfe's law states that the value of a telecommunications network is proportional to the square of the number of connected users of the system (n2). First formulated in this form by George Gilder in 1993,[1] and attributed to Robert Metcalfe" (Source : Wikipedia

 

So what, you ask, does this have to do with the utility industry?

 

I'd propose the following. What's become known as the 'Smart Grid' is simply a use case for IoT. IoT is all about securely connecting more and more devices to a network, collecting the data from these devices, and, either locally or centrally, analyzing that data to create insight. Today we have more and more devices being added to the grid, whether they are smart meters, renewable energy generation devices (think solar panels, wind turbines), electric vehicle charging stations, or simply more automation of the existing transmission and distribution grid. It's all about more and more devices getting connected to the grid.

 

In addition, there are deployments of smart thermostats, building energy management systems, electric vehicles, and so on. Now you may say these devices are not connected 'physically' to the electric grid. This is true. While the business models for what's connected via a private or public network are evolving, there will always be valid reasons to have extremely secure, low-latency private networks for given parts of the grid. However, the data from all these devices will be combined in the 'cloud' to uncover all sorts of insights that lead to new services and business models. This is what is already going on, for example, via Opower, Google/Nest, C3, Alstom, AWS/Splunk, British Gas Hive, and many others.

 

Now, each of the use cases called out above has its own current value proposition and return-on-investment criteria. For Metcalfe's Law to hold for the smart grid, we'll have to see exponential value created as more and more devices get added and value accrues. This is already happening. Utilities in the US are using data from smart meters to respond to the effects of earthquakes faster, adding huge economic value. Energy savings are being accrued as people gain insight into how their homes can better consume energy, and utilities can use this to plan future load profiles in the grid, thus maximizing investment. All are examples of Metcalfe's Law beginning to kick in. And it will only accelerate as new ways of combining big data from more and more devices come on stream.

 

So if it were not for Moore's Law, Metcalfe's Law, and human innovation, the concept of the Smart Grid would never have come about.


Agree or disagree?  Let me know your thoughts. 


- Kev


Let's continue the conversation on Twitter: @Kevin_ODonovan

How many smartphones are there in your household? How about laptops, tablets, PCs? What about other gadgets like Internet-enabled TVs or smart room temperature sensors? Once you start to think about it, it’s clear that even the least tech-savvy of us has at least one of these connected devices. Each device is constantly sending or receiving data over the Internet, data which must be handled by a server somewhere. Without the data centres containing these servers, the devices (or the apps they run) are of little value. Intel estimates that for every 400 smartphones, one new server is needed. That’s about one server per street I’d say.

 

We’re approaching 2 billion smartphones in service globally, each with (Intel estimates) an average of 26 apps installed. We check our phones an average of 110 times per day, and on top of that, each app needs to connect to its data centre around 20 times daily for updates. All of this adds up to around one trillion data centre accesses every day. And that’s just for smartphones. Out-of-home IoT devices like wearable medical devices or factory sensors need even more server resource.
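The trillion-a-day figure is easy to sanity-check. This quick sketch only multiplies the numbers quoted above (2 billion smartphones, 26 apps each, roughly 20 data centre connections per app per day); nothing else is assumed.

    # Sanity check of the "one trillion data centre accesses per day" estimate,
    # using only the figures quoted in the text above.
    smartphones = 2_000_000_000        # ~2 billion in service
    apps_per_phone = 26                # average installed apps (Intel estimate)
    connections_per_app_per_day = 20   # data centre connections per app per day

    accesses = smartphones * apps_per_phone * connections_per_app_per_day
    print(f"{accesses:,} accesses per day")  # 1,040,000,000,000 - about a trillion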

 

Sounds like a lot, right? Actually, if we were watching a movie about the Internet, it’d be an epic and we’d still just be in the opening credits. Only about 40 percent of the world’s population is connected today, so there’s a huge amount of story yet to tell as more and more people come to use, like and expect on-demand, online services. With use of these applications and websites set to go up, and connected devices expected to reach 50 billion by 2020, your data centre is a critically important piece of your business.

 

Here Comes the Hybrid Cloud

 

What fascinates me about all this is the impact it’s going to have on the data centre and how we manage it. Businesses are finding that staggering volumes of data and demand for more complex analytics mean that they must be more responsive than ever before. They need to boost their agility and, as always, keep costs down – all in the face of this tsunami of connected devices and data.

 

The cost point is an important one. It's common knowledge that for a typical organisation, 75 percent of the IT budget goes on operating expenditure. In a bid to balance this cost/agility equation, many organizations have begun to adopt a hybrid cloud approach.

 

In the hybrid model, public cloud or SaaS is used to provide some of the more standard business services, such as HR, expenses, or CRM systems, but also to provide overspill capacity in times of peak demand. In turn, the private cloud hosts the organization's most sensitive or business-critical services, typically those delivering true business-differentiating capabilities.

 

This hybrid cloud model may mean you get leading edge, regularly updated commodity services which consume less of your own valuable time and resource. However, to be truly effective your private cloud also needs to deliver highly efficient cost/agility dynamics – especially when faced with the dawning of the IoT age and its associated demands.

 

For many organizations, the evolution of their data centre(s) to deliver on the promise of private cloud is a journey they've been on for a number of years, but one that's brought near-term benefits along the way. In fact, each stage in the journey should help drive time, cost, and labour out of running your data centre.

 

The typical journey can be viewed as a series of milestones:

 

  • Stage 1: Standardization. Consolidating storage, networking and compute resources across your data centres can create a simplified infrastructure that delivers cost and time savings. With standardized operating system, management tools and development platform, you can reduce the tools, skills, licensing and maintenance needed to run your IT.
  • Stage 2: Virtualization. By virtualising your environment, you enable optimal use of compute resources, cutting the time needed to build new environments and eliminating the need to buy and operate one whole server for each application.
  • Stage 3: Automation. Automated management of workloads and compute resource pools increases your data centre agility and helps save time. With real-time environment monitoring and automated provisioning and patching, you can do more with less.
  • Stage 4: Orchestration. Highly agile, policy-based rapid and intelligent management of cloud resource pools can be achieved with full virtualization of compute, storage and networking into software-defined resource pools. This frees up your staff to focus on higher-value, non-routine assignments.
  • Stage 5: Real-time Enterprise. Your ultra-agile, highly optimized, real-time management of federated cloud resources enables you to meet business-defined SLAs while monitoring your public and private cloud resources in real time. Fully automated management and composable resources enable your IT talent to focus on strategic imperatives for the business.



A typical reaction from organizations first considering the journey is “That sounds great!” However, this is quickly followed by two questions, the first being “Where do I begin?”


Well, let’s start with the fact that it’s hard to build a highly efficient cloud platform that will enable real-time decision making using old infrastructure. The hardware really does matter, and it needs to be modern, efficient and regularly refreshed – evergreen, if you will. If you don’t do this, you could be losing an awful lot of efficiency.

 

Did you know, for example, that according to a survey conducted by a Global 100 company in 2012, 32 percent of its servers were more than four years old? These servers made up just four percent of total server performance capability, yet they accounted for 65 percent of total energy consumption. Clearly, there are better ways to run a data centre.

 

It’s All About Meeting Business Expectations

 

And as for that second question? You guessed it, “How can we achieve steps 4 and 5?” This is a very real consideration, even for the most innovative of organisations. Even those companies considered leaders in their private cloud build-out are generally only at Stage 3: Automation, and experimenting with how to tackle Stage 4: Orchestration.

 

The key thing to remember is that your online services, websites and apps run the show. They are a main point of contact with your customers (both internal and external), so they must run smoothly and expectations must be met. This means your private cloud must be elastic, flexing on demand as the business requires. Responding to business needs in weeks to months is no longer acceptable as the clock speed of business continues to ramp. Hours to minutes to seconds is the new order.

 

Time for a New Data Centre Architecture

 

I believe the best way to achieve this hyper-efficient yet agile private cloud model is to shift from the hardware-defined data centre of today to a new paradigm that is defined by the software: the software-defined infrastructure (SDI).

 

Does this mean I’m saying the infrastructure doesn’t matter? Not at all, and we’ll come on to this later in this blog series. I’ll be delving into the SDI architecture model in more detail, looking at what it is, Intel’s role in making it possible, and how it’ll enable your private cloud to get the Holy Grail – Stage 5: Real-time Enterprise.

 

In the meantime, I’d love to hear from you. How is your organization responding to the connected device deluge, and what does your journey to the hybrid cloud look like?


To continue the conversation on Twitter, please follow us at @IntelITCenter or use #ITCenter.

Cybersecurity is poised for a notorious year. The next 12 to 18 months will see greater, bolder, and more complex attacks emerge. This year's installment of the top computer security predictions highlights how threats are advancing and outpacing defenders, and how the landscape is becoming more professional and organized. Although the view of our cybersecurity future is obscured, one thing is for certain: we're in for an exciting ride.

 

In this blog I’ll discuss my top 10 predictions for Cybersecurity in 2015.

 

Top Predictions:

 

1. Cyber warfare becomes legitimate


Governments will leverage their professional cyber warfare assets as a recognized and accepted tool for governmental policy. For many years governments have been investing in cyber warfare capabilities, and these resources will begin to pay dividends.

 

 

 

 



2. Active government intervention


Governments will be more actively involved in responding to major hacking events affecting their citizens. Expect government responses and reprisals to foreign nation-state attacks, which ordinary business enterprises are not in a position to counter. This is a shift in policy, both timely and necessary, to protect the public's ability to enjoy life under the protection of a common defense.



 

 

 

3. Security talent in demand


The demand for security professionals is at an all-time high, but the workforce pool is largely barren of qualified candidates. The best talent has been scooped up. A lack of security workforce talent, especially in leadership roles, is a severe impediment to organizations in desperate need of building and staffing in-house teams. We will see many top-level security professionals jump between organizations, lured by better compensation packages. Academia will struggle to refill the talent supply in order to meet the demand.

 

 

 

 

 

4. High profile attacks continue


High-profile targets will continue to be victimized. As long as the return is high for attackers while the effort remains reasonable, they will continue to target prominent organizations. Nobody, regardless of how large, is immune. Expect high-profile companies, industries, government organizations, and people to fall victim to theft, hijacking, forgery, and impersonation.

 

 

 

 

 

5. Attacks get personal


We will witness an expansion in strategies in the next year, with attackers acting in ways that put individuals directly at risk. High-profile individuals will be threatened with embarrassment through the exposure of sensitive healthcare records, photos, online activities, and communications. Everyday citizens will be targeted with malware on their devices to siphon bank information, steal crypto-currency, and hold their data for ransom. For many people this year, it will feel like they are being specifically targeted for abuse.

 

 

 

 

6. Enterprise risk perspectives change


Enterprises will overhaul how they view risks. Serious board-level discussions will be commonplace, with a focus on awareness and responsibility. More attention will be paid to the security of products and services, with the protection of privacy and customer data beginning to supersede “system availability” priorities. Enterprise leaders will adapt their perspectives to focus more attention on security as a critical aspect of sustainable business practices.



 



7. Security competency and attacker innovation increase


The security and attacker communities will make significant strides forward this year. Attackers will continue to maintain the initiative and succeed with many different types of attacks against large targets. Cybercrime will grow quickly in 2015, outpacing defenses and spurring smarter security practices across the community. Security industry innovation will advance as the next wave of investments emerge and begin to gain traction in protecting data centers, clouds, and the ability to identify attackers.

 

 

 

 

 

8. Malware increases and evolves


Malware numbers will continue to skyrocket, increase in complexity, and expand more heavily beyond traditional PC devices. Malicious software will continue to swell at a relentless pace, averaging over 50 percent year-over-year growth. The rapid proliferation and rising complexity of malware will create significant problems for the security industry. The misuse of stolen certificates will compound the problems, and the success of ransomware will only reinforce more development by criminals.

 

 

 

 

 

9. Attacks follow technology growth


Attackers move into new opportunities as technology broadens to include more users, devices, data, and evolving supporting infrastructures. As expansion occurs, there is a normal lag for the development and inclusion of security. This creates a window of opportunity. Where the value of data, systems, and services increases, threats surely follow. Online services, phones, the IoT, and cryptocurrency are being heavily targeted.

 

 

 

 

 

10. Cybersecurity attacks evolve into something ugly


Cybersecurity is constantly changing and the attacks we see today will be superseded by more serious incursions in the future. We will witness the next big step in 2015, with attacks expanding from denial-of-service and data theft activities to include more sophisticated campaigns of monitoring and manipulation. The ability to maliciously alter transactions from the inside is highly coveted by attackers.

 

 

 

Welcome to the next evolution of security headaches.


I predict 2015 to be an extraordinary year in cybersecurity. Attackers will seek great profit and power, while defenders will strive for stability and confidence. In the middle will be a vicious knife fight between aggressors and security professionals. Overall, the world will take security more seriously and begin to act in more strategic ways. The intentional and deliberate protection of our digital assets, reputation, and capabilities will become a regular part of life and business.

 

If you’d like to check out my video series surrounding my predictions, you can find more here.

 

Twitter: @Matt_Rosenquist

IT Peer Network: My Previous Posts

LinkedIn: http://linkedin.com/in/matthewrosenquist

Did you know that many reptiles, marine mammals, and birds sleep with one side of their brains awake? This adaptation lets these creatures rest and conserve energy while remaining alert and instantly ready to respond to threats and opportunities. It also enables amazing behaviors such as allowing migrating birds to sleep while in flight. How’s that for maximizing productivity?

 

Taking a cue from nature, many new desktop PCs challenge how we define sleep with Intel® Ready Mode Technology. This innovation replaces traditional sleep mode with a low-power, active state that allows PCs to stay connected, up-to-date, and instantly available when not in use—offering businesses several advantages over existing client devices.

 

1. Always current, available, and productive

 

Users get the productivity boost of having real-time information ready the instant that they are. Intel Ready Mode enhances third-party applications with the ability to constantly download or access the most current content, such as the latest email messages or media updates. It also allows some applications to operate behind the scenes while the PC is in a low-power state. This makes some interesting new timesaving capabilities possible—like, for example, facial recognition software that can authenticate and log in a user instantly upon their arrival.

 

In addition, when used with third-party apps like Dropbox*, Ready Mode can turn a desktop into a user's personal cloud that stores the latest files and media from all of their mobile devices and makes them available remotely as well as at their desk. Meanwhile, IT can easily run virus scans, update software, and perform other tasks on user desktops anytime during off hours, eliminating the need to interrupt users' workdays with IT admin tasks.

 

2. Efficiently energized

 

PCs in Ready Mode consume only about 10 watts or less (compared to 30 – 60 watts active) while remaining connected, current, and ready to go. That’s enough energy to power an LED lamp equal to 60 watts of luminosity. Energy savings will vary, of course; but imagine how quickly a six-fold energy-consumption reduction would add up with, say, 1,000 users who actively use their PCs only a few hours a day.
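To put the wattage difference in perspective, here is a rough savings sketch. The 10-watt Ready Mode figure and the 30-60 watt active range come from the paragraph above; the 16 idle hours per workday, 250 working days, 1,000-PC fleet, and electricity price are assumptions added purely for illustration.

    # Rough estimate of energy saved by idling in Ready Mode instead of staying
    # fully active. 10 W and the 40 W midpoint come from the text; the idle
    # hours, working days, fleet size, and price are illustrative assumptions.
    ready_mode_watts = 10
    active_watts = 40          # midpoint of the 30-60 W active range
    idle_hours_per_day = 16    # assumed idle time outside a few hours of use
    working_days = 250         # assumed
    fleet = 1_000              # PCs, as in the example above
    price_per_kwh = 0.12       # assumed, USD

    saved_kwh = (active_watts - ready_mode_watts) * idle_hours_per_day * working_days * fleet / 1000
    print(f"~{saved_kwh:,.0f} kWh/year, roughly ${saved_kwh * price_per_kwh:,.0f} saved")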

 

In the conference room, a desktop-powered display setup with Intel Ready Mode will wait patiently in an energy-sipping, low-power state when not in use, but will be instantly ready to go for meetings with the latest presentations and documents already downloaded. How much time would you estimate is wasted at the start of a typical meeting simply getting set up? Ten minutes? Multiply that by six attendees, and you have an hour of wasted productivity. Factor in all of your organization’s meetings, and it’s easy to see how Ready Mode can make a serious contribution to the bottom line.

 

3. Streamlined communication

 

Desktops with Intel Ready Mode help make it easier for businesses to move their landline or VoIP phone systems onto their desktop LAN infrastructures and upgrade from regular office phones to PC-based communication solutions such as Microsoft Lync*. Not only does this give IT fewer network infrastructures to support, but with Ready Mode, businesses can also deploy these solutions and be confident that calls, instant messages, and videoconference requests will go through even if a user’s desktop is idle. With traditional sleep mode, an idle PC is often an offline PC.

 

Ready to refresh with desktops featuring Intel® Ready Mode Technology today? Learn how at: www.intel.com/readymode

I am always happy when technology makes my job as a client security engineer easier.

 

Intel’s recent deployment of hardware-based encryption using the Intel® Solid-State Drive (Intel® SSD) Professional Family (currently consisting of the Intel® SSD Pro 1500 Series and the Intel® SSD Pro 2500 Series), combined with McAfee® Drive Encryption 7.1 encryption software, has done exactly that. For some organizations, the deployment of Opal-compliant drives might disrupt encryption management policies and procedures -- but not at Intel, thanks to the level of integration between McAfee Drive Encryption and McAfee® ePolicy Orchestrator (McAfee ePO).

 

Intel IT has used ePO for several years to manage other McAfee security solutions, such as virus protection and firewalls. Now, as we transition to Opal drives, ePO’s integration with encryption management means that end users don’t have to learn a new user interface or process when they change from software-based to hardware-based encryption. They just enter their encryption password and they’re in -- the same as before when using software-based encryption.

 

Mixed Environment? Not a Problem

We are transitioning to the new drives using our standard refresh cycle. Therefore, our computing environment still contains a fair number of older Intel SSDs that must use software-based encryption. But for IT staff, there’s no difference between provisioning one of the Opal-compliant drives and a non-Opal-compliant drive. McAfee Drive Encryption provides a hybrid agent that can detect whether software- or hardware-based encryption can be used, based on the configuration of the drive and rules defined by the IT administrator. The same policy is used, regardless of the drive manufacturer or whether the drive needs hardware-based or software-based encryption. The technician just tags the computer for encryption, and that’s it. Decryption, when necessary, is just as easy.

 

When McAfee releases a new version of Drive Encryption, or when a new version of the Opal standard is released (the Intel SSD Pro 2500 Series, in initial phases of deployment at Intel, are Opal 2.0-compliant), the policies won’t change, and the update will be transparent. We can just push the new version to the client PCs -- employees don’t have to visit service centers, and IT technicians don’t need to make desk-side visits with USB sticks. The system tree organization of ePO’s policies enables us to set different policies for different categories of systems, such as IT-managed client PCs and servers and Microsoft Active Directory Exchange servers.

 

The transition to Opal-compliant drives is also transparent to the rest of the IT department: there is no change in the system imaging process -- the same image and process are used whether the drive is an older SSD or a new Intel SSD Pro 1500 Series. The recovery process is also identical regardless of whether the drive is hardware- or software-encrypted. It is all performed from the same console, using the same process. Intel Help Desk technicians do not need to learn a new method of recovery when a new drive is introduced.

 

Bird’s Eye View of Encryption Across the Enterprise

McAfee ePO enables us to easily determine the encryption status of all PCs in the environment. The ePO query interface is easy to use (you don’t even have to know SQL, although it is available for advanced users). The interface comes with most common reports already built-in (see the figure for examples) and allows for easy customization. Some reports take less than 30 seconds to generate; some take a little longer (a few minutes).

 

Using ePO, we can obtain a bird’s-eye view of encryption across the enterprise. The ePO dashboard is customizable. For example, we can view the entire encryption state of the environment, what Drive Encryption version and agent version are being used, and if there are any incompatible solutions that are preventing encryption from being enforced. We can even drill down to a particular PC to see what is causing an incompatibility.

 


Sample McAfee® ePolicy Orchestrator Dashboard (from left to right): encryption status, McAfee® Drive Encryption versions installed, encryption provider. These graphs are for illustrative purposes only and do not reflect Intel’s current computing environment.


Encryption can be removed in one of the following ways:

  • The IT admin applies the decrypt policy. This method requires communication between the client PC and server.
  • The IT Service Center uses a recovery image with an identification XML file exported from the server, or the user's password, to decrypt the drive.

 

Decrypting in this manner guarantees that the encryption status reported in ePO is in fact the status of the drive.

 

The information displays in near real-time, making it helpful if a PC is lost or stolen. Using ePO, we can find the state of the drive. If it was encrypted, we know the data is safe. But if not, we can find out what sort of data was on the PC, and act accordingly. ePO lets IT admins customize the time interval for communication between a specific PC and ePO.

 

Customizable Agent

Although the McAfee agent reports a significant amount of information by default, the product developers realized that they probably couldn’t think of everything. So, they built in four client registry values that provide even more maneuverability. For example, we needed a way to differentiate between tablets and standard laptops, because we needed to assign a policy based on the availability of touch capabilities during preboot. So, during the build, we set one of the four registry values to indicate whether the PC has a fixed keyboard. The McAfee agent reports this property to ePO, which in turn, based on the value, assigns a compatible policy.
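As a purely hypothetical sketch of the idea (the actual registry path, value name, and detection logic used by the McAfee agent are not documented here), a build script might tag the machine like this:

    # Hypothetical sketch only: tag a Windows client during the build with a
    # custom registry value that a management agent could report upward.
    # The key path and value name are placeholders, not the ones Intel or
    # McAfee actually use.
    import winreg

    HYPOTHETICAL_KEY = r"SOFTWARE\ExampleOrg\BuildProperties"

    def tag_fixed_keyboard(has_fixed_keyboard: bool) -> None:
        """Record whether this PC has a fixed keyboard (1) or not (0)."""
        with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, HYPOTHETICAL_KEY) as key:
            winreg.SetValueEx(key, "HasFixedKeyboard", 0, winreg.REG_DWORD,
                              1 if has_fixed_keyboard else 0)

    # The imaging process would call, for example: tag_fixed_keyboard(True)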

 

Single Pane of Glass

Before integrating Drive Encryption, ePO, and the Opal-compliant Intel® SSD Professional Family, some IT support activities, such as helping users who forgot their encryption password, were time-consuming and inefficient. Recovery keys were stored in one location, while other necessary information was stored elsewhere. Now, one console handles it all. If a user calls in, the IT technician has everything necessary, all in one place -- a one-stop shop for everything encryption.

 

We have found the combination of McAfee Drive Encryption 7.1 software and Opal-compliant Intel SSDs featuring hardware-based encryption to provide a more robust solution than would be possible with either technology alone. I’d be interested to hear how other IT organizations are faring as the industry as a whole adopts Opal-compliant drives. Feel free to share your comments and join the conversation at the IT Peer Network.

I continuously think about the endurance aspect of our products, how SSD users understand it, and how they use it to their benefit. Sadly, endurance is often underestimated and sometimes overestimated. I see customers buying High Endurance products for the sake of protection, without understanding the real requirements of their application. Now those late-night thoughts make their way into my blog.

 

How do you define SSD endurance?

 

By definition, endurance is the total amount of data that can be written to the SSD. Endurance can be measured in two different ways:

  • The first is TBW (terabytes written), which is exactly what it sounds like: the total amount of data written over the drive's life span. It's specified for every SSD SKU individually, even within a product line.
  • The second is DWPD (drive writes per day). This is a multiplier only, the same for all SKUs in a product line. When we say DWPD = 10 (a high-endurance drive), we mean TBW = DWPD * capacity * 365 (days) * 5 (years of warranty). That looks like simple math, but there's more to it: it adds another dimension, time. I'll explain this later. (A quick worked example follows this list.)
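Here is the quick worked example promised above; the 400 GB capacity and DWPD of 10 are just example values, not a specific SKU's rating.

    # Worked example of TBW = DWPD * capacity * 365 days * 5 warranty years.
    # Capacity and DWPD are illustrative values, not a specific SKU's rating.
    def tbw_terabytes(dwpd, capacity_gb, warranty_years=5):
        return dwpd * capacity_gb * 365 * warranty_years / 1000  # GB -> TB

    print(f"{tbw_terabytes(dwpd=10, capacity_gb=400):,.0f} TB")  # 7,300 TB (~7.3 PB)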

 

Three main factors affect endurance.

 

  • NAND quality. It's measured in the number of program/erase cycles; better NAND has a higher count. High Endurance Technology NAND is used in the x3700 Series product families, so the NAND in the S3700 and S3500 Series, for example, is physically different. Please take a moment to learn more in the Validating High Endurance on the Intel® Solid-State Drive white paper.
  • Workload. Different workload patterns, such as large-block random or small-block random writes, can change endurance by up to 5x. For data center SSDs we use the JESD-219 workload set (a mix of small random I/O and big blocks), which represents a worst-case scenario for the customer. In reality, this means that in most usage cases customers will see better endurance in their own environment.

 

A real-life example:

A customer says he uses the drive as a scratch/temp partition and thinks he needs the highest-endurance SSD. Do you agree that the scratch use case (even with small-block access) is the worst I/O scenario? Not at all. :) First, it's a 50/50 read/write mix: everything we write will be read afterward. More importantly, the read/write ratio is not nearly as significant a factor as random versus sequential access. In this case, scratch files are typically saved in a small portion of the drive and, without threading, are written sequentially. Even small files are "big" to an SSD.

 

  • Spare area capacity. A bigger spare area allows the SSD to decrease the write amplification factor (WAF). WAF is the ratio of the amount of data written to NAND to the amount of data the host writes to the SSD. The target is 1 if the SSD controller doesn't use compression, but it can never quite reach 1 because of the NAND structure: we read data in sectors, write in pages (multiple sectors), and erase in blocks (multiple pages). That's a hardware limitation of the technology, but clever firmware engineering keeps the WAF of Intel SSDs among the lowest in the industry. (A short sketch after this list shows how these factors combine.)
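And here is the short sketch mentioned in the last bullet. It is a simplified first-order model only: host writes over the drive's life are roughly capacity times P/E cycles divided by WAF. The cycle count, capacity, and WAF values are made up for illustration; real endurance ratings come from far more detailed models.

    # Simplified first-order model: host TBW ~ capacity * P/E cycles / WAF.
    # All numbers are made-up examples, not real product ratings.
    def host_tbw_estimate(capacity_gb, pe_cycles, waf):
        return capacity_gb * pe_cycles / waf / 1000  # GB -> TB

    for waf in (1.1, 2.0, 5.0):
        tb = host_tbw_estimate(capacity_gb=400, pe_cycles=3000, waf=waf)
        print(f"WAF {waf}: ~{tb:,.0f} TB of host writes")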

 

Firmware means a lot, doesn't it?

 

Of course, on top of these three main factors we add firmware tricks, optimizations, and reporting. Two similar SSDs from different vendors are never the same if they have different firmware. Let's look at the features of our firmware:

  • SMART reporting, which is common across the industry. It shows the current status of the drive: errors, endurance (TBW to date), and remaining lifetime. That's what every vendor has and what absolutely every user needs for daily monitoring.
  • Endurance Analyzer, a special firmware feature of Intel Data Center SSDs. It forecasts expected lifetime based on the user's workload. It works simply: you reset a specific SMART attribute timer, run your workload for a few hours (or better, a few days), and then read another SMART value that tells you the estimated lifetime, in days/months/years, of exactly that SSD under exactly your workload. That's an amazing advantage of our products.

 

How to run the Endurance Analyzer?

 

It's definitely not rocket science; let me point to this document as the reference. Here are some hints that will help you get through the process more easily. The Endurance Analyzer is supported on both Intel Data Center SSD product families: SATA and PCIe NVMe SSDs such as the P3700/P3600/P3500. For the SATA case, you need to make sure you can communicate with the drive using SMART commands. That can be a limitation for some RAID/HBA configurations where vendors don't support pass-through mode for AHCI commands. In such cases, a separate system with SATA ports routed from the PCH (or another supported configuration) should be used. Next, you need the correct software tool, one capable of resetting the required timer. There are some open source tools, but I advise using the Intel SSD Data Center Tool, which is cross-platform, supports every Intel DC SSD, and can do a lot more than basic management tools. Here are the steps:

 

1. Reset the SMART attributes using the reset option. This also saves a file that contains the base SMART data. This file is needed, and used, in step 4 when the life expectancy is calculated.

                    isdct.exe set -intelssd # enduranceanalyzer=reset

2. Remove the SSD and install it in the test system.

3. Apply a minimum 60-minute workload to the SSD.

4. Reinstall the SSD in the original system. Compute endurance using the show command.

                    isdct.exe show -a -intelssd #

5. Read the Endurance Analyzer value, which represents the drive's life expectancy in years.

 

Another real-life example:

A big trip-reservation agency complained about Intel SSD endurance behind a RAID array, saying it wasn't enough for their workloads; according to I/O traces taken under the OS, the drive needed higher endurance to support that volume of writes. My immediate proposal was to confirm this with the Endurance Analyzer, which shows what is actually happening at the SSD device level, taking the OS and the RAID controller out of the picture. After we ran the test for a week (including a work week and a weekend), we got 42 years of expected lifetime for that week's workload. The customer's numbers might have been right if only the peak workload had been measured and projected across a whole week, but that is not how the environment behaves.

 

Wrapping up…

Now you understand the three important factors that affect endurance. We're able to change two of them: the workload profile and the amount of over-provisioning. But don't confuse yourself; you can't turn a Standard or Mid Endurance drive (P3600/S3610, P3500/S35x0) into a High Endurance Technology SSD (such as the P3700 or S37x0). They use different NAND with a different maximum number of erase/programming cycles. Luckily, you can use the Endurance Analyzer to make an optimal choice of the exact product and the over-provisioning requirements.

At the end I have another customer story…

 

A final real-life example:

I want to come back to my initial definition of endurance and the two ways to measure it, TBW and DWPD. Look how tricky it is…

Customer A over-provisioned his drive by 30%. He was absolutely happy with the improvement in 4K-block write performance; he tested it with his real application and confirmed the stunning result. Then he decided to use the Endurance Analyzer to see the endurance improvement expressed in days, and ran the procedure over a few days of testing. He was surprised by the result. Endurance in TBW had increased significantly, but performance had increased too, so with 30% over-provisioning on his workload he was no longer able to meet the 5-year life span. The only way to avoid this was to set a limit on write performance. (The sketch below illustrates the effect.)
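The sketch below uses invented numbers to show the effect: lifetime in years is TBW divided by what the host actually writes per day, so a performance gain that increases daily writes can shorten the calendar lifetime even while the TBW figure grows.

    # Over-provisioning trade-off: years of life = TBW / (host TB written per day * 365).
    # All numbers are invented for illustration.
    def lifetime_years(tbw_tb, host_tb_per_day):
        return tbw_tb / (host_tb_per_day * 365)

    baseline = lifetime_years(tbw_tb=2_000, host_tb_per_day=1.0)
    tuned = lifetime_years(tbw_tb=2_600, host_tb_per_day=1.8)  # after 30% over-provisioning
    print(f"baseline: {baseline:.1f} years, over-provisioned: {tuned:.1f} years")
    # baseline: 5.5 years, over-provisioned: 4.0 years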

 

Andrey Kudryavtsev,

SSD Solution Architect

Intel Corp.


At the start of this series on Big Data and analytics, I mentioned that Intel derived US$351 million in value from its analytics programs during 2014.  While access to tools such as Cloudera Hadoop is very important, the best analytic tools in the world are not useful unless you can feed them correct, valid, and relevant data.  And even if you do get that data and are able to analyze it, the analysis still isn't useful unless the results reach the people who can take action based on them.

 

Ashok Agarwal talks about some of these issues in his blog post on connected data.  Fifteen years of data stored in some 4,000 spreadsheets and other sources were integrated into a single database during an 18-month program of data cleansing and consolidation.  That data, when fed into a decision support system, yielded US$264 million in increased revenue.

 

Gathering and analyzing data is not the only way to get value from it.  Ashok also mentioned how that data will be coupled with an advanced data visualization tool called the Info Wall.  The Info Wall was designed so that executives can easily manipulate the historical data, through touch, to make actionable decisions that improve the business.

 


I had a chance to experiment with the Info Wall.  The paper mentions hosting multiple data sources, and these can be visualized simultaneously as many separate graphs.  Other data sources can include television; I experienced opening a feed from a television channel with my hands.  The title of the white paper on the Info Wall is "Collaborative Visual Data Analysis Enables Faster, Better Decisions," and the Info Wall is definitely capable of being used collaboratively. Multiple people can manipulate data sources at the same time, and they have plenty of room, as the Info Wall takes up one entire wall of a conference room.  The Info Wall will gain more capability in the future, as people from multiple sites will be able to collaborate.

 

You can find out more about the capabilities of the Info Wall and the data cleansing and integration efforts needed to make it work in this white paper.

The concept of “better-informed” decisions is distinctly different from the concept of “better” decisions: the former is generally a choice, whereas the latter often results from an action. Better-informed leaders don’t always make better decisions, but better decisions almost always start with better-informed leaders. Business intelligence (BI) can be the framework that enables organizations of all sizes to make faster, better-informed business decisions.

 

BI Should Play a Role in Better-Informed Decisions

 

This same principle equally applies to individuals such as better-informed patients or better-informed consumers. Ultimately, when the final decision lies with us (humans), we either choose to ignore the data or choose to use it in our decision making—assuming, of course, that it exists and we can trust it. However, even the best implementations of BI solutions can neither change nor prevent uninformed or less-informed decisions if we choose to ignore the data.

 

Typically, “data-entangled decisions” involve potential use of data for analysis compared to other decisions that may be driven purely by our emotional states or desires. Most business decisions are data-entangled decisions. In these, existing or new data can play an important role compared to a personal decision, such as when to go to sleep. A data-entangled decision in general follows three main phases when a business question, challenge, or opportunity presents itself. BI, if designed and implemented effectively, should support all three phases.

 

Phase One: Reaction

 

In the reaction phase, the initial course is fashioned out of an immediate reaction to a threat or an opportunity. Typically, some preliminary figures are accompanied by known assumptions that form the initial direction. In this early stage, initial data is still “entangled” and only the requirement for additional information can be outlined. In some cases, however, the decision may be already made and if so, the effort to gather additional data for further analysis becomes a futile exercise.

 

Phase Two: Validation

 

Additional data produces opportunities for in-depth analysis, which should eventually lead to actionable insight. But these results need to be validated first using some type of critical thinking. Moreover, who validates the results is as critical as how it’s done.

 

Just as we don’t ask programmers to validate their own code, we don’t ask analysts or managers to validate their own conclusions of data. If available or feasible, objective methods that can remove assumptions or personal deductions from this phase provide the fastest and clearest path to actionable insight.

 

Phase Three: Execution

 

The execution phase is where the final decision will be made and the use of data will be completely up to the person in charge of the decision. There are three possibilities before the final decision is made and action is taken:

  1. The conclusion is supported by data, and we choose to take it into account for our decision.
  2. The conclusion is supported by data, and we choose to ignore it.
  3. The conclusion isn’t or can’t be supported by data, and we are left to our own judgment to make the decision.

 

In business, better-informed decisions often start with a strong appetite for data, followed by a healthy dose of skepticism for it. If available, our collective insight becomes the guiding light for our decisions enhanced by data. In the absence of it—when we are left to decide by ourselves—we seek wisdom in our own experiences to fill the void where we can’t find or rely on data.

 

Bottom Line

 

The bottom line is, we need to use data to support our arguments instead of using arguments to support our data. And BI, if designed and implemented effectively, should be the framework that supports all of this by enabling us to make faster, better-informed decisions at all levels of our organization. This, in turn, helps us drive growth and profitability.

 

Where do you see the biggest challenge in making better-informed decisions?

 

Connect with me on Twitter (@KaanTurnali) and LinkedIn.

 

This story originally appeared on the SAP Analytics Blog.

Happy customers are the lifeblood of the entertainment industry.  But before you can make customers (and potential customers) happy, you've got to understand what they want. For Caesars Entertainment, that meant putting together a big data analytics system that could handle a new, expanded set of customer data for its hotels, shows, and shopping venues, with faster processing and better results.

 

Expanded Data Environment

To improve customer segmentation and build more effective marketing campaigns, Caesars needed to expand its customer data analysis to include both unstructured and semi-structured data. It was also important to speed up processing for analytics and marketing campaign management.

Caesars built a new data analytics environment based on Cloudera’s Distribution Including Apache Hadoop (CDH) software running on servers equipped with the Intel® Xeon® processor E5 family. The new system reduced processing time for key jobs from 6 hours to just 45 minutes and expanded Caesars' capacity to more than 3 million records processed per hour. It also enables fine-grained segmentation to improve marketing results and improves security for meeting Payment Card Industry (PCI) and other key security standards.
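
To give a sense of the kind of job such an environment runs, here is a minimal sketch of customer segmentation written for Hadoop Streaming, which CDH supports. The record layout, spend threshold, and file paths are assumptions made for illustration, not Caesars' actual pipeline.

#!/usr/bin/env python
# mapper.py -- emits (customer_id, spend) pairs from tab-separated records.
# Assumed record layout for this sketch: customer_id <TAB> venue <TAB> spend
import sys

for line in sys.stdin:
    fields = line.rstrip("\n").split("\t")
    if len(fields) < 3:
        continue  # skip malformed records
    customer_id, spend = fields[0], fields[2]
    try:
        float(spend)
    except ValueError:
        continue  # skip records with a non-numeric spend value
    print("%s\t%s" % (customer_id, spend))

#!/usr/bin/env python
# reducer.py -- totals spend per customer and assigns a simple segment label.
# Hadoop Streaming delivers the mapper output sorted by key, so all records
# for one customer arrive together.
import sys

def emit(customer_id, total):
    segment = "high-value" if total >= 1000 else "standard"  # illustrative threshold
    print("%s\t%.2f\t%s" % (customer_id, total, segment))

current_id, total = None, 0.0
for line in sys.stdin:
    customer_id, spend = line.rstrip("\n").split("\t")
    if customer_id != current_id:
        if current_id is not None:
            emit(current_id, total)
        current_id, total = customer_id, 0.0
    total += float(spend)

if current_id is not None:
    emit(current_id, total)

A job like this would typically be launched through the Hadoop Streaming jar, for example: hadoop jar hadoop-streaming.jar -files mapper.py,reducer.py -mapper "python mapper.py" -reducer "python reducer.py" -input /data/transactions -output /data/segments. The exact jar location and input and output paths depend on the cluster.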

Reaching New Customers

The new environment makes it easier for Caesars to reach out to younger customers, who are likely to prefer using smart phones or tablets to get their information. Caesars' new mobile app lets customers use their mobile devices to check rates and make reservations. That data goes to the Hadoop environment for analysis in real time, where Caesars can use it to fine-tune its operations. Caesars can even use this data to tailor mobile responses and offers on the fly based on factors like the customer’s preferences and location.
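
As a rough illustration of what tailoring an offer on the fly can look like once that analysis is available, the sketch below applies a few simple rules to a guest profile. The rules, profile fields, and offers are invented for the example and are not Caesars' actual logic.

# offer_rules.py -- generic illustration of picking an offer from context.
# Profile fields ("location", "preferences", "recent_searches") are assumptions.

def pick_offer(profile):
    """Return a promotional offer based on simple preference and location rules."""
    if profile.get("location") == "on_property":
        if "shows" in profile.get("preferences", []):
            return "Tonight-only show ticket upgrade"
        return "Dining credit at the nearest venue"
    if profile.get("recent_searches", 0) > 3:
        return "Discounted room rate to encourage a booking"
    return "Standard loyalty-points reminder"

if __name__ == "__main__":
    guest = {"location": "on_property", "preferences": ["shows", "shopping"]}
    print(pick_offer(guest))  # prints the show ticket upgrade offer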

Creating Personalized Marketing

With faster, better analysis of all data types, Caesars can now create and deliver personalized marketing, reaching a new generation of customers and winning their loyalty.

You can take a look at the Caesars Entertainment solution here or read more about it here. To explore more technology success stories, visit www.intel.com/itcasestudies or follow us on Twitter.

Here’s an example: An employee receives an email—apparently from a legitimate source—asking him to update an account password. As requested, he enters his old password and then types in a new one. Unfortunately, that’s all it takes for a hacker to steal $200,000 from his small business’s bank account, much of it unrecoverable. It’s a simple but extremely costly mistake, and it could happen to anyone. In fact, over the past few years, it’s been happening a lot. Did you know that some of the biggest security breaches in recent memory—including attacks on Sony, Target, and JPMorgan Chase—started with a phishing email sent to an employee?

 

If you own a business, you’re at risk. No matter how diligent you and your employees are about security, mistakes can happen. And the results can be disastrous. Virus protection and other software solutions—though useful and necessary—only get you so far, especially if your business is using PCs that are more than two years old. The problem is that software-only security solutions from even a few years ago can’t keep up with today’s cybercriminals and are not sufficient to protect your devices and vital business data.

 

So what can you do to stay safe? Don’t rely on software alone. You need your hardware to do the heavy lifting.

 

What You Can Do to Make Your Business More Secure

 

New desktops with at least 4th generation Intel Core processors have hardware-enhanced security features that allow hardware and software to work together, protecting your business from malware and securing the important, private data and content you create and share. Features such as Intel® Identity Protection Technology (Intel® IPT), Intel® Advanced Encryption Standard New Instructions (Intel® AES-NI), and others are crucial to making your business more secure.
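
A quick way to see whether a given machine exposes one of these features is to check the CPU flags. The minimal sketch below assumes a Linux system and simply reads /proc/cpuinfo for the "aes" flag that indicates AES-NI support; it does not call any Intel-specific API.

# check_aesni.py -- reports whether the CPU advertises AES-NI support.
# Assumes a Linux host, where /proc/cpuinfo lists per-core feature flags.

def has_aes_ni(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                return "aes" in line.split(":", 1)[1].split()
    return False

if __name__ == "__main__":
    if has_aes_ni():
        print("AES-NI available: encryption libraries such as OpenSSL can use it automatically.")
    else:
        print("AES-NI not reported by this CPU.")

When the flag is present, encryption-heavy tasks such as full-disk encryption can run with noticeably less CPU overhead.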

 

With hackers working around the clock to identify the next potential victim, it’s more important than ever for you to prioritize security. Read the new Intel white paper to learn more about what’s at risk, five new hardware-enhanced security features that help combat cybercrime, and why replacing your pre-2013 PC is a smart move.

 

In the meantime, join the conversation using #IntelDesktop, and get ready to rediscover the desktop.


This is the fifth installment of the Desktop World Tech Innovation Series.

 

To view more posts within the series, click here: Desktop World Series

The software-defined storage (SDS) appliance concept of hyper-convergence is an attractive alternative to traditional Storage Area Network (SAN) and Network Attached Storage (NAS) for small and medium-sized businesses as well as large enterprises. Hyper-converged infrastructure is attracting a lot of attention right now. So what is hyper-converged storage, and why should you care?

 

A hyper-converged storage system allows IT to manage compute, storage, and virtualization resources as a single integrated system through a common tool set. The resulting system, often referred to as an appliance, consists of a server, storage, networking, and a hypervisor with a management framework. Hyper-converged appliances can be expanded by adding nodes to the base unit to suit the compute and storage needs of a business, an approach known as scale-out.

 

Hyper-converged scale-out storage differs from the older scale-up approach. In a scale-up system, compute capacity stays fixed as storage is added, while in a scale-out system, new compute nodes can be added as the need for compute and storage grows. Scale-up storage has often been cost prohibitive and often lacks the random I/O performance (IOPS) that virtualized workloads need. The scale-out approach makes more efficient use of hardware resources, as it moves the data closer to the processor. When scale-out is combined with solid-state drive (SSD) storage, it offers far lower latency, better throughput, and increased flexibility to grow with your business. Scale-out is commonly used for virtualized workloads, private cloud, databases, and many other business applications.
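
The difference is easy to see with some back-of-the-envelope arithmetic. The sketch below uses invented figures (a fixed controller ceiling for scale-up, a per-node budget for scale-out), not vendor benchmarks, purely to show how the two curves diverge as units are added.

# scale_model.py -- illustrative comparison of scale-up vs. scale-out growth.
# All figures are assumptions for the sketch, not measured results.

CONTROLLER_IOPS_CAP = 100000  # scale-up: IOPS pinned to the single controller
NODE_IOPS = 60000             # scale-out: IOPS contributed by each node's SSDs
UNIT_CAPACITY_TB = 20         # capacity added per shelf (scale-up) or node (scale-out)

def scale_up(shelves):
    # Capacity grows with shelves, but IOPS stay capped by the controller.
    return shelves * UNIT_CAPACITY_TB, CONTROLLER_IOPS_CAP

def scale_out(nodes):
    # Both capacity and IOPS grow with every node added.
    return nodes * UNIT_CAPACITY_TB, nodes * NODE_IOPS

for n in (2, 4, 8):
    cap_up, iops_up = scale_up(n)
    cap_out, iops_out = scale_out(n)
    print("%d units: scale-up %d TB @ %d IOPS | scale-out %d TB @ %d IOPS"
          % (n, cap_up, iops_up, cap_out, iops_out))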

 

Today, Atlantis Computing introduced a new all-flash hyper-converged appliance that extends the concept of software-defined scale-out storage to the cloud. Atlantis HyperScale™, a turn-key hyper-converged appliance, delivers all-flash performance storage based on the Intel® SSD Data Center Family for enterprise-wide applications. What is different is that HyperScale™, based on Atlantis USX, pools existing enterprise SAN, NAS, and DAS storage and accelerates its performance through the use of Intel SSDs. By abstracting the storage, USX delivers virtual storage volumes to enterprise applications. It further provides a context-aware data service that performs deduplication and IO acceleration in real time to maintain quality of service even when using public cloud services.
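
Deduplication, in general terms, means storing each unique block of data only once and keeping a map of which blocks rebuild each volume. The sketch below illustrates that general idea with fixed-size blocks and SHA-256 fingerprints; it is a conceptual example, not how Atlantis USX is implemented.

# dedup_sketch.py -- generic illustration of block-level deduplication.
import hashlib

BLOCK_SIZE = 4096  # bytes per block; a fixed chunk size assumed for the sketch

def deduplicate(data):
    """Split data into fixed-size blocks and store each unique block once."""
    store = {}   # fingerprint -> block contents (unique blocks only)
    layout = []  # ordered fingerprints needed to rebuild the original data
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        fingerprint = hashlib.sha256(block).hexdigest()
        store.setdefault(fingerprint, block)
        layout.append(fingerprint)
    return store, layout

if __name__ == "__main__":
    sample = b"A" * 8192 + b"B" * 4096 + b"A" * 4096  # repeated content dedupes away
    store, layout = deduplicate(sample)
    print("logical blocks: %d, unique blocks stored: %d" % (len(layout), len(store)))
    # prints "logical blocks: 4, unique blocks stored: 2"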

 

The Intel SSD Data Center Family holds the key to the HyperScale™ all-flash appliance. These SSDs are designed for read- and write-intensive storage workloads, with fast, consistent performance for smooth data center operation. The reliability, data integrity, and cost-effectiveness of the storage volumes in the HyperScale™ appliance help protect your data with enterprise-class features at a reasonable cost. The architecture of Intel's SSDs ensures that the entire read and write path, as well as logical-block address (LBA) mapping, has data protection and error correction. Many enterprise workloads depend not only on reliable data, but also on consistency in how quickly that data can be accessed. Consistent latency, quality of service, and bandwidth, regardless of the background activity on the drive, are the basis of the Intel SSD Data Center Family. Rigorous testing ensures a highly reliable SSD with consistent performance.

 

Today, the HyperScale™ all-flash hyper-converged appliance introduces a new order of scale-out storage. The turn-key appliance can eliminate scalability and performance bottlenecks and allows compute and storage capacity to grow with the needs of the business.

 
