
IT Peer Network


Once the dust settles from Black Friday doorbusters and end-of-year clearance sales, retailers — and the tech vendors that work with them — gather in New York City for the BIG Show, the National Retail Federation’s (NRF) Annual Convention and EXPO.

 


Every year, the NRF EXPO offers intriguing glimpses into where the future of retail technology is heading, and this year was no exception. For 2015, I noted several big trends with the potential to revolutionize how retailers engage with and delight their customers. They included the following:

 

Endless Aisles

 

Consumer research reveals that even though tech-savvy shoppers do a significant amount of their buying online, they still love the in-store shopping experience. Intelligent endless aisle solutions let retailers offer the best of both worlds: self-service kiosks that expand inventory selection to include not only products in the store, but also products in other retail locations. Advances in classic PC design have ignited innovation in retail solutions. In addition to the traditional tower design, new PC categories, including the All-in-One and Mini PC, enable manageable and engaging virtual merchandising solutions that tap into a store’s ecommerce systems, giving customers a convenient way to explore a vast array of additional products, sizes, colors, and options and to arrange fast home delivery.

 

4K Signage

 

Store signage has taken a huge leap forward with 4K ultra-high-definition (UHD) displays. With four times the resolution of 1080p HD, 4K displays are not only capable of blowing customers’ minds with stunning detail and color, but, as part of a digital signage solution, they also allow retailers to offer customers richer shopping experiences with dynamic, personalized promotions and immersive, interactive displays throughout their stores.

 

3D Meets VR

 

Another recent breakthrough pushing retail solutions into the next dimension is the ability to capture 3D images. Intel RealSense 3D cameras make it increasingly easy to scan anything from auto parts to xylophones in highly detailed 3D, and give customers a more complete and appealing view of products.

 

Meanwhile, augmented reality solutions such as MemoryMirror enable Neiman Marcus and other clothing retailers to offer large screen digital fitting rooms that delight their customers with virtual try-ons, 360-degree views, and the ability to remember and share outfits.

 

A common thread linking all of these emerging retail solutions is how they utilize the performance and versatility of today’s Tower, Mini, and All-in-One PCs to blur the lines between online and in-store to offer customers consistently outstanding experiences everywhere.

 

Want more trends and highlights from retail’s BIG Show? Visit Intel’s NRF 2015 page.

 

To continue this conversation on Twitter, please use #IntelDesktop.

In my last insight into the Intel IT Business Review, I am looking at the impact of one of the biggest trends in business IT: Big Data, or, as I prefer to call it, Analytics.

 

In an age when organizations such as Intel are rich in data, finding value in this data lies in the ability to analyze it and derive actionable business intelligence (BI). Intel IT continues to invest in tools that can transform data into insights to solve high-value business problems. We have seen significant BI results from our investments in a number of areas.

 

For example, Intel IT has developed a recommendation engine to help Intel sales teams strategically focus their sales efforts to deliver greater revenue. This engine uses predictive algorithms and real-time data analysis to prioritize sales engagements with resellers that show the greatest potential for high-volume sales. We saw USD 76.2 million in revenue uplift for 2014 through the use of this capability.

 

Integrating multiple data sources has enabled Intel to use its decision support system to significantly impact revenue and margins by optimizing supply, demand, and pricing decisions. This work resulted in revenue optimization of USD 264 million for 2014.

 

And the big data platform for web analytics is yielding insights that enable more focused and effective marketing campaigns, which, in turn, increase customer engagement and sales.

 

The exploration and implementation of Assembly Test Manufacturing (ATM) cost-reduction initiatives involve complex algorithms and strong computation capabilities due to the high volume and velocity of data that must be processed quickly. The ATM data sets, containing up to billions of rows, cannot be effectively processed with traditional SQL platforms. To address this gap, IT has implemented a reusable big data analytics correlation engine. This tool will support various high-value projects. The estimated value for the first of these projects, a pilot project for one of Intel’s future processors, is greater than USD 13 million.

 

Intel IT is exploring additional use cases for data collection and analytics across Intel’s manufacturing, supply chain, marketing, and other operations to improve Intel’s operational efficiency, market reach, and business results. In 2014 alone, Intel IT’s use of BI and analytics tools increased Intel revenue by USD 351 million.

 

To read the Intel IT Business Review in full go to www.intel.com/ITAnnualReport

MHB

Creating Confidence in the Cloud

Posted by MHB Mar 30, 2015

In every industry, we continue to see a transition to the cloud. It’s easy to see why: the cloud gives companies a way to deliver their services quickly and efficiently, in a very agile and cost-effective way.

 

Financial services is a good example — where the cloud is powering digital transformation. We’re seeing more and more financial enterprises moving their infrastructure, platforms, and software to the cloud to quickly deploy new services and new ways of interacting with customers.

 

But what about security? In financial services, where security breaches are a constant threat, organizations must focus on security and data protection above all other cloud requirements.

 

This is an area Intel is highly committed to, and we offer solutions and capabilities designed to help customers maintain data security, privacy, and governance, regardless of whether they’re utilizing public, private, or hybrid clouds.

 

Here’s a brief overview of specific Intel® solutions that help enhance security in cloud environments in three critical areas:

  • Enhancing data protection efficiency. Intel® AES-NI is a set of processor instructions that accelerate encryption based on the widely used Advanced Encryption Standard (AES) algorithm. These instructions enable fast and secure data encryption and decryption, removing the performance barrier that has limited broader use of this vital data protection mechanism. With the performance penalty reduced, cloud providers are starting to embrace AES-NI to promote the use of encryption. (A quick way to check for these instructions on a Linux host is sketched after this list.)
  • Enhancing data protection strength. Intel® Data Protection Technology with AES-NI and Secure Key provides a foundation for cryptography without sacrificing performance. These capabilities can generate faster, higher-quality cryptographic keys and certificates than pseudo-random, software-based approaches, in a manner better suited to shared, virtual environments.
  • Protecting the systems used in the cloud or compute infrastructure. Intel® Trusted Execution Technology (Intel® TXT) is a set of hardware extensions to Intel® processors and chipsets with security capabilities such as measured launch and protected execution. Intel TXT provides a hardware-enforced, tamper-resistant mechanism to evaluate critical, low-level system firmware and OS/hypervisor components from power-on. With this, malicious or inadvertent code changes can be detected, helping assure the integrity of the underlying machine that your data resides on. At the end of the day, if the platform can’t be proven secure, the data on it can’t really be considered secure either.
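As a rough, hedged illustration (my own example, not an Intel-provided procedure), on a Linux host you can check whether the processor exposes AES-NI and see the effect of the accelerated path with standard tools:

        # Check whether the CPU advertises the AES-NI instructions
        grep -m1 -o aes /proc/cpuinfo

        # Compare AES throughput on the hardware-accelerated EVP path vs. the generic software path
        openssl speed -evp aes-128-cbc
        openssl speed aes-128-cbc

On AES-NI-capable hardware, the -evp run typically shows substantially higher throughput.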

 

Financial services customers worldwide are using these solutions to add security at both the platform and data level in public, private, and hybrid cloud deployments.

 

Putting It into Practice with our Partners

 

At Intel®, we are actively engaged with our global partners to put these security-focused solutions into practice. One of the more high-profile examples is our work with IBM. IBM is using Intel TXT to deliver a secure, compliant, and trusted global cloud for SoftLayer, its managed hosting and cloud computing provider. When IBM SoftLayer customers order cloud services on the IBM website, Intel TXT creates an extra layer of trust and control at the platform level. We are also working with IBM to offer Intel TXT-enhanced secure processing solutions including VMware/Hytrust, SAP, and the IBM Cloud OpenStack Services.

 

In addition, Amazon Web Services (AWS), a major player in financial services, uses Intel AES-NI for additional protection on its Elastic Compute Cloud (EC2) instances. Using this technology, AWS can speed up encryption and reduce exposure to software-based vulnerabilities because encryption and decryption are executed efficiently in hardware.

 

End-to-End Security

 

Intel security technologies are not only meant to help customers in the cloud. They are designed to work as end-to-end solutions that offer protection — from the client to the cloud. In my previous blog, for example, I talked about Intel® Identity Protection Technology (Intel® IPT), a hardware-based identity technology that embeds identity management directly into the customer’s device. Intel IPT can offer customers critical authentication capabilities that can be integrated as part of a comprehensive security solution.

 

It’s exciting to see how our technologies are helping financial services customers increase confidence that their cloud environments and devices are secure. In my next blog, I’ll talk about another important Intel® initiative: data center transformation. Intel® is helping customers transform their data centers through software-defined infrastructures, which are changing the way enterprises think about defining, building, and managing their data centers.

 

 

Mike Blalock

Global Sales Director

Financial Services Industry, Intel

 

This is the final installment of a seven part series on Tech & Finance. Click here to read blog 1, blog 2, blog 3, blog 4, blog 5, and blog 6.

In my second insight into the Intel IT Business Review, I am focusing on the impact of the cloud inside Intel.

 

The cloud is changing the business landscape, and here at Intel it has transformed the IT culture to align with the strategies of the business groups. Intel IT brings technical expertise and business acumen to bear on the highest-priority projects at Intel to accelerate business at a faster pace than ever before. Intel IT has simplified the way Intel’s business groups interact with IT to identify workflow and process improvements that IT can drive. Because IT understands these businesses, it can tailor cloud hosting decisions to specific business priorities.

 

Our private cloud, with on-demand self-service, enables Intel business groups to innovate quickly and securely. In the annual Intel IT Business Review Intel IT reveals that 85 percent of all new services installed for our Office, Enterprise and Services divisions are hosted in the cloud.

 

Intel IT attributes the success of our private cloud to implementing a provider-like cloud hosting strategy, advancing self-service infrastructure as a service and platform as a service, and enabling cloud-aware applications. Intel’s private cloud saves about USD 7.5 million annually while supporting a 17 percent increase in operating system instances in the environment.

 

Cloud-aware applications can maximize cloud advantages such as self-service provisioning, elasticity, run-anywhere design, multi-tenancy, and design for failure. To enhance Intel developers’ skill sets, in 2013 Intel IT delivered 8 code-a-thons in 3 geographical regions, training over 100 Intel developers in how to build cloud-aware applications.

 

To increase our understanding of how hybrid clouds can benefit Intel, IT is also conducting a hybrid cloud proof of concept using open source OpenStack APIs. Hybrid cloud hosting can provide additional external capacity to augment our own private cloud while enabling us to optimize our internal capacity.

 

Hybrid cloud hosting also increases flexibility, allowing us to dynamically adjust capacity when needed to support business initiatives efficiently.

 

Intel IT has accelerated hosting decisions for its business customers by developing a methodical approach to determining the best hosting option. It considers security, control, cost, location, application requirements, capacity, and availability before arriving at a hosting decision for each use case. Offering optimized hosting solutions improves business agility and velocity while reducing costs.

 

For more go to www.intel.com/ITAnnualReport

 

There are many software tools available for benchmarking SSDs today. Many of them are consumer oriented with polished interfaces; others are command-line based and can look cryptic. I’m not going to criticize any of them in this blog; instead, I’ll share the approach we use in the Solution Architecture team at Intel’s NVM Solutions Group.

 

We rely on two proven I/O benchmarking tools: Iometer (http://www.iometer.org) for Windows and FIO (http://freecode.com/projects/fio) for Linux. Both offer many advanced features for simulating different types of workloads. Unfortunately, FIO lacks a GUI; it is command-line only. An amazing feature set alone was not enough to make it work as a demo tool. That’s how the idea of FIO Visualizer (http://01.org/fio-visualizer) was born; it was developed at Intel and released as open source.

 

What is FIO Visualizer? It’s a GUI for FIO. It parses FIO console output in real time and displays IOPS, bandwidth, and latency for each device’s workload. The data is gathered from the console output at the assigned interval, and the graphs update immediately. It is especially valuable for benchmarking SSDs, particularly those based on the NVMe specification.
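For context, here is a rough sketch of the kind of raw FIO invocation the visualizer wraps. The job name, device path, and runtime are my own assumptions; the --status-interval option is what makes FIO emit the periodic console output that a tool like this can parse:

        fio --name=4k-randread --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
            --bs=4k --rw=randread --iodepth=32 --time_based --runtime=60 \
            --status-interval=1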

 

Let’s take a quick look at the interface features:


  • Real-time display. The minimum interval is 1 second and can be reduced further with a simple FIO source code change.
  • Monitors IOPS, bandwidth, and latency for reads and writes, plus unique QoS analytics.
  • Multithread / multi-job support, which is particularly valuable for NVMe SSD benchmarking.
  • A single GUI window, with no overlapping windows or complicated menus.
  • Customizable layout; the user defines which parameters to monitor.
  • Workload manager for FIO settings. Comes with the base workload settings used in all Intel SSD datasheets.
  • Written in Python with PyQtGraph; it uses third-party libraries to simplify the GUI code.

 

FIO Visualizer GUI screen with an example of a running workload.

 

The graph screen is divided into two vertical blocks for read and write statistics, and into three horizontal segments displaying IOPS, bandwidth, and latency. Every graph supports auto-scaling in both dimensions, and each graph can be zoomed individually; once zoomed, it can return to auto-scaling via a popup button. Individual graphs can be disabled, and the view of the control panel on the right can be changed.

 


This example demonstrates handling of multi-job workloads, which are executed by FIO in separate threads.

 

 

Running FIO Visualizer.

 

Having a GUI written in Python gives us great flexibility to make changes and adopt enhancements. However, it uses a few external Python libraries that are not part of a default installation, which creates some OS compatibility and dependency requirements.

 

Here are the exact steps to get it running under CentOS 7:

 

  0. You should have python and PyQt installed with the OS

 

  1. Install pyqtgraph-develop (0.9.9 required) from http://www.pyqtgraph.org

        $ python setup.py install

 

  2. Install Cython from http://cython.org. Version 0.21 or higher is required.

        $ python setup.py install

 

  3. Install Numpy from http://numpy.org

        $ python setup.py build

        $ python setup.py install

 

  4. Install FIO 2.1.14 (latest supported at the moment) from http://freecode.com/projects/fio

        # ./configure

        # make

        # make install

 

  5. Run the Visualizer as root.

        # ./fio-visualizer.py

 

 

SSD Preconditioning.


Before running the benchmark you need to prepare the drive. This is usually called "SSD preconditioning": bringing a "fresh" drive into its sustained performance state. Here are the basic steps to follow to get reliable results:

 

  • Secure Erase the SSD with vendor tools. For Intel® Data Center SSDs, this tool is the Intel® Solid-State Drive Data Center Tool.
  • Fill the SSD with sequential data to twice its capacity. This guarantees that all available memory is filled with data, including the factory-provisioned area. dd is the easiest way to do so:

          dd if=/dev/zero bs=1024k of=/dev/"devicename"
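          # Note (my own addition; the device name below is an assumption): each dd pass stops
          # when the device is full, so run the command twice to write two full passes (2x capacity):
          #   dd if=/dev/zero bs=1024k of=/dev/nvme0n1
          #   dd if=/dev/zero bs=1024k of=/dev/nvme0n1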

  • If you're running a sequential workload to estimate read or write throughput, skip the next step.
  • Fill the drive with 4K random data. The same rule applies: the total amount of data written should be twice the drive's capacity.

          Use FIO for this purpose. Here is an example job file for an NVMe SSD:

        [global]
        name=4k random write 4 ios in the queue in 32 queues
        filename=/dev/nvme0n1
        ioengine=libaio
        direct=1
        bs=4k
        rw=randwrite
        iodepth=4
        numjobs=32
        size=100%
        loops=2

        [job1]

  • Now you’re ready to run your workload (see the sketch after this list). Measurements usually start after 5 minutes of runtime to let the SSD firmware adapt to the workload; this brings the drive into its sustained performance state.
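As a minimal sketch of tying these steps together (the file name precondition.fio and the measured-run parameters below are my own assumptions, not part of the FIO Visualizer package), preconditioning and a measured run from the shell might look like this:

        # run the preconditioning job file shown above (4k random writes, twice the drive's capacity)
        fio precondition.fio

        # then run the workload to be measured; readings are typically taken after ~5 minutes
        fio --name=4k-randread --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
            --bs=4k --rw=randread --iodepth=4 --numjobs=32 --time_based --runtime=600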

 


Workload manager.


The workload manager is a set of FIO settings grouped into files; it comes with the FIO Visualizer package. Each file represents a specific workload and can be loaded directly into the FIO Visualizer tool, which then starts the FIO job automatically.

Typical workload scenarios are included in the package: the basic datasheet workloads used for Intel® Data Center SSDs plus additional ones that simulate real use cases. These configuration files can be easily changed in any text editor, making them a great starting point for benchmarking.

 


You will notice that some workload definitions have a SATA prefix, while others are marked NVMe. There are important reasons to keep them separate. The AHCI and NVMe software stacks are very different: SATA drives use a single queue of at most 32 I/Os (AHCI), while NVMe drives were architected as massively parallel devices. According to the NVMe specification, a drive may support up to 64 thousand queues with 64 thousand commands each. In practice, that means certain workloads, such as small-block random ones, benefit from being executed in parallel. That’s why the random workloads for NVMe drives use multiple FIO jobs at a time; check the "numjobs" setting (a simplified illustration follows).
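As a simplified illustration (these exact values are my own and do not come from the shipped workload files), the queueing difference typically shows up in the job files roughly like this:

        # SATA/AHCI-style job: a single submission thread with one deeper queue (AHCI maximum is 32)
        iodepth=32
        numjobs=1

        # NVMe-style job: shallow per-job queues, with many jobs submitting in parallel
        iodepth=4
        numjobs=32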

 

To learn more about NVMe, please see these public IDF presentations, which explain the details:

 

NVM Express*: Going Mainstream and What’s Next

 

Supercharge Your Data Transfers with NVM Express* based PCI Express* Solid-State Drives

Cybersecurity is a significant problem and it continues to grow.  Addressing symptoms will not achieve the desired results.  A holistic approach must be applied, one that improves the entire technology ecosystem.  Smarter security innovation, open collaboration, trustworthy practices, technology designed to be hardened against compromise, and comprehensive protections wherever data flows are all required.

The technology industry must change in order to meet ever growing cybersecurity demands.  It will not be easy, but technologists, security leaders, and end-users must work together to make the future of computing safer.

 

2015 CTO Forum - Security Transformation.jpg

 

I recently spoke at the CTO Forum Rethink Technology event on Feb 13, 2015, presenting to an audience of thought-leading CTOs and executives.  I was privileged to speak on a panel including Marcus Sachs (VP National Security Policy, Verizon), Eran Feigenbaum (Director of Security for Google for Work, Google), Rob Fry (Senior Information Security Architect, Netflix), and Rick Howard (CSO, Palo Alto Networks).  We all discussed the challenges facing the cybersecurity sector and what steps are required to help companies strengthen their security.

 

I focused on the cybersecurity reality we are in, how we all have contributed to the problem, and consequently how we must all work together to transform the high technology industry to become sustainably secure.

The complete panel video is available at the CTO Forum website http://www.ctoforum.org/

 

Twitter: @Matt_Rosenquist

IT Peer Network: My Previous Posts

LinkedIn: http://linkedin.com/in/matthewrosenquist

Many people still wonder how Intel Core processors will perform on Windows 10, but Microsoft assures us that it is working together with Intel so that there is no loss of performance or stability in its Windows 10 operating system. The question that won't go away: how is the adoption of Intel Core M processors in tablets progressing with the arrival of Microsoft's Windows 10?

However much we may speculate, we can be certain that much more is still to come.

I wanted to share with you a series of insights into the Intel IT Business Review... my first is about employee devices.

 

The talent battleground has never been more crowded, and in the technology sector, if you want the right person for the job, you need the right IT solutions.

 

This is a subject explored in detail in the new Intel IT Business Review. Intel, as a major international employer, knows how important it is to recruit and retain the best talent, and that technology-sector employees require great technology experiences through mobility, ease of collaboration, and a choice of devices.

 

For example, recent college graduates don’t just want these technology experiences, they expect them. But whatever the level, employees have to be empowered to choose the right devices for their jobs. To do this, you need to offer a variety of devices, including lighter, more capable mobile devices with long battery life, the latest operating systems, and touch capabilities. These devices can transform the workplace by providing employees with a greater ability to work in a more flexible manner with optimum mobility and a better user experience.

 

Intel studies confirm that “one size does not fit all” regarding computing devices across Intel’s varied work environments. About 80 percent of Intel employees currently use mobile computing devices in the workplace, and the majority of the PC fleet consists of Ultrabook™ devices or 2-in-1 devices. In response to increasing employee demand for touch capabilities, the deployment of touch-enabled business Ultrabook devices and applications was accelerated, which has improved employee productivity and increased job satisfaction.

 

In a recent piece of Intel research, facility technicians reported that the use of tablets increased productivity by up to 17 percent, based on the number of completed work orders. In addition, by using tablets to display online information, these technicians performed their jobs 30 percent faster. Eighty percent of participants reported an increase in job flexibility and 57 percent reported an increase in productivity.

 

In 2015, Intel will continue to investigate how innovations in mobile computing can improve employee productivity and attract the best and brightest talent to help develop tomorrow’s technology. To read the Intel IT Business Review in full go to www.intel.com/ITAnnualReport

 

“Intel SSDs are too expensive!”

“The performance of an SSD won’t be noticed by my users.”

“Intel SSDs will wear out too fast!”

"I don’t have time to learn about deploying SSDs!”

 

I’ve heard statements like this for years, and do I ever have a story to share – the story of Intel’s adoption of Intel® Solid-State Drives (Intel® SSDs).

 

Before I tell you more, I would like to introduce myself.  I am currently a Client SSD Solutions Architect in Intel’s Non-Volatile Memory Solutions Group (the SSD group).   Prior to joining this group last year, I was in Information Technology (IT) at Intel for 26 years.  The last seven years in IT were spent in a client research and pathfinding role where I investigated new technologies and how they could be applied inside of Intel to improve employee productivity.

 

I can still remember the day in late 2007 when I first plugged an Intel SSD into my laptop.  I giggled.  A lot.  And that’s what sparked my passion for SSDs.  I completed many lab tests, research efforts and pilot deployments in my role, which led to the mainstream adoption of Intel SSDs within Intel.  That’s the short version.  More detail is documented in a series of white papers published through our IT@Intel Program.  If you’d like to read more about our SSD adoption journey, here are the papers:

 

 

I’ve answered many technical and business-related questions related to SSDs over the years.  Questions, and assumptions, like the four at the top of this blog, and perhaps one hundred others.  But the question I’ve been asked more than any other is, “how can you afford to deploy SSDs when they cost so much compared to hard drives?”  I won’t go into the detail in this introductory blog, but I will give you a hint, point you to our Total Cost of Ownership estimator and ask, “how can you afford to NOT use SSDs?”

 

I plan to cover a variety of client SSD topics in future blogs.  I have a lot of info that I would like to share about the adoption of SSDs within Intel, and about the technology and products in general.  If you are interested in a specific topic, please make a suggestion and I will use your input to guide future blogs.

 

Thanks for your time!

 

Doug
intel.com/ssd

In case you missed it, we just celebrated the launch of the Intel Xeon processor D product family. And if you did miss it, I’m here to give you all the highlights of an exciting leap in enterprise infrastructure optimization, from the data center to the network edge.

 

The Xeon D family is Intel’s 3rd-generation 64-bit SoC and the first based on Intel Xeon processor technology. The Xeon D weaves the performance of Xeon processors into a dense, lower-power system-on-a-chip (SoC). It suits a wide variety of use cases, ranging from dynamic web serving and dedicated web hosting to warm storage and network routing.

 

Secure, Scalable Storage

 

The Xeon D’s low energy consumption and extremely high performance make it a cost-effective, scalable solution for organizations looking to take their data centers to the next level. By dramatically reducing heat and electricity usage, this product family offers an unrivaled low-powered solution for enterprise server environments.

 

Server systems powered by the new Intel Xeon D processors offer fault-tolerant, stable storage platforms that lend themselves well to the scalability and speed clients demand. Large enterprises looking for low-power, high-density server processors for their data stacks should keep an eye on the Xeon D family, as these processors offer solid performance per watt and unparalleled security baked right into the hardware.

 

Cloud Service Providers Take Note

 

1&1, Europe’s leading web hosting service, recently analyzed Intel’s new Xeon D processor family for different cloud workloads such as storage or dedicated hosting. The best-in-class service utilizes these new processors to offer both savings and stability to their customers. According to 1&1’s Hans Nijholt, the technology has a serious advantage for enterprise storage companies as well as SMB customers looking to pass on savings to customers:

 

“The [Xeon D’s] energy consumption is extremely low and it gives us very high performance. Xeon D has a 4x improvement in memory and lets us get a much higher density in our data center, combined with the best price/performance ratio you can offer.”

 

If you’re looking to bypass existing physical limitations, sometimes it’s simply a matter of taking a step back, examining your environment, and understanding that you have options outside expansion. The Xeon D is ready to change your business — are you ready for the transformation?

 

We’ll be revealing more about the Xeon D at World Hosting Days; join us as we continue to unveil the exciting capabilities of our latest addition to the Xeon family!

 

If you're interested in learning more about what I've discussed in this blog, tune in to the festivities and highlights from CeBit 2015.

 

To continue this conversation, connect with me on LinkedIn or use #ITCenter.

Following Intel’s lead – decoupling software from hardware and automating IT and business processes — can help IT departments do more with less.

 

When I think back to all the strategic decisions that Intel IT has made over the last two decades, I can think of one that set the stage for all the rest: our move in 1999 from RISC-based computing systems to industry-standard Intel® architecture and Linux for our silicon design workloads. That transition, which took place over a 5-year period, helped us more than double our performance while eliminating approximately $1.4 billion in IT costs.

 

While this may seem like old news, it really was the first step in developing a software-defined infrastructure (SDI) – before it was known as such – at Intel. We solidified our compute platform with the right mix of software on the best hardware to get our products out on time.

 

Today, SDI has become a data center buzzword and is considered one of the critical next steps for the IT industry as a whole.



Why is SDI (compute, storage, and network) so important?

 

SDI is the only thing that is going to enable enterprise data centers to meet spending constraints, maximize infrastructure utilization, and keep up with demand that increases dramatically every year.

 

Here at Intel, compute demand is growing at around 30 percent year-over-year. And as you can see from the graphic, our storage demand is also growing at a phenomenal rate.

 

But our budget remains flat or has even decreased in some cases.

 

Somehow, we have to deliver ever-increasing services without increasing cost.


What’s the key?

 

Success lies in decoupling hardware and software.

 

As I mentioned, Intel decoupled hardware and software in our compute environment nearly 16 years ago, replacing costly proprietary solutions that tightly coupled hardware and software with industry-standard x86 servers and the open source Linux operating system. We deployed powerful, performance-optimized Intel® Xeon® processor-based servers for delivering throughput computing. We followed this by adding performance-centric higher-clock, higher-density Intel Xeon processor-based servers to accelerate silicon design TTM (time to market) while significantly reducing EDA  (Electronic Design Automation) application license cost — all of which resulted in software-defined compute capabilities that were powerful but affordable.

 

Technology has been continuously evolving, enabling us to bring a similar level of performance, availability, scalability, and functionality with open source, software-based solutions on x86-based hardware to our storage and network environments.

 

As we describe in a new white paper, Intel IT is continuously progressing and transforming Intel’s storage and network environments from proprietary fixed-function solutions to standard, agile, and cost-effective systems.

 

We are currently piloting software-defined storage and identifying quality gaps to improve the capability for end-to-end deployment for business critical use.

 

We transitioned our network from proprietary to commodity hardware resulting in more than a 50-percent reduction in cost. We are also working with the industry to adopt and certify an open-source-based network software solution that we anticipate will drive down per-port cost by an additional 50 percent. Our software-defined network deployment is limited to a narrow virtualized environment within our Office and Enterprise private cloud.


But that’s not enough…

 

Although decoupling hardware and software is a key aspect of building SDI, we must do more. Our SDI vision, which began many years ago, includes automated orchestration of the data center infrastructure resources. We have already automated resource management and federation at the global data center level. Our goal is total automation of IT and business processes, to support on-demand, self-service provisioning, monitoring, and management of the entire compute/network/storage infrastructure. Automation will ensure that when a workload demand occurs, it lands on the right-sized compute and storage so that the application can perform at the needed level of quality of service without wasting resources.


Lower cost, greater relevancy

 

Public clouds have achieved great economy of scale by adopting open-standard-based hardware, operating systems, and resource provisioning and orchestration software through which they can deliver cost-effective capabilities to the consumers of IT. If enterprise IT wants to stay relevant, we need to compete at a price point and agility similar to the public cloud. SDI lets IT compete while maintaining a focus on our clients’ business needs.

 

As Intel IT continues its journey toward end-to-end SDI, we will share our innovations and learnings with the rest of the IT industry — and we want to hear about yours, too! Together, we can not only stay relevant to our individual institutions, but also contribute to the maturity of the data center industry.

 

51 per cent of workloads are now in the cloud, time to break through that ceiling?

 

 

At this point, we’re somewhat beyond discussions of the importance of cloud. It’s been around for some time, just about every person and company uses it in some form and, for the kicker, 2014 saw companies place more computing workloads in the cloud (51 per cent), through either public cloud or colocation, than they processed in house.

 

In just a few years we’ve moved from every server sitting in the same building as those accessing it, to a choice between private or public cloud, and the beginning of the IT Model du jour, hybrid cloud. Hybrid is fast becoming the model of choice, fusing the safety of an organisation’s private data centre with the flexibility of public cloud. However, in today’s fast paced IT world as one approach becomes mainstream the natural reaction is to ask, ‘what’s next’? A plausible next step in this evolution is the end of the permanent, owned datacentre and even long-term co-location, in favour of an infrastructure entirely built on the public cloud and SaaS applications. The question is will businesses really go this far in their march into the cloud? Do we want it to go this far?

 

Public cloud, of course, is nothing new to the enterprise and it’s not unheard of for a small business or start-up to operate solely from the public cloud and through SaaS services. However, few, if any, examples of large scale corporates eschewing their own private datacentres and co-location approaches for this pure public cloud approach exist.

 

For such an approach to become plausible in large organisations, CIOs need to be confident of putting even the most sensitive of data into public clouds. This entails a series of mentality changes that are already taking place in the SMB. The cloud based Office 365, for instance, is Microsoft’s fastest selling product ever. For large organisations, however, this is far from a trivial change and CIOs are far from ready for it.

 

The data argument

 

Data protectionism is the case in point. Data has long been a highly protected resource for financial services and legal organisations both for their own competitive advantage and due to legal requirements designed to protect their clients’ information. Thanks to the arrival of big data analysis, we can also add marketers, retailers and even sports brands to that list, as all have found unique advantages in the ability to mine insights from huge amounts of data.

This is at the same time an opportunity and problem. More data means more accurate and actionable insights, but that data needs storing and processing and, consequently, an ever growing amount of server power and storage space. Today’s approach to this issue is the hybrid cloud. Keep sensitive data primarily stored in a private data centre or co-located, and use public cloud as an overspill when processing or as object storage when requirements become too much for the organisation’s existing capacity.

 

The amount of data created and recorded each day is ever growing. In a world where data growth is exponential,  the hybrid model will be put under pressure. Even organisations that keep only the most sensitive and mission critical data within their private data centres whilst moving all else to the cloud will quickly see data inflation. Consequently, they will be forced to buy ever greater numbers of servers and space to house their critical data at an ever growing cost, and without the flexibility of the public cloud.

 

In this light, a pure public cloud infrastructure starts to seem like a good idea - an infrastructure that can be instantly switched on and expanded as needed, at low cost. The idea of placing their most sensitive data in a public cloud, beyond their own direct control and security, however, will remain unpalatable to the majority of CIOs. Understandable when you consider research such as that released last year stating that only one in 100 cloud providers meets EU Data Protection requirements currently being examined in Brussels.

 

So, increasing dependence on the public cloud becomes a tug of war between a CIO’s data burden and their capacity for the perceived security risk of the cloud.

 

Cloud Creep

 

The process that may well tip the balance in this tug of war is cloud’s very own version of exposure therapy. CIOs are storing and processing more and more non critical data in the public cloud and, across their organisations, business units are independently buying in SaaS applications, giving them a taste of the ease of the cloud (from an end user point of view, at least). As this exposure grows, the public cloud and SaaS applications will increasingly prove their reliability and security whilst earning their place as invaluable tools in a business unit’s armoury. The result is a virtuous circle of growing trust of public cloud and SaaS services – greater trust means more data placed in the public cloud, which creates greater trust. Coupled with the ever falling cost of public cloud, eventually, surely, the perceived risks of the public cloud fall enough to make its advantages outweigh the disadvantages, even for the most sensitive of data?

 

Should it be done?

 

This all depends on a big ‘if’. Trust in the public cloud and SaaS applications will only grow if public cloud providers remain unhacked and SaaS data unleaked. This is a big ask in a world of weekly data breaches, but security is relative and private data centre leaks are rapidly becoming more common, or at least better publicised, than those in the public cloud. Sony Pictures’ issues arose from a malevolent force within its network, not its public cloud based data. It will take many more attacks such as these to convince CIOs that losing direct control of their data security and putting all that trust in their cloud provider is the most sensible option. Those attacks seem likely to come, however, and in the meantime, barring a major outage or truly headline making attack on it, cloud exposure is increasing confidence in public cloud.

 

At the same time, public cloud providers need to work to build confidence, not just passively wait for the scales to tip. Selecting a cloud service is a business decision, and any CIO will apply the same diligence to it that they would to any other supplier choice. Providers that fail to meet the latest regulation, aren’t visibly planning for the future, or fail to convince on data privacy concerns and legislation will damage confidence in the public cloud and actively hold it back, particularly within large enterprises. Those providers that do build their way to becoming a trusted partner will, however, flourish and compound the ever-growing positive effects of public cloud exposure.

 

As that happens, the prospect of a pure public cloud enterprise becomes more realistic. Every CIO and organisation is different, and will have a different tolerance for risk. This virtuous circle of cloud will tip organisations towards pure cloud approaches at different times, and every cloud hack or outage will set the model back different amounts in each organisation. It is, however, clear that, whether desirable right now or not, pure public cloud is rapidly approaching reality for some larger enterprises.

Workplace transformation is not a new concept. It’s a piece of our evolution. As new generations enter the workforce, they bring new expectations with them; what the workplace meant for one generation doesn’t necessarily fit with the next. Think about the way we work in 2015 versus the way we worked in, say, 2000.

 

In just 15 years, we’ve developed mobile technology that lets us communicate and work from just about anywhere. Robust mobile technologies like tablets and 2 in 1s enable remote workers to video conference and collaborate just as efficiently as they would in the office. As these technologies evolve, they change the way we think about how and where we work.

 


Working Better by Focusing on UX

 

Over the past decade, mobile technologies have probably had the most dramatic impact on how we work, but advances in infrastructure will pave the way for the next big shift. Wireless technologies have improved by leaps and bounds. Advances in wireless display (WiDi) and wireless gigabit (WiGig) technologies have created the very real possibility of a wire-free workplace. They drive evolution in a truly revolutionary way.

 

Consider the impact of something as simple as creating a “smart” conference room with a large presentation screen that automatically pairs with your 2 in 1 or other device, freeing you from adapters and cords. The meeting room could be connected to a central calendar and mark itself as “occupied” so employees always know which rooms are free and which ones are in use. Simple tweaks like this keep the focus on the content of meetings, not the distractions caused by peripheral frustrations.

 

The workstation is another transformation target. Wireless docking, auto-connectivity, and wireless charging will dramatically reduce clutter in the workplace. The powerful All-in-One PC with the Intel Core i5 processor will free employees from the tethers of their desktop towers. Simple changes like removing cords and freeing employees from their cubicles can have huge impacts for companies — and their bottom lines.

 

The Benefits of an Evolved Workplace

 

Creating the right workplace for employees is one of the most important things companies can do to give themselves an advantage. By investing in the right infrastructure and devices, businesses can maximize employee creativity and collaboration, enhance productivity, and attract and retain top talent. Evolving the workplace through technology can empower employees to do their best work with fewer distractions and frustrations caused by outdated technology.

 

If you're interested in learning more about what I've discussed in this blog, tune in to the festivities and highlights from CeBit 2015.

 

To continue this conversation on Twitter, please use #ITCenter. And you can find me on LinkedIn here.

Convincing customers to be secure is no easy task, even when it is in their best interest.  Some innovative companies are exploring new ways to change behaviors without the downsides of fear and negative press, by actually rewarding their customers.

 

Carrot and the stick. 

 

Nowadays, the only time customers go out of their way to change their passwords or act more securely is when they see headlines about a data breach, are notified of their stolen identities, or see fraudulent charges.  Such events are costly and embarrassing to businesses, but they do result in many users begrudgingly changing their passwords or behaving in more responsible ways to protect their security.  Companies want users to be more proactive and involved in protecting their information and access, but it is difficult to influence that cooperation.

 

Some creative organizations are taking a different approach.  They are instituting positive reinforcement and rewards to bridge the gap between how customers currently act and how they should behave to enhance their security.  The Hilton Honors guest loyalty program, a travel rewards organization, is offering 1,000 points to members who update their passwords.  The Google Drive team recently offered an additional 2GB of online storage to customers completing a security checkup.  This is a change in tactics and a proactive approach likely to make their customers more aware of security measures and good practices.

 

Although not obvious, it may be a very shrewd business decision.  Cooperation between customers and businesses to enhance security is a powerful force.  The nominal costs of rewards may be offset by the reduction in risks and impacts of security incidents.  Beyond the fiscal responsibility, such interaction may strengthen brand awareness, trust, and loyalty.  Feeling secure, in an insecure world, has many advantages. 

 

For those who build a strong relationship, such rewards may only be the start.  Savvy users can help with early detection of attacks, report phishing attempts, and alert on other indicators of compromise.  Partnerships could extend to other security related areas where users are involved to define proper data retention parameters, privacy practices, and to voluntarily access sensitive services only from secured devices.  Cooperation builds trust and encourages loyalty.  Rewarding customers to actively engage and contribute to a safer environment could be something special and highly effective if worked properly.

 

Is bribing customers a bad thing?  Not in my book, when it results in better education, acceptance of more responsibility, and ultimately better security behaviors across the community.  So you have my vote.  Good job and I hope this begins a worthwhile trend. 

 

 

Twitter: @Matt_Rosenquist

IT Peer Network: My Previous Posts

LinkedIn: http://linkedin.com/in/matthewrosenquist

 

Ready or Not, Cross-Channel Shopping Is Here to Stay

 

Of all the marketplace transitions that have swept through the developed world's retail industry over the last five to seven years, the most important is the behavioral shift to cross-channel shopping.

 

The story is told in these three data points1:

 

  1. More than 60 percent of U.S. shoppers (and a higher number in the U.K.) regularly begin their shopping journey online.
  2. Online ratings and reviews have the greatest impact on shopper purchasing decisions, above friends and family, and have four to five times greater impact than store associates.
  3. Nearly 90 percent of all retail revenue is generated in the store.

 

Retail today is face-to-face with a shopper who’s squarely at the intersection of e-commerce, an ever-present smartphone, and an always-on connection to the Internet.

 

Few retailers are blind to the big behavioral shift. Most brands are responding with strategic omni-channel investments that seek to erase legacy channel lines between customer databases, inventories, vendor lists, and promotions.

 


Channel-centric organizations are being trimmed, scrubbed, or reshaped. There’s even a willingness — at least among some far-sighted brands — to deal head-on with the thorny challenge of revenue recognition.

 

All good. All necessary.

 


Redefining the Retail Space

 

But, as far as I can tell, only a handful of leaders are asking the deeper question: what, exactly, is the new definition of the store?

 

What is the definition of the store when the front door to the brand is increasingly online?

 

What is the definition of the store when shoppers know more than the associates, and when the answer to the question of how and why becomes — at the point of purchase — more important than what and how much?

 

What is the definition of the store beyond digital? Or of a mash-up of the virtual and physical?

 

What is the definition — not of brick-and-mortar and shelves and aisles and four-ways and displays — but of differentiating value delivery?

 

This is a topic we’re now exploring through whiteboard sessions and analyst and advisor discussions. We’re hard at work reviewing the crucial capabilities that will drive the 2018 cross-brand architecture.

 

Stay tuned. I’ll be sharing my hypotheses (and findings) as I forge ahead.

 

 

Jon Stine
Global Director, Retail Sales

Intel Corporation

 

This is the second installment of a series on Retail & Tech. Click here to read Moving from Maintenance to growth in Retail Technology.

 

1 National Retail Federation. “2015 National Retail Federation Data.” 06 January 2015.
