
IT Peer Network


Cybersecurity is a significant and growing problem. Addressing symptoms will not achieve the desired results. A holistic approach must be applied, one that improves the entire technology ecosystem: smarter security innovation, open collaboration, trustworthy practices, technology designed to be hardened against compromise, and comprehensive protections wherever data flows.

The technology industry must change in order to meet ever-growing cybersecurity demands. It will not be easy, but technologists, security leaders, and end users must work together to make the future of computing safer.

 

[Image: 2015 CTO Forum - Security Transformation]

 

I recently spoke at the CTO Forum Rethink Technology event on February 13, 2015, presenting to an audience of thought-leading CTOs and executives. I was privileged to join a panel that included Marcus Sachs (VP National Security Policy, Verizon), Eran Feigenbaum (Director of Security for Google for Work, Google), Rob Fry (Senior Information Security Architect, Netflix), and Rick Howard (CSO, Palo Alto Networks). We discussed the challenges facing the cybersecurity sector and the steps required to help companies strengthen their security.

 

I focused on the cybersecurity reality we are in, how we have all contributed to the problem, and, consequently, how we must all work together to transform the high-technology industry so that it becomes sustainably secure.

The complete panel video is available at the CTO Forum website http://www.ctoforum.org/

 

Twitter: @Matt_Rosenquist

IT Peer Network: My Previous Posts

LinkedIn: http://linkedin.com/in/matthewrosenquist

Many people still wonder how Intel Core processors will perform on Windows 10, but Microsoft assures us it is working with Intel so that there is no loss of performance or stability in the Windows 10 operating system. The question that won't go away: how is the adoption of Intel Core M processors in tablets progressing with the arrival of Microsoft's Windows 10?

For now we can only speculate, but we can be sure that much more is still to come.

I wanted to share with you a series of insights into the Intel IT Business Review... my first is about employee devices.

 

The talent battleground has never been more crowded, and in the technology sector, if you want the right person for the job, you need the right IT solutions.

 

This subject is explored in detail in the new Intel IT Business Review. As a major international employer, Intel knows how important it is to recruit and retain the best talent, and that technology-sector employees expect great technology experiences through mobility, ease of collaboration, and a choice of devices.

 

For example, recent college graduates don’t just want these technology experiences; they expect them. But whatever the level, employees have to be empowered to choose the right devices for their jobs. That means offering a variety of devices, including lighter, more capable mobile devices with long battery life, the latest operating systems, and touch capabilities. These devices can transform the workplace by letting employees work more flexibly, with optimum mobility and a better user experience.

 

Intel studies confirm that “one size does not fit all” when it comes to computing devices across Intel’s varied work environments. About 80 percent of Intel employees currently use mobile computing devices in the workplace, and the majority of the PC fleet consists of Ultrabook™ devices or 2-in-1 devices. In response to increasing employee demand for touch capabilities, we accelerated the deployment of touch-enabled business Ultrabook devices and applications, which has improved employee productivity and increased job satisfaction.

 

In a recent piece of Intel research, facility technicians reported that using tablets increased productivity by up to 17 percent, based on the number of completed work orders. In addition, by using tablets to display online information, these technicians performed their jobs 30 percent faster. Eighty percent of participants reported an increase in job flexibility, and 57 percent reported an increase in productivity.

 

In 2015, Intel will continue to investigate how innovations in mobile computing can improve employee productivity and attract the best and brightest talent to help develop tomorrow’s technology. To read the Intel IT Business Review in full, go to www.intel.com/ITAnnualReport

 

“Intel SSDs are too expensive!”

“The performance of an SSD won’t be noticed by my users.”

“Intel SSDs will wear out too fast!”

"I don’t have time to learn about deploying SSDs!”

 

I’ve heard statements like this for years, and do I ever have a story to share – the story of Intel’s adoption of Intel® Solid-State Drives (Intel® SSDs).

 

Before I tell you more, I would like to introduce myself.  I am currently a Client SSD Solutions Architect in Intel’s Non-Volatile Memory Solutions Group (the SSD group).   Prior to joining this group last year, I was in Information Technology (IT) at Intel for 26 years.  The last seven years in IT were spent in a client research and pathfinding role where I investigated new technologies and how they could be applied inside of Intel to improve employee productivity.

 

I can still remember the day in late 2007 when I first plugged in an Intel SSD into my laptop.  I giggled.  A lot.  And that’s what sparked my passion for SSDs.  I completed many lab tests, research efforts and pilot deployments in my role, which led to the mainstream adoption of Intel SSDs within Intel.  That’s the short version.  More detail is documented in a series of white papers published through our IT@Intel Program.  If you’d like to read more about our SSD adoption journey, here are the papers:

 

 

I’ve answered many technical and business-related questions about SSDs over the years. Questions, and assumptions, like the four at the top of this blog, and perhaps one hundred others. But the question I’ve been asked more than any other is, “how can you afford to deploy SSDs when they cost so much compared to hard drives?” I won’t go into detail in this introductory blog, but I will give you a hint, point you to our Total Cost of Ownership estimator, and ask, “how can you afford to NOT use SSDs?”
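For a feel of the math behind that kind of estimator, here is a toy sketch of the comparison it makes. Every figure below is an invented placeholder, not Intel data, and the model is deliberately simple: purchase price plus expected repairs, minus the value of productivity gained from faster storage.

```python
# Toy total-cost-of-ownership comparison for SSDs vs. HDDs.
# All figures are illustrative placeholders, not Intel numbers.

def tco(purchase_price, annual_failure_rate, repair_cost,
        hours_saved_per_year, labor_rate, years=4):
    """Rough TCO: purchase price plus expected repair costs,
    minus the value of productivity gained from faster storage."""
    expected_repairs = annual_failure_rate * repair_cost * years
    productivity_value = hours_saved_per_year * labor_rate * years
    return purchase_price + expected_repairs - productivity_value

hdd = tco(purchase_price=50,  annual_failure_rate=0.05,  repair_cost=300,
          hours_saved_per_year=0,  labor_rate=60)
ssd = tco(purchase_price=150, annual_failure_rate=0.015, repair_cost=300,
          hours_saved_per_year=10, labor_rate=60)

print(f"4-year HDD TCO: ${hdd:,.0f}")  # the cheap drive isn't cheap over time
print(f"4-year SSD TCO: ${ssd:,.0f}")  # negative here means a net saving
```

The point of the exercise: once you price in failures and, above all, employee time, the sticker price stops being the whole story.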

 

I plan to cover a variety of client SSD topics in future blogs.  I have a lot of info that I would like to share about the adoption of SSDs within Intel, and about the technology and products in general.  If you are interested in a specific topic, please make a suggestion and I will use your input to guide future blogs.

 

Thanks for your time!

 

Doug
intel.com/ssd

In case you missed it, we just celebrated the launch of the Intel Xeon processor D product family. And if you did miss it, I’m here to give you all the highlights of an exciting leap in enterprise infrastructure optimization, from the data center to the network edge.

 

The Xeon D family is Intel’s 3rd generation 64-bit SoC and the first based on Intel Xeon processor technology. The Xeon D weaves the performance of Xeon processors into a dense, lower-power system-on-a-chip (SoC). It suits a unique variety of use cases, ranging from dynamic web serving and dedicated web hosting, to warm storage and network routing.

 

Secure, Scalable Storage

 

The Xeon D’s low energy consumption and extremely high performance make it a cost-effective, scalable solution for organizations looking to take their data centers to the next level. By dramatically reducing heat and electricity usage, this product family offers an unrivaled low-powered solution for enterprise server environments.

 

Server systems powered by the new Intel Xeon D processors offer fault-tolerant, stable storage platforms that lend themselves well to the scalability and speed clients demand. Large enterprises looking for low-power, high-density server processors for their data stacks should keep an eye on the Xeon D family, as these processors offer solid performance per watt and unparalleled security baked right into the hardware.

 

Cloud Service Providers Take Note

 

intel-xeon-d-processor-family.jpg1&1, Europe’s leading web hosting service, recently analyzed Intel’s new Xeon D processor family for different cloud workloads such as storage or dedicated hosting. The best-in-class service utilizes these new processors to offer both savings and stability to their customers. According to 1&1’s Hans Nijholt, the technology has a serious advantage for enterprise storage companies as well as SMB customers looking to pass on savings to customers:

 

“The [Xeon D’s] energy consumption is extremely low and it gives us very high performance. Xeon D has a 4x improvement in memory and lets us get a much higher density in our data center, combined with the best price/performance ratio you can offer.”

 

If you’re looking to bypass existing physical limitations, sometimes it’s simply a matter of taking a step back, examining your environment, and understanding that you have options outside expansion. The Xeon D is ready to change your business — are you ready for the transformation?

 

We’ll be revealing more about the Xeon D at World Hosting Days; join us as we continue to unveil the exciting capabilities of our latest addition to the Xeon family!

 

If you're interested in learning more about what I've discussed in this blog, tune in to the festivities and highlights from CeBit 2015.

 

To continue this conversation, connect with me on LinkedIn or use #ITCenter.

Following Intel’s lead – decoupling software from hardware and automating IT and business processes – can help IT departments do more with less.

 

When I think back to all the strategic decisions that Intel IT has made over the last two decades, I can think of one that set the stage for all the rest: our move in 1999 from RISC-based computing systems to industry-standard Intel® architecture and Linux for our silicon design workloads. That transition, which took place over a 5-year period, helped us more than double our performance while eliminating approximately $1.4 billion in IT costs.

 

While this may seem like old news, it really was the first step in developing a software-defined infrastructure (SDI) – before it was known as such – at Intel. We solidified our compute platform with the right mix of software on the best hardware to get our products out on time.

 

Today, SDI has become a data center buzzword and is considered one of the critical next steps for the IT industry as a whole.

[Graphic: Intel IT storage capacity growth]


Why is SDI (compute, storage, and network) so important?

 

SDI is the only thing that is going to enable enterprise data centers to meet spending constraints, maximize infrastructure utilization, and keep up with demand that increases dramatically every year.

 

Here at Intel, compute demand is growing at around 30 percent year-over-year. And as you can see from the graphic, our storage demand is also growing at a phenomenal rate.

 

But our budget remains flat or has even decreased in some cases.

 

Somehow, we have to deliver ever-increasing services without increasing cost.
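To put numbers on that squeeze, here is the arithmetic, using the 30 percent compute growth rate cited above (a quick illustrative calculation, not a forecast):

```python
# How a 30 percent year-over-year demand curve compounds against a
# flat budget, using the growth rate cited above.
demand = 1.0
for year in range(1, 6):
    demand *= 1.30
    print(f"Year {year}: {demand:.1f}x today's demand, same budget")
# After 5 years demand is ~3.7x, so the cost per unit of capacity
# must fall by roughly 73 percent just to keep pace.
```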


What’s the key?

 

Success lies in decoupling hardware and software.

 

As I mentioned, Intel decoupled hardware and software in our compute environment nearly 16 years ago, replacing costly proprietary solutions that tightly coupled hardware and software with industry-standard x86 servers and the open source Linux operating system. We deployed powerful, performance-optimized Intel® Xeon® processor-based servers to deliver throughput computing. We then added higher-clock, higher-density Intel Xeon processor-based servers to accelerate silicon design time to market (TTM) while significantly reducing electronic design automation (EDA) application license costs. All of this resulted in software-defined compute capabilities that were powerful but affordable.

 

Technology has been continuously evolving, enabling us to bring a similar level of performance, availability, scalability, and functionality with open source, software-based solutions on x86-based hardware to our storage and network environments.

 

As we describe in a new white paper, Intel IT is continuously progressing and transforming Intel’s storage and network environments from proprietary fixed-function solutions to standard, agile, and cost-effective systems.

 

We are currently piloting software-defined storage and identifying quality gaps to improve the capability for end-to-end deployment for business critical use.

 

We transitioned our network from proprietary to commodity hardware resulting in more than a 50-percent reduction in cost. We are also working with the industry to adopt and certify an open-source-based network software solution that we anticipate will drive down per-port cost by an additional 50 percent. Our software-defined network deployment is limited to a narrow virtualized environment within our Office and Enterprise private cloud.


But that’s not enough…

 

Although decoupling hardware and software is a key aspect of building SDI, we must do more. Our SDI vision, which began many years ago, includes automated orchestration of the data center infrastructure resources. We have already automated resource management and federation at the global data center level. Our goal is total automation of IT and business processes, to support on-demand, self-service provisioning, monitoring, and management of the entire compute/network/storage infrastructure. Automation will ensure that when a workload demand occurs, it lands on the right-sized compute and storage so that the application can perform at the needed level of quality of service without wasting resources.
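As a rough illustration of that “right-sized” placement idea, here is a minimal sketch, not Intel’s actual orchestration tooling; the node names and sizes are invented:

```python
# Minimal sketch of "right-sized" workload placement, the kind of
# decision an SDI orchestrator automates. Names and numbers are
# illustrative only.

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    free_cpus: int
    free_gb: int

def place(workload_cpus, workload_gb, nodes):
    """Pick the smallest node that still fits the workload, so large
    nodes stay free for large jobs and nothing is over-provisioned."""
    candidates = [n for n in nodes
                  if n.free_cpus >= workload_cpus and n.free_gb >= workload_gb]
    if not candidates:
        return None  # trigger scale-out or queue the request
    best = min(candidates, key=lambda n: (n.free_cpus, n.free_gb))
    best.free_cpus -= workload_cpus
    best.free_gb -= workload_gb
    return best.name

pool = [Node("small-1", 8, 64), Node("big-1", 64, 512)]
print(place(4, 32, pool))    # -> small-1: fits, so the big node is spared
print(place(32, 256, pool))  # -> big-1
```

The real win comes when this decision is made automatically for every request, end to end, rather than by a ticket and a human.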


Lower cost, greater relevancy

 

Public clouds have achieved great economy of scale by adopting open-standard-based hardware, operating systems, and resource provisioning and orchestration software through which they can deliver cost-effective capabilities to the consumers of IT. If enterprise IT wants to stay relevant, we need to compete at a price point and agility similar to the public cloud. SDI lets IT compete while maintaining a focus on our clients’ business needs.

 

As Intel IT continues its journey toward end-to-end SDI, we will share our innovations and learnings with the rest of the IT industry — and we want to hear about yours, too! Together, we can not only stay relevant to our individual institutions, but also contribute to the maturity of the data center industry.

 

51 per cent of workloads are now in the cloud: time to break through that ceiling?

 

 

At this point, we’re somewhat beyond discussions of the importance of cloud. It’s been around for some time, just about every person and company uses it in some form and, for the kicker, 2014 saw companies place more computing workloads in the cloud (51 per cent) - through either public cloud or colocation - than they process in house.

 

In just a few years we’ve moved from every server sitting in the same building as those accessing it, to a choice between private or public cloud, and the beginning of the IT Model du jour, hybrid cloud. Hybrid is fast becoming the model of choice, fusing the safety of an organisation’s private data centre with the flexibility of public cloud. However, in today’s fast paced IT world as one approach becomes mainstream the natural reaction is to ask, ‘what’s next’? A plausible next step in this evolution is the end of the permanent, owned datacentre and even long-term co-location, in favour of an infrastructure entirely built on the public cloud and SaaS applications. The question is will businesses really go this far in their march into the cloud? Do we want it to go this far?

 

Public cloud, of course, is nothing new to the enterprise and it’s not unheard of for a small business or start-up to operate solely from the public cloud and through SaaS services. However, few, if any, examples of large scale corporates eschewing their own private datacentres and co-location approaches for this pure public cloud approach exist.

 

For such an approach to become plausible in large organisations, CIOs need to be confident about putting even the most sensitive data into public clouds. This entails a series of mentality changes that are already taking place in the SMB. The cloud-based Office 365, for instance, is Microsoft’s fastest-selling product ever. For large organisations, however, this is far from a trivial change, and CIOs are far from ready for it.

 

The data argument

 

Data protectionism is the case in point. Data has long been a highly protected resource for financial services and legal organisations both for their own competitive advantage and due to legal requirements designed to protect their clients’ information. Thanks to the arrival of big data analysis, we can also add marketers, retailers and even sports brands to that list, as all have found unique advantages in the ability to mine insights from huge amounts of data.

This is at once an opportunity and a problem. More data means more accurate and actionable insights, but that data needs storing and processing and, consequently, an ever-growing amount of server power and storage space. Today’s approach to this issue is the hybrid cloud: keep sensitive data primarily stored in a private data centre or co-located, and use public cloud as an overspill for processing, or as object storage, when requirements exceed the organisation’s existing capacity.
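To make that overspill policy concrete, here is a minimal sketch with invented capacities; it is an illustration of the placement rule, not any particular product:

```python
# Sketch of the hybrid "overspill" policy described above: sensitive
# data stays in the private data centre; everything else goes private
# until capacity runs out, then spills to the public cloud.

PRIVATE_CAPACITY_TB = 500
private_used_tb = 0.0

def place_dataset(size_tb, sensitive):
    global private_used_tb
    if sensitive:
        # Sensitive data is pinned to the private data centre,
        # even if that means buying more hardware later.
        private_used_tb += size_tb
        return "private"
    if private_used_tb + size_tb <= PRIVATE_CAPACITY_TB:
        private_used_tb += size_tb
        return "private"
    return "public-cloud"  # the overspill case

print(place_dataset(100, sensitive=True))   # private
print(place_dataset(350, sensitive=False))  # private (still fits)
print(place_dataset(200, sensitive=False))  # public-cloud (overspill)
```

Notice where the pressure builds: as data grows, the `sensitive` branch keeps demanding more private hardware, which is exactly the cost spiral described next.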

 

The amount of data created and recorded each day is ever growing. In a world where data growth is exponential,  the hybrid model will be put under pressure. Even organisations that keep only the most sensitive and mission critical data within their private data centres whilst moving all else to the cloud will quickly see data inflation. Consequently, they will be forced to buy ever greater numbers of servers and space to house their critical data at an ever growing cost, and without the flexibility of the public cloud.

 

In this light, a pure public cloud infrastructure starts to seem like a good idea - an infrastructure that can be instantly switched on and expanded as needed, at low cost. The idea of placing their most sensitive data in a public cloud, beyond their own direct control and security, however, will remain unpalatable to the majority of CIOs. Understandable when you consider research such as that released last year stating that only one in 100 cloud providers meets EU Data Protection requirements currently being examined in Brussels.

 

So, increasing dependence on the public cloud becomes a tug of war between a CIO’s data burden and their capacity for the perceived security risk of the cloud.

 

Cloud Creep

 

The process that may well tip the balance in this tug of war is cloud’s very own version of exposure therapy. CIOs are storing and processing more and more non critical data in the public cloud and, across their organisations, business units are independently buying in SaaS applications, giving them a taste of the ease of the cloud (from an end user point of view, at least). As this exposure grows, the public cloud and SaaS applications will increasingly prove their reliability and security whilst earning their place as invaluable tools in a business unit’s armoury. The result is a virtuous circle of growing trust of public cloud and SaaS services – greater trust means more data placed in the public cloud, which creates greater trust. Coupled with the ever falling cost of public cloud, eventually, surely, the perceived risks of the public cloud fall enough to make its advantages outweigh the disadvantages, even for the most sensitive of data?

 

Should it be done?

 

This all depends on a big ‘if’. Trust in the public cloud and SaaS applications will only grow if public cloud providers remain unhacked and SaaS data unleaked. This is a big ask in a world of weekly data breaches, but security is relative, and private data centre leaks are rapidly becoming more common, or at least better publicised, than those in the public cloud. Sony Pictures’ issues arose from a malevolent force within its network, not its public-cloud-based data. It will take many more attacks such as these to convince CIOs that losing direct control of their data security and putting all that trust in their cloud provider is the most sensible option. Those attacks seem likely to come, however, and in the meantime, barring a major outage or truly headline-making attack, growing exposure is increasing confidence in the public cloud.

 

At the same time, public cloud providers need to work to build confidence, not just passively wait for the scales to tip. Selecting a cloud service is a business decision, and any CIO will apply the same diligence to it that they would to any other supplier choice. Providers that fail to meet the latest regulation, aren’t visibly planning for the future, or fail to convince on data privacy concerns and legislation will damage confidence in the public cloud and actively hold it back, particularly within large enterprises. Those providers that do build their way to becoming a trusted partner will, however, flourish and compound the ever-growing positive effects of public cloud exposure.

 

As that happens, the prospect of a pure public cloud enterprise becomes more realistic. Every CIO and organisation is different, and will have a different tolerance for risk. This virtuous circle of cloud will tip organisations towards pure cloud approaches at different times, and every cloud hack or outage will set the model back different amounts in each organisation. It is, however, clear that, whether desirable right now or not, pure public cloud is rapidly approaching reality for some larger enterprises.

Workplace transformation is not a new concept. It’s a piece of our evolution. As new generations enter the workforce, they bring new expectations with them; what the workplace meant for one generation doesn’t necessarily fit with the next. Think about the way we work in 2015 versus the way we worked in, say, 2000.

 

In just 15 years, we’ve developed mobile technology that lets us communicate and work from just about anywhere. Robust mobile technologies like tablets and 2 in 1s enable remote workers to video conference and collaborate just as efficiently as they would in the office. As these technologies evolve, they change the way we think about how and where we work.

 


Working Better by Focusing on UX

 

Over the past decade, mobile technologies have probably had the most dramatic impact on how we work, but advances in infrastructure will pave the way for the next big shift. Wireless technologies have improved by leaps and bounds. Advances in wireless display (WiDi) and wireless gigabit (WiGig) technologies have created the very real possibility of a wire-free workplace. They drive evolution in a truly revolutionary way.

 

Consider the impact of something as simple as creating a “smart” conference room with a large presentation screen that automatically pairs with your 2 in 1 or other device, freeing you from adapters and cords. The meeting room could be connected to a central calendar and mark itself as “occupied” so employees always know which rooms are free and which ones are in use. Simple tweaks like this keep the focus on the content of meetings, not the distractions caused by peripheral frustrations.

 

The workstation is another transformation target. Wireless docking, auto-connectivity, and wireless charging will dramatically reduce clutter in the workplace. The powerful All-in-One PC with the Intel Core i5 processor will free employees from the tethers of their desktop towers. Simple changes like removing cords and freeing employees from their cubicles can have huge impacts for companies — and their bottom lines.

 

The Benefits of an Evolved Workplace

 

Creating the right workplace for employees is one of the most important things companies can do to give themselves an advantage. By investing in the right infrastructure and devices, businesses can maximize employee creativity and collaboration, enhance productivity, and attract and retain top talent. Evolving the workplace through technology can empower employees to do their best work with fewer distractions and frustrations caused by outdated technology.

 

If you're interested in learning more about what I've discussed in this blog, tune in to the festivities and highlights from CeBit 2015.

 

To continue this conversation on Twitter, please use #ITCenter. And you can find me on LinkedIn here.

Convincing customers to be secure is no easy task, even when it is in their best interest. Some innovative companies are exploring new ways to change behaviors without the downsides of fear and negative press, by actually rewarding their customers.

 

Carrot and the stick. 

 

Nowadays, the only time customers go out of their way to change their passwords or act more securely is when they see headlines about a data breach, are notified their identities have been stolen, or spot fraudulent charges. Such events are costly and embarrassing to businesses, but they do result in many users begrudgingly changing their passwords or behaving more responsibly to protect their security. Companies want users to be more proactive and involved in protecting their information and access, but influencing that cooperation is a difficult challenge.

 

Some creative organizations are taking a different approach. They are instituting positive reinforcement and rewards to bridge the gap between how customers currently act and how they should behave to enhance their security. The Hilton Honors guest loyalty program, a travel rewards organization, is offering 1,000 points to members who update their passwords. The Google Drive team recently offered an additional 2GB of online storage to customers completing a security checkup. This is a change in tactics and a proactive approach likely to make customers more aware of security measures and good practices.
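The mechanic itself is simple enough to sketch. The actions and point values below are invented for illustration; only the 1,000-point password offer echoes the examples above:

```python
# Sketch of the positive-reinforcement mechanic described above:
# award loyalty points when a customer completes a security action.
# Actions and values are made up for illustration.

REWARDS = {
    "password_update": 1000,
    "security_checkup": 500,
    "enable_2fa": 750,
}

accounts = {"alice": {"points": 0, "completed": set()}}

def reward(user, action):
    acct = accounts[user]
    if action in REWARDS and action not in acct["completed"]:
        acct["completed"].add(action)  # one reward per action
        acct["points"] += REWARDS[action]
    return acct["points"]

print(reward("alice", "password_update"))  # 1000
print(reward("alice", "password_update"))  # still 1000: no double-dipping
```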

 

Although not obvious, it may be a very shrewd business decision.  Cooperation between customers and businesses to enhance security is a powerful force.  The nominal costs of rewards may be offset by the reduction in risks and impacts of security incidents.  Beyond the fiscal responsibility, such interaction may strengthen brand awareness, trust, and loyalty.  Feeling secure, in an insecure world, has many advantages. 

 

For those who build a strong relationship, such rewards may only be the start.  Savvy users can help with early detection of attacks, report phishing attempts, and alert on other indicators of compromise.  Partnerships could extend to other security related areas where users are involved to define proper data retention parameters, privacy practices, and to voluntarily access sensitive services only from secured devices.  Cooperation builds trust and encourages loyalty.  Rewarding customers to actively engage and contribute to a safer environment could be something special and highly effective if worked properly.

 

Is bribing customers a bad thing?  Not in my book, when it results in better education, acceptance of more responsibility, and ultimately better security behaviors across the community.  So you have my vote.  Good job and I hope this begins a worthwhile trend. 

 

 

Twitter: @Matt_Rosenquist

IT Peer Network: My Previous Posts

LinkedIn: http://linkedin.com/in/matthewrosenquist

 

Ready or Not, Cross-Channel Shopping Is Here to Stay

 

Of all the marketplace transitions that have swept through the developed world's retail industry over the last five to seven years, the most important is the behavioral shift to cross-channel shopping.

 

The story is told in these three data points¹:

 

  1. More than 60 percent of U.S. shoppers (and a higher number in the U.K.) regularly begin their shopping journey online.
  2. Online ratings and reviews have the greatest impact on shopper purchasing decisions, above friends and family, and have four to five times greater impact than store associates.
  3. Nearly 90 percent of all retail revenue is carried out in the store.

 

Retail today is face-to-face with a shopper who’s squarely at the intersection of e-commerce, an ever-present smartphone, and an always-on connection to the Internet.

 

Few retailers are blind to the big behavioral shift. Most brands are responding with strategic omni-channel investments that seek to erase legacy channel lines between customer databases, inventories, vendor lists, and promotions.

 


Channel-centric organizations are being trimmed, scrubbed, or reshaped. There’s even a willingness — at least among some far-sighted brands — to deal head-on with the thorny challenge of revenue recognition.

 

All good. All necessary.

 


Redefining the Retail Space

 

But, as far as I can tell, only a handful of leaders are asking the deeper question: what, exactly, is the new definition of the store?

 

What is the definition of the store when the front door to the brand is increasingly online?

 

What is the definition of the store when shoppers know more than the associates, and when the answer to the question of how and why becomes — at the point of purchase — more important than what and how much?

 

What is the definition of the store beyond digital? Or of a mash-up of the virtual and physical?

 

What is the definition — not of brick-and-mortar and shelves and aisles and four-ways and displays — but of differentiating value delivery?

 

This is a topic we’re now exploring through whiteboard sessions and analyst and advisor discussions. We’re hard at work reviewing the crucial capabilities that will drive the 2018 cross-brand architecture.

 

Stay tuned. I’ll be sharing my hypotheses (and findings) as I forge ahead.

 

 

Jon Stine
Global Director, Retail Sales

Intel Corporation

 

This is the second installment of a series on Retail & Tech. Click here to read Moving from Maintenance to Growth in Retail Technology.

 

1 National Retail Federation. “2015 National Retail Federation Data.” 06 January 2015.

Intel’s CIO Kim Stevenson is “…convinced that this is an exciting time as we enter a new era for Enterprise IT. Market leadership is increasingly being driven by technology in all industries, and a new economic narrative is being written that challenges business models that have been in place for decades.”

 

With enterprises pumping more funds into the industry than ever, Gartner projects that IT spending will reach $3.8 trillion this year. Gartner’s prediction indicates that while many of the traditional enterprise IT areas — data center systems, devices, enterprise software, IT services, and telecom services — will continue to see increased investment, new areas are expected to emerge much faster.


As the business invests more in IT — whether in these traditional areas or the new emergent ones — one thing is constant: the business is becoming more dependent on IT for both organizational efficiency and competitive value.

 

Let’s take a closer look at two of the emergent growth segments along with the challenges, opportunities, and value they create for this new era of business-IT relationships.

 

Security and the Internet of Things

 

Gartner projects an almost 30-fold increase in the number of installed IoT units (0.9 billion to 26 billion) between 2009 and 2020. The data collected from these devices is an essential component to future IT innovation; however, this technology comes with significant security and privacy risks that cannot be ignored. “Data is the lifeblood of IoT,” states Conner Forrest of ZDNet. “As such, your security implementation for IoT should center around protecting it.”

 

The potential for the IoT remains largely undefined and at risk, especially with 85 percent of devices still unconnected and security threats prevalent. The Intel IoT Platform was designed to address this business challenge. The Intel IoT Platform is an end-to-end reference model that creates a secure foundation for connecting devices and transferring data to the cloud. With this reference architecture platform, countless IoT solutions can be built and optimized with the advantages of scalable computing, security from device to cloud, and data management and analytics support.

 

The Enterprise Investing in Startups

 

2014 represented the biggest year in corporate venture group capital investment since 2000, and this trend is set to continue, according to a MoneyTree Report jointly conducted by PricewaterhouseCoopers LLP, the National Venture Capital Association, and Thomson Reuters.  What is interesting to me is the why. Organizations want and need a critical asset: creative talent.

 

As the term “innovation” runs rampant through the enterprise, CIOs know they must make changes in order to stay fresh and competitive. However, according to Kim Nash of CIO, 74 percent of CIOs find it hard to balance innovation and operational excellence, suggesting that a more powerful approach would be to acquire a startup to capture its talent, intelligence, and creative spirit.

 

While buying a startup is not in every organization’s wheelhouse, some businesses are providing venture capital to startups in order to tap into their sense of innovation. “By making such moves,” explains Nash, “non-IT companies gain access to brand new technology and entrepreneurial talent while stopping short of buying startups outright.”


Leadership Tips For IT Innovation

 

IT’s success in this new environment will not follow a pre-defined formula. In fact, it will rely on new skills and an evolving partnership between business and IT. For this reason, Intel partnered with The IT Transformation Institute to present the Transform IT Show. Transform IT is a web-based show that features in-depth interviews with business executives, IT leaders, and industry experts to shed light on what the future holds for business and the IT organizations that power them. Most importantly, the show highlights the advice for all future leaders on how to survive and thrive in the coming era of IT.

 

I hope you enjoy our guests and can apply the insights you gain from the Transform IT Show. Join this critical conversation by connecting with me on Twitter at @chris_p_intel or by using #TransformIT.

Rowhammer represents a special case of vulnerability exploitation: it accomplishes something very rare, hacking the hardware itself. It takes advantage of the physics happening at the nano level in a specific architectural structure present in some designs of computer memory. Rowhammer allows attackers to change bits of data in sections of memory they should not have access to. That may seem petty, but don’t underestimate how flipping bits at this level can result in tremendous risk. Doing so could grant complete control of a system and bypass many of the security controls that exist to compartmentalize traditional malicious practices. Rowhammer proves memory hardware can be manipulated directly.

 

In the world of vulnerabilities there is a hierarchy, from easy to difficult to exploit and from trivial to severe in overall impact.  Technically, hacking data is easiest, followed by applications, operating systems, firmware and finally hardware.  This is sometimes referred to as the ‘stack’ because it is how systems are architecturally layered. 

 

The first three areas are software; they are very portable and dynamic across systems, but subject to great scrutiny by most security controls. Trojans are a great example where data becomes modified and can be easily distributed across networks. Such manipulations are relatively exposed and easy to detect at many different points. Applications can be maliciously written or infected to act in unintended ways, but pervasive anti-malware is designed to protect against such attacks and is constantly watchful. Vulnerabilities in operating systems provide a means to hide from most security, open up a bounty of potential targets, and offer a much greater depth of control. Knowing the risks, OS vendors are constantly identifying problems and sending a regular stream of patches to shore up weaknesses, limiting the viability of continued exploitation. It is not until we get to firmware and hardware that most of the mature security controls drop away.

 

Firmware and hardware, residing beneath the software layers, tend to be more rigid and represent a significantly greater challenge to compromise and to scale attacks against. However, success at the lower levels means bypassing most of the detection and remediation security controls, which live above, in the software. Hacking hardware is rare and intricate, but not impossible. The level of difficulty tends to be a major deterrent, and the ample opportunities and ease that exist in the software layers are more than enough to keep hackers pursuing their objectives through easier exploits.


Attackers are moving down the stack. There are tradeoffs to attacking at any level. The easy vulnerabilities in data and applications yield far fewer benefits for attackers in terms of remaining undetected, persisting after actions are taken against them, and the overall level of control they can gain. Most security products, patches, and services work at this level and have been adapted to detect, prevent, and evict software-based attacks. Given the difficulty and lack of obvious success, most vulnerability research hasn’t explored much of the firmware and hardware space. This is changing. It is only natural: attackers will maneuver to where security is not pervasive.

 

Rowhammer began as a theoretical vulnerability, one with potentially significant ramifications. To prove its viability, the highly skilled Google Project Zero team developed two exploits that demonstrated the reality of gaining kernel privileges. The blog from Rob Graham, CEO of Errata Security, provides more information on the technical challenges and details.
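To build intuition for why repeated accesses matter, here is a toy simulation of the disturbance effect. It is a cartoon of the physics for intuition only, not exploit code, and every number in it is invented:

```python
# Toy model of the Rowhammer effect: repeatedly "activating" the rows
# adjacent to a victim row leaks charge from the victim's cells, and
# with some tiny probability per activation a bit flips. Counts are
# scaled down so the loop runs quickly.

import random

random.seed(2015)
ROW_BITS = 64
FLIP_PROBABILITY = 1e-6   # per bit, per activation (made up)
ACTIVATIONS = 100_000     # real attacks hammer far more often

victim = [1] * ROW_BITS   # a row the "attacker" never writes to

for _ in range(ACTIVATIONS):
    # Each alternating access to the aggressor rows above and below
    # the victim disturbs the victim row's cells a little.
    for bit in range(ROW_BITS):
        if victim[bit] and random.random() < FLIP_PROBABILITY:
            victim[bit] = 0   # a disturbance error: the cell decays

flipped = ROW_BITS - sum(victim)
print(f"{flipped} bit(s) flipped in memory the attacker never wrote to")
```

The unnerving part is the last line: the corrupted row was never addressed by the attacker at all, which is why software-level controls have nothing to inspect.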

 

Is Rowhammer an immediate threat?  Probably not.  Memory vendors have been aware of this issue for some time and have instituted new controls to undermine the current techniques.  But this shows at a practical level how hardware can be manipulated by attackers and at a theoretical level how this could have severe consequences which are very difficult to protect against.

 

As investments in offensive cyber capabilities from nations, organized crime syndicates, and elite hackers-for-hire continue to grow, new areas such as hardware vulnerabilities will be explored and exploited. Rowhammer is a game-changer with respect to the direction of vulnerability research. It is breaking new ground that others will follow, eventually leading to broad hardware vulnerability research across the computing products that influence our daily lives. Hardware and firmware hacking is part of the natural evolution of cybersecurity, and therefore a part of our future we must eventually deal with.

 

 

Twitter: @Matt_Rosenquist

IT Peer Network: My Previous Posts

LinkedIn: http://linkedin.com/in/matthewrosenquist

 

In a world that’s never slow to hype the next big thing, augmented reality and wearable technology are certainly on the 2015 hype curve. Back in January at CES, both were key themes. In February at DistribuTECH in San Diego, both augmented reality and wearables were topics, but they are not exactly mainstream as yet. That may be about to change...

Let’s look at augmented reality first. Cameras in mobile devices that overlay GIS data on photo images are not new, but the ability to capture 3D depth information opens up a whole different set of use cases. The capabilities of the Intel® RealSense™ Snapshot Depth Camera are really impressive. With devices such as the super-thin Dell Venue 8 7000 Series tablet, which won the prestigious 2015 CES “Best of Innovation” award, coming equipped with the Intel® RealSense™ Snapshot Depth Camera, more and more apps will take advantage of these capabilities as more devices reach the marketplace. Imagine taking a 3D scan of an object and then connecting up your 3D printer...

For wearables, the newly announced Intel® Curie module, a complete low-power solution designed for developing wearable technology, also opens up a raft of potential innovation. We’re seeing companies explore safety vests, hard hats that detect impacts, and wearables that detect proper lifting technique to prevent back and knee injuries. You may see this as a bit of overkill, but these use cases are in the market and can benefit health, safety, and wellness as well as head off insurance claims down the line. Regardless of the business case, preventing a bad back later in life has to be a good thing.

While the technology that enables all of this is remarkable, it’s not all about the technology. The key factor here is acceptance of the technology, especially by the worker: it has to bring them value. This can lead to some interesting discussions around tradeoffs. Everyone agrees the technology is beneficial when a wearable can detect that a worker is in a place of danger, or has fallen in a remote location. But the same technology can also figure out that the worker is in the local coffee shop when he or she is not supposed to be.

The same tradeoff applies to wearables that provide heads-up display technology. We’ve all seen great use cases for on-the-job training, servicing a given product, or showing a support group in real time what’s going on at a given site. But what happens when a worker walks into my street or house wearing such a device? Am I comfortable with this? This makes for a number of interesting ‘what if’ scenarios.

The good news is that the utility industry does not have to figure all of this out by itself. Governments are striving to come up with overall privacy legislation that allows for protection of the individual, whether a private citizen or a worker.

The bottom line is that we are seeing more and more companies explore the benefits these capabilities can bring. Innovation today is driven by business need, so expect to see many more augmented reality and wearable use cases evolve rapidly this year...

Kevin.

Find Kevin on LinkedIn

Start a conversation with Kevin on Twitter

The true potential of the Internet of Things (IoT) can only be reached when smart, embedded devices can interact and share data with the cloud, unlocking useful data that can provide new, invaluable insights to an organization. With enterprise embedded IoT demand on the rise, however, organizations are facing challenges of increasing separation, interoperability, and security risks.

 

To combat this, Intel has created unique, fully integrated hardware and software building blocks designed to connect devices, aggregate information, analyze data locally, and open the communication channel so secure data can flow into the cloud.

 


Expediting Intelligent Solutions

 

Available since early 2014, the Intel Gateway Solution is a scalable, flexible family of integrated options built around the Intel Quark SoC X1000, Intel Quark SoC X1020D, and Intel Atom E3826 processors. It connects legacy and new systems, ensuring that data generated by devices can flow securely from the edge to the cloud.
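The gateway pattern itself is straightforward to sketch: poll local (possibly legacy) devices, summarize at the edge, and forward only the useful result upstream. The sketch below is a generic illustration, not the Intel, Wind River, or McAfee software discussed here; the sensor names and cloud endpoint are hypothetical.

```python
# Minimal sketch of the edge-gateway pattern: aggregate readings from
# local devices, summarize locally, and forward the result to the cloud.

import json
import statistics
import time
# import requests  # an HTTPS client would be used for the real upload

def read_sensors():
    """Stand-in for polling serial/Modbus/BLE devices on the edge."""
    return {"boiler-temp-c": 71.2, "line-3-vibration": 0.04}

def gateway_loop(batch_size=10, poll_interval_s=0):
    window = []
    for _ in range(batch_size):
        window.append(read_sensors())
        time.sleep(poll_interval_s)  # in practice: a real polling interval
    # Edge analytics: summarize locally instead of streaming raw data.
    summary = {
        key: statistics.mean(sample[key] for sample in window)
        for key in window[0]
    }
    payload = json.dumps(summary)
    # requests.post("https://cloud.example.com/ingest", data=payload,
    #               verify=True)  # authenticated TLS upload in a real gateway
    return payload

print(gateway_loop())
```

The design point to notice is the local summarization step: sending a digest rather than every raw reading is what keeps bandwidth, cost, and the cloud-side attack surface down.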

 

“It’s really about bringing together all the critical elements and really accelerating our customer’s time to market,” states Adam Burns, director of IoT Solutions Group at Intel. “We’ve put all the security elements in there so they can start out with a secure system…we’ve got the application environment, so their investment is really focusing on building their value-added applications and services, not creating the wheel on a bunch of foundational building blocks.”

 

Intel Gateway Solution is commercially ready, integrated with Wind River and McAfee software, and suitable for quick implementation into your own IoT infrastructure — something no one else on the market can currently claim.

 

Built-In Security

 

Security is a key building block in IoT. Without security integrated on every level, IoT deployment can go seriously awry. In addition, IoT will never achieve high levels of adoption if people don’t trust that data transferring is secure.

 

Intel Gateway Solution, coupled with McAfee Embedded Control and Wind River Intelligent Device Platform XT 2.1, provides rich enterprise-grade security features — including secure boot, GRSecurity, and IMA, to name a few — for strong support and end-to-end security protection.

 

The Internet-connected, data-driven era has arrived. Embedded sensors and devices have the ability to transform the enterprise, allowing for greater intelligence, cost efficiency, and value — Intel Gateway Solution is an essential component for any IoT business model, providing endless potential for innovation.

 

If you're interested in learning more about what I've discussed in this blog, tune in to the festivities and highlights from CeBit 2015.

 

To continue this conversation, use #ITCenter.

There’s some amazing innovation happening in the financial services industry right now. From new mobile banking initiatives to peer-to-peer lending options, the landscape is changing fast. But fears about security continue to hover like a dark cloud. In my previous blog, I talked about how cybersecurity remains a massive threat, with an estimated $400 billion lost to cybercrime each year. And with the majority of customers accessing their bank through digital channels, security is a huge and growing concern.

 

Security and Convenience Are at the Heart of Trust

 

More than ever, financial institutions need to build trust and address the identity concerns of their customers. Building trust is about more than security — it’s also about the convenience of the overall experience. To truly provide a secure and frictionless digital banking experience, financial institutions want to offer strong authentication and convenient transaction authorization, so customers can perform transactions quickly and securely.

 


At Intel, we are helping our customers through the use of Intel Identity Protection Technology (IPT). Intel IPT is a hardware-based identity technology that embeds identity management directly into the customer’s device. With Intel IPT, banks issue a secure token that is stored in the security engine of a device’s Intel processor. For each banking service accessed, the bank generates and verifies a unique, one-time password, eliminating the need to verify identity using multiple factors for each premium service. Banks can also check the user’s presence through password verification.
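To make the one-time-password idea concrete, here is a generic HMAC-based OTP sketch (HOTP, per RFC 4226). It illustrates the general mechanism of a device-held secret producing codes a server can verify; it is not Intel IPT’s implementation, which keeps the token inside the processor’s security engine rather than in application code.

```python
# Generic one-time-password illustration (HMAC-based, per RFC 4226).
# Not Intel IPT: shown only to make the OTP concept concrete.

import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    msg = struct.pack(">Q", counter)                       # 8-byte counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

shared_secret = b"provisioned-into-the-device"  # illustrative only
counter = 42                                    # advances with each login

device_code = hotp(shared_secret, counter)      # computed on the client
server_code = hotp(shared_secret, counter)      # recomputed by the bank
print(device_code, device_code == server_code)  # same code on both sides
```

Hardware-backed schemes improve on this sketch in one crucial way: the shared secret never sits in ordinary memory where malware could read it.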

 

Intel IPT Gains Traction in Turkey

 

Intel IPT has had success in Turkey, which is fast becoming a center of excellence in terms of innovative technology. Specifically, the country’s two largest banks have built their digital banking platform on top of Intel IPT and other technologies. Let’s take a closer look at how those banks are using Intel IPT to strengthen security for online and mobile banking.

 

Isbank

 

The first bank in the world to use Intel IPT in a mobile banking application, Isbank now has more than 1 million mobile customers. Like all financial institutions in Turkey, Isbank must comply with banking regulations that require two-factor authentication. With Intel IPT, Isbank customers no longer need to enter one-time passwords sent by SMS or generated by hardware or software token generators to use each banking service. Additionally, the bank can verify users’ mobile devices through the hardware-based Intel IPT solution, which supports 2,000 different mobile phone models and all operating systems. To date, the bank has reported more than 30,000 Intel IPT users, a number that continues to grow by the day.

 

Garanti Bank

 

Garanti Bank has deployed Intel IPT into its transaction processes, combining the technology with a cloud-based authentication solution. To use the solution, Garanti customers download an applet from the bank’s website. The applet, which runs on Ultrabook laptops or other mobile devices, activates the cloud service and creates a unique secret code on the Intel processor using IPT. The code facilitates a safe connection to the bank’s Internet banking service every time a customer logs on from their device.

 

For both banks, the benefits of Intel IPT are clear: they can protect their customers’ identities with authentication deeply embedded in hardware, while also giving those customers a user-friendly way to do their banking. In short, Isbank and Garanti are using their solutions to build trust. Having strengthened its image as an innovator in its customers’ minds, Isbank is already planning to use identity and access management technologies from Intel for its mobile payments website, as well as additional cloud banking applications.

 

Intel IPT is just one part of our overall security roadmap. Our vision includes the integration of secure biometric techniques to further improve end-to-end identity protection solutions across all platforms, from the client to the cloud. By integrating identity and access management directly into hardware and software, we’re helping ensure our partners can provide the most trusted and convenient experiences possible for their customers.  

 

To continue the conversation, let’s connect on Twitter.

 

Mike Blalock

Global Sales Director

Financial Services Industry, Intel

 

This is the sixth installment of a seven-part series on Tech & Finance. Click here to read blog 1, blog 2, blog 3, blog 4, and blog 5.
