
Intel vPro Expert Center Blog

23 Posts authored by: Jason A. Davidson

I recently blogged about my interview with Citrix's Paul Hahn, Director of Business Development, Virtualization & Management Division, and Matt Edwards, Product Manager: http://communities.intel.com/openport/blogs/ecmf/2008/09/22/citrix-software-with-intel-vpro-technology

 

For part 2 of this blog, you can view the actual demonstration of the software below. In this demonstration, you will see the solution explained in much more detail.

 

 

Citrix and Intel have been working together to deliver a solution that builds on both companies' expertise.  The end-to-end application delivery and virtualization software that Citrix provides, combined with the manageability, performance, and security of vPro, delivers a novel solution.  It allows the IT OS build to go through a secure, or trusted, boot, where the hardware and software used to launch the OS are measured for integrity before the program executes.  The OS can be streamed off a remote server, while the end user still gets the rich, client-side local execution experience.

 

In this video, Citrix's Paul Hahn, Director of Business Development, Virtualization & Management Division, and Matt Edwards, Product Manager, talk about how Citrix Systems is developing products for OS/App Streaming on top of Intel vPro technology.  You will see that the virtualized, measured, and streamed OS can still render and rotate a rich CAD drawing.

 

 

Like many others, I downloaded Google's Chrome browser (I am using it to write this blog) and gave it a try. Of course, the first thing people focus on is the UI and visible features... After reading the comic book explanation (the team did an awesome job of describing the architecture via a comic book - a very unique idea), I think people need to look at what this browser really is - it’s not just a browser, it’s a web execution client.

 

First, they went away from launching a bunch of threads and moved to processes, extending a well-known operating system fundamental and making the browser something like a sub-OS of your OS (one that does not have to care about drivers and such). They accepted the overhead of the process model to focus on scalability and stability. Both have to be fundamentals if Google Gears is to provide content to this web execution client - who wants to run a cloud application and have the browser crash, run out of memory, or suffer from the many other common limitations of browser-based applications (very few truly rich applications run purely in a browser)?
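To make the stability argument concrete, here is a tiny illustrative sketch in Python (not anything from the Chrome code base) of why process isolation matters: a crash in one worker process is contained, while the same fault in a thread would have taken the whole program - and every other "tab" - down with it.

# Illustrative sketch only: a crash in one worker process is contained,
# while the same fault in a thread would kill the entire program.
import multiprocessing as mp
import os

def render_page(url):
    """Pretend renderer: one page misbehaves and crashes its host process."""
    if "badsite" in url:
        os._exit(1)  # simulate a hard crash (segfault-style, no cleanup)
    return f"rendered {url}"

if __name__ == "__main__":
    urls = ["http://news.example", "http://badsite.example", "http://mail.example"]
    procs = []
    for url in urls:
        p = mp.Process(target=render_page, args=(url,))
        p.start()
        procs.append((url, p))
    for url, p in procs:
        p.join()
        status = "crashed" if p.exitcode != 0 else "ok"
        print(f"{url}: {status}")
    # The parent (the "browser UI") is still alive even though one renderer died.
    print("browser still running")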

 

Other features that seem radically different for this web execution client are the virtual machine manager used to execute JavaScript, the garbage collection method, the scalable user interface, and the way they are doing developer testing. They have really taken a different approach here, one focused more on how things execute than on being a web page rendering engine. The developer testing concept is very neat: they are leveraging the core of Google to test their builds against the most commonly viewed sites, which gives instant feedback about real-world usage (but no testing is ever enough, right?).

 

Now, how does a new browser release make it into a blog on compute models? The way I look at it, this is really a prime-time client for executing cloud programs, whether Google Gears or others. Because they built the browser to not be limited in its processing capabilities, and coupled that with common computer science stability models, the browser (even if not launched as a browser) is a prime candidate to become our interface to the rich application capabilities of cloud computing.

 

However, my biggest worry with this model is how the application verifies that the virtual machine manager and the other core services of the browser have not been compromised. Is there a TXT-style integrity measurement of this browser? If cloud is going where people think it is, I am going to want my client execution engine to be as trustworthy as possible.

 

From my first look, great job Google team.  What are your thoughts?

-Jason A. Davidson

In my tenure at Intel, I have had the pleasure of walking into major companies, educational institutes, non-profits, and government agencies to talk technology with many great people.  “How green is this solution?” is a topic on many minds lately, no matter what the subject of discussion.  Being an engineer by trade and a scientist by education, I will typically dive into the details around each component’s power consumption, and the discussion ends with some simple math multiplying a number of units by their thermal numbers.  However, there is so much more to the overall impact, and as I walk in and out of these locations, I am always amazed at the number of larger issues with much larger impacts that are unresolved or overlooked.  Reading the book “Living Like Ed: A Guide to the Eco-Friendly Life” (http://www.randomhouse.com/crown/livinglikeed/index.html) by Ed Begley Jr. inspired me to approach some of these topics, and to similarly classify items by their degree of difficulty to implement - easy changes, not-so-big changes, and big changes.  Additionally, I will be looking at the overall impact that compute model choices can have.  However, I will leave the topics beyond the realm of compute models to experts such as Ed Begley Jr.

 

Corporate recycling - it can be an easy change:

Before I dive into any of these subjects, recycling should be an essential component of every one of these solutions - it should become part of your culture, and the stockholders will most likely appreciate the frugality.  If you're purchasing new equipment, you need to be thinking about what you can do with the old equipment; sometimes the answer is to donate the equipment to charities, sometimes it needs to be disposed of, but rarely does it need to fill a landfill.  As an example, everywhere that Intel operates, more than 70% of all waste is recycled.  I am not suggesting you need to achieve this overnight - Intel has been working on this since 1971; it is a gradual process.  Start by looking at what the biggest waste items are from your company and get creative - is it finding a use for all those coffee grounds, finding ways to reuse packaging material when shipping your products, or simply implementing recycle bins and growing employee awareness?

Here is a great video which highlights how Intel practices corporate recycling:

 

Pure power consumption items:

Monitors

I have yet to visit a location where I cannot find, amongst the rows of office workers, several still using CRT monitors - and many times they are not even the energy-efficient CRTs.  Simply moving these users from CRTs to LCDs can have a profound impact on power consumption.  A typical 17” CRT consumes around 80 watts, while a 15” LCD is around 25 watts (these have similar viewing areas).  For any user working behind one of these outdated CRT monitors, we need not discuss any other aspect of power savings at their desk until this is fixed; no compute model change is going to give you 55 watts back with such a simple solution.  Added to this are the well-known worker productivity benefits of moving from CRT to LCD: reduced eyestrain, glare, distortion, and flicker, and improved visual search times.  As far as I can see, switching out these monitors is an easy change, in the same vein as moving from incandescent to compact fluorescent light bulbs.  There are even HVAC efficiency gains when these changes happen on a large enough scale (less heat put off by the monitor means less cooling needed from the HVAC - and in winter, I am going to assume the heating produced by your HVAC system is more efficient than the heat being produced by that CRT).  http://ergo.human.cornell.edu/PUB/LCD_vs_CRT_AH.pdf
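Here is that back-of-the-envelope math in code form. The 80 W and 25 W figures come from the paragraph above; the fleet size and hours per year are illustrative assumptions you should replace with your own numbers.

# Back-of-the-envelope savings from the figures above; the fleet size and
# hours-per-year are illustrative assumptions, not measured data.
crt_watts = 80         # typical 17" CRT (from the text)
lcd_watts = 25         # typical 15" LCD (from the text)
users = 1000           # assumed fleet size
hours_per_year = 2000  # assumed: ~8 h/day, ~250 workdays

savings_w = crt_watts - lcd_watts                       # 55 W per desk
kwh_per_year = savings_w * users * hours_per_year / 1000.0
print(f"Per desk: {savings_w} W saved while the monitor is on")
print(f"Fleet of {users}: ~{kwh_per_year:,.0f} kWh saved per year")
# ~110,000 kWh/year, before counting the reduced HVAC load mentioned above.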

 

However, a big-change item enabled by the CRT-to-LCD upgrade comes in the realm of building design.  The distance an employee sits from an LCD is the same as from a CRT, but the space needed behind the LCD is far less - LCD monitors even have direct wall-mount options.  This gives space designers the ability to decrease desk depth and develop creative ergonomic designs, resulting in more compressed, configurable, and/or productive work environments.  The weight reduction on a given office floor can give some relief to building designers as well (the average CRT weighs 40-45 lbs, and the comparable LCD is 6-8 lbs; multiply this by the number of workers in a building - say 1,000 - and you get a couple of tons removed from a single floor).  Is there a way to utilize that weight and heat budget to balance locations that are often constrained already, such as your server room?  It’s worth looking into.

 

Telecommuting

The subject of telecommuting causes varied reactions - from the employer who has witnessed abuse of telecommuting freedoms to discussions around increased employee focus and higher output - and the debate on this subject will continue, just as it has around everything from solitaire to YouTube and social network use.  Regardless of the outcome of these debates, telecommuting can have a worldwide effect on the number of employees on the road to and from the office each day.  Given that most people tend to use a transportation method that is far from efficient, this alone can be a net positive impact.  I have heard some government employees are encouraged to spend one day each week working from home (provided their job allows it) simply to reduce the environmental impact.  However, the jury is still out on the efficiency of heating and cooling a single residence versus several employees in an office environment, and on the infrastructure costs to support more remote versus local employees.  The mobility benefits of your compute model can definitely benefit the environment - at the least, mobility offers the flexibility to consider various work environments (e.g., what would be greener than a person using a laptop outside on solar power?).

 

Power policies

Power policies often take energy-efficient configurations and push them another 20% beyond what is already seen as good.  When applied down the wire over manageability interfaces, as can be done on an Intel® vPro™ technology enabled client, they are easy to deploy and quick to update when needed.  You can decide to simply turn off the unused computer, wake it up and update it when needed, and then return it to a low power state.  On the other hand, you can utilize the processing power of that vacant machine to run a distributed compute environment using an IDE redirection operation, further reducing the load on your data center and shifting the watts per calculation onto inexpensive devices.  Isn’t this the whole argument that drove RAID technology?  The I in RAID stands for inexpensive: we took inexpensive drives and made redundant copies, much like we can do with the relatively inexpensive computations of these vacant client machines.
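As a rough illustration of what such a policy is worth, here is a simple model of always-on versus wake-on-demand operation. Every wattage and hour figure below is an assumption for illustration, not a measurement.

# Rough model of what a wake-on-demand power policy buys you; all wattages
# and hours here are illustrative assumptions, not measurements.
desktop_active_w = 65    # assumed average draw while in use
desktop_sleep_w = 3      # assumed draw in a low-power sleep state
hours_per_week = 168
in_use_hours = 45        # assumed working hours per week
maintenance_hours = 2    # assumed weekly wake-up window for patches/updates

always_on = desktop_active_w * hours_per_week
with_policy = (desktop_active_w * (in_use_hours + maintenance_hours)
               + desktop_sleep_w * (hours_per_week - in_use_hours - maintenance_hours))

print(f"Always on:   {always_on / 1000:.1f} kWh/week per client")
print(f"With policy: {with_policy / 1000:.1f} kWh/week per client")
print(f"Savings:     {(always_on - with_policy) / 1000:.1f} kWh/week per client")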

 

Data Center Consolidation via Virtualization, DC Racks, and other things inside those glass rooms…

If no one has looked into this area at your organization yet, it may be time to visit your local datacenter and see what is going on.  Chances are the ladies and gentlemen running your datacenter are already plugged into these topics, as they are probably spending a large budget every year just to keep those servers that you never see humming along - money spent on temperature and climate control, money spent on power consumption, and so on.  Let's face it: unless you are someone who understands the difference between 1U, 2U, and 4U and knows what happens when the halon system is engaged, you should get the teams that do know about such things looking at The Server Room.  There have been many fantastic advances in the last few years that can drastically decrease the power consumption, reduce the cooling needs, and increase the manageability and reliability of your server room.

 

Compute Model Debates:

The reason I call this section “debates” is that various individuals, corporations, analysts, and product vendors spend much time debating which of these is greener.  The argument usually stays within the simple math I described before, but I believe the real answer should extend into the larger, holistic picture.  All of these solutions fit into the big-changes category, as they require establishments to modify the way they operate, often involving the acquisition of new equipment and software, and typically requiring end users to receive some training to function productively in these environments.  For more information on these compute models, you should read the presentation at: Compute Models Explained

 

Fixed Location: Terminal Services, Virtual Hosted Desktops, Blade PCs, and Web-Based Apps

I am grouping together all of the compute models that move large amounts of computation off clients and onto server(s), as they all have a very similar green impact, with slight nuances for each one.  This group of computing often wins out quickly with simple math:

Current:      20 client computers running at X watts + server running at Y watts

Thin-client:      20 thin-client computers running at almost no watts + server running at Y watts

This always looks great - and why not, you are reducing the wattage on the item that is being multiplied.  Too good to be true?  These calculations fail to account for several key items.  The scenario does not look at how many clients the one server handled before and after the switch.  The change pushes more computing demand onto the server and reduces the demand on the clients.  In the current scenario, the server may have been handling very limited calculations and could have stretched to thousands of clients.  Anyone who has priced and supported servers knows that the cost per calculation you pay on a server is far greater than the cost per calculation on a client.  Servers require redundancy; they are often located in raised-floor, climate- and temperature-controlled environments; they are typically allowed to operate at only up to 50% capacity before more are added to the mix; they are supported by disk arrays which are also climate controlled and redundant; they have built-in fans with redundant fans and built-in power supplies with redundant power supplies...  All of this is to make sure that you, the end user, never experience downtime.  On the other hand, your desktop or mobile client is built with the end user in mind: it can often handle limited shocks and a wide range of temperatures, humidity, and electro-magnetic interference.  However, it does not have dedicated employees supporting it as the server does, and it does not require a special room.  I have yet to hear a client discussion where we talk about five or six 9’s of uptime - client computers simply reboot much more often.

The real equation should read something like the following:

Current:      2000 client computers running at X watts + server running at Y watts

+ server room HVAC

Thin-client:      2000 thin-client computers running at almost no watts + 200 servers at Y watts

+ server room HVAC expanded to support 199 more servers

Is this a net wash, an increase, or a reduction?  That depends on how constrained your datacenter already is, what part of the world you are located in (do you have some glaciers nearby?), and several other factors.  I am not saying it is never a net reduction in power, but the equations used are often oversimplified; you have to take a holistic approach to each environment to determine the true merits.
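To show how different the two answers can be, here are the naive equation and the holistic one side by side in code. Every number in it is a placeholder assumption - including the HVAC overhead factor and the server counts - so plug in your own environment's figures.

# Turning the two equations above into numbers; every figure below is an
# illustrative assumption so you can plug in your own measurements.
clients = 2000
rich_client_w = 100      # assumed rich desktop draw
thin_client_w = 15       # assumed thin-client draw
server_w = 500           # assumed per-server draw
servers_before = 1       # assumed: mostly file/print duty before the switch
servers_after = 200      # assumed: hosting the desktops (10 users per server)
hvac_overhead = 0.8      # assumed: extra datacenter watt per server watt

def total_watts(client_w, n_servers):
    server_load = server_w * n_servers
    return clients * client_w + server_load + server_load * hvac_overhead

naive = clients * rich_client_w - clients * thin_client_w   # what the simple math claims
holistic = total_watts(rich_client_w, servers_before) - total_watts(thin_client_w, servers_after)

print(f"Simple math says you save:    {naive / 1000:.0f} kW")
print(f"Holistic model says you save: {holistic / 1000:.0f} kW")
# A negative holistic number means the move actually increased total power
# in this hypothetical scenario.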

 

Off Network Options: Distributed, Rich Client, Virtual Containers, Application Virtualization and Streaming

The second group I am going to split my debate into is the group that supports mobility and retains computation on clients.  Many of these models support moving computations between the server and client as dictated by policies and system capabilities.  In general, each of these models enables the server component to scale to a much larger number of clients, and when server capacity is reached, policies can be used to turn up client compute loads.  The same calculations apply in these environments, where you include HVAC costs for the increased server demands; however, the number of additional servers in these scenarios is much smaller.  Don’t just take my word for it - follow this link to a study done by the Fraunhofer Institute.

 

Conclusion:

No step is too small…changing behaviors, deciding which solutions are right for you, and reaping the benefits of growing greener is a gradual process, one for which we all should strive.  With each option we need to be looking at what the larger impacts are – what does it mean to productivity, security, manageability, and is now the right time to gain adoption for this change?

After much talking with end users and industry thought leaders, a group of us developed this utility to help people decide which compute model is best for a specific user segment. There are many items to consider when trying to determine which compute model is best for your users. I believe this utility does a decent job of calling out the most common questions, highlighting the models that would be well suited, and listing the ones that may not be appropriate.

 

 

In this application, you walk through a compute model decision by answering a series of questions for a specific user segment (the user segment you enter is a free-form text field and does not change the output). You are then presented with a summary screen that gives you recommendations and areas of concern based on your inputs. When you mouse over a compute model name, the reasons why that model is or is not recommended appear in the notes section.

I welcome any feedback.

-Jason A. Davidson

p.s. For compute model reference, please refer to this document: http://communities.intel.com/servlet/JiveServlet/previewBody/1518-102-1-1802/Public%20compute%20model%20discussion%20deck%204-17-08.pdf

The 2008 Microsoft Worldwide Partner Conference (WPC) was hosted in Houston, TX on July 7-10, 2008, with global participation.  The WPC provides an online and in-person forum to learn more about business growth opportunities and product innovation from Microsoft executives.

 

This year the ECMF team participated in the event and provided a showcase that incorporated the manageability of Intel vPro in a real-world scenario utilizing application virtualization and streaming.  For the showcase, the team used the SCCM SP1 R2 beta as the enterprise management console with Microsoft's App-V (SoftGrid 4.5 beta) to stream and manage applications on the vPro clients.

 

This provides the ability to:

 

  • Dynamically deliver applications on the world's most manageable clients

  • Enable greater business agility with an enhanced end-user experience

  • Achieve IT "Green Computing" and reduced TCO objectives via fine-grained update controls

 

After the event, I sat down with Craig Pierce to record the demonstration.  I think it is a very compelling 4 minutes of video.  In the demo he shows both the server console and the client experience, and launches 2 versions of Microsoft Word (2007 & 2003), which share drivers and normally wouldn't be able to run on the same machine.  This concept can be extended to many other applications. 

 

 

Application virtualization and streaming means you no longer have to go through the entire install process - you simply stream and execute the applications you need, when you need them - and the licenses for these applications can then be reclaimed when you're not using them.  This should become a de facto standard over time, as it works well in all compute models (from rich client models to thin clients).

 

Questions?  Comments?  Funny remarks?

 

-Jason A. Davidson

p.s. Thank you to Chris Kaneshiro, Sophia Stalliviere, Nicole Trent, and Gunitika Dandona for your help in filming & editing this video.

 

A few weeks back at BriForum I attended a session called The Future of Client Computing, where the audience participated in an open discussion around where client computing is headed.  It was amazing to see a group of very smart people come to a single consensus...with various interpretations of that consensus, I am sure.  Now that I am back from the show, and back from vacation, I wanted to take a few minutes to recap my interpretation of that future...

 

 

Therefore, the future from my eyes looks something like the following.  I welcome your comments, disagreements, agreements, or snarky remarks.  I will try to keep this write-up as vendor agnostic as possible...all characters appearing in this work are fictitious, any resemblance to real persons, living or dead, is purely coincidental, no animals were harmed in the making of this blog...and if this future plays out, I am not responsible for the results. 

 

 

Let me start by explaining where many of us are now.  We typically live in a world where we boot a computer into a rich operating system with many features we may or may not use, then install applications off CD/DVD, download installers over the internet, or have them pushed as local installs over the corporate network.  We all run local virus scanners and firewalls, and patch/update everything often - lest we fall behind and become vulnerable.  Each of the software programs we use has been tested to work with our operating system (or so we hope), but very few of them are tested to work with other applications, and some are just not compatible with each other (they didn't make it out of kindergarten with the "plays well with others" moniker).  Many programs are installed on a ton of computers, with much of the data being the same across those computers, but because that content belongs to the person next to you, it is redundant yet not accessible (across a large group of people, the amount of duplicate data is enormous...larger than having copies of the US Library of Congress in digital format).  Some people have started moving away from this model, but often come up with solutions that are either too awkward to become mainstream or too limited to become useful.

 

 

Next, the path to the future...  With several compute model choices, people have started using the modern day compute and network resources to revisit solutions that had limited success in the past (I say limited as none of them won out over the model described above, many were very successful in specific environments).  To help with the large amount of redundant data, people moved the data to server rooms.  To deal with application conflicts people gave each of these programs their own virtual sandbox to play in (now they don't have to play well with others...they get their own sandbox instead).  To deal with patches and updates, people developed utilities to maintain compliance with a few button clicks (and several scripts, settings, and close monitoring).  And the list goes on...

 

 

The future...  Now I will put on my rose-colored glasses and look at where things are going...in other words, I believe they are taking a turn for the better.  Going back to the discussion during the BriForum class, the basic architecture was a "dial-tone OS" with virtual containers that can be streamed and executed locally or presented over the network.  The term dial-tone OS was new to me; as I believe Ron Oglesby described it, the operating system would give a basic level of functionality, similar to when you pick up a phone and hear the dial tone.  We have all grown to expect a dial tone when we pick up the receiver, and if there is a pause or delay we are very confused, because we have developed a very high expectation for the quality of service on this device (not talking about coverage areas here - just the basic features).  With a dial-tone OS, the client device would quickly respond with some basic features - a GUI (graphical user interface)/window manager, a scheduler, an I/O mapper, device drivers, and a virtual machine manager (I may be missing a few OS fundamentals, but the idea is a truly minimal, microkernel-type OS with a high level of reliability).  All applications that execute in the environment would work in their own virtual sandbox, which may contain an entire OS emulation or simply the basics needed to execute - in other words, virtual containers.  These applications would interact with the GUI via the window manager and negotiate their layout within the system's capabilities.  The virtual container would execute either locally, on a server, or in the network/cloud, based upon the negotiated policies and client device capabilities.  For containers executing locally, differences in the container would be archived and ready for use on other machines or as a backup (depending on connectivity, etc.).

 

 

The key here is an environment that from the base up is built with device capabilities in mind - if you're executing a spreadsheet calculation and your device is going to take days to calculate it, have another location process it for you.  If you're using the same data as everyone else, make one image of it in the community of users and have everyone work from that image - when the image is upgraded, everyone migrates over time.  If the device has Intel vPro capabilities, the virtual containers and dial-tone OS can take advantage of the energy-efficient performance, manageability, and security features.  If the device is Atom-based, then a whole new set of features is exposed.  Etc... (I had to add in my own Intel fanboy comments, but they are comments I really believe in).
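For the curious, here is a toy sketch of the kind of placement policy being described - deciding whether a virtual container runs locally or on a server based on device capability, connectivity, and battery. The fields and thresholds are hypothetical and are not tied to any particular product or API.

# Toy sketch of the policy negotiation described above; the capability
# fields and thresholds are hypothetical, not any product's actual API.
from dataclasses import dataclass

@dataclass
class Device:
    cpu_score: int        # relative compute capability
    battery_pct: int
    network_mbps: float
    managed: bool         # e.g. out-of-band manageability (unused in this toy rule set)

def place_container(task_cost: int, device: Device) -> str:
    """Decide where a virtual container should execute."""
    if device.network_mbps < 1:
        return "local"                  # offline or nearly so: run what you can locally
    if task_cost > device.cpu_score * 10:
        return "server"                 # the client would take far too long
    if device.battery_pct < 20 and device.network_mbps > 10:
        return "server"                 # preserve battery when the pipe is good
    return "local"                      # default: rich local execution

laptop = Device(cpu_score=50, battery_pct=80, network_mbps=20, managed=True)
print(place_container(task_cost=5000, device=laptop))   # -> "server"
print(place_container(task_cost=100, device=laptop))    # -> "local"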

 

 

The road to get from here to there involves a ton of non-trivial solutions, and I believe the good news is that many of the solutions are being thought about by some great minds - however I am sure there are some new and exciting "change the world" ideas left to solve... 

 

 

The future looks both responsive and reliable: an environment where we are not encumbered by its limitations, but are simply a click away from our next task.

 

 

-Jason Davidson

 

 


 

 

On a recent trip, I had the opportunity to visit St. Agnes Academy in Houston, Texas.  They have been using a product by Symantec called SVS Pro to deliver an online portal to the students that integrates into their classes and seamlessly offers the books and applications needed for the students to learn in a whole new way!  I was able to get the perspective of several students and a math teacher (who I hear is one of the students' favorites), as well as a great technical talk from Jason Hymes, the Director of Technology.

 

 

Here is the video; it runs approximately 5 minutes.

 

 

The URL for the school is: http://www.st-agnes.org/ (if you have kids and live in that area, it looks like a great place to send your children).

 

If you are like me, computers break at home when you travel - and being a computer person, you are the tech support...your house is most likely your personal lab, in a constant state of flux.  If not, I salute you.  To make matters worse, I am often the one who messed things up before leaving - luckily, my wife patiently waits for me to get into my hotel and work with her to fix it remotely.  She already does a great job of tolerating the wires, keyboards, mice, monitors, and various other computer parts in every corner of our house - so having to wait for me to fix these things is a hassle for her that I would like to reduce.

 

 

I have a real-life scenario from my current trip that is worth sharing with this community.  First, let me explain a bit about the way I have my house set up.  Network-wise, I have a standard DSL connection to the house, which plugs into a slim and quiet desktop with 2 network cards that runs the http://ipcop.org/ firewall solution; I have added http://openvpn.net/ onto it and use the OpenVPN GUI application on my mobile computer.  From the 2nd network connection I serve up my wireless and wired infrastructure, with gigabit connections to all rooms in the house as well as a great wireless solution - even the printers and TV are networked.  I have more than one vPro client in the house that I have enabled in small business mode.  I also have a RAID solution on one of my computers that handles all the file shares - including running various emerging solutions that we talk about on this site (I mentioned I view my home network as a lab, right?).
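For anyone wanting to replicate the remote-access piece, here is roughly what the client side looks like - a minimal OpenVPN profile of the kind the OpenVPN GUI loads. The hostname, port, and certificate file names below are placeholders, and the server configuration on the firewall box (and the certificates it issues) must match.

# Minimal OpenVPN client profile of the kind the OpenVPN GUI loads
# (for example, home.ovpn). Hostname, port, and certificate file names
# are placeholders; adjust them to match the server on the firewall box.
client
dev tun
proto udp
# dynamic-DNS name of the home DSL connection (placeholder)
remote my-home-dsl.example.com 1194
resolv-retry infinite
nobind
persist-key
persist-tun
ca ca.crt
cert laptop.crt
key laptop.key
comp-lzo
verb 3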

 

 

Now the scenario: while on this trip, one of the PCs - which is up to date with virus protection and patches - developed a virus, and as much as I would like to spend time looking into how the virus got there, doing so over the phone would not be feasible.  Therefore, I did what any modern-day geek would do - I VPN'd into my home from my hotel, took control of the computer over a remote desktop session, and started fixing.  I found the virus entrenched in the system, so to keep my home running until I return, I set the machine to boot from the network instead of the local hard disk using IDE-R (a feature of vPro).  Then I rebooted the machine, it booted Ubuntu Linux over my network, and the files my family uses are accessible over the file shares.

 

 

Problem patched - until I return home.  Keeping my fingers crossed...

 

 

-Jason

 

 

p.s. If you have any questions on how to configure your house this way - fire away.

 

 

 

The event in Pittsburgh on May 6th was fantastic, and the first where we folded the Application & Desktop Virtualization Forums into the already successful Intel Premier IT Professional events - it was a marriage waiting to happen.

 

 

I am excited to reference a write-up on the event on our new sister community site dedicated to these events at: Short Overview Videos from the Pittsburgh Event

 

 

Fellow travelers included Citrix, Microsoft, Symantec, and Tata.  This week we will be in Columbus, Ohio.  Several more of these events are going on this year; pop over to the Intel Premier IT Professional Zone and find out about the one nearest you: http://communities.intel.com/community/ipip

 

 

Mark Wallis wrote:

 



 

One of the things folks ask me about the Intel IT Premier Program events is 'what are they presenting about?' or 'what demos do they show?' So, while I was at the Pittsburgh event, I took some short videos of the Intel presenters and asked them to explain what they'd be presenting about. I also asked a couple of the demo guys a similar question.

 

Check out these videos and you'll get a little taste of what happens at these shows. I'll do more videos as I work on upcoming events.

 

"A Peek at the Future: Intel Product and Technology Roadmap".

Presented By: Rick White, Intel

http://www.youtube.com/watch?v=T5ZCdrGj3Jg

 

"Client Virtualization Best Practices"

Presented By: Mike Breton, Intel IT

http://www.youtube.com/watch?v=2yBqWlUihZM

 

"Reducing Client TCO through the Use of Virtualization"

Presented By: Dave Buchholz, Intel IT

http://www.youtube.com/watch?v=7ZBpU34ueXg

 

"Data Center Virtualization and Consolidation"

Presented By: Steve Tadman, Intel IT

http://www.youtube.com/watch?v=_Trt7MNhAGo

 

Noel Tabotabo talking about some of his vPro demos

http://www.youtube.com/watch?v=zujOPBcmHCE

 

Randy Baxter pointing out some of the mobile devices in the showcase

http://www.youtube.com/watch?v=5f75zgp1SHc

 

 

 

I would like to invite you to engage in a virtual proof of concept with an emerging compute model technology via this site.  I will award a nice prize for the first five successful proofs of concept done through the Emerging Compute Model Forum (for more details, send me a message).  If your organization is looking at various compute models and would like to go through the process and share your findings with the world, let's explore this model here.

 

 

-Jason

 

 

 

Intel partnered with Ars Technica to create an ongoing web-based conference / symposium with Intel as the presenting sponsor - and this week the topic is all about emerging compute models!

 

 

Please take a few minutes and go check out the conversations on their site and join in the discussion.

 

 

-Jason

 

 

While in Atlanta, I was able to get a few minutes with Brian Duckering from AppStream to have him show us his latest. 

 

 

Here is the video.  (I also learned that I need to handle the lighting differently in future videos...a novice mistake on my part to have the window in the background - beyond the window is the Atlanta Braves stadium, which would have made a nice backdrop.)

 

 

 

 

The application & desktop virtualization forums for Atlanta (March 20) and Washington DC (April 3) went off well.  Here is my recap. 

 

 

Atlanta:

 

 

When we arrived in Atlanta, the town had just survived a tornado on March 14th and was in repair mode (the hotel many of us were staying at had extensive damage and was doing everything it could to get back in working order).  We had a few interesting times, as passage to and from the hotel was often stopped due to falling glass (we passed the time in the nearby malls and downtown businesses).  One person checked into their room only to find, moments later, that a crack in the window gave way to a breezy view.  The round-the-clock crews repairing the hotel made for some less-than-desirable sleep patterns (3 am hammering in the room next to you is bound to wake the heaviest of sleepers).  The people in Atlanta were as hospitable as ever, confirming that Atlanta is a big city with small-town hospitality - even in the aftermath of a tornado!

 

 

We held the event at the 755 Club at Turner Field (the Atlanta Braves stadium); the venue was awesome!  The day of the event started at 8:30 for attendees with a very enjoyable southern breakfast.  At 9 am, Ketan Sampat of Intel gave the opening address, followed by presentations from Citrix, , and Microsoft.  During lunch, there were demos and deep dives with experts from Intel, AppStream, Citrix, Dell, Microsoft, and Symantec.  As the attendees left the event, they received a USB thumb drive with all the presentations and collateral here:

 

 

I personally had several great discussions with the Atlanta attendees, and found that they are definitely looking at various compute models to meet the needs of their business and are eager to see which ones will emerge as the best complete solution - great perspectives and insight came out of these talks.  In addition, the team was happy to see the city recover quickly, and as we all left, we looked forward to a return visit to a restored Atlanta and to continued contact with the attendees from the event as they move forward exploring these topics.

 

 

Washington DC:

 

 

We arrived in Washington DC during cherry blossom season, a fantastic time of year.  The venue for the event was the Marriott Hotel in Bethesda, Maryland.  The hotel staff was very helpful, the hotel was enjoyable, and the event went off without any major issues.  The agenda was very similar to Atlanta's, with breakfast/registration at 8:30 am and Chuck Brown of Intel giving the opening address at 9 am.  This was followed by presentations from Citrix, , and Microsoft.  During lunch, there were demos and deep dives with experts from Intel, AppStream, Citrix, Dell, Microsoft, and Symantec.  As the attendees left the event, they received a USB thumb drive with all the presentations and collateral here:

 

 

 

 

 

There were many great talks with the attendees in DC as well, confirming a similar message to the one we received in Atlanta.  We are definitely on the edge of something big in this space - as can be seen by the various acquisitions that have occurred in the past year.  A fantastic first two events for 2008; if you were not able to attend either of these, see if one of the upcoming events matches your location.

 

Pittsburgh - May 06 - Register: Members / Non-Members
Columbus - May 28 - Register: Members / Non-Members
Baltimore - June 10 - Register: Members / Non-Members
Tampa - June 12 - Register: Members / Non-Members
Austin - June 24 - Register: Members / Non-Members
Denver - June 26 - Register: Members / Non-Members

 

Hope to see you at one (or more) of these events in the near future. 

 

 

-Jason Davidson

 

 

 

I was excited to hear that in this morning's ManageFusion keynote, Symantec Chief Operating Officer Enrique Salem announced that Symantec has signed a definitive agreement to acquire industry-leading application streaming vendor AppStream.  This should make the SVS Professional product all the stronger, as AppStream has been providing the streaming component of this product already. 

 

 

You can read the blog from Scott Jones on the Juice site as well. 

 

 
