Ding! Entering Über Productivity Mode

17 Dec

What is it about working on an airplane that puts me into such an unbelievable productivity trance?

Sure, I’m a big fan of GoGo in-flight internet, but that alone can’t be it, because, well, I have internet basically wherever I go. It can’t be the peace and quiet either, because, well, when is there ever peace and quiet on a flight? Between screaming kids, grumpy senior citizens, and flight attendants who think their drink cart is a golden chariot from God that MUST travel down the aisle at all costs, no matter how many elbows get shattered or calls of nature get delayed… peace and quiet is the last thing I get while flying.

Maybe it’s the fact that I am basically being forced to sit in the same place for 4 hours and figure out a way to not go crazy… And the world’s best compensation machine, the human body, knows that if it doesn’t keep my mind occupied…

My own personal “Truman Show”…

8 Dec

I. Want. This. Bad.

I mean, I’m so incredibly cool when no one is looking… now there is finally a device that allows me to prove it to everyone!


Narrative – the automatic life-logger

GeoWorld Magazine Publishes My Article: “Benefits of Cloud-First in the Public Sector”

2 Aug

GeoWorld Magazine has published an article I wrote, “Sharing Costs, Improving Service and Spurring Innovation: Benefits of a Cloud-First Approach in the Public Sector,” in their latest issue.

Thanks to GeoWorld for publishing my ideas and concepts… I hope it can help local, state, and federal government agencies everywhere leverage cloud computing and the available cooperative contracts more effectively; saving tax dollars and improving efficiency across the nation.

Here is the official link to the article (free registration is required to view it in its entirety): Click here

 

Cloudflaring my blog

19 Jun

Man, CDNs have come such a long way.  I remember, not that many years ago, some of my customers paying hundreds of thousands of dollars (upfront) to Akamai for content delivery of the imagery associated with their mapping applications, while having to plan, send, and wait days for the content to become available.

I’ve also worked recently with Amazon Web Services CloudFront, which is their “easy to use if you are a developer” CDN product for websites and content hosted on AWS.  However, that still operates at an entirely different tech skill level than a relative newcomer CDN provider that is on fire…

CloudFlare was in the news (via VentureBeat) yesterday claiming they “had more traffic than Amazon, Wikipedia, Twitter, Instagram, and Apple combined” last year.  And most impressively to me, you can get started for free and in as little as 5 minutes.

Now understandably, I am not saying that the freemium version of Cloudflare edge-caching some of my low-res personal blog photos is anything like distributing terabytes of hi-res base map imagery, from either a cost or an ease-of-implementation standpoint, but being able to leverage that same kind of functionality so easily and quickly is still pretty cool.

So, gotta try it, right?!

Well, it took me about 7 minutes total, not 5 as promised, haha, mostly because I chose to log into my WordPress dashboard to install the Cloudflare plugin (to ensure IP addressing stayed consistent) and to sign up for a Pingdom monitoring account while I waited for the DNS changes to propagate… but I have to say, Cloudflare delivered on their simplicity promise.  The key was the way they scraped my current DNS records and pre-populated the forms for me, rather than asking me to log into GoDaddy and copy everything out.  Then it was just a simple name server change and a final check to make sure all the records were complete.

But you tell me… notice any change in the speed of graphics loading?  I have a huge graphic in my last post that will be a great test and will hopefully benefit from some caching and route optimization via the Cloudflare CDN service… I’ll keep an eye on my web stats over the next couple of weeks and update this post with the performance metrics.


I bet big on Cloud

11 Jun

As most of you know, the last 6+ years of my life have been dedicated to injecting the use of Cloud Computing into (what I would consider) legacy IT environments.

The path hasn’t been easy. Enterprise IT departments are strong willed and resistant to change. With help from colleagues, vendors, and industry visionaries… I’ve fought the good fight through every battle, like:

It’s not for Production, just Dev & Test

And who could forget the classic:

My application won’t run in a virtualized environment

But, as we’ve probably all witnessed by now, the all-time best cloud-blocker excuse has to go to the random SysAdmin resistant to moving an app off a 10-yr-old tower workstation sitting in a 90-degree “modified-closet data room” with a broken lock on the door, because:

The cloud isn’t secure

With that said, I wouldn’t have it any other way. I’ve been part of some incredible first-mover cloud successes over the years and continue to see workloads now start to leverage the power of public, on-demand, OpEx only cloud computing that previously were considered non-candidates.

My friends at CA Technologies hooked up with Luth Research to conduct this study of 542 IT leaders; asking them about their experiences with cloud. The results speak for themselves. There is no turning back now…

Don’t respond to RFPs? It depends…

11 Mar

All I can think as I read through this Inc.com article on “Why not to respond to RFPs” is…

“Man, I really hope our competition is reading this and decides to follow their advice, ha!”

Although the author makes a couple of good points about the negatives of responding… RFPs are so ingrained in the culture of corporate and government procurement that I think responding will always be a large piece of the success of any business development team.

However, I do think we are seeing a permanent shift in the scope of RFPs, especially on the government side. They are moving more toward pre-approving smart, qualified vendors that provide long-term strategic guidance in an “on-call services” manner, rather than releasing RFPs for each individual project.

Obviously this isn’t new, as the practice of Indefinite Delivery/Indefinite Quantity (IDIQ) contracts has been used to pre-qualify vendors for quite some time. But what is new is the broadening scope of services these on-call vendors can provide once approved.

For example, the company I work for, Dewberry, was awarded an IDIQ-like contract by the Western States Contracting Alliance (WSCA) last year to provide Cloud Computing, Cloud Consulting, and Cloud Architecture Design for a three-year period with two option years to follow.

No dollars whatsoever were promised to Dewberry during the entire bidding process, which included an approximately 18 month long RFI, RFP, and Award negotiation process. Meaning, we dedicated hundreds of business development and proposal writing hours, over an 18-month period, to a client that promised us nothing in return except for the opportunity to perhaps supply Cloud infrastructure and services to them if they so chose.

So why did we continue to pursue this RFP?

I mean, we’re not bad business people. We understood this process wasn’t ideal in terms of what IT sales textbooks tell you to do like “control the deal” and make sure with “every concession, you get something in return”.

So why did we keep pouring hours into it?

“It comes down to the confidence you have in your solution and building a long-term strategy.”

As I mentioned earlier, although IDIQ contracts have long been used, the breadth of scope that pre-approved vendors can provide a customer is broadening due to technology like cloud computing. For example, now that Dewberry has been awarded a long-term contract with WSCA, we can focus on becoming their strategic trusted adviser rather than a siloed project poacher.


This means we get to understand our customer as people, personalities, and an organizational culture; not just as project requirements on paper. Based on this deeper level of understanding, we can recommend solutions that reach an entirely different level of customer satisfaction; including solving organizational, cultural, and interdepartmental issues that may have existed since the organization’s earliest days.

In addition to this promise of long-term strategic engagement with our customers upon award, we responded because we knew we had the absolute best, most innovative, and lowest cost solution the customer could buy (in large part due to a successful partnership with Amazon Web Services) and therefore we could easily become the one-stop IT shop for every State, Local, and even Federal Government entity (WSCA allows any U.S. based government entity to purchase from the contract with approval).

My purposeful use of my phrase “one-stop IT shop” and not just “Cloud provider” shows that we obviously thought well ahead of the written words in the original RFP, which was titled simply “Public Cloud Contract.”

We knew that cloud technology had changed the landscape of IT procurement so much that these future customers would end up bypassing other legacy RFP procurement cycles like software license buys and system integration projects, because having access to this contract vehicle would enable them to hop online and, within a few clicks, buy pre-installed, pre-licensed, hosted cloud server instances with powerhouse software like Oracle, SAP, and Microsoft… all without ever talking to a software, hardware, networking, or consulting vendor.

“Man, it feels great to empower customers to buy on their own terms!”

So, is it worth chasing RFPs? Damn betcha!  But only if you have a strategy where an award will set you up to be a long-term trusted adviser and, most importantly, you are confident that your solution is in the absolute best interest of your customer.

If “no” to any of those points, then you are better off saving your business development hours for relationship-based selling like golfing with a CIO and praying he likes to watch you slice it into the woods, again.

 


2013 Cloud Predictions: “Private Cloud” name gets exposed as a fraud

13 Dec

If you’re an IT manager calling your internal VMware or other virtualization farm a “Private Cloud” in an attempt to prove to your leadership that “public cloud is insecure” or “I built the same thing as Amazon Web Services (AWS)”, you need to get ready for a dose of reality in the coming year.

Server-huggers beware: you might have been able to get away with it until now, but 2013 will mark a turning point in which the term Private Cloud will be permanently exposed for what it is…  a capital-intensive, server-stacking, virtualization game.

Just because you might have the flexibility to decide how much RAM to assign to a VM doesn’t give you the right to “cloud-wash” your internal IT operation and call it something it’s not… because although it may be Private (can someone tell me again why it’s important to be able to touch your servers?), it’s certainly not Cloud.

Not that there’s anything wrong with that…

Not that there’s anything wrong with that private infrastructure

I’m not saying there is anything wrong with running an IT shop where you still spend lump sums of capital (CapEx) for physical resources, especially if you are working to make those resources flexible and reliable by optimizing your data center, using virtualization, and invoking best practices like continuous monitoring and agile development.

Just don’t use the word “Cloud” because your business users and C-level leadership are getting smarter every day on the incredible economic advantages, real security story, and global scalability benefits of public cloud.

In short, selling them a story like “my private cloud is the same as AWS, but more secure because it’s on-premise” is going to begin to look childish.  And worse, it will discount the credibility of the (probably pretty good and still very useful) internal IT environment that you’ve worked so hard to build.

If you physically touched it, estimated your peak demand before buying, and/or don’t have a recurring OpEx fee… IT’S NOT CLOUD.

Tightening definitions

The definition of “Cloud” will also tighten further in 2013, to the point where it is reserved only for systems that allow you to:

  • transform your IT into only operational expenditures (OpEx)
  • go global in minutes
  • never have to guess your initial or future capacity

Despite all the marketing from old guard IT and large virtualization software companies that claim building your own Cloud is the best way to go, your Private Cloud still:

  • is a large capital expense (CapEx)
  • rarely allows even the largest installs to go global in minutes
  • makes you commit to an upfront minimum and requires you to predict future capacity

In his recent keynote at the Amazon Web Services re:Invent conference, SVP Andy Jassy [View his keynote on YouTube here] put it in the best perspective I’ve heard yet, giving these six simple items that differentiate the burden of private from the value of public.

Andy Jassy, AWS Senior Vice President pokes fun at Private Cloud at the recent AWS Re:invent conference

 

It’s okay, just try a little bit… it won’t hurt you.

Remember those drug prevention classes in middle school (was it called D.A.R.E. everywhere or was that just an Ohio thing?) where the police officers would come and tell you the dangers of drugs and how they get you hooked by getting you to just try a little bit? 

“Don’t even do it once,” they would say, “Because if you try it once, you’ll be hooked for life!”

Well, it seems the private cloud loving internal IT folks were all sitting in the front row during those officer presentations, because they took this advice a little too seriously and have applied it to public cloud adoption too.

The best thing about public cloud is that it’s cheaper to fail than to belabor conversations about whether to try it or not.

Internal IT will remain greatly relevant

Don’t worry internal IT, you’ll still be greatly needed by your company in 2013 and well beyond because there absolutely is a place for flexible, private infrastructure in today’s IT.

Organizations that have invested millions in capital on IT hardware, software, networking, and human resources would be completely insane to throw it all away today and move everything to public cloud tomorrow; however, in the same breath, I would also call these organizations insane to keep piling investment into more private resources given the extreme economic, scalability, and functionality advantages of public cloud.

Over the coming years, even very large internal IT groups simply won’t be able to keep up with the rate of innovation, security, and scale that public cloud operations will achieve.

Internal IT will also face tough competition from rogue business users going outside of internal IT to get what they need from public cloud with something as simple as a credit card swipe.  Of course, internal IT may think the best weapon against this is a strict lock-down policy where business users get punished for going rogue; but a moratorium on public cloud only hampers corporate innovation and creates animosity between the teams.  I suggest there is another answer for internal IT… embrace, broker, and support.

Although easier said than executed correctly, cloud-brokering both public and private IT services, while supporting business users on both, will be the key function for internal IT groups staying relevant to the business and even thriving in 2013 and beyond.

More on the “why and how of cloud brokering” soon… we’ll even take a look at some tools that can (maybe) help.

Disclaimer:  These predictions are based on the assumption that the world does not end on December 21, 2012 as the Mayan calendar predicts. If we never reach 2013, I reserve all rights to drastically modify these predictions.


Teach kids to type, not write?

7 Dec

Most of you who know me personally or follow me online know that I am pretty much neck deep in preschool hell.  With my three boys at the ages of 4 1/2 yrs, 3 yrs, and 4 months, it isn’t going to end anytime soon, either.  Between making sure they have their backpacks full of supplies for kiddie activities, packing lunches that are required to have all 5 food groups or else we get a “you’re a terrible parent” nastygram sent home, and managing a ridiculous schedule of preventive doctor and dentist appointments… you would swear I had four full-time jobs (and if you saw my preschool bill, you would assume I’d need four full-time jobs to pay for it!).

Lately, my 4 1/2-year-old is really getting into drawing, writing, and reading, and I wouldn’t be the proud Dad that I am unless I told you that he’s pretty darn good at it!  But, more important than being good at it, he seems to really have a passion for it.

“Monster Truck Minivan” complete with Santa Claus and a front door on-board

All he wants to do at school is draw pictures, write his name, and practice letters.  His teachers, mom, and grandparents are glad to see his growth and passion in this area, but I can’t help but wonder if it is really useful or not?

I know that the brain develops and learns fastest through the actual performance of work (e.g. practice), and that visualizing that work after you’ve completed it aids in internalizing the learning, which is stored in your brain as knowledge.  Hence, as he gets better at drawing a letter, his brain gets better at recognizing it.

But ask yourself this… when is the last time you wrote anything on paper by hand that was much more than a grocery list or a to-do list?  I’m talking about a complete sentence, or dare I say an entire paragraph, written on a piece of paper with a pen in your hand?  And if you actually have done this lately, I bet at some point you took that handwritten script and converted it to digital by scanning or typing it, so you could email it to others, post it on a blog, or have access to it on your phone, right?

Not my kids

Obviously my kids are going to learn to write by hand, but I would like someone in education to actually acknowledge that the only reason my kid is learning to write by hand is because that’s what his teachers were taught to do when they were little… and it’s just what schools do… they teach kids reading, writing, and arithmetic… enough said.  But how long until this changes?

As a society, we have already deemed writing by hand virtually useless.  Word, blogs, websites, email… all of it is digital and only useful by means of typing.  Even my earlier reference to handwriting short notes or lists has been taken over by mobile apps like Remember the Milk, iPhone reminders, and text messaging.

So how long until schools, even preschools, are teaching toddlers to type and not write?

Home row instead of how to hold a pencil?

Drawing pictures using Microsoft Paint and a mouse instead of crayons on construction paper?

Big changes, especially in education, are hard, but at what point do we give up the past and move forward?

If almost all of our kids are going to make a living by typing (show me a profession, no matter the industry, that doesn’t rely on email and Word docs as the primary means of communication and documentation), aren’t we teaching our kids an outdated, useless skill just because it’s what we were taught and not because it’s the right thing for their future?

 

How I spent over $200,000 decorating one wall

26 Aug

Go Falcon Falcon Antelope Nittany Lion!

Don’t you wish there were such an animal?!  

It took us more than 4 years after graduating, but my wife and I finally framed and hung the most expensive pieces of paper we’ll ever own.


The Future of Cloud Computing, Part 1: Virtual Desktops – Dead on Arrival

16 Aug

The answer is not SaaS, nor VDI, nor Cloud; rather, an evolutionary compilation of all these technologies.

The impact that Cloud Computing has brought to the IT industry to date has been primarily beneficial to application developers, system admins, and network architects, and not directly to end-users of technology.

Yes, IT developers and architects leverage cloud computing’s flexible and virtualized compute, storage, and network infrastructure to build resilient applications that eventually benefit end users due to improvements in speed-to-market and improved up-time statistics, but the direct benefits to the tech-needy end user are still rarely recognized.

Most daily users of personal and business-class applications don’t have turnkey, on-demand access to the applications they need. At work, their IT departments are too slow in delivering the apps they need, or refuse to provide them due to cost, limited resources, or lack of recognized need. At home, users struggle to deploy software themselves due to complexity, time involved, or, again, cost.  However, advances in cloud-powered software and service delivery have started to revolutionize the way end-users (both business and consumer-level) think about acquiring the tools they need to succeed. These innovations will finally give end-users their piece of cloud computing’s value and change the way software is delivered, licensed, and used both online and offline.

Over the next several weeks, I will be releasing several blog posts on the topic of the “Future of Cloud Computing”.  Below is Part 1, which describes the unrealized promise and eventual demise of virtual desktops.

Innovations in streaming application code… rather than streaming pixels… will kill VDI before it even fully arrives.

Do users really like or want Virtual Desktops?

From the start, the concept of virtual desktop infrastructure (VDI) is flawed for most real-world applications and use-cases.  No matter how optimized VDI compression companies claim their proprietary algorithms might be, they are still trying to push a proverbial “watermelon of pixels” through a pinhole-sized network to get what you need to your device.  It almost seems like all the stars have to align before VDI actually works for the everyday, multi-location worker.

VDI technology refresher

Virtual Desktop Infrastructure (VDI) is a method of enabling end-users with a client device (PC, laptop, tablet, etc.) to access, log into, and use a remotely hosted desktop environment.  In order for you to access and interact with the remote environment, compressed screenshots of the display of the VDI instance (what you would see if you were standing in front of its monitor) are streamed continuously over a network connection to your client device’s display.  This means a user can access a completely different environment, including OS, applications, and network, without actually having that environment installed on their physical client device.
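To put some rough numbers behind that "watermelon of pixels" problem, here's a back-of-envelope sketch. The frame rate, color depth, and 50:1 compression ratio are purely illustrative assumptions on my part, not vendor measurements:

```python
# Back-of-envelope: sustained bandwidth needed to stream compressed
# screenshots of a remote desktop. All figures below are illustrative
# assumptions, not measurements of any real VDI product.

def screen_stream_mbps(width, height, fps, bits_per_pixel=24, compression_ratio=50):
    """Approximate sustained bitrate (Mbit/s) for a compressed pixel stream."""
    raw_bits_per_second = width * height * bits_per_pixel * fps
    return raw_bits_per_second / compression_ratio / 1_000_000

# A 1080p desktop refreshed at 30 fps, assuming an optimistic 50:1 codec
print(round(screen_stream_mbps(1920, 1080, 30), 1))  # roughly 30 Mbit/s, sustained
```

Even with a generous compression assumption, you need a steady tens-of-megabits pipe for the whole session, which is exactly why spotty hotel wifi ruins the experience.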

What’s so wrong with VDI now?

For the typical everyday business user, who works from a combination of office, home, client site, and car, using a virtual desktop sounds perfect; but in actual practice, it’s a real productivity killer due to several key flaws.

  • No offline access.  VDI requires a persistent high-speed internet connection throughout your entire virtual desktop session… and while wifi is supposedly everywhere, it never seems to be reliable, fast, or secure… making “anywhere, anytime” access to your applications and data more of an under-delivered promise than a reliable reality.
  • Not as “Green” as advertised.  For all its press about being “green”, VDI is actually incredibly wasteful because it is architected to leverage only the compute and storage of a hosted server or cloud environment, while completely ignoring the processing and storage power of your client-side PC or tablet.  With the exception of true “thin clients”, which are not widely used by consumers or businesses to date because they can’t be used for anything except VDI, your client device, whether a desktop or laptop PC, is still powered on and consuming roughly as much power as it would if you were using its local resources rather than just viewing streamed screenshots of your VDI instance. Powerful client-side devices (e.g. PCs, Macs) are relatively cheap, yet are virtually (no pun intended) wasted when leveraging VDI.
  • Performance and graphics degradation.  VDI struggles with graphics-intense applications like engineering, drawing, CAD, GIS, and gaming applications, because most cannot use the device’s local graphics card to render complex or fast-moving graphics locally, and instead stream non-3D and/or pixelated graphics from the VDI instance.
  • Cost.  A typical private VDI environment from a leading vendor easily runs into the millions of dollars after accounting for new data center space, servers, networking, storage, and virtualization licensing.  A large price to pay to duplicate, and even degrade, some of the applications and services your users are currently using.

How were we convinced streaming screenshots was “the right way” anyhow?

Undoubtedly there are benefits to VDI, but most of them accrue to the IT staff, not end-users, around topics like license management, patching, and security.  Although I understand these benefits, I don’t know how IT shops got on the path of streaming pixels with VDI rather than serving the code itself, which would allow them to better optimize and control application delivery and licensing than streaming screenshots ever could.

Using the server-side to deliver application functionality, data, and licensing on-demand to devices directly

Sending pieces of the code to your device, using your local device’s processing power to run it, and then getting updates pushed from the mothership server whenever you connect (or security requires it) seems like a much more streamlined approach than relying on a high-speed connection to stream screenshots from a remote slice of a data center server.

In this scenario, IT admins still get all the manageability benefits and licensing controls for deploying applications on-demand that they get from VDI… all without spinning up an entire cloud infrastructure to host a VDI backend, and without wasting perfectly good client-side resources.
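As a toy illustration of that "serve the code, not the pixels" idea, here's a minimal Python sketch. The registry, the `stream_app` function, and the license "time-bomb" are all hypothetical names I made up for this post; no real product works exactly this way:

```python
# Conceptual sketch of streaming application code, not pixels: the client
# pulls code from a central registry once, runs it on local compute, and
# honors an admin-set license expiry ("time-bomb"). Entirely hypothetical.
import time

# Server side: the admin publishes application source plus a license expiry
REGISTRY = {
    "report_tool": {
        "source": "def run(data):\n    return sum(data) / len(data)",
        "expires": time.time() + 3600,  # one-hour trial license
    },
}

_CACHE = {}  # client-side cache: after the first fetch, the app runs offline

def stream_app(name):
    """Fetch the app's code once, cache it locally, and run it on the local CPU."""
    entry = REGISTRY[name]
    if time.time() > entry["expires"]:
        raise PermissionError("license expired; ask the admin to renew")
    if name not in _CACHE:
        namespace = {}
        exec(entry["source"], namespace)  # the client device does the compute
        _CACHE[name] = namespace["run"]
    return _CACHE[name]

print(stream_app("report_tool")([2, 4, 6]))  # runs locally -> 4.0
```

The point of the sketch: the network carries kilobytes of code once, not megabits of pixels continuously, and the admin still keeps the kill switch.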

How to replace VDI… Streaming application code, not pixels

Benefits:
• Any software delivered to your own device
• VDI-like features still present: updates/patches pushed, zero-footprint device wiping
• Fast, reliable, offline-accessible local storage and processing (w/admin approval)
• Native graphics performance for CAD, GIS, Visualization, Gaming, etc.

The next step – making “Cloud-bursting” workloads a reality

Added Benefits:
• Application code, data, and compute are on local device & cloud for ultimate workload flexibility.
• Local & Cloud Storage Sync for redundancy and faster processing by chosen processing destination
• Local & Cloud Processing Capabilities – Cloud-bursting a workload becomes a reality
• Native Graphics Performance

More Advantages of streaming application code rather than pixels

  • Applications and data can live on both your local device and the cloud; enabling you to “cloud-burst” large jobs
    • Enables you to choose where you process your requests: the location, speed, and even cost of your processing jobs
  • More flexible and functionality-based licensing terms
    • Stream apps to first-responders in disaster response situations, then remote wipe once tasks complete
    • Sales teams can easily give customers full trials with automatic licensing time-bombs
    • Create SaaS-like easy deployment without changing a single thing about your successful legacy desktop applications
  • Similar benefits to traditional VDI for application updates, bulk maintenance, and security
    • Admins are still administering one application package for everyone to use
    • Can auto-push critical security patches or application updates
  • Enables offline usage
    • Since the application code runs on the client device, with admin approval a user can take the application offline indefinitely; or, using time-bomb or usage-bomb licensing, admins can limit usage of the application to a certain period of time or a specific task only.
  • Extends the life of Desktop applications
    • Traditional “boxed” software companies are spending millions of dollars and years of R&D time to re-engineer their software “for the cloud” because they think the only way to cloud-enable it is to write, from scratch, a multi-tenant web application that recreates their technology’s traditional functionality.  However, the usual outcome of this new SaaS development is watered-down, bug-ridden functions compared to their flagship desktop product.
  • Less risk of software piracy
    • Since only the application code for the functions you need is streamed to you, your computer will never hold the full application code; making it much harder, if not impossible, to pirate, re-package, and re-sell a full pirated version of the software.
  • Superior application performance and 3D graphics rendering
    • Streaming code to the device instead of pixels could remedy probably the biggest problems with VDI: application performance and graphics rendering.
    • This enables entire industries like CAD, mapping (geospatial, GIS), gaming, and more to become usable and controllable, rather than becoming “IT silos” that get managed, updated, and secured differently than other non-graphics-intense applications.

Although the technology to pull off this type of code-streaming environment might not be fully baked yet, the groundwork for replacing pixel-streaming VDI has already been laid.  As the cost of cutting-edge client devices drops, and their amazing processing and graphics capabilities continue to wow customers and set expectations for user experience, VDI implementations will continue to fail at achieving their once-great promise to stream any application to every user via only a web connection.

It seems that VDI is perhaps only a patch-over solution while we wait on something better to come about.  Code streaming to client devices may be that answer.

 

Watch for my upcoming post:  
The Future of Cloud Computing – Part 2:  Why PaaS will fail and how Software-Stacks-as-a-Service (SSaaS) will replace it.

 

