Different Strokes For Different Folks: Hybrid Cloud Edition

No doubt you’ve heard plenty about the concept of hybrid cloud (or even hybrid IT) recently.  Personally, my Twitter timeline has been rampant with opinion pieces (some good, some bad) and the usual array of bickering between the various factions as they do all they can to represent their brands.  My intention with this post is not to inflame those factions, but invariably, I’m sure I’ll have to defend my comments.  What I want to do is get down to the reasons why hybrid has become such a hot topic and break down the various approaches that are now coming to market.

Why Hybrid (or even Multi-Cloud, for that matter)?

Well, first and foremost, if you like protecting internal silos and continuing the rationale that, as an infrastructure person, it’s your way or the highway when it comes to IT, you’ve missed some awfully big memos lately.  Not only that, YOU are part of the problem.  Most of these hybrid use cases stem from ineffective IT departments.

Let me explain.  It’s currently late October 2017.  Some of you are still provisioning internal systems like it’s 1999.  I’ll cut a little slack that there might actually be some shred of a business case; however, I’d bet that most of it stems from the fear that somehow, as an infrastructure person, you aren’t going to be important in this new world.  So, rather than ride the wave of progress, you’ve chosen to entrench yourself in ancient IT methodologies and do your best to scatter FUD internally about anything that isn’t what you already know.

This has led to extreme dissatisfaction between businesses and their IT departments.  The business, usually in the form of some sort of application development group, decides to circumvent the process and goes rogue.  Public cloud is leveraged, usually without the approval of IT, and the developers are finally given tools that can keep up with their pace.  Whether you want to admit it or not, you and your IT department are in direct competition with the likes of AWS and Microsoft Azure.  And many of you are failing miserably.


So, your application developers went out and tried public cloud, and while they like the overall interaction capabilities, the realization is that they still need access to on-premises resources.  Unfortunately, line rates and long distances caught up with the application and latency reared its ugly head.  No matter what was tried, whether faster connectivity with AWS Direct Connect or Microsoft ExpressRoute, nothing solved the latency problem.  The legacy components needed for new application development had to be placed in much closer proximity to the new breed of applications being developed.
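To see why a faster pipe can’t fix this, a back-of-envelope sketch helps (my illustration, not from any vendor documentation): even a dedicated circuit can’t beat the speed of light in fiber, so distance alone puts a hard floor under round-trip time, and chatty applications pay that floor on every call.

```python
# Illustrative only: why Direct Connect / ExpressRoute can't "fix" latency.
# Light in fiber travels at roughly 200,000 km/s (about 2/3 of c), so
# distance imposes a physical lower bound on round-trip time regardless
# of bandwidth.

SPEED_OF_LIGHT_IN_FIBER_KM_S = 200_000

def min_rtt_ms(distance_km: float) -> float:
    """Theoretical minimum round-trip time over fiber, in milliseconds."""
    one_way_seconds = distance_km / SPEED_OF_LIGHT_IN_FIBER_KM_S
    return one_way_seconds * 2 * 1000

# A chatty app making 50 sequential calls to a datacenter 1,500 km away
# pays the floor on every round trip, no matter how fat the pipe is.
per_call = min_rtt_ms(1500)
print(f"per call: {per_call:.1f} ms, 50 sequential calls: {per_call * 50:.0f} ms")
```

The distances and call counts are made up for illustration; the point is that the only real fix is the one the post describes, which is moving the legacy components closer to the new applications.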

Enter the Solutions

At this point, the solutions deviate.  At their core, many address the latency problem but differ in which direction workloads are shifted.  For instance, VMware Cloud on AWS wants you to move your legacy workloads out of your datacenter and into an AWS datacenter (not just lift and shift, but also refactoring applications around that new proximity).  Again, this will likely solve the latency issue; however, I still don’t feel it solves any of the regulatory or sovereignty issues related to that data.  The focus of this blog post isn’t those issues, so I’ll refrain from diving deep into them.  Just know that I consider latency AND sovereignty to be a couple of the major barriers to public cloud adoption.

That leads us to Microsoft’s solution, Azure Stack.  Unlike the VMware offering, Microsoft wants to address this problem in your datacenter, making the Azure extension feel much more like yet another private cloud service offering.  Selling Azure Stack as just a private cloud solution, unfortunately, sells the offering well short of its intended mark.  There are some advantages to Azure Stack that may not be available in the VMware offering.  For instance, because it can take advantage of everything in your datacenter, non-virtualized workloads remain accessible as legacy entities (my primary example would be something like an AS/400 system, where the processor type prohibits creating a virtual machine on an x86-based platform).  I will admit, however, that this example is a very limited “advantage”, but in many enterprises, it does exist.


Whether the two camps (plus the countless others that will crop up…cue the recent Google Cloud Platform and Cisco announcement) want to admit it, they are both trying to address the same problem.  Latency is a public cloud adoption killer.  The major differences come down to what sort of legacy workloads you have running on-premises and what plans your organization has for cloud-native application refactoring.  Sprinkle in whether you believe your investments in certain legacy infrastructure tool sets are worth continuing (VMware really wants you to know you don’t have to get rid of your VMware admins, as an example, because someone still has to operate a vCenter instance) and you’ve got a recipe for making a major decision.

If anything, what these moves have proven is that prior statements by some of the major public cloud providers were, in fact, categorically false.  No, you can’t run everything in the public cloud, especially if you’ve accumulated years of IT baggage (i.e., any enterprise).  Not every IT organization is capable of being a stable for unicorns, nor should it be.  Welcome to the world of hybrid (or multi-cloud), folks.


Becoming a Leader

Leadership.  We hear this term thrown around, but I think few people actually know what it means to be a leader, let alone an effective one.  Personally, I scoffed at the idea that I would ever be a “leader”.  I was perfectly content being in the background and being a good worker bee.  Then incidents happened during the early days of my professional development that, I now know, forged the beginnings of what I believe to be the leadership genes I have today.  So, let’s fire up the wayback machine and describe some of those instances and how they came to make me into who I am today.

Back to College

Ah, college.  That wonderful time where, hopefully, you get away from what you know and go out and start to discover how the world actually operates.  While I was in college, I always had work study programs as part of my tuition package (the sorts of things you can get when you have zero parental contribution to your higher education bills).  I originally applied to work as a lab attendant in the fine computer centers on the campus of the University of Northern Iowa.  Unfortunately, I heard nothing back from the people doing the hiring, and for my first semester as a college student, I held down two part-time jobs.  One was as a worker in the dining center nearest to my dormitory.  The second was as a glorified shop clerk in my dormitory.  Neither was particularly intellectually stimulating, but I was able to meet new people and get lots of studying done (as the shop clerk).  This changed when a gentleman on my dorm floor, Brent, recommended me for his team within the university’s ITS (Information Technology Services) group.  He was a student technician who went around to various university departments and computer centers, fixing software or hardware issues as they were reported to the call center.  Pretty standard entry-level IT work, for what it’s worth.

Now, I worked there from that point on until graduation.  I developed a lot of skills that eventually led to a consulting position post-graduation.  However, there was a summer where different skills started to materialize.  My supervisor had to miss most of that summer.  She was recovering from major surgery and was not expected back until well after the fall semester started.  During that time, we had a temporary “manager”, but while they were a help from a higher-level administration perspective, they did not lead the group in the same way that my ailing supervisor did.  During the first few weeks, we found ourselves floundering, and the queue of work was growing at an exponential rate.  I remember that we had upwards of 120 workstation replacement tickets come in, as an entire department had finally obtained enough budget for that project.  The project also came with new challenges, as we were migrating away from a Novell-heavy core to a Microsoft-heavy authentication core.  That meant instead of Windows 95/98, we were now dealing with the animal known as Windows NT 4.0.  We had very little experience and had to get up to speed quickly.

Enter the opportunity.  While the temporary supervisor was busy with her own challenges on the backend, I decided it was time to step up as the most tenured individual on the team.  With much reluctance, I organized a few internal training sessions on Windows NT 4.0 and started to better delegate the work to the right individuals.  I already knew who was liked by various departments and who would be best suited to spend 3-4 hours working in those locations between shifts.  I found myself actually leading my team in ways I didn’t realize I had within me.  By the middle of the summer, we had reduced our backlog of outstanding tickets to the point where we achieved Ticket Inbox Zero for a brief time before the fall semester started.  I received a lot of kudos, not only from my absent supervisor, but from the temporary supervisor and many in the department.  I was also given a raise that made me one of the highest-paid students on campus, second only to the gentleman who had overhauled many of the computer labs on campus and gotten them working in much better order.

However, that new semester came along and the department hired two new full-time employees.  My newfound leadership powers were stunted when one of those individuals met with me and informed me that he didn’t appreciate that I held so much clout within the group, and asked that I back away from many of the duties that were now coming naturally to me.  To play peacemaker, I did so.  I held some animosity towards that individual (we all do when asked to relinquish positional power we earned), but we were amicable towards each other for that semester.  I graduated shortly afterwards and moved on to post-college life.  I also did not flex any leadership capabilities for quite some time.

Along Comes a Video Game

Strangely enough, it took a video game to get me back into a leadership role.  Back in 2004, World of Warcraft hit the PC gamer world.  This game had a ton of player interaction, and eventually, you worked your character up to what was dubbed the “end game”.  This involved teamwork between many people (some of those end-game raids needed either 20 or 40 people to complete).  Players had certain roles, and all that was required was some preparation and execution during the fights in the raid locations.

As one of those roles, I played what was called a “tank”.  This type of character is effectively the guy who gets beat on the entire time by the major raid bosses.  The tank controls the pace at which damage can be done to the raid boss without the boss turning its attention towards the people doing that damage (who typically could not take a hit without major risk of in-battle death).  The role required a lot of skill in balancing threat generation (dubbed “aggro”) against damage mitigation, all while keeping up with the ever-changing battle landscape (moving the raid boss out of areas that are harmful to the overall party, as an example).

So, why did a video game help with leadership skills?  After many early failures in execution and preparation, I started to voice dissatisfaction during some of our initial raids.  Now, it’s easy to point out what’s wrong.  What separated my diatribes from others was that I offered to help fix those problems.  On top of all the things asked of me during the in-game battles, I also organized and helped configure the parties so that we could better prepare for and survive the encounters.  In other arenas of the game, I helped some underperforming individuals become better by doing trial runs with them for practice.  I also called out battle plans mid-fight when things needed to be reacted to.  In essence, it was like being a brigade leader in a military branch.

Scoff at the references to video games, but many people have learned a lot by organizing a guild within these games.  It’s almost on-the-job training without the ramification of losing a source of income (well, unless you never went to work because you were playing World of Warcraft 24/7).

Final Thoughts

Leadership genes can be born and bred in a vast number of places.  You could be a teenager working in a fast food joint and become a shift manager.  You can play a silly video game in which you fight non-existent monsters and where the spoils of beating those monsters don’t matter to anyone outside the game.  You can be thrust into an unfamiliar spot, like when someone on a team leaves suddenly and you are now tasked with filling their role and leading by example for the rest of the team.  You don’t necessarily have to be predestined for this role.  You can learn these skills.  These skills make you a better teammate and usually land you on the fast track for promotions, or even springboard you to opportunities outside your current employer.  The point is they can come from anywhere and they can help define a better you.  I never thought I’d be thanking Blizzard and World of Warcraft for where I might be today, but I also have an excellent amount of respect (both ways, not just towards me) with my teammates, and they know I can help lead them past the challenges presented to them day-to-day.  I challenge you to find your leadership genes and help make your teams and organizations better.  Remember that inspiration can come from anywhere; even from leading 40 people you know only in a video game to beating evil black dragons.  😉


The Golden Rule

Personally, I’ve never claimed to ever be a religious man.  However, as a child, I did enough Sunday School activities to remember Matthew 7:12. “So in everything, do to others what you would have them do to you, for this sums up the Law and the Prophets.”  The Golden Rule, as it’s now called, is something we’ve all learned in some fashion while growing up.  In an attempt to make us more compassionate adults, we were subjected to this rule in many fashions, whether that be in terms of sharing or just helping others in crisis or need.

Why do I bring up this core principle?  Well, I’m going to get on a bit of a soapbox, especially about the tech industry as a whole.  For the longest time, we in the information technology sector have been brought up on the idea that “he with the most information, rules”.  Egos flourished among those who, as a measure of job security, hoarded key information and lorded it over those they called coworkers.  Political posturing was and remains rampant in many of our office environments.  However, in recent years, this has started to change.  Cultural movements have started to take root in which the ideals of knowledge sharing and organizational learning reign.

We always see flare-ups, especially on social media, about the perceived wrongs individuals and their opinions have done to other individuals and the brands they represent.  Not a day goes by without some ember flaring up into something more than it really needs to be.  The rampant egos involved just make things worse.  I get it.  You want to protect your brand and you want to show the world how much you know.  It’s natural that you would do that, especially since you represent a brand and are trying to sell something for that brand.  Pretty much Tech Marketing 101.  What I have a major problem with is when it goes too far.

I always hate having to mute/block someone on my Twitter timeline.  I like to believe I give individuals a fair shake when it comes to the thoughts they manage to post in 140 characters (or 280 for those with early access).  However, when the discord spills into personal attacks, that’s when the mute/block button comes out.  There’s little value in a conversation that turns into a personal attack.  It’s really a shame to see some very smart people, driven by their overly inflated egos, resort to tearing down individuals with opposing viewpoints.

This industry is too full of people like those I just described.  The good news is that their day may be drawing to a close.  As mentioned before, there are cultural movements happening within organizations that promote knowledge sharing and organizational learning as core principles.  Within those organizations, the idea of hoarding knowledge over someone else for political gain is a thing of the past.  Individuals are judged on their team’s metrics, which means every member of that team has to help make everyone on the team better.  I’m reminded of a tweet I saw from Chris Wahl (I don’t know if he was the original author, as I don’t see a reference for the quote): “Sharing your knowledge doesn’t put your job at risk; it empowers your team to perform at a higher level.  Iron sharpens iron.”

Back to the Golden Rule.  Ask yourself if you’ve ever helped a coworker with a task and transferred knowledge to make them better.  Do you continue to transfer knowledge to others on your team?  Conversely, do you take knowledge and hold onto it like it’s the secret Coca-Cola formula?  If you do, I ask whether you want your career to go anywhere.  You are actually doing your career a disservice in an effort to make yourself feel relevant.  Reciprocity.  You get what you give.  The Golden Rule.  Amazing how career advancement can come from such a simple concept.  Now, drop your ego and go make your teams better.


Just Show Up

I didn’t always start out in this industry as a “community” guy.  Actually, even when I was a student at the University of Northern Iowa, I really didn’t participate much in the get-togethers or “club” scene within the Computer Science department.  Most of my interaction happened outside of the building where that department was housed.  After years of isolation and wanting to feel more involved with the technology scene, I felt I needed to make a push external to my employer.  At the time, this was rather radical, because external technical community involvement was rather frowned upon.  The ego-driven nature of many departments, and the general internal push towards self-reliance, started to cause a rift between me and my direct management, and even with some of my team members.  I remember times when just attending a regular local VMUG meeting was a struggle, even with my immediate tasks complete to ensure I had the time to attend.  It wasn’t long after I finally started forcing conversations about it that I agreed to present for the first time at one of those events.  As you would imagine, I caught the community involvement bug, and it wasn’t much longer before I received my first vExpert award.  I left that position shortly afterwards too, as these sorts of things were still points of contention.

Fast forward a few years, and I finally had an employment structure that not only encouraged involvement, but also considered it a net positive, not just for me personally, but for the business I worked with.  In the right hands, influence can be a powerful thing, not just in the technical communities we are involved in, but with the places we do business.  To some, we offer a level of credibility that can be used to many advantages.  I’m never going to say that my presence at my current employer was the tipping point for any business deal, but I can’t help thinking that building a good reputation certainly can’t be used against you.

So, what happened between then and now?  I said I got involved, but what specifically did I do between leaving my previous role and my current one that is so drastically different that I’ve been able to accelerate my career in ways I never would have thought possible five years ago?  After many moments of reflection, I remembered a conversation I had recently with a very influential gentleman (and I’m not just talking about his bourbon ways, either).  You may know the gingered one, Mr. Josh Atwell, from his popular webinars and presentations on all things DevOps, and from the days long ago when he still dug coding components out of his memory banks as we tried to beat Luc Dekens to all the questions in the VMware PowerCLI community forums.  Back in July, both of us were in Indianapolis for the Indianapolis VMUG UserCon.  I had given a presentation on DevOps and IT culture earlier in the day, and Josh was about to give his closing keynote.  I don’t recall the exact quote, but we were talking about how things were going and how rapidly things were starting to accelerate with my public persona (as well as my corporate/private persona with my current employer), and I was told (again, butchered and horribly paraphrased), “You are doing what you need to be doing.  Right now, that’s just showing up to events.”

So, you want a good piece of advice for your career and how community involvement can take it to a whole new level?  Just show up.  You may not be ready to actually get your hands into the more involved pieces of community participation, but sometimes the first step is just attending user group meetings or meetups in your neck of the woods.  Sooner or later, you start to get recognized, just due to physical proximity.  Who knows, maybe you get inspired to challenge your fear of public speaking by offering to do a small presentation for one of those user groups or meetups?  Perhaps a spark from a conversation topic prompts you to open a WordPress account and start a blog?  Even better, maybe you continue down the tracks in multiple technologies and get recognized by their influencer programs?  You could move on to being invited to speak at larger and larger events.  By chance, maybe you get a Twitter follow from Stephen Foskett and end up on a delegate panel for a Tech Field Day event.  Like a great many things, before you can experience steps two through infinity, you need to start with step one.

So, show up.  That’s my advice, even if I’m stealing it from Josh.  Show up and who knows what could happen.  Honestly, the worst thing that could happen is that we may buy each other a pour of Blanton’s and wax philosophical about DevOps for a while.  What you do after that, I leave that to you.  🙂


An Azure Stack Primer for vSphere Folk

Over the last year, I’ve been involved in a journey that is changing my core competency in the technology industry.  My employer, a managed service provider, has been working with Microsoft’s Early Adopter Initiative in the hybrid cloud space.  Azure Stack is the name it goes by, and what I want to do here is educate on what exactly this solution is trying to provide and what it means to those still in virtualization-centric shops.  The goal isn’t to go into major technical detail or to incite some great tit-for-tat Twitter war between factions that are perceived competitors to this product.  The goal is only to offer up the basics to those curious about Azure Stack.  Without further ado, let’s get into our first point.

A Virtualization Replacement Product?

One of the common fallacies you hear about Azure Stack is that its primary use case in an enterprise or service provider environment is as a replacement for a currently running virtualization platform.  I call this a fallacy because Azure Stack, while powered by a virtualization technology (Microsoft Hyper-V), is so much more than just a virtualization platform.

In its messaging, Microsoft wants everyone to understand that Azure Stack is more about enabling cloud consumption models than about virtualization.  In that sense, Azure Stack is positioned as having the same look and feel as public Azure, in your own datacenter.  The platform includes many traditional IaaS capabilities, but also many PaaS capabilities that push towards the application layer as the primary delivery model within the platform.

In fact, some of the early Microsoft messaging in the Early Adopter Initiative focused on the concept of data sovereignty.  In many industries, the data that is generated is subject to laws and regulations governing where that data can reside.  This has been a heavy barrier to public cloud adoption, and very few platforms exist that can provide robust cloud consumption models in a private fashion.  Microsoft felt this was a good area for Azure Stack to focus on, so that many of these industries could take advantage of public cloud consumption models within the walls of their own datacenters (or within service providers in their data jurisdictions).

I Thought You Said This Was Hybrid Cloud?

Honestly, it’s a mix of both public and private cloud.  While Azure Stack is the private cloud side of the implementation, there are many integration points between Azure Stack and public Azure.  Technically, there are two implementation types of Azure Stack.  What distinguishes them comes down to the identity source (Azure Active Directory [AAD] or Active Directory Federation Services [ADFS]) and the licensing model in which you need to operate.

Focusing on the identity source: if you are using AAD for authentication, you are not running in a disconnected state and will refer back to public Azure for authentication purposes.  ADFS allows you to use localized authentication and will not refer to public cloud identity sources.

Outside of the technical definition of hybrid cloud, Azure Stack and Azure share the same toolsets for management.  Consistency is the name of the game when Microsoft discusses how each is managed.  Both use the same set of tools, including their respective portal pages, PowerShell integrations, and integrations into coding tools (for example, Visual Studio or Visual Studio Code).  Both implementations can be configured using Azure Resource Manager (ARM) templates.  An ARM template contains, in JSON format, all the information that defines the infrastructure and configuration of the solution you wish to deploy, and the concept is used in both public Azure and Azure Stack.  One caveat, however: with Azure Stack, the API versions will likely lag behind the public versions.  There are tools to help enforce policies that ensure an ARM template created in public Azure uses only versions that are also compatible with Azure Stack.
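To make the ARM template concept concrete, here is a minimal illustrative skeleton (my own sketch, not taken from Microsoft documentation; the parameter name and the apiVersion value are examples, and the apiVersion in particular is exactly the kind of field that has to match what your Azure Stack instance actually supports):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "name": "[parameters('storageAccountName')]",
      "apiVersion": "2016-01-01",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "Storage"
    }
  ]
}
```

The same file can be deployed to public Azure or to Azure Stack through their respective portals or PowerShell integrations, which is the consistency story in a nutshell.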

I Can Roll My Own Hardware?

Short answer?  No.  This is likely going to be a major point of contention for many of you.  However, Microsoft has perfectly valid reasons for needing to control the hardware in its stack.  First, the sheer amount of validation needed across the multitude of drivers within Windows for the various parts of Azure Stack would produce an HCL that would take far too long to certify.  Second, there are security requirements in the computing environment that many vendors may not yet meet in some of their server lines.  For instance, I found out that TPM 2.0 is a requirement for Azure Stack certified equipment.  During a Microsoft Ignite presentation, it was revealed that not many vendors have TPM 2.0 standard on most of their server lines.  As of right now, only four vendors have equipment that can be purchased: HPE, DellEMC, Lenovo, and Cisco.  Many other vendors are forthcoming.

Also, major certification of networking components is an absolute need for the platform.  The storage system within Azure Stack is powered by Storage Spaces Direct (S2D), which requires offloads not only at the NIC layer in the servers, but also at the switching layer for RDMA (Remote Direct Memory Access).  Optimizations for VXLAN on both the NIC and switching layers, for use with the Azure SDN layer for network management, were also a must.

Final Thoughts

In the scheme of things, I know there are some points of contention with this product versus what many infrastructure folks have run in the past.  Not being able to choose your own hardware is one I have seen many blog posts and opinion pieces about.  However, Microsoft’s marketing message is that the point of this solution isn’t to operate hardware and worry about low-level nerd knobbery in top-of-rack networking equipment.  The point is to hit the ground running and focus on the cloud consumption capabilities of the solution.  Personally, I love that I’m actually going to be able to run a more robust cloud solution within my datacenter and begin to craft more cloud-oriented solutions for customers moving forward.

Now, if you’d like to give Azure Stack a try and have some hardware lying around to pull it off (or, if you’re really adventurous, want to create an Azure Stack Development Kit instance nested in public Azure), head over to the Azure Stack Development Kit website (https://azure.microsoft.com/en-us/overview/azure-stack/development-kit/), check the hardware requirements, and sign up to download the kit.

Hopefully, I can write more on Azure Stack beyond these initial concepts moving forward!  Stay tuned!


One Small Piece of Career Advice

Normally, especially at this time of year, I would be pumping out a blog post about the latest things I’ve seen at some trade show (specifically, VMworld).  However, this year, my family gave me a great idea: instead of spending the last week of August with 20,000+ fellow virtualization nerds, my family and I crossed one of those “bucket list” places off our list.  I spent the VMworld time frame in Wyoming, tooling around Yellowstone National Park.

Why am I even talking about this?  Well, the reason is that many of us in the industry sometimes get derailed from what really matters most.  Personally, I have way too many friends and colleagues, former and current, who spend entirely too much time worrying about their careers and too little time on things outside of work.

Personally, I feel as if I sacrificed my twenties for the sake of career advancement.  I traveled, as a consultant, on a pretty cool team with a major healthcare software company.  I got to see parts of the world that I figured this kid from the middle of nowhere in Iowa was only ever going to read about.  However, what I gave up was my own personal free time.  I rarely took vacations (to the point of routinely being reprimanded because my vacation balance was going to well exceed the carry-over maximums at the end of the calendar year).  With the benefit of hindsight, this probably wasn’t the greatest thing I could have done.  Sure, I was seeing the world, but I was only seeing it from inside datacenters.  For the most part, datacenters look the same no matter what part of the world you travel to.  The only exception might be the type of power plug you are plugging into a PDU.

Coming off this travel role, I started to branch out more, as I realized that I had zero social life.  Work had become my social life.  I had become a rather crusty curmudgeon and was routinely angry with way too many things.  I finally started to make use of my free time and found some rather enjoyable things to pursue in life.  I even started dating (and anyone who knows me well knows that I’m pretty much a train wreck in that aspect of relationships).  I started down the road towards getting married and becoming a family man.

However, I still do feel the calling of the road.  As much as I try to avoid getting onto the road, this last calendar year has been a bit of a blessing and a curse.  I’m excited that I get to travel to various technical functions across the country.  I love getting out there and seeing other cities and experiencing the things they have to offer that I can’t get back in my neck of the woods.  What’s different is the guilt I feel being away from my family during that time.  We’ve had a standing agreement that I can do these things, as long as it doesn’t make the home life suffer.  Thus far, I’ve kept to this agreement and it’s been amicable.

Again, why the long-winded story?  I felt it was necessary to skip VMworld this year.  I did tell a few people that VMware technology isn’t a primary focus of my current job function right now (which is true), but that wasn’t enough to fully skip out on the conference.  Last year, my family made efforts to come along with me to VMworld.  While they enjoyed the many things external to the conference, I did not enjoy it.  Oh, I had fun with people and have a bunch of great stories with friends.  I did not enjoy having to try to balance my two worlds, public and private.  I constantly felt I was robbing time with my family there to be with business associates.

This year, my family and I decided that, instead of my coming back from the conference carrying that guilt again, we would take a vacation independent of any work function.  It worked out for the best.  I came back relatively refreshed and ready to tackle some challenges I’ve been avoiding at the day job.

So, as you, the reader, continue along with your career, I offer up one small piece of advice.  Find time in your busy career to enjoy the little things.  It’s ok to take a break now and then.  Use your vacation time and reestablish yourself with what’s important.  In my case, I got to spend quality time with my family, in a place where my cell phone couldn’t distract me.  Plus, how cool is it that I got to tent camp for 6 nights on a super volcano?  🙂


Old Dogs Learning New Tricks

Old Dogs

The saying goes “You can’t teach an old dog new tricks”.  Certainly, all of us have found examples of this to be true.  Otherwise, this statement would not still be in use to this day.  However, what if I was able to tell you that you can teach an old dog a new trick?  In our industry, plenty of companies, as they age and mature, become satisfied with maintaining the status quo and forget to push innovation.  The storage industry is ripe with well-established companies that continue to dominate their market share.  Odds are, you probably have one of their products in your data center.

That being said, what if I were to tell you that there’s a transformation going on within one of those well-established storage companies?  If I threw out terms like “OpenStack” or “Kubernetes”, I would bet that the first storage company you thought of was going to be a Valley startup.  In this case, you would be wrong.  That company, strangely enough, is NetApp.  Feel free to take a moment to work that Keanu Reeves-esque “Whoa!” look off your face.  Honestly, I was pretty shocked as well, considering that, until recently, I had never personally worked with anything in the NetApp portfolio.  This changed when I became a customer of SolidFire (which was acquired by NetApp back in late 2015).  I’m not going to go into specifics of the SolidFire platform in this post.  That would distract from the message about the internal metamorphosis going on within NetApp.  Just know that, in my belief, SolidFire was acquired to continue molding this internal transformation.

The Road to Change

Change is hard, especially for the well established.  You slowly become the embodiment of “That’s how we’ve always done it.”  However, something had to be done within NetApp.  The overall technology industry was asking for more from all of its vendors.  No longer could we get by with a hardware-dominated solution and very little software to interface with that hardware.  Storage administration had turned into an overcomplicated mess with little being done to resolve it.  Full-time administrators needed to be devoted to these hardware masses, just to be able to perform the equivalent of keeping the lights on.

Just when those administrators thought they could take a breather, along came an organizational shift.  I’m not talking about a simple organizational chart realignment wheel spin.  DevOps came along.  It forced conversations about the way we approach our IT departments.  No longer could IT departments be content with protecting their individual silos or technical fiefdoms.  IT departments had to start aligning with the business goals of the overall organization.  Contentment with just keeping your realm operational was no longer going to be enough to satisfy those outside of the IT department.  We’ve dubbed this “better business outcomes”.

To help enable these changes within the IT department, the people within it had to start looking towards automation capabilities.  They needed to capture efficient processes and encode them into technical systems, so they could better deliver the components necessary to achieve better business results.  This meant those in the data center needed to start looking to their partners for help in delivering software that would help them automate and orchestrate their data centers.

What I heard during the Tech Field Day 14 presentation was a story of a large company trying to help its customers along that very journey.  With better software development, NetApp has been able to make headway into products that you wouldn’t normally associate with NetApp.  Better API capabilities within their ONTAP software have opened up integrations with systems like OpenStack, Puppet, Chef, Ansible, PowerShell, Docker, Jenkins, and even Kubernetes (in the form of Trident).
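To make that API-first idea a little more concrete, here’s a minimal sketch of what scripting storage provisioning against a REST-style management API can look like.  This is purely illustrative: the host name, endpoint path, and field names below are my own assumptions modeled on ONTAP’s REST conventions, not anything shown in the presentation.

```python
import json

# Hypothetical cluster management address -- replace with your own.
ONTAP_HOST = "cluster.example.com"

def build_volume_request(name, svm, size_bytes):
    """Build the JSON body for a hypothetical volume-create call.

    Keeping payload construction separate from the HTTP call makes the
    logic easy to test and reuse from tools like Ansible or Jenkins.
    """
    return {
        "name": name,
        "svm": {"name": svm},   # the storage VM that will own the volume
        "size": size_bytes,
    }

def provision_volume(session, name, svm, size_bytes):
    """POST the volume definition; `session` is a requests.Session
    with authentication already configured by the caller."""
    url = f"https://{ONTAP_HOST}/api/storage/volumes"
    body = build_volume_request(name, svm, size_bytes)
    return session.post(url, data=json.dumps(body))

if __name__ == "__main__":
    # Show the payload a 100 GiB volume request would carry.
    print(json.dumps(build_volume_request("dev_vol01", "svm1", 100 * 2**30)))
```

The point isn’t this particular snippet; it’s that once the array exposes a clean API, the same few lines slot into whatever automation framework the rest of the organization already uses.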

Final Thoughts

I will admit that it was extremely refreshing to listen to a Tech Field Day (specifically, Tech Field Day 14) presentation from a storage vendor and have only one mention of the underlying storage architecture.  As someone who primarily spends most of his day job hours busting silos and getting a technology organization to try to see that what we provide is greater than the sum of ports, spindles, virtual machines, and blades, I was pleased with the messaging during this presentation.  The goal is easier accessibility by those that aren’t traditionally data center specific personnel.  I really believe that NetApp is on the right path in being able to do so.
