Amplify: The Story of My 2017

End-of-the-year recaps are about as clichéd as they come. However, I do feel it's important to see where we start the year and how we end it. Needless to say, 2017 has been a rather interesting year. When I recapped 2016, I talked about establishing my voice. 2017 was about amplifying my voice. From a personal perspective, I know I'm not the brightest individual out there in the technical communities that I frequent. However, I don't feel that's an absolute requirement to have a voice and use it. 2017 can be summed up in a couple of words: community presenter.

3584

For those playing along, no, the header to this paragraph is not the product of a sneeze while typing. Instead, it's the sum of the mileage I drove to present at multiple VMUG UserCons this year. Three thousand, five hundred eighty-four miles from my residence to the locations where I presented. Granted, it was only a total of five UserCons, but the road trips with friends and the thrill of honing my skills as a presenter were well worth the experience.

Unfortunately, everyone has that one event they wish they could do a little differently. I try my hardest to ensure I don't do a horrible job with my presentations. However, sometimes they get away from you. So, this special session is for those that saw me at the St. Louis VMUG UserCon in March. I tried to pare down a 50-minute presentation deck to 15 minutes and it did not go well. For that, I will forever feel I owe the St. Louis VMUG leaders a do-over.

Outside of St. Louis, I presented in Minneapolis, Kansas City (my own backyard, essentially), Indianapolis, and Cincinnati.  I was involved in plenty of discussions about other UserCons, however, they didn’t seem to work out in the end.

One of the biggest compliments I got on my presentation (which was a return to the IT Culture and DevOps presentation I did for vBrownBag's Commitmas 2016) was that with a little TLC to the slide deck, I might have a keynote session on my hands. To be honest, I was perfectly content getting a community slot, but there's something to be said for pursuing something a little higher than where your skills are right now.

My Favorite Podcasts

Podcasts have become a very powerful medium in our industry and within our technical communities. It was an awesome experience to be able to participate in some well-established podcasts as a featured guest. While I might have laid a clunker on the presentation in St. Louis, Keith Townsend resurrected his dormant "The CTO Advisor" podcast, where I was able to talk about DevOps as a cultural movement for 15 minutes. Later in the year, at the Indianapolis VMUG UserCon, Keith, Mark May, Tim Smith, and I recorded part of an episode chatting about the power of community in our day-to-day technical lives.

One of the more interesting podcasts I got to be a part of happened on the exhibit floor in Orlando at Microsoft Ignite 2017. Clint Wyckoff had me on as a guest for the Veeam Community Podcast, where we talked about Microsoft Azure Stack, DevOps, and many other topics. Clint and I have run into each other at plenty of technical events; however, this was the first time we were able to actually record anything. It was nice to be able to talk about things well outside of the VMware-based wheelhouse we knew each other from.

Lastly, my crown jewel appearance was as a guest on the Packet Pushers podcast. Ethan and Greg are better known for having networkers as guests, but this time, my chosen topic of IT culture ended up being good enough for a full episode. Sprinkle in a little bit about paralleling Nietzsche with ITIL, and we had ourselves a fantastic discussion that probably could have gone on for multiple episodes. I can't thank those gentlemen enough for allowing me to be a guest.

The Moment of 2017

In my 2016 recap post, I went on to tell a story of how I brought some good ol’ Kentucky bourbon to VMworld 2016 and had a fun couple of hours just talking with Josh Atwell.  It wasn’t necessarily technical, which is why it stood out the most.  For 2017, I have another good story that doesn’t deviate into the technical realm either.

To set the stage, I traveled to Boston in May 2017. This was a combination week, as I was going to be attending OpenStack Summit (my first time) and Tech Field Day 14. The week turned into a rather hefty logistical nightmare, as I had to get my pass for OpenStack Summit well in advance, along with setting up lodging outside of what the Field Day family sets up for their event. I scheduled an Airbnb stay about 15 minutes from the convention center where OpenStack Summit was going to be held. Unfortunately, as Stephen (Foskett) will be able to tell you, Tech Field Day 14 had some issues with presenting companies.

Originally, Tech Field Day 14 was supposed to be three days, with upwards of seven to eight different companies presenting to the delegate panel. As the event drew closer, the schedule changed drastically: the event was whittled down to four presenting companies, and one of the days was dropped from the schedule. Since I had already reserved my Airbnb before all of this happened, I was left with a night for which I had no lodging. I offered to pay for my own room for that gap night at the hotel we, the delegates, would be staying at. Stephen informed me that it was all right and that he would pick me up at the conference and take me out there, as he was already going to be doing some chatting with prospective companies for the Field Day events.

Now, all of this setup leads me to my favorite moment of 2017. As OpenStack Summit was drawing to a close, Stephen wanted to know if I would be interested in heading up to one of the restaurants at the top of the Prudential Tower. While the name of the company he wanted to talk to escapes me, the view was awesome and the food, drink, and conversations were really great. Normally, it takes me a while to warm up to these types of conversations, but it felt rather natural to discuss some technical things with the folks hosting the event.

As the evening drew to a close, Stephen wanted to stop by one of the shops in the shopping district around the Prudential Tower to buy something to discount parking, which led us to a sportswear store. Inside, Stephen, being a huge Boston Red Sox fan, chose to pick up a Red Sox shirt or two. This led to a rather lengthy discussion about baseball.

You see, baseball is a passion for me. I've been a lifelong Chicago Cubs fan. I remember the days before Wrigley Field got lights, when all the afternoon games were available to watch on a local channel where I grew up. At the same time, currently residing in Kansas City, I can easily spend an evening at Kauffman Stadium. As long as I live, I will always remember October of 2014 (the Royals lost in Game 7 of the World Series to the San Francisco Giants), 2015 (the Royals returned to the World Series, this time winning it in five games), and 2016 (the Cubs finally ended 108 years of futility and won the World Series).

On the drive from downtown Boston out to the hotel in Waltham, we talked each other's ears off about baseball. Stephen pointed out how good the Red Sox looked. I talked about whether they had a bullpen that would be able to get them over the hump and into the World Series again. We talked about rookies with hot starts. We even chatted about the various experiences we had with our teams getting to, and even winning, the World Series. It was a great evening of talking with a fellow baseball fan. It's something I'm going to remember for quite some time, even if I end up jinxing the Red Sox, Cubs, or Royals out of winning another World Series in either of our lifetimes.

The Next Steps

The next steps are going to be rather interesting ones. I'm currently at a multi-path crossroads that feels like it's going in hundreds of different directions. I've started to get involved in more cloud-native efforts, especially in the Microsoft Azure ecosystem (public and Azure Stack). I've gotten on radars for more community programs (like the Microsoft MVP program, although I still haven't been able to focus on pursuing it like I'd like to). I've been given opportunities to jumpstart user groups for Microsoft Azure in my local area. I'm even having discussions with VMUG people about putting in another presentation to make the rounds at UserCons in 2018.

The main point is that it seems I'm rather content to keep doing what I'm doing. Keep interacting with the technical community. Keep branching out to those I can call friends and having excellent conversations about tech, life, and everything. At this rate, I might be able to cut and paste this post for 2018!

So, here’s to 2017!  While I may have never met many of you directly, you helped shape my last 12 months.  I can only hope that I’m able to return the favor and help you shape your next 12 months in the same fashion!


Three Up/Three Down: HPE Tech Day Edition

In early October, I was approached by Calvin Zito (@CalvinZito on Twitter) about attending an HPE Tech Day in mid-October. He mentioned that this one was going to have a storage focus (and anyone who reads my blogs or follows me on social media knows storage isn't my forte). I mentioned this to Calvin, but he felt that maybe I could provide some insight from a different perspective than those coming at this strictly from the ins and outs of storage specifics. Also, even though storage isn't my forte, I do believe that gaining knowledge in different technical disciplines is a good thing.

During the event, many different HPE personnel went through large parts of the HPE storage portfolio and left themselves open to many questions (along with providing many answers). Highlighted below is my first attempt at giving a classic "Three Up/Three Down" spin to this event. The idea is that there are three things that got me excited (three up) and three things that did not excite me or gave me cause for concern (three down). Without further ado…

Three Up

First, we'll start off with one of the major impressions I got from this event. Multiple times, it was mentioned how well the acquisitions of both Nimble and SimpliVity were going within HPE. Looking at HPE's history, there had been a pattern of acquisitions that never really panned out. While no one ever expects every acquisition to be perfectly integrated into existing product portfolios, there had been some pretty notable failures on HPE's part. What was starting to show was that integration of new parts into the HPE ecosystem was getting better. Along with Nimble and SimpliVity, multiple HPE personnel stated that things seemed to take a turn towards the positive with the way HPE treated the 3PAR acquisition. With the roadmaps being given during the Tech Day event, it certainly did show that HPE has learned a lesson overall and has done a pretty good job of working Nimble and SimpliVity into its family of products.

Secondly, let us discuss what HPE is doing with its newest analytics toy from the Nimble acquisition, InfoSight. Personally, I believe Nimble's best IP asset wasn't its storage array, but InfoSight. Whatever HPE paid for Nimble, I felt InfoSight was worth the price alone. For those unaware, InfoSight was/is Nimble's analytics engine. This fact-finding tool has some pretty impressive stats behind it in regard to diagnosing and responding to issues before they become major problems for administrators. However, now that HPE has this tool at its disposal, the arduous task of integrating other HPE products begins. We will begin to see exactly how modular Nimble made the framework as HPE attempts to add other products to this engine. The first product being run through this gauntlet is the 3PAR system.

Lastly, another Nimble technology, Cloud Volumes, appears to be gaining a lot of steam for HPE in the cloud storage realm. While not necessarily sitting inside the major cloud providers' data centers, HPE promises low latency to cloud providers like AWS and Azure to help augment the storage needs of workloads that native cloud storage platforms like Azure Blob or AWS S3 cannot meet. While we could go off onto many debates on whether block storage should be the storage type you want to use in cloud-native approaches, there are some nice benefits, like avoiding expensive data egress charges, to using Cloud Volumes.

Three Down

This is going to be a moment-in-time nitpick. During a few of the demos, we got to see many different HPE management endpoints (not all of them related to the storage portfolio). There was a mention that the amount of GUI interaction between multiple products was going to be reduced over time. However, I have to slap the dreaded award of "Too Many Nerd Knobs" on the HPE management software suites. As someone who has to spend much of their time making multiple endpoints work together in larger system workflows, the fact that so many endpoints were needed to make the entire package hum along felt exceedingly unnecessary. I hope to see the number of administration components drastically reduced in the next 12 months.

It's no secret to anyone who follows me regularly that I've always been a "friend of the program" when it came to SimpliVity. I even got a semi-public acknowledgement for helping SimpliVity start to realize what they had available when one of their major releases put forth a public REST API. While the "community version" of the PowerShell module showing the power of their API never made it to public consumption, it seems many of the lessons SimpliVity learned during that process have helped craft many of the interoperability messages since then. Another slight HPE nitpick here is that HPE seems to be having a problem applying what SimpliVity brings to the table to a more classic Venn-diagram product portfolio. There are so many things that platform can do that it does it a disservice to discuss it as, and shoehorn it into, a single-function platform. As HPE starts to realize what it has in SimpliVity and crafts better integration into its product portfolio, I'm sure this nitpick will resolve with time.

Lastly, I was going to pick on HPE's container messaging; however, I'm starting to realize that it's not a uniquely HPE message. At roughly the same time as this HPE Tech Day, DockerCon EU was happening. I've heard from a few people who attended that there's been an overall shift in messaging about containers and how to approach their usage, especially in the early stages of potential application refactoring towards cloud-native approaches. What we are now hearing is a prevailing recommendation that we take applications, as is, and just wrap them in container packaging. I've had many discussions since then, and while I see there might be some tangible benefits to this approach, I still feel that simply lifting an application from a virtual machine into a container isn't exactly the intended use case for driving container adoption. The application itself didn't change; all that happened was the runtime shifted from a hypervisor to a container runtime. I'm going to completely disagree that this is a valid approach to container adoption, mostly because it feels like a complete lapse in application architectural principles. So many questions need to be answered about the viability of that application before changing its runtime. Let's actually make some sense in our technology choices, rather than trying to justify changing them for technology's (and only technology's) sake.

Final Thoughts

There you have it. I think HPE is making some strides away from some of its recent history and is looking to rebound. I believe the acquisitions of Nimble and SimpliVity are going to be a great benefit to HPE long term. I hope to see even more positives in the upcoming calendar year.

For those interested, the HPE Tech Day event is available on YouTube at the following link:  YouTube – HPE Tech Day

Disclaimer:  I was invited, on behalf of HPE, to attend this event.  All my expenses, including transportation, food, and lodging were covered by HPE.  HPE never offered, nor did I expect to receive any compensation for writing this post, nor was I required in any fashion to write this post.  I wrote this because I was genuinely excited (and not so excited) about developments with the HPE storage portfolio.


Different Strokes For Different Folks: Hybrid Cloud Edition

No doubt you've heard plenty about the concept of hybrid cloud (or even hybrid IT) recently. Personally, my Twitter timeline has been rampant with opinion pieces (some good, some bad) and the usual array of bickering between the various factions as they do all they can to represent their brands. My intention with this post is not to inflame those factions, but invariably, I'm sure I'll have to defend my comments. What I want to do is get down to the reasons why hybrid has become a rather hot topic and break down the various approaches that are now coming to market.

Why Hybrid (or even Multi-Cloud, for that matter)?

Well, first and foremost, if you like protecting internal silos and continuing the rationale that, as an infrastructure person, it’s your way or the highway when it comes to IT, you’ve missed some awfully big memos recently.  Not only that, YOU are part of the problem.  Most of these hybrid use cases stem from ineffective IT departments.

Let me explain. It's currently late October 2017. Some of you are still provisioning internal systems like it's 1999. I'll cut a little slack that there might actually be some shred of a business use case; however, I'd bet that most of it stems from the fear that somehow, as an infrastructure person, you aren't going to be important in this new world. So, rather than get on the wave of progress, you've chosen to entrench yourself in ancient IT methodologies and do your best to scatter as much FUD internally about anything that isn't what you know.

This has led to extreme dissatisfaction between businesses and their IT departments. The business, usually in the form of some sort of application development group, decides to circumvent the process and goes rogue. Public cloud is leveraged, usually without the approval of IT, and those groups are finally given tools that can keep up with their pace. Whether you want to admit it or not, you and your IT department are now in direct competition with the likes of AWS and Microsoft Azure. And many of you are failing miserably.

Latency

So, your application developers went out and tried public cloud, and while they liked the overall interaction capabilities, the realization was that they still needed access to on-premises resources. Unfortunately, line rates and long distances caught up to the application, and latency reared its ugly head. No matter what was tried, whether it was faster connectivity with AWS Direct Connect or Microsoft ExpressRoute, it wasn't able to solve the latency problem. The need was to put the legacy components necessary for new application development in much closer proximity to the new breed of applications being developed.

Enter the Solutions

At this point, this is where the solutions deviate. At their core, many address the latency problem but differ in which direction workloads are shifted. For instance, VMware Cloud on AWS wants you to move your legacy workloads (not just lift and shift, but due to the need to refactor applications around them) out of your datacenter and into an AWS datacenter. Again, this will likely solve the latency issue; however, I still don't feel this solves any sort of regulatory or sovereignty issues related to that data. The focus of this blog post isn't those issues, so I'll refrain from deep diving into them. Just know that I consider latency AND sovereignty to be a couple of the major barriers to public cloud adoption.

That leads us to Microsoft's solution, Azure Stack. Unlike the VMware offering, Microsoft wants to address this problem in your datacenter, making the Azure extension feel much more like yet another private cloud service offering. Selling Azure Stack as just a private cloud solution, unfortunately, sells the offering well short of its intended mark. There are some advantages to Azure Stack that may not be available in the VMware offering. For instance, by being able to take advantage of everything in your datacenter, non-virtualized workloads can remain accessible as legacy entities (my primary example would be something similar to an AS/400 system, where the processor type prohibits creation of a virtual machine on an x86-based platform). I will admit, however, that this example is a very limited "advantage", but in many enterprises, it does exist.

Thoughts

Whether or not the two camps (plus countless others that will crop up… cue the recent Google Cloud Platform and Cisco announcement) want to admit it, they are both trying to address the same problem. Latency is a public cloud adoption killer. The major differences come down to what sort of legacy workloads you have running on-premises and what plans your organization has for any cloud-native application refactoring. Sprinkle in whether you believe your investments into certain legacy infrastructure tool sets are worth continuing (VMware really wants you to know you don't have to get rid of your VMware admins, as an example, because someone still has to operate a vCenter instance) and you've got a recipe for making a major decision.

If anything, what these moves have proven is that prior statements by some of the major public cloud providers were, in fact, categorically false. No, you can't run everything in the public cloud, especially if you've accumulated years of IT baggage (i.e., any enterprise). Not every IT organization is capable of being a stable for unicorns, nor should it be. Welcome to the world of hybrid (or multi-cloud), folks.


Becoming a Leader

Leadership.  We hear this term thrown around but I think few people actually know what it means to be a leader, let alone an effective leader.  Personally, I scoffed at the idea that I would ever be a “leader”.  I was perfectly content being in the background and being a good worker bee.  Then incidents happened during the early days of my professional development that I now know forged the beginnings of what I believe to be the leadership genes I have today.  So, let’s fire up the way back machine and describe some of these instances and how they came to make me into who I am today.

Back to College

Ah, college. That wonderful time where, hopefully, you get away from what you know and go out and start to discover how the world actually operates. While I was in college, I always had work-study programs as part of my tuition package (the sort of thing you get when you have zero parental contribution to your higher education bills). I originally applied to work as a lab attendant in the fine computer centers on the campus of the University of Northern Iowa. Unfortunately, I heard nothing back from the people doing the hiring, and for my first semester as a college student, I held down two part-time jobs. One was as a worker in the dining center nearest to my dormitory. The second was as a glorified shop clerk in my dormitory. Neither was that intellectually stimulating, but I was able to meet new people and get lots of studying done (as the shop clerk). This changed when a gentleman on my dorm floor, Brent, recommended me to work on his team within the university's ITS (Information Technology Services) group. He was a student technician who went around to various university departments and the computer centers, fixing software or hardware issues as they were reported to the call center. Pretty standard entry-level IT work, for what it's worth.

Now, I worked there from that point on until graduation. I developed a lot of skills that eventually led to a consulting position post-graduation. However, there was a summer where different skills started to materialize. My supervisor had to miss most of that summer. She was recovering from major surgery and was not expected back until well after the fall semester started. During that time, we had a temporary "manager", but while they were a help from a higher-level administration perspective, they did not necessarily lead the group in the same way that my ailing supervisor did. During the first few weeks, we found ourselves floundering around and the queue of work was growing at an exponential rate. I remember that we had upwards of 120 workstation replacement tickets that had come in, as an entire department was finally able to obtain enough budget for this project. This project also came with new challenges, as we were migrating away from a Novell-heavy core to a Microsoft-heavy authentication core. This meant that instead of Windows 95/98, we were now having to deal with the animal known as Windows NT 4.0. We had very little experience and had to get up to speed quickly.

Enter the opportunity. While the temporary supervisor was busy with her own challenges on the back end, I decided it was time to step up as the most tenured individual on the team. With much reluctance, I organized a few internal training sessions on Windows NT 4.0 and started to better delegate the work to the right individuals. I already knew who was liked by various departments and who would be best suited to spend 3-4 hours working in those locations between shifts. I found myself actually leading my team in ways I didn't realize I had within me. By the time the middle of the summer came around, we had heavily reduced our backlog of outstanding tickets, to the point where we achieved Ticket Inbox Zero for a brief time before the start of the fall semester. I received a lot of kudos from not only my absent supervisor, but also the temporary supervisor and many in the department. I was also given a raise that made me one of the highest paid students on campus, second to the gentlemen who had overhauled many of the computer labs on campus and got them working in much better order.

However, that new semester came along and the department hired two new full-time employees. My newfound leadership powers were stunted when one of those individuals had a meeting with me and informed me that he didn't appreciate that I held so much clout within the group, and asked that I back away from many of the duties that were now just coming naturally. To play the peacemaker, I did so. I held some animosity towards that individual (we all do when we are asked to relinquish positional powers we earned), but we were amicable towards each other for that semester. I graduated shortly afterwards and moved on to post-college life. I also did not flex any leadership capabilities for quite some time.

Along Comes a Video Game

Strangely enough, it took a video game to get me back into a leadership role again. Back in 2004, World of Warcraft hit the PC gaming world. This game had a ton of player interaction, and eventually you worked your character up to what is dubbed the "end game". This involved teamwork between many people (some of these end-game raids needed either 20 or 40 people to complete). Many players had certain roles, and all that was required was some preparation and execution during the fights in the raid locations.

As one of those roles, I played what was called a "tank". This type of character is effectively the guy who gets beat on the entire time by these major raid bosses. The tank controls the pace at which damage can be done to the raid boss without the boss turning its attention towards the person doing that damage (who typically could not take a hit without major risk of in-battle death). This role required a lot of skill balancing threat generation (dubbed "aggro") and damage mitigation, all while keeping up with the ever-changing battle landscape (moving the raid boss out of areas that are very harmful to the overall party, as an example).

So, why did a video game help with leadership skills? After many early failures in execution and preparation, I started to voice dissatisfaction during some of our initial raids. Now, it's easy to point out what's wrong. What separated my diatribes from others was that I offered to help fix those problems. On top of all the things that were asked of me during the in-game battles, I also organized and helped configure the parties so that we could better prepare for and survive the encounters. In other arenas of the game, I helped some underperforming individuals get better by doing trial runs with them for practice. I also called out battle plans mid-fight when things needed to be reacted to. In essence, it was like being a brigade leader in a military branch.

Scoff at the references to video games, but many people have learned a lot by organizing a guild within some of these games.  It’s almost on the job training without the ramification of losing a source of income (well, unless you never went to work because you were playing World of Warcraft 24/7).

Final Thoughts

Leadership genes can be born and bred in a vast majority of places. You could be a teenager working in a fast food joint who becomes a shift manager. You can play a silly video game in which you fight non-existent monsters and where the spoils of beating those monsters don't matter to those outside of the game. You can be thrust into an unfamiliar spot, like when someone on a team leaves suddenly and you are now tasked to fill their role and lead by example for the rest of the team. You don't necessarily have to be predestined for this role. You can learn these skills. These skills make you a better teammate and usually land you on the fast track for promotions or even springboard opportunities external to where you might currently be employed. The point is they can come from anywhere and they can help define a better you. I never thought I'd be thanking Blizzard and World of Warcraft for where I might be today, but I also have an excellent amount of respect (both ways, not just towards me) with my teammates, and they know I can help lead them past the challenges that are presented to them day-to-day. I challenge you to find your leadership genes and help make your teams and organizations better. Remember that inspiration can come from anywhere; even in leading 40 people you know only in a video game to beating evil black dragons. 😉


The Golden Rule

Personally, I’ve never claimed to ever be a religious man.  However, as a child, I did enough Sunday School activities to remember Matthew 7:12. “So in everything, do to others what you would have them do to you, for this sums up the Law and the Prophets.”  The Golden Rule, as it’s now called, is something we’ve all learned in some fashion while growing up.  In an attempt to make us more compassionate adults, we were subjected to this rule in many fashions, whether that be in terms of sharing or just helping others in crisis or need.

Why do I bring up this core principle? Well, I'm going to get on a bit of a soapbox, especially about the tech industry as a whole. For the longest time, we in the information technology sector have been brought up on the idea that "he with the most information, rules". Egos made way for those who, as a measure of job security, hoarded key information and lorded it over those they called coworkers. Political posturing was and remains rampant in many of our office environments. However, in recent years, this has started to change. Cultural movements have started to take root in which the ideals of knowledge sharing and organizational learning reign.

We always see flare ups, especially on social media, about the perceived wrongs individuals and their thoughts have done to other individuals and the brands they represent.  Not a day goes by that there isn’t some sort of ember that flares up into something more than it really needs to be.  The rampant egos involved just make things worse.  I get it.  You want to protect your brand and you want to show the world how much you know.  It’s inherent that you would do that, especially since you represent a brand and you are trying to sell something for that brand.  Pretty much Tech Marketing 101.  What I have a major problem with is when it goes too far.

I always hate having to mute/block someone on my Twitter timeline. I like to believe I give individuals a fair shake when it comes to the thoughts they manage to post in 140 characters (or 280 for those with early access). However, when the discord spills into personal attacks, that's when the mute/block button gets used. There's little value in a conversation that turns into a personal attack. It's really a shame to see some very smart people, driven by their overly inflated egos, resort to tearing down individuals with opposing viewpoints.

This industry is too full of people like I just described. The good news is that their day may be drawing to a close. As mentioned before, there are cultural movements happening within organizations that are promoting knowledge sharing and organizational learning as core principles. Within those organizations, the idea of hoarding knowledge over someone else for political gain is a thing of the past. Individuals are judged against their team's metrics, and this means that every member of that team has to be helping to make everyone on the team better. I'm reminded of a tweet that I saw from Chris Wahl (I don't know if he was the exact author, as I don't see a reference to the quote): "Sharing your knowledge doesn't put your job at risk; it empowers your team to perform at a higher level. Iron sharpens iron."

Back to the Golden Rule. Ask yourself if you've ever helped a coworker with a task and transferred knowledge to them to make them better. Do you continue to transfer knowledge down to others on your team? Conversely, do you take knowledge and hold onto it like it's the secret Coca-Cola formula? If you do, I ask whether you want your career to go anywhere. You are actually doing a disservice to your career in an effort to make yourself feel relevant. Reciprocity. You get what you give. The Golden Rule. Amazing how career advancement could come from such a simple concept. Now, drop your ego and go make your teams better.


Just Show Up

I didn't always start out in this industry as a "community" guy. Actually, even when I was a student at the University of Northern Iowa, I really didn't participate much in any of the get-togethers or "club" scene within the Computer Science department. Most of my interaction was always done outside of the building where that department was housed. After years of isolation and wanting to feel more involved with the technology scene, I felt I needed to make a push external to my employer. At the time, this was rather radical, due to the fact that external technical community involvement was rather frowned upon. The ego-driven nature of many departments and the general push towards self-reliance started to cause a rift to form between me and my direct management, and even with some of my team members. I remember certain times when even thinking about attending a regular local VMUG meeting was a struggle, despite having my immediate tasks complete to ensure I had the time to attend. It wasn't long after I finally started forcing conversations about it that I agreed to present for the first time at one of these events. As you would imagine, I caught the bug for community involvement, and it wasn't much longer after that I received my first vExpert award. I left that position shortly afterwards too, as these sorts of things were still points of contention.

Fast forward a few years and I finally had an employment structure that not only encouraged involvement, but also felt it was a net positive, not just to me personally, but to the business I worked with. In the right hands, influence can be a powerful thing, not just in the technical communities we are involved in, but with the places we may do business with. To some, we offer a level of credibility that can be used to many advantages. I'm never going to say that my existence with my current employer was the tipping point for any sort of business deal, but I can't help but think that building a good reputation certainly can't be used against you.

So, what happened between then and now? I stated I got involved, but what specifically did I do between leaving my previous role and what I do now that is so drastically different that I've been able to accelerate my career in ways I never would have thought possible five years ago? After many moments of reflection, I remembered a conversation I had recently with a very influential gentleman (and I'm not just talking about his bourbon ways either). You may know the gingered one, Mr. Josh Atwell, from his popular webinars and presentations related to all things DevOps, and from still bringing up some coding components from his memory banks from long ago, when we used to try to beat Luc Dekens to all the questions in the VMware PowerCLI community forums. Back in July, the both of us were in Indianapolis for the Indianapolis VMUG UserCon. I had given a presentation on DevOps and IT culture earlier in the day and Josh was about to give his closing keynote. I don't recall the exact quote, but we were talking about how things were going and how things were starting to accelerate rapidly with my public persona (as well as my corporate/private persona with my current employer), and I was told (again, butchered and horribly paraphrased), "You are doing what you need to be doing.  Right now, that's just showing up to events."

So, you want a good piece of advice for your career and how community involvement can take it to a whole new level? Just show up. You may not be ready to actually get your hands into some of the more involved pieces of community work, but sometimes the first step is just attending user group meetings or meetups in your neck of the woods. Sooner or later, you start to get recognized, just due to physical proximity. Who knows, maybe you get inspired to challenge your fear of public speaking by offering to do a small presentation for one of those user groups or meetups? Perhaps a spark from a conversation topic prompts you to open up a WordPress account and start a blog? Even better, maybe you continue down the tracks in multiple technologies and get recognized by their influencer programs? You could move on to being invited to speak at larger and larger events. By chance, maybe you get a Twitter follow from Stephen Foskett and end up on a delegate panel for a Tech Field Day event. Like a great many things, before you can experience steps two through infinity, you need to start with step one.

So, show up.  That’s my advice, even if I’m stealing it from Josh.  Show up and who knows what could happen.  Honestly, the worst thing that could happen is that we may buy each other a pour of Blanton’s and wax philosophical about DevOps for a while.  What you do after that, I leave that to you.  🙂


An Azure Stack Primer for vSphere Folk

Over the last year, I've been involved in a journey that is changing my core competency in the technology industry. My employer, a managed service provider, has been working with Microsoft in the Early Adopter Initiative for their hybrid cloud space. Azure Stack is the name it goes by, and what I want to do is educate on what exactly this solution is trying to provide and what it means to those that are still in virtualization-centric shops. The goal of this isn't to go into major technical detail or incite any sort of great tit-for-tat Twitter war between factions that are perceived competitors to this product. The goal is only to offer up the basics to those that are curious about Azure Stack. Without further ado, let's get into our first point about Azure Stack.

A Virtualization Replacement Product?

One of the common fallacies you hear about Azure Stack is that its primary use case in an enterprise or service provider environment is as a replacement for a currently running virtualization platform. I mention fallacy here because Azure Stack, while powered by a virtualization technology (Microsoft Hyper-V), is so much more than just a virtualization platform.

In its messaging, Microsoft wants everyone to understand that Azure Stack is more about enabling cloud consumption models than it is about virtualization. In that sense, Azure Stack is being positioned as having the same look and feel as public Azure, in your own datacenter. The platform includes many traditional IaaS capabilities, but also many PaaS capabilities that push towards the application layer as the primary delivery model within the platform.

In fact, some of the early Microsoft messaging in the Early Adopter Initiative was focused around the concept of data sovereignty. Many industries generate data that is subject to laws and regulations as to where that data can reside. This has been a heavy barrier to public cloud adoption, and very few platforms exist to provide more robust cloud consumption models in a private fashion. Microsoft felt this was a good place to focus Azure Stack, so that many of these industries could take advantage of public cloud consumption models within the walls of their own datacenters (or within service providers in the appropriate data jurisdictions).

I Thought You Said This Was Hybrid Cloud?

Honestly, it's a mix of both public and private cloud. While Azure Stack is the private cloud implementation side, there are many integration points between Azure Stack and public Azure. Technically, there are two implementation types of Azure Stack. What distinguishes the two comes down to the identity source (Azure Active Directory [AAD] or Active Directory Federation Services [ADFS]) and the licensing model under which you need to operate.

Focusing on the identity source: if you are using AAD for authentication, you are not running in a disconnected state, and authentication refers back to public Azure. ADFS allows you to use localized authentication and will not refer to public cloud identity sources.
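To make that distinction a bit more concrete, here is a minimal PowerShell sketch of pointing the standard Azure tooling at an Azure Stack instance. This assumes the AzureRM modules of this era and the default ASDK user endpoint; the environment name, ARM endpoint URL, and tenant value are illustrative placeholders you would swap for your own deployment. Whether the stamp is AAD-connected or ADFS-based, the cmdlets are the same; only where the authentication request resolves differs.

```powershell
# Sketch only -- assumes the AzureRM modules and the default ASDK user endpoint.
# The environment name, ARM endpoint, and tenant below are illustrative placeholders.

# Register the Azure Stack ARM endpoint as a named AzureRM environment.
Add-AzureRmEnvironment -Name "AzureStackUser" `
    -ArmEndpoint "https://management.local.azurestack.external"

# AAD-connected stamp: the sign-in bounces out to public Azure AD.
Login-AzureRmAccount -EnvironmentName "AzureStackUser" -TenantId "yourtenant.onmicrosoft.com"

# ADFS (disconnected) stamp: same cmdlet, but the credential resolves locally.
# Login-AzureRmAccount -EnvironmentName "AzureStackUser" -Credential (Get-Credential)
```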

Outside of the technical definition of hybrid cloud, Azure Stack and Azure share the same toolsets for management. Consistency is the name of the game when Microsoft discusses how each is managed. Both Azure Stack and Azure use the same set of tools: their respective portal pages, PowerShell integrations, and integration into coding tools (for example, Visual Studio or Visual Studio Code). Both implementations can also be driven by Azure Resource Manager (ARM) templates. An ARM template contains all the information (in JSON format) that defines the infrastructure and configuration of the solution you wish to deploy, and the concept is used in both public Azure and Azure Stack. One caveat, however, is that when dealing with Azure Stack, the API versions will likely lag behind those of public Azure. There are tools to help drive policies that ensure whatever ARM template is created in public Azure also uses API versions that are compatible with Azure Stack.
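As a rough illustration of that caveat (the resource group name, location, and template file below are hypothetical, and this assumes you have already signed in to the Azure Stack environment registered above), you can ask the connected cloud which API versions it actually exposes for a given resource provider before deploying a template authored against public Azure, then deploy it with the exact same cmdlet you would use publicly.

```powershell
# Sketch only -- names, location, and template file are placeholders.

# Check which API versions this cloud exposes for Microsoft.Compute/virtualMachines,
# so a template written against public Azure can be verified for compatibility.
(Get-AzureRmResourceProvider -ProviderNamespace "Microsoft.Compute").ResourceTypes |
    Where-Object { $_.ResourceTypeName -eq "virtualMachines" } |
    Select-Object -ExpandProperty ApiVersions

# Deploy the ARM template exactly as you would against public Azure.
New-AzureRmResourceGroup -Name "demo-rg" -Location "local"
New-AzureRmResourceGroupDeployment -ResourceGroupName "demo-rg" -TemplateFile ".\azuredeploy.json"
```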

I Can Roll My Own Hardware?

Short answer? No. This is likely going to be a major point of contention for many of you. However, Microsoft has perfectly valid reasons for needing to control the hardware that goes into the stack. First, the sheer amount of validation needed across the multitude of drivers within Windows for the various parts of Azure Stack would mean an HCL that would take far too long to certify. Secondly, there are security requirements within the computing environment that many vendors may not yet meet across their server lines. For instance, I found out that TPM 2.0 is a requirement for Azure Stack certified equipment. During a Microsoft Ignite presentation, it was revealed that not many vendors have TPM 2.0 standard on most of their server lines. As of right now, only four vendors have equipment that can be purchased: HPE, DellEMC, Lenovo, and Cisco. Many other vendors are forthcoming.
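As a quick aside on that TPM point, here is a small, hedged sketch (not part of any official Azure Stack validation tooling, just the standard Windows WMI class) for checking what TPM specification a given Windows server actually reports, which is exactly the sort of gap that kept a lot of existing hardware off the list:

```powershell
# Sketch only -- queries the standard Win32_Tpm WMI class on a Windows host;
# requires elevation and is not part of any official Azure Stack validation tooling.
$tpm = Get-WmiObject -Namespace "root\CIMV2\Security\MicrosoftTpm" -Class Win32_Tpm
if ($tpm) {
    # SpecVersion reads like "2.0, 0, 1.38" on TPM 2.0 hardware.
    "TPM present. SpecVersion: $($tpm.SpecVersion)"
} else {
    "No TPM detected on this host."
}
```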

Major certification of networking components is also an absolute need for the platform. The storage layer in Azure Stack is powered by Storage Spaces Direct (S2D), which requires RDMA (Remote Direct Memory Access) offloads, not only at the NIC layer in the servers, but also at the switching layer. Optimizations for VXLAN on both the NIC and switching layers, for use with the Azure SDN layer for network management, were also a must.

Final Thoughts

In the scheme of things, I know there are some points of contention with this product versus what many infrastructure folks have run in the past. Not being able to choose your own hardware is one that I have seen many blog posts and opinion pieces on. However, Microsoft's marketing message is that the point of this solution isn't to operate hardware and worry about low-level nerd knobbery within top-of-rack networking equipment. The point is to hit the ground running and focus on the cloud consumption capabilities of the solution. Personally, I love the fact that I'm going to be able to run a more robust cloud solution within my data center and begin to craft more cloud-oriented solutions for customers moving forward.

Now, if you'd like to give Azure Stack a try and have some hardware lying around to pull it off (or, if you're really adventurous, want to create an Azure Stack Development Kit instance nested in public Azure), head on over to the Azure Stack Development Kit website (https://azure.microsoft.com/en-us/overview/azure-stack/development-kit/), check the hardware requirements, and sign up to download the kit.

Hopefully, I can write more to come on things outside of initial concepts of Azure Stack moving forward!  Stay tuned!
