The Curious Case of the Newly Defaulted Firmware Version (EFI Edition)

Let's set the stage. It's a few days before VMworld 2018 (US version). I'm frantically trying to get a demo working for a vBrownBag session I'm giving in a couple of days. I decided to make a last-ditch effort to show off something involving my employer's product (Cohesity CloudSpin), using Terraform to create application stacks in the cloud from a generalized Windows Server 2016 virtual machine.

Amazingly enough, everything is working. The gist is that I can take a secondary copy of this virtual machine (coming from ESXi 6.5 at the time) and convert it for use with Microsoft Azure. Then, taking that generalized virtual machine, I can create a couple of web server VMs and front them with an Azure load balancer, with a sprinkling of Network Security Group work in between my VNets. Not bad, considering I had just started working with Terraform a week or so beforehand.

Now, let's fast forward to a couple of weeks ago. In my haste, I blew away my generalized image and decided it was best to recreate it. No issue, I thought to myself. I'll just get a new Windows Server 2016 image up and going. My first few tests aren't going so well. I knew something was wrong since my virtual machine creation operations were shooting well past my usual time for powering on the VMs. I fire up a bit of Boot Diagnostics and notice that all I'm seeing is a black screen. Absolutely nothing is happening with the virtual machine. It's like the machine isn't even POSTing…

Forgotten Upgrade

Normally, you think back to all the changes you've made to your environment and wonder if something is different. I had actually spaced off the fact that, in an effort to stabilize my lab environment, a coworker and I had standardized storage usage across all servers in the cluster and decided it was best to finally upgrade to ESXi 6.7.

ESXi 6.7, as it would happen, brought forth Virtual Machine Hardware version 14. In the past, this has normally not caused any sort of problem for me. This time around, I found something I never expected. VM Hardware 14 brings some new changes to default behavior for some operating systems. As luck would have it, after two days of searching around, I found the answer here, on a screen very few admins tend to look at outside of some really unique use cases:

[Screenshot: VM Options > Boot Options, showing the default firmware set to EFI]

The changes to the way ESXi 6.7 can now handle Windows Server 2016 (with Secure Boot and Windows Virtualization Based Security) have led to the default firmware setting being changed to EFI. For most people, this isn't going to be seen as any sort of issue. However, if you do anything with public clouds (for instance, the aforementioned conversion of a VMware-based virtual machine to VHD, the disk format Azure requires for its VM objects), then you are going to realize this is a big problem. Most of the major cloud providers specifically call out EFI firmware as something they aren't willing to handle right now.

What this means is that if you do anything related to conversion from a VMware environment to a public cloud environment (whether that's using native tools from Microsoft or Amazon or using third-party conversion tools [such as Cohesity's CloudSpin]), you are going to have a problem on your hands if you don't address this default setting. The good news is that the option can be changed back (as this is just the default behavior, not forced behavior).
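
If you want to find which existing VMs are already configured for EFI before they become a conversion headache, a quick audit helps. Here is a minimal PowerCLI sketch (not from the original post, and assuming an existing Connect-VIServer session) that reads the same Firmware property the reconfiguration snippet below manipulates:

# List every VM whose firmware is currently set to EFI
Get-VM | Where-Object { $_.ExtensionData.Config.Firmware -eq "efi" } |
    Select-Object Name, @{ Name = "Firmware"; Expression = { $_.ExtensionData.Config.Firmware } }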

Choices

As I see it, you really have two choices in this matter, at least until the public clouds can start handling virtual machines with EFI firmware. Your first option is to build any net new virtual machines with Hardware Version 14, but programmatically/manually change the Firmware setting from EFI to BIOS. You just have to remember to change it in the VM Options screen (if manual) or run some PowerCLI (substitute your own VM name):

# Placeholder VM name; substitute the VM you need to flip back to BIOS
$vm = Get-VM -Name "MyVM"
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
# Valid values for the Firmware property are "bios" and "efi"
$spec.Firmware = "bios"
$task = $vm.ExtensionData.ReconfigVM_Task($spec)

Your second option would be to just keep building virtual machines with Hardware Version 13. Unless you specifically need features like Secure Boot and Windows Virtualization Based Security in Windows Server 2016 (or need Red Hat Enterprise Linux 8, which appears as a supported OS with ESXi 6.7 and also defaults to EFI firmware), Hardware Version 13 will still work.
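
If you create those VMs with PowerCLI, a minimal sketch of the second option follows (recent PowerCLI releases expose a -HardwareVersion parameter on New-VM; the VM name, host, and datastore below are placeholders):

# Create a new VM pinned to Hardware Version 13 so the firmware defaults to BIOS
New-VM -Name "Win2016-Template" -VMHost "esxi01.lab.local" -Datastore "datastore1" `
    -GuestId "windows9Server64Guest" -HardwareVersion "vmx-13"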

Final Thoughts

When I posted this original inquiry on social media, I received a lot of feedback, although some of it felt very much like this wasn't an issue for VMware, but more of an issue for the public clouds. I get that VMware is within their rights to change the defaults how they see fit. It's their product. Let them decide. I just wish there was more notice out there for some in the admin community. While many of those likely to read this blog consider themselves connected to the virtualization community, there are many who are not, and many of them don't delve into advanced setting properties like this. In some of those cases, depended-upon services, such as Azure Site Recovery, are now potentially rendered useless.

I get that public cloud providers need to start adopting more modern standards. I was actually surprised to read documentation from both Microsoft Azure and Amazon Web Services stating their disdain for all things EFI firmware. The many public cloud providers out there really need to get on the ball with these sorts of things. It's not like EFI firmware is brand new.

That being said, I want to at least put this word of caution out there: in subsequent VMware releases, pay attention to the changes happening in the more advanced options for many operations. You never know when something like your public cloud backup strategy gets rendered useless…


Flipping The Script – From Delegate to Presenter

Last Friday, I got to have an out-of-body experience. For the past two and a half years, I've attended Tech Field Day events as a delegate. Plenty of delegates have blogged about their experiences, and I highly suggest you take some time to read them. What I'm talking about here is coming to one of these events from the other side of the equation. You can't have delegates talking about company products without companies willing to present to the audience. Last Friday, I got to present my company's cloud portfolio to the Cloud Field Day 4 delegates.

A Whole New Perspective

I'm pretty sure that somewhere in his vast amount of information on delegates past, present, and future, Stephen (Foskett) keeps some statistics about the groups that he puts together. I know that there have been plenty of delegates who eventually made the jump from delegate to a role with a vendor company. What I think might be rarer air is a delegate who makes the jump to a company that then presents at one of his events. While not "unicorn in the wild" impossible, the numbers are pretty low, which is why I felt this was a very good time to write up something about the experience.

The Gestalt crew does a great job of preparing the presenting companies. They've seen plenty of good (and plenty of bad) formulas for everything from the content, to the presentation style, to the way interactions occur with the delegates. Now, I was not really invited to many of those sessions. Others handled those sessions internally, and I was kept mostly to making sure we put together a good story for two hours of content.

Grab the Pepto

To be fair, I have only been with Cohesity for roughly two months. During that time, I've kept my public interactions to social media outlets. This was the first event where I really put my face with the Cohesity products. Having been to many of these events, I wanted to craft something that set the foundation for the rest of the product set we wanted to demo (and demonstration-heavy was the name of the game).

The Monday before the event, Stephen and Tom both swung by the Cohesity HQ in San Jose. They wanted to make a site visit beforehand to make sure we had physical logistics set and ready to go. This is also one of the last times a presenting company really has a chance to make sure they are on the right track for making a good physical impression on the delegates in the room. As the same conference room had been used for a prior Field Day event (Storage Field Day 15), many of the issues found when using the room for the first time had already been rectified. Honestly, we spent most of the time discussing with the other marketing folks in the room the entire Field Day delegate selection process and why it's always important to be on the lookout for the next new crop of delegates.

The biggest thing to worry about was making sure we had our IT staff prepared and ready to assist with the setup, start, and teardown of the event. Behind the scenes, the PrimeImage crew brings quite a bit of equipment, and you have to make sure there's plenty of network bandwidth for live video broadcasts (taken care of with a dedicated hardline and QoS policy configuration for them) and enough power to make sure there are no issues with popping circuits at the wrong time. In fact, we made sure to provide more than enough power outlets in the room, since there were a lot of international delegates, and we all know large power plugs with conversion kits take up lots of space.

Heartburn Time

After the Gestalt visit, the rest was up to us. Logistics, scheduling, food ordering, swag bag creation. All of these topics needed to be straightened out before Friday morning. As it was also Cohesity's SKO week, there was already plenty to do around the office and at the SKO location. Plenty of coordination between marketing teams happened, and we all finally agreed on when to be at HQ for the event (yes, I knowingly got up at 6:30am to make sure we could greet PrimeImage before the delegates arrived).

Notice that I haven't even discussed the actual presentation yet. Thursday became our dry run day. I lost count of the number of times we ran through everyone's sections that day. All I know is that towards the end, many of us were getting rather ragged and tired of being in the main conference room. If there is a tech company version of "12 Angry Men"/"12 Angry Jurors", it felt like we were all living it in that room.

The sheer amount of information we wanted to show everyone was immense. Demos were deemed too long or too oversaturated with information, so we cut things down or cut them out entirely. We defined our schedule and plan (which, I'll mention, did not adhere to specific time blocks…sometimes things run long; sometimes things run short; sometimes you have guys like Howard Marks in the room who ask all the questions, all at once) and stuck to practicing and adding to the presentation. It felt good to get out of the room (at something like 8:30pm).

Sleep?  What’s That?

So yeah, as you can imagine, with an early wakeup call and having to vacate the hotel, sleep was at a premium that night. I was mentally rolling through everything I wanted to make sure happened during the demos. What I had been thinking about for 2-3 weeks was finally coming to a head, and there was no way I was going to drop any sort of ball on this one.

6:30am rolls around and I've already been downstairs in the hotel lobby waiting on Aaron for about 5-10 minutes. We walk over to HQ, get everything set up, and wait for everyone to trickle in. PrimeImage comes in a few minutes before 7am (note to future presenting companies: always have someone show up 15 minutes prior to when PrimeImage says they want to be in). Food arrives, then the delegates arrive. Handshakes, hugs, and caffeination occur from 7:30am to 8am. Then it's go time.

The Moment You’ve Been Waiting For

Personally, I was in the first block of topics/demos for the day. My importance to the overall presentation was setting the foundational components for the rest of the demos that day. Fall on your face and you leave the rest of the presenters to do damage control; win the day, and it's an easy transition to more advanced use cases with those building blocks.

Before we continue, a quick note on my personality. I'm not the type to really toot my own horn about a job well done. I will always find something that needs critiquing and, most often, dwell on that. That being said, my demos couldn't have gone any better. The questions were the types of questions I wanted asked by the delegates, and I even got to show things with my own wrinkle (Hi there, PowerShell code!). Also, not having to sacrifice a new delegate to the live demo gods is ALWAYS a plus!

Now, for the sake of the temperature in the room (as you can imagine, all the video gear, delegates, and coworkers of mine can make for a very warm room), I vacated and returned to my temp space to catch the rest of the presentation online.  I also wanted to make sure I shut down all my public cloud projects, just in case I missed parts of it where I might have exposed all my sensitive access keys.

Final Result

All in all, I believe we showed well to the delegates and to the rest of the world about the Cohesity cloud portfolio. For a company that has talked about hyperconverged secondary storage for so long, it was refreshing to see us start talking about elements beyond what is rapidly becoming table stakes. I applaud the efforts of all those involved in making this event a great success for us, and I look forward to talking about our cloud portfolio to the masses even more.

From a personal perspective, it’s fantastic to see how far I’ve come from the first time I tried to present in front of a live audience.  Who knew that I could convert a deep-rooted fear into a wild success?

Now, who do I have to talk to get approval in the budget for another one of these events? 🙂


The Power of Community – Job Search Conclusion

In a prior blog post, I chronicled that I was thrust into a job search, due to a recent layoff.  The tech communities that I participate in came rushing to my aid and sent me plenty of job opportunities to follow up upon.  I’m extremely grateful that everyone (which there are way too many of you to mention) helped me out in my search, even if your suggestion didn’t net me steady employment.  I wanted to put out there exactly how this search went, what pitfalls I ran into, and how perseverance eventually prevailed.

April 16, 2018 – 9:30am

You know, during my career, I've never been the one to get dumped. I've navigated my career in a way that was rather safe and secure. That usually lends itself to a steady source of income, though usually without any drastic amount of notoriety or massive salary increase. There's something to be said about being entirely too comfortable with your situation.

That changed when I went into my manager’s office on April 16th.  I was testing new functionality for my employer’s private cloud implementation and had spent a good amount of time in the lab working on scenarios for multi-site VMware NSX and getting back into implementing VMware vCloud Director as the primary interface for customer interaction.  In fact, I was just about to start testing basic communication between my two lab sites when I was reminded I needed to head into his office.

What transpired there was more of a blur than anything else.  I do remember a look of dread on my manager’s face and the sullen tone with the head of HR on the phone line.  Immediately, I internally seized up and realized what was about to happen.  I’d say for about two to three minutes, I felt just pure panic.  However, as the conversation went on about things like severance packages, documents that needed to be signed, and the apologies coming from everyone, I had a moment of clarity.  Strangely enough, I felt relieved.  Comedically, it wasn’t until I had to hand in my laptop that my first moment of legitimate concern happened.  I had just handed in the only device I had that I could take with me to St. Louis to present at the St. Louis VMUG UserCon, which was in about 36 hours.

I called my wife and informed her of what had just happened. She ended up more panicked than I was about the entire event. In fact, I was the complete opposite of panicked. I felt as if a huge weight had been lifted off my shoulders.

Onwards and Upwards

For the first time in nearly a year, I had to come clean with myself.  I was pretty unhappy with most of my situation at the service provider position.  I clearly had become too comfortable with my role and I was projecting wants and desires upon something that had only recently become a part of who I am and who I’m becoming in this industry.  You see, back in late 2015, I finally joined the ranks of the independents.  Those people who go to technical events and offer relatively unbiased opinions on tech.  I got the bug for attending events, like Tech Field Day, and being invited to influencer/analyst events.  I even got back into public speaking and had become a regular, at least in the US Midwest, at many VMUG UserCons.

In fact, I looked forward to those dates on my calendar more than anything internal. During that last year, any potential avenue for learning new technology was immediately snatched away due to unrealistic expectations placed upon the company I worked for. Now that I think about it, a company focused purely on IaaS solutions really has to turn over a big leaf to start talking PaaS and application development. Hindsight allows me to realize it was a fool's errand to believe we would implement Microsoft Azure Stack; to the business, it was going to be a monumental waste of $500,000 that would take nearly five times that amount of time and effort internally to finally realize and make a profit from.

However, I was rather downtrodden about how it seemed that any new technology stack was met with such bitter opposition. In fact, this should have been my first major red flag. During this time, I had been having a few conversations with some pretty good places about transitioning over to them to work with Microsoft Azure. In the end, all I got was multiple levels of runaround. In fact, so much runaround that the idea I had in my head of those places is likely tarnished permanently. However, even as more discussion opportunities started to pour in, I would listen, but something kept me from really pursuing those positions.

A Good Manager is Hard to Find

If you spend any time on LinkedIn, you eventually get bombarded with a circulated opinion piece about how people don't quit the companies they work for; they quit the managers they directly report to. In my case, for the first time in a very long time, I had a manager I enjoyed having conversations with on multiple levels. In fact, this is the very same manager who allowed me to pursue so many external opportunities, many without the need for PTO. I had found one of those rare cases where you really enjoy working for your direct manager but are having way too many internal struggles with the rest of the company.

Again, with hindsight bias perfectly tuned, I couldn't move on because I enjoyed my manager and the team I worked on. We had become a tight-knit squad and were able to cover a lot of ground and formulate plans. My comfort level was at an all-time high. I was blinded by this comfort and was unable to move along, like the rest of my brain was telling me to do. I started to believe that eventually I would make the business impact I had strived for when I first joined that organization four years prior. Instead, I deluded myself right up to the point when the business decided I was no longer worth keeping.

Dropped on Your Ass

Nothing is more humbling than being told that you no longer matter to the organization you've been spending your time and effort trying to turn around. So, why was it that as I sat in the car, calling my wife, immediately after being escorted from the building, I felt the most intense wave of relief ever? I had the immediate realization that I no longer had any shackles on me. I felt free. I was no longer grounded by my own head; instead, I was now forced to find my own path.

Granted, I was also scared, but I felt that I could no longer have any excuses for not talking with everyone about potential positions and figuring out exactly what I wanted to do for the next stage of my career. I was damn near giddy about the idea of figuring this out. In fact, I immediately knew that being unceremoniously booted out the back door was going to be the best thing that has happened in my career since I broke from the mold of a prior employer after a 14-year tenure there.

…Not So Fast

Almost immediately, I started reaching out to my network. Multiple links and contacts were sent my way, and I tried to follow up with all of them. Right away, I got hooked up with a recruiter from a pretty good analytics and monitoring vendor and got a first-stage interview set up for later in the week. I felt this was a good sign. I kept the positivity up when I started to receive even more pre-sales opportunities to follow up on. I even had friends of mine start a recruiting blitz on me while I was attending the St. Louis VMUG UserCon. However, as I started to run through the gamut of these initial conversations and early interviews, I quickly realized that I had no idea what I wanted to focus on for my next career stage. I went through the motions and continued as many conversations as I could. Each time, I was met with long delays, immediate rejections, or mentions that my skills weren't good enough for the role I was seeking. I was frustrated beyond all measure.

To Settle or Not to Settle

I was unwilling to settle, but I also knew when my severance package was going to end and the financial liability my family would be exposed to without a steady paycheck and no health insurance. I was on the verge of settling for the first thing that would take me when I had an epiphany. What I really wanted to do had been staring at me for the longest time. Hell, I was even performing these types of tasks as more of a hobby or side project. I loved interacting with everyone at technical events, and I loved educating. It didn't matter what the topic was; I wanted to ensure that during a conversation, both of us came out better. I started to focus in and realized I wanted to try to get into what's considered technical marketing.

Immediately, I started looking through various companies where I knew members of their technical marketing teams and started having conversations. I even filled out a couple of applications and put my name in for positions. Unfortunately, I had a delay in the schedule, as I had committed to another technical event before I was laid off. While attending that event, I did have some conversations with various companies, and I learned that I had in fact made the "final list" for a TME position. I was excited; however, I also knew that the calendar was working heavily against me.

Lightning in a Bottle

I tend to have a very pragmatic view of the universe. Very rarely do whirlwind events happen around me that provide positive change. In most cases, these events happen, and the change is extremely negative instead. The universe rewarded me with a very rare positive whirlwind event when I returned from my last influencer event. Upon returning, a good friend asked whether I had investigated company XYZ, as they had a TME position open. Over that weekend, I filled out the online application for the position, without really thinking anything would come of it.

I was told that a benefactor of mine set something in motion that following Monday. I'll forever be grateful to this benefactor, as it caused a domino effect in which, by that following Friday, I had upwards of seven interviews with this company in three days, including a marathon block of back-to-back-to-back-to-back interviews on the Friday before a US holiday. Even though there was a pause for Memorial Day, within four business days (not counting the Memorial Day Monday), I was staring at an offer sheet for a new position. I was also flabbergasted that the offer sheet, by far, exceeded anything I was asking for.

Sign on the Dotted Line

So, now comes the fun part. Who is this mystery company, and what am I going to be doing while they provide me paychecks? On June 11th, I will be starting a position as Technical Marketing Engineer for…

[Cohesity logo]

Yep. Cohesity. I'm going to take a bit of a risk and challenge myself in a new way. I'm going to see if there's more depth to this persona I've developed, the one that's decent at giving community presentations and keeps getting invited back to influencer events. I know it's going to be hard, as it's the first time I'm not going to focus on just specific tech, but also on how it's presented to the masses. However, it's exactly what I was looking for, even if I didn't realize it until late in the process.

Last Words

I absolutely love the Disney movie "Meet the Robinsons". In fact, at the end of the movie, the following quote from Walt Disney is displayed.
[Image: Walt Disney quote displayed at the end of "Meet the Robinsons"]

The quote reminds me that I’m always curious and I’m always willing to try new things, even if I set a reserved tone and fall into the traps of comfort.  Stay curious, my friends.  You never know what new door opens when you least expect it.


Head in the Clouds: Oracle Cloud Infrastructure

Before I begin, I will state that I had a very hard time writing this blog post. It wasn't that the presentation and associated information were that tough to sift through; it's that I felt, for a long time, that I had way too much bias to write this post. See, I've bought into the cloud models presented by market leaders like Amazon and Microsoft. Concepts like "lift and shift", along with legacy enablement, are considered passé to me. Refactor, or stay within your own data center (or with your managed service provider) on aging infrastructure stacks that have your applications bound too tightly.

However, as time went on, I felt it was time to finally get out my thoughts on the combined presentations from Oracle Ravello Blogger Day (#RBD2) and Cloud Field Day 3 (#CFD3). I was able to be in attendance for the Oracle Ravello Blogger Day, and I relied upon recordings from Cloud Field Day 3 to get these thoughts finally penned.

So, my thoughts? Let's set the moment first. This was the second rendition of Oracle Ravello Blogger Day, and I had to turn down an invite to the first one. This means that my reference point isn't coming from someone who got to see what this relationship looked like a calendar year ago. While we were presented a roadmap of how things have been going and what's changed since the last event, I personally was not able to correlate what the entire environment looked like a year prior. Will that skew some of my points? It very well might. However, I do think some themes are still going to be valid without that data point.

WHY ask WHY

Nearly my entire thought process for the calendar year, thus far, of 2018 can be summed up by the name of a single author, Simon Sinek.  Mr. Sinek wrote a series of books and gave a series of presentations about the concept of WHY.  The concept of WHY is a simple one; it’s about inspiration.  Even the back of Mr. Sinek’s book, Start with Why, states that, “Any person or organization can explain what they do; some can explain how they are different or better; but very few can clearly articulate why.”

No pun intended, but why is this important? I believe the concept of WHY is important here because I feel that OCI is missing a very clear message of WHY. I do believe that both Amazon and Microsoft have started to establish clearer WHY messages when it comes to their clouds. Most of the information that I got from Oracle Ravello Blogger Day (and that of Cloud Field Day) seemed to focus on the WHAT and the HOW (through multiple discussions about the infrastructure, pricing models, SLAs, and services). Multiple industries are riddled with stories of copycats that tried to match their industry-leading brethren, only to fail miserably in the endeavor to keep up. Using an example from Mr. Sinek's book, if you focus on the airline industry, Southwest Airlines is considered the king in terms of profitability and loyalty. Many other airlines tried to execute in the same budget-conscious space, including nearly outright knockoffs of Southwest Airlines' tactics. However, you'll find that none of those attempts lasted. It's a cautionary tale to other industries: you might be able to copy a competitor, even to the point of clear plagiarism, but unless you can capture a good WHY, you'll be doomed before you even begin.

Cohesion

Cloud cohesion is a big thing for me. What I mean by cloud cohesion is that all the parts of the cloud, whether it's the infrastructure components, the platform components, or all the "glue" that holds them together, are created with a purpose to better the system, rather than as some sort of standalone part that just exists outside of whatever core is being built.

Now, it's no secret that Oracle is behind in the cloud game. To capture a market segment, they've had to build their cloud via acquisition. In most industries, this isn't necessarily a bad thing, and honestly, for the sake of trying to catch up to Amazon and Microsoft, it was an absolute necessity. Unfortunately, what happens with acquisitions is the awkward period where integrations feel (and certainly look) clunky to the outside eye. There was no greater example of this than a demo showing how to take a peered network in OCI and leverage that network in Ravello. Work had to be done in what appeared to be two discrete systems, with no real way of confirming that the peered network was presented properly. If I go back to my idea of cohesion, this would not qualify as cohesive.

To build on the cohesion message, Oracle wanted to talk about things like containers and serverless (FaaS). Personally, from what I've seen, we might be getting a bit ahead of ourselves. I consider Amazon Web Services Lambda and Azure Functions to be the ultimate expressions of cohesion across their cloud platforms. Everything those platforms can call upon is available and easily condensed into short batches of code. Given what I saw with network peering, I think any talk about function-based services in OCI is extremely premature, especially with the evolving ecosystem. There was even discussion in the conference room about trying to compare this platform to VMware-on-AWS, and even I thought that was putting the cart well before the horse.

If there's a hill I'm going to die on, in terms of my feelings about OCI's roadmap, it's that there's a lot of work to go before we can start tying things together with a FaaS offering. I need to see more cohesion before I even think this should be attempted. Attempting to roll something like this out goes immediately back to the WHY message in the prior section. Imitation without a WHY is a recipe for a failed product.

Infrastructure, Infrastructure, Infrastructure

I'll give Oracle this: they really want you to get out of your on-premises locations and use their infrastructure components. That message still suffers from the assumption that all of a prospective customer's workloads are x86-based and free of compliance (or even latency) requirements that prohibit moving those workloads to their cloud. As much as we've been told that "cloud is the future", I can point out hundreds of examples of companies, large and small, that have 30-year-old mainframe systems powering the core business. Perhaps a facelift to front-end applications has occurred, but somewhere, there's still a translated call in that new front-end application that is converted into something the legacy system can process.

There's a level of attractiveness to what Oracle is trying to sell in their infrastructure SLAs. Having been in the managed service provider business for an extended period, I know that SLAs represent risk reduction, and moving workloads to another provider is, at its core, a risky proposition. Many an enterprise will certainly enjoy having reduced risk in terms of lifting and shifting (or, forgive me, moving and improving).

This is where the part of me that believes in refactoring applications (under the right circumstances) for cloud-native approaches comes out. Decoupling the application layer from limitations at the infrastructure layer is exactly what you want to be doing. Therefore, I'm not one to really get all that enamored with the idea of infrastructure SLAs. As the business, I'm instructing my development arm to reduce this burden upon the business.

However, I'm reminded, even by my own words a few paragraphs back, that not every organization is ready, or willing, to take on the burden and risk of a major application refactoring task. Keep that legacy application packaged up in the same form factor (typically a virtual machine) and don't bother with really improving it. The primary use case here, once you get past the compliance and latency gates, is to reduce costs. Oracle will sell you a much-reduced cost for operating legacy application stacks. Based on the way enterprises operate, this may very well be the pinnacle of Oracle's cloud.

Conclusion

I do realize that I have some cloud biases. I even stated as much at the beginning and bordered on showing it again when talking about cloud-native applications. I really want to believe in what Oracle is doing. There might be a damn good path in there for them. Maybe even a path that Amazon and Microsoft aren't willing to go down. We don't have those answers right now. Oracle should continue to stay the course with what they are working on, but they really need to work on their WHY messaging.

I do wish Oracle the best here.  They are in a race where they are woefully behind and may never really catch up to the development arms of Amazon and Microsoft.  There’s a very comfortable message here that can make Oracle relevant for a very long time in this space.  However, given the history of Oracle, I’m not sure “relevant” is the pinnacle they are looking for.

That being said, maybe I check in on the progress next calendar year with an invite to Ravello Blogger Day 3 (my not so subtle hint)?  😊

 


The Power of Community: Job Search Edition, Volume 1

Sitting here in my home office, I'm pondering whether to refer to my career in the present tense or the past tense. Technically, I'm currently without employment. It's been nearly a month since I was let go by my prior employer. I've run through a gamut of emotions since, but I still truly believe it will benefit me in the long term, even if I might need to figure out what the hell COBRA is and how to pay for the benefits of said acronym.

I want to inform everyone that the job search is still ongoing. It's been part surprising and many parts painful. I've run through emotions ranging from feeling like I'm going to nail an interview to the sneaking voices in my head telling me afterwards that I ranked just above plankton. I'm still generally positive about this process, but I'm getting antsy. My family, without going into too many details, needs steady medical coverage, and I would like to ensure paychecks continue to come to my address well after the beginning of June.

Now, I'm going to gush on you, my networked tech communities. Many of you, whom I may not even recognize, have sent me enough opportunities to keep me busy with a job search for a very long time (however, let's just say I need to land something, and soon). You've been awesome. I can't describe the outpouring of response I got on social media platforms when I announced the implosion of my most recent role. Even a month later, plenty of you still check in to see if there's anything else you can do to help. You all are a very bright spot in something that could have turned very dark during this time. I can't even quantify the amount of thanks that is owed (likely to be converted into a certain amount, or age, of bourbon) the next time we meet face to face.

To others that may, inevitably, end up in the same position I find myself in, I tell you this. Continue to invest in your tech communities. If you haven't started, start laying the foundation. I never really thought about the impact I might have on the community, even if it's a smaller subset, but what I'm seeing now is that I am leaving a lasting impact on people whenever I speak at an event or even write a LinkedIn or Twitter update. While I might have sacrificed some time away from my family to contribute to the community, I feel like I've turned my network into a very essential insurance policy on my career.

So, without getting too mushy on you, I have nothing but heartfelt thanks for all of you. I hope that whatever role comes along next is one where I can continue to give back to the community that has helped me so much in the last month. You've been awesome, and it's on me to be awesome back to you when the right offer comes along. Keep up the good work, community!


The Yin Yang of Dell EMC Storage

Chinese philosophy tells us that the concept of yin and yang is one of opposition and combination. Examples of such opposing combinations are easy to find: light and dark, bitter and sweet, and even order and chaos. So, why a quick overview of Chinese philosophy? Recently, I attended a technical event, Tech Field Day 16, and during a two-hour block of time, I was presented with a duality of sorts. This duality came from one of the old storage guard, that being Dell EMC. During this block of time, we got a lesson in how vastly different oppositions can exist even within a single vendor's technical portfolio. What I speak of is the tale of the Dell EMC VMAX and the Dell EMC XtremIO.

Enter Order, the VMAX

Boring. No, this isn't just me going through my usual collection of swear words when it comes to everything (and I mean everything) I dislike about storage. Representatives from Dell EMC described the VMAX storage system with that very term. While the platform name might have changed over the 28-year career of this storage system (you might remember it as the EMC Symmetrix), there hasn't really been much done to this array over that course of time. Oh, don't get me wrong, the system has gone through upgrades and such, but what I speak of is a complete overhaul and redesign from the ground up.

This platform is one that really doesn't wow you with tons of features, per se. And honestly, there isn't much in the way of excitement when talking about this array, especially if you are performing feature-by-feature analysis against competing systems. In fact, I liken this device to that American family staple, the minivan. In no way am I ever going to confuse or even bother to compare a minivan to a sports car, but when I think of the minivan, two terms come to mind: reliability and capacity.

Forgive the horrible analogy, but the VMAX has been a rock-solid system over its lifetime. Throughout all the name changes and adaptations (I'm not going to call them architecture changes), the VMAX has been a system that many a Fortune 500 (or even Fortune 100) has called upon to be a reliable storage platform for Tier One (or even Tier Zero) systems. You don't get to build a reputation like that without doing something right, while at the exact same time not rocking the boat, so to speak, when it comes to adapting the architecture over time.

In all seriousness, it feels like all that has happened in the last few years with the VMAX platform is that Dell EMC has created an all-flash version of their minivan. While that certainly helps the platform achieve even better performance metrics, I find it equivalent to adding racing fuel to said minivan. Sure, you might go faster on the freeway, but, again, you didn't buy the minivan to drag race on the freeway. You bought the minivan to protect your precious cargo (your family, in case you forgot) as you moved from Point A to Point B.

Blindsided by Chaos, the XtremIO

If the VMAX was the consummate family vehicle of the Dell EMC portfolio, the XtremIO has had a past that leads one to believe the platform is best described (in car terms) as a racing motorcycle. With jet engines attached to it. And maybe even a couple of rockets for good measure. Without handlebars.

It doesn't take long, with a few quick Google searches, to see the checkered past of the XtremIO platform. While not exactly earning Charlie Sheen-esque levels of bad public relations, this platform has had many questioning whether it truly is the Tier One platform Dell EMC claimed it to be. Certainly, I would stand on a mountain and shout down to the masses if I wasn't achieving the expected level of performance, or if I had to go through a firmware update process that ended up requiring a forklift data migration (twice!) just to use the latest code.

Dell EMC made sure that the tone of discussion around the XtremIO X2 platform was one of calm growth. I would even say there was an air of maturity to the product. It certainly felt as if the XtremIO X2 platform had learned the lessons of its past and was making strides towards being a more mature product for the enterprise.

As a father to a four-year-old, I know what it's like to watch my son struggle with even the most basic tasks, but I also have to temper my expectations about what he's capable of until he grows and matures. There's a part of me that wants to believe the first-generation XtremIO platform was the equivalent of my son. There have been a lot of tantrums, a lot of yelling and screaming, but at the end of the day, I get a hug every night and peace of mind that my son grew a little more that day.

Maturity Cycles

Honestly, it feels like the XtremIO team took a page out of the VMAX team's operating guide. Now, I'm sure there's still some chaotic nature to the XtremIO platform that needs fine tuning, but I'm not going to judge it harshly for going through learning curves. If anything, Dell EMC should have realized the mistakes of rushing a product to market, but I get that they really had no choice compared to the competition.

That being said, there is something to be said about watching the youngster in your group grow up and start to realize the potential you might have (fairly or unfairly) thrust upon them. If the VMAX was the example of what Dell EMC could provide to the tried-and-true enterprise, we see that the company is finally making strides to do the same for the XtremIO platform. Maturity has come to the platform and with it, I hope, the stability that puts it right next to the VMAX in the Dell EMC portfolio under "boring reliability".


Adding More (Red)Fish to Your Diet

Imagine, if you will, that you are someone on a server operations team. As a member of that team, you are expected to keep the multiple layers of each server up to date. When you have only a handful of servers, this isn't a monumental task. However, as the business you work for grows, the server farm grows larger and larger. Your laissez-faire approach to the upkeep of said servers quickly consumes what limited time you have. Unfortunately, your vendor (or heaven forbid, multiple vendors) of choice chooses to continue with a proprietary set of technology and tools to perform these needed upgrades. First, the scale of the task has gotten out of control, and now the tools have become cumbersome as well. You are really in a bind, and no matter the sheer amount of screaming you do, it is not going to get better. Or is it?

The Way We’ve Always Done It

For decades, server maintainers have had the unfortunate pleasure of being presented IPMI (Intelligent Platform Management Interface) as their primary interface to interact with their servers in an out-of-band management fashion.  This has led to the rise of BMC (Baseboard Management Controller) in many of the servers we see in our data centers today.  If you’ve ever connected to a DRAC/iDRAC, HPE iLO, IBM Remote Supervisor Adapter, or Cisco IMC device, you’ve had the unfortunate pleasure of interacting at this level.

Now, the problem with these systems wasn't IPMI itself. Standards are generally a good thing (well, unless you have a bad standard to start from). The problem was that each of the companies listed above did their own interpretation/implementation of those standards. The approach Dell EMC used greatly differed from competitors in that very same space, like Cisco, HPE, or Lenovo. For each server brand, there was a completely different and unique set of tools for interacting with IPMI on that device. If you have a large datacenter with multiple vendors, the last thing you ever look forward to is MORE TOOLS to manage it!

Enough is Enough

Somewhere along the line, I believe the server vendors realized that their own proprietary methods were causing entirely too much strife in their customer base(s). Beginning in late 2015, the DMTF (Distributed Management Task Force), especially with the help of chairpersons from Dell, started to create and ratify a new standard called Redfish. This standard was to drive a common (RESTful) API mechanism that could be used to interface with any vendor's server and perform many of the rudimentary tasks that had become so proprietary. Personally, I had heard of Redfish and its recent adoption; however, I was unaware of the history of the standard and how influential Dell (and Dell EMC) has been in it.
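
Because Redfish is just HTTPS and JSON, you can poke at a conformant BMC with nothing more than Invoke-RestMethod. Here's a minimal sketch of mine (not Dell EMC's; it assumes PowerShell 6 or later for -SkipCertificateCheck, and the BMC address is a placeholder):

# Redfish exposes its service root at /redfish/v1/ on any conformant BMC
$bmc  = "192.0.2.10"            # placeholder out-of-band management address
$cred = Get-Credential          # BMC credentials
# Enumerate the ComputerSystem collection; most BMCs ship with self-signed certificates
$systems = Invoke-RestMethod -Uri "https://$bmc/redfish/v1/Systems" -Credential $cred -Authentication Basic -SkipCertificateCheck
# Each member is a link to an individual system resource (power state, firmware inventory, and so on)
$systems.Members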

While recently attending Tech Field Day 16, a very important question was asked of Dell EMC: why did this take so long to become a reality? Honestly, this question is likely very complex to answer. Let's be frank about all vendors here. All vendors LOVE their unique ways of approaching complex problems. Many of them pride themselves on their intellectual property. There's a level of inventiveness and creativity to some of the vendor approaches to using the IPMI "standard". Unfortunately, what a vendor wants eventually gives way to where their users are trending. The users spoke, and they wanted fewer nerd knobs and more shared experiences from vendor to vendor.

Meltdown and Spectre

As if server technicians weren't already under the gun trying to keep their growing server farms up to date, along came a double whammy. There's no need to go into the details of these two vulnerabilities. We will go into what they mean for a server operations staff in a large enterprise environment. It means firmware updates, and many variants of them.

Now, while not every large enterprise had the wherewithal to keep up with the necessary patching before these vulnerabilities came to light, this forced everyone to get up to speed on their processes and procedures for updating all their servers. Any stance of "set it and forget it" for firmware quickly went up in flames and, hopefully, will never be heard from again. Many of these organizations finally came face to face with a cold, hard fact: firmware updating a large server farm is the absolute worst of the worst!

So Long and Thanks for All the Fish?

Now, from a personal perspective, I have vivid recollections of having to roll multiple firmware updates across server farms of thousands of devices. It was not uncommon for me and my team to spend inordinate amounts of time just working with firmware updating tools that felt half-baked and required much handholding to perform their documented task. Many hours of productivity were lost, and it felt as if you were drowning in firmware updates in that environment. It's very unfortunate that it took this long for the Redfish API standards to appear.

Now, if there is a good note about the development of the Redfish API standard, it's that it's going to have siblings. Dell EMC is continuing work with the DMTF to drive development of other API standards for the datacenter. Keep an eye out, as you might see APIs coming for shared storage ("Swordfish"), network switches, power, HVAC, and security systems.

While these new standards may not set the world alight from a technical perspective, they are something to pay attention to. Complexity at scale is something that turns a rudimentary operation into a monumental nightmare. Anything, and I mean anything, is better than the current vendor-specific implementations we have on these platforms today. Kudos to Dell (and now Dell EMC) for continuing the drive towards common APIs to lessen this pain.


Harnessing the Power of PowerShell Advanced Functions

Recently, I published (https://github.com/snoopj123/NXAPI) a community-based PowerShell module so that PowerShell aficionados could interact with Cisco NX-OS switches (specifically the Nexus 5000 and 7000 families) that were running an API package called NX-API.  This API package allowed for sending NX-OS CLI to these switches, but instead of forcing either a telnet or SSH session, you could do this through HTTP or HTTPS.  The entire module shows how to initialize the connection, including building the right HTTP(S) headers, body, and URI (uniform resource identifier) to the switch endpoint.

I built this library because I was tired of some of the techniques Cisco had deployed within the automation and orchestration framework of Cisco UCS Director. For the past four years, interaction with NX-OS was done through Java libraries, built by Cisco, that encapsulated SSH connectivity and then screen-scraped the responses from the SSH session as returns, whether as success/fail criteria or as inventory information to update Cisco UCS Director's database. Overall, these components added massive overhead to the process, especially when you consider the number of switches you have to communicate with in a large-scale fabric.

So, the final goal of this project was to rip away UCS Director’s overhead and get back to what we wanted done:  a way to touch multiple switches in as little time as possible.

What Does this have to do with PowerShell?

Well, PowerShell is my scripting language of choice.  This project also forced me to get much more intimate with advanced function techniques, along with getting more proficient with the Invoke-RestMethod and Invoke-WebRequest cmdlets.   For the sake of this post, we are going to focus on some of the techniques used for crafting a function that I will be using regularly (Add-NXAPIVlan).  Let’s go through the code:

Let’s start with one of the first lines of code in the function:

[CmdletBinding()][OutputType('PSCustomObject')]

What exactly is this small block of code trying to convey? The CmdletBinding() declaration is what tells PowerShell that this is an advanced function. We gain the common parameters (like -Verbose and -ErrorAction), and by adding SupportsShouldProcess to the declaration we can also support -WhatIf and -Confirm, which almost treats the function like a full-fledged cmdlet. It's simply required for advanced function capabilities.

Now, the OutputType() declaration is more of a cosmetic declaration. It is used, at the beginning of a PowerShell function, to declare the expected type of the object the function will return. However, the return type is not actually enforced or validated by this declaration. In this example, we are cosmetically declaring that we are returning a .NET object type of PSCustomObject (the PowerShell custom object).
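
To put those two declarations in context, here is a stripped-down, hypothetical advanced function (not part of the NXAPI module) showing where they sit relative to the param() section and the function body:

function Get-Greeting {
    [CmdletBinding()][OutputType('PSCustomObject')]
    param(
        [parameter(Mandatory = $true, ValueFromPipeline = $true)]
        [string]$Name
    )

    process {
        # Return a PSCustomObject, matching the (cosmetic) OutputType declaration
        [PSCustomObject]@{
            Name     = $Name
            Greeting = "Hello, $Name!"
        }
    }
}

# Both names come down the pipeline and each produces its own output object
"Alice", "Bob" | Get-Greeting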

Working with Parameters

Moving on, we see the param() section of the code. I won't list all of the parameters, but here are some of the better examples of the advanced techniques within:

param(

[parameter(Mandatory = $true, ValueFromPipeline = $true)]
[ValidateNotNullOrEmpty()]
[string]$Switch,

[parameter(Mandatory = $true)]
[ValidateNotNullorEmpty()]
[ValidateRange(1, 4094)]
[int]$VLANID,

[parameter(Mandatory = $true)]
[ValidateNotNullOrEmpty()]
[ValidateLength(1, 32)]
[ValidateScript( {$_ -match "^[a-zA-Z0-9_]+$"})]
[string]$VLANName

)

Inside the param() section, you'll see a list of multiple declared parameters for this function. Each has been given specific validation attributes to be checked against. Let's look at the first parameter, Switch.

[parameter(Mandatory = $true, ValueFromPipeline = $true)]
[ValidateNotNullorEmpty()]
[string]$Switch,

For this specific parameter, we've added two conditions to the parameter itself in the form of Mandatory and ValueFromPipeline. Mandatory is there to ensure that the parameter is always present when calling this function. Without a value for it, PowerShell will prompt for one (or throw an error in a non-interactive session), and the function body will never run. As for ValueFromPipeline, this means we are declaring that a string object can be passed to this function via the PowerShell pipeline. Here's an example:

$switch = @("myswitch.domain.org","myswitch2.domain.org","myswitch3.domain.org")
$switch | Add-NXAPIVlan -VLANID 1001 -VLANName TestVLAN -Username admin -Password $password

Notice that I did not need to explicitly declare the Switch parameter. The reason is ValueFromPipeline. By using the pipeline, the assumption is that we are sending a value for the Switch parameter.

Lastly, we have the ValidateNotNullOrEmpty declaration. This is a quick validation to make sure that the object being passed is not $null and is not empty. There's no point in processing through the function if the parameter has no value!

Later on in the param() section, you'll notice a few more validation declarations. ValidateRange allows the function author to set a range in which the value must fall. In the case of this function, we are stating that the integer for VLANID must be between 1 and 4094. Any attempt to provide a value outside of this range will net an error from the function. The same goes for ValidateLength; however, this one is used to specify the minimum and maximum character length the parameter VLANName can have. Lastly, there's a ValidateScript declaration. This declaration allows authors to produce their own validation script. In this example, we are checking the characters in VLANName against an approved list of character values, specified as a regex. Each character must be an upper-case letter (A-Z), a lower-case letter (a-z), a numeric digit (0-9), or an underscore. All other characters are considered invalid to this function.
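
To see that validation in action, here's an illustrative call (reusing the placeholder values from the pipeline example above) that trips ValidateRange:

# VLAN 5000 falls outside the declared 1-4094 range, so PowerShell rejects the call
# during parameter binding, before the function body ever executes
Add-NXAPIVlan -Switch "myswitch.domain.org" -VLANID 5000 -VLANName TestVLAN -Username admin -Password $password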

You might notice that there are some other parameters for which I've specifically set the Mandatory declaration to $false. This is because I want those parameters to be optional. In the overall functions, they exist for very specific purposes, whether verbose logging or optional functionality that I do not want executed by default.

Begin/Process/End

Lastly, you may notice that there's a particular form to the actual meat of the advanced function. If you've worked with a Try/Catch/Finally error handling block, you can kind of get the idea of what Begin/Process/End is all about. The Begin/Process/End block is a requirement for properly handling multiple objects coming into the function from the pipeline. The reasoning will become apparent further into the explanation.

A Begin block is used for a very specific purpose. In the event that you are going to be handling multiple objects (for example, from the pipeline), this block is used for a single execution of code before the main body is processed. As an example, I include an EnableVerbose parameter on many of these functions. In my Begin block, I'll check to see if the parameter has been passed and set the VerbosePreference for the entire execution time of the function. Having that setting applied in the Process block for every object being passed is a waste of execution time and resources.

A Process block is used to specify the code you want executed for every single object that might be passed to the function.  Not much really needs to be explained about this section.  Your biggest hurdle might be determining what code needs to go to the Begin or End blocks instead of continually performing that operation on every object, especially if you plan on sending quite a few objects to this function.

Lastly, we have the End block.  Similar to the Begin block, the code it contains runs one time, when the function is complete.  If I set VerbosePreference in the Begin block, this is where I set it back to what it was.  Please note that if you break out of the function for any reason or hit a critical stop somewhere in the code, the End block will not process.  This deviates from the Try/Catch/Finally block, where Finally is always processed.

Now, why do we use this structure?  Because you want multiple returns from your function!  If you do not use the Begin/Process/End blocks, the function only returns information on the last object it processed.  If you wanted success or failure results for all of the objects you sent through the pipeline, you will be sorely disappointed when all you receive back is the last one.
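
Putting it all together, here’s a stripped-down skeleton of the shape these functions take.  This is a sketch only, not the full Add-NXAPIVlan implementation; the EnableVerbose handling mirrors what’s described above, and the real per-switch NX-API call (plus the VLANID/VLANName parameters) would live in the Process block.

function Add-NXAPIVlan {
    [CmdletBinding()]
    param (
        [parameter(Mandatory = $true, ValueFromPipeline = $true)]
        [ValidateNotNullOrEmpty()]
        [string]$Switch,

        [parameter(Mandatory = $false)]
        [switch]$EnableVerbose
    )

    Begin {
        # Runs once, before the first pipeline object arrives
        if ($EnableVerbose) {
            $oldVerbosePreference = $VerbosePreference
            $VerbosePreference = 'Continue'
        }
    }

    Process {
        # Runs once for every object coming off the pipeline
        Write-Verbose "Processing switch $Switch"
        # The per-switch work (the NX-API call) would go here
    }

    End {
        # Runs once after the last object, unless the function stops early
        if ($EnableVerbose) {
            $VerbosePreference = $oldVerbosePreference
        }
    }
}

With this shape, each of the three switch names piped in earlier hits the Process block once, while the Begin and End blocks each run a single time around them.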

Conclusion

This was a fun project on multiple fronts.  I feel like I have a much better idea of what advanced functions in PowerShell are capable of, and I’ve gotten better at identifying how to carve up my code for single versus per-object execution within an advanced function.  I can’t wait to learn more!


The Dichotomy of Mentoring

I can’t help but notice a certain amount of chatter on my social media timelines about “mentoring”.  In our industry, we tend to associate this with career advancement: someone with more experience (a “senior”) takes someone with less experience (a “junior”) and provides guidance and advice so that the pitfalls of personal experience don’t become long-term roadblocks.  However, I’m starting to notice a disturbing trend in some of these discussions: the idea that you need to be a “senior” to be able to transfer this wisdom down to a “junior”.  I’m sorry, but mentoring isn’t a one-way street.

I get that someone who has earned a title containing the word “senior” has likely accumulated quite a bit of experience.  I’m not trying to discredit the notion that a senior should pass down information to a junior.  I’m discrediting the notion that it’s the only direction that matters.  Look, every single one of us learns in our own unique way.  As someone who’s been told they do a decent job of mentoring others, I can tell you that it works both ways.  The amount I’ve been able to learn from those who call themselves a “junior” has been just as important as what I’ve learned from those I consider my “senior”.

The main point I’m trying to make is that for you to really succeed in mentoring, you have to be willing to be mentored.  If you aren’t receptive to being mentored, regardless of your “status”, you are soon going to be left behind in this industry.  An industry, I’ll remind you, that evolves at a highly accelerated pace.  If you aren’t learning, from EVERYWHERE, in this industry, you’ve failed.  Harsh?  Yep.  The time for tough love is upon us.

Now that I’ve likely pissed off the “senior” crowd, let’s go back to some of the “juniors” out there.  I want to tell you something very important.  There are plenty of us “veterans” in this industry who need a good shake-up.  Keep asking questions and pointing out that maybe, just maybe, the way they do things isn’t always the best way.  It’s going to force EVERYONE to keep learning.  Sitting on your laurels is an immediate off-ramp to irrelevancy.

Mentoring isn’t a single direction.  Like DevOps, mentoring is a feedback loop.  The more feedback you give and get, the better everyone in that loop becomes.  As far as I’m concerned, if you are involved in that loop, your relationships should be classified as any-to-any: you give advice to everyone and you get advice from everyone.  That’s how it should work.  The expectation that some great oracle on high will pass down wisdom, and should be your only source, is, well, bunk.

Last point, I promise.  George Bernard Shaw gave us the infamous quote, “He who can, does; he who cannot, teaches.”  Many people still believe in it.  Sorry to burst a bubble here, but the quote is bullshit.  If you believe that every action you take is a teachable moment, then by Shaw’s logic your own actions would prove that you “cannot”, which is nonsense.  Every single one of us is a teacher.  Every single one of us, as it would happen, is also a student.  So for every single one of you: keep teaching AND keep learning.


When “Culture” Really Isn’t Culture

As we approach the end of the calendar year, many of us in the information technology field use this time to reflect on the prior year, and some of us use that reflection to consider whether it’s the right time for a change.  The marketing machine known as human resources is in full swing at many companies, ready to sell you on how working for them is the best thing since sliced bread.  However, I want you to be wary of misuses of terminology that many get caught up in.  These misuses end up selling you on something that either doesn’t exist or just isn’t up to the descriptions so elegantly laid out in Glassdoor or LinkedIn job postings.

Culture:  the set of shared attitudes, values, goals, and practices that characterizes an institution or organization

One of the main points that now seems prevalent throughout job descriptions (or company descriptions) is the term culture.  What ends up happening is a twisting of this term to highlight certain perks of the company.  The last I checked, perks != (that’s DOES NOT EQUAL for some of our programming-illiterate friends) culture.  No doubt you’ll be bombarded by pictures of fancy break rooms stocked full of all sorts of beverages and snacks (not to mention the word FREE).  You might even see pictures of a game room, complete with a ping pong table (and if you are lucky, an arcade machine or two).  Excellent!  Fantastic!  However, what any of this has to do with culture is beyond me.

The last I checked, most businesses aren’t in business to field professional ping pong teams or compete in e-sports.  So why is it that we see, all too often, these sorts of things associated with culture?  Personally, I get that we need avenues to blow off steam after a hard bit of project work or in-the-trenches support work.  Some of these perks are a genuine reward for the people putting in the hours to accomplish business goals.  I’m just wondering why THIS tends to be the definition of an organization’s culture.  What happened to actually describing how human interactions are expected to occur, both within a team and between teams?

Maybe this harkens back to how poorly we ask really good questions during the interview process.  Personally, I’ve only participated in fewer than a handful of interviews.  I know most of my time was spent trying to show someone my technical acumen, as if that was the only thing that mattered to the individual across the table from me.  Too often, when it comes time for the interviewee to ask a question, blanks are drawn.

To arm yourself, especially to understand the culture of the organization you are trying to get into, maybe it’s time to start asking some hard questions that don’t go back to human resources marketing material.  “How are mistakes handled within the organization?”  “How are teams typically structured (as in, do you have senior members who make all the decisions and juniors are expected to fall in rank and not question anything)?”  “How receptive is the organization to a diversity of opinion?”  “Is there a clear path of career growth within the organization?”  “Does the organization have any respect for personal time to be used to better one’s self through education opportunities?”

To balance this out, I know I’ve also been on the other side of the table, as the one asking the interviewee questions.  I try to keep an open mind about the person in question and try to bring up some of these topics.  Many times, the interviewee is surprised that I would ask any sort of question about how they like to be heard or how they want to advance through an organization.  I also know that, as the interviewer, I need to drop any bias that would make me ask questions about things that don’t really matter to the role.  I get that we like to ask questions to see if someone is a “fit” on our teams, but at what point are we asking questions to find a great teammate instead of asking questions to find a drinking buddy?

So, I think it’s on us to start challenging the interview process and asking questions about culture that actually matter.  We aren’t ever going to find out how human interaction is expected to work if we don’t open our mouths.  I know the Glassdoor pictures of the new offices and fancy drink machines are nice, but in the end, you want to fit in, and you want to do it on your terms.  You need to take the initiative and ask the right questions before it’s too late.  I bet those free drinks and snacks are going to taste a whole lot better when you find the right culture to exist (and thrive) in, instead of realizing you made a mistake and finding yourself looking for a new position before you’ve even hit triple digits in days employed.  Do yourself a favor: ask good, hard culture questions during your interview process.
