Amplify: The Story of My 2017

End of the year recaps are about as clichéd as they come.  However, I do feel it’s important to look at where we started the year and where we ended it.  Needless to say, 2017 has been a rather interesting year.  When I recapped 2016, I talked about establishing my voice.  2017 was about amplifying my voice.  From a personal perspective, I know I’m not the brightest individual out there in the technical communities that I frequent.  However, I don’t feel that’s an absolute requirement to have a voice and use it.  2017 can be summed up in a couple of words:  community presenter.

3584

For those playing along, no, the header to this paragraph is not the product of a sneeze while typing.  Instead, it’s the sum of the miles I drove to present at multiple VMUG UserCons this year.  Three thousand, five hundred and eighty-four miles from my residence to the locations where I presented.  Granted, it was only a total of five UserCons, but the road trips with friends and the thrill of honing my skills as a presenter were well worth the experience.

Unfortunately, everyone has that one event they wish they could do a little differently.  I try my hardest to ensure I don’t do a horrible job with my presentations.  However, sometimes they get away from you.  So, this special session is for those that saw me at the St. Louis VMUG UserCon in March.  I tried to pare down a 50-minute presentation deck to 15 minutes and it did not go well.  For that, I will forever feel I owe the St. Louis VMUG leaders a do-over.

Outside of St. Louis, I presented in Minneapolis, Kansas City (my own backyard, essentially), Indianapolis, and Cincinnati.  I was involved in plenty of discussions about other UserCons; however, they didn’t work out in the end.

One of the biggest compliments I got on my presentation (a return to the IT Culture and DevOps presentation I did for vBrownBag’s Commitmas 2016) was that, with a little TLC to the slide deck, I might have a keynote session on my hands.  To be honest, I was perfectly content getting a community slot, but there’s something to be said for pursuing something a little higher than where your skills are right now.

My Favorite Podcasts

Podcasts have become a very powerful medium in our industry and within our technical communities.  It was an awesome experience to participate in some well-established podcasts as a featured guest.  While I might have laid a clunker with the presentation in St. Louis, Keith Townsend resurrected his dormant “The CTO Advisor” podcast and gave me 15 minutes to talk about DevOps as a cultural movement.  Later in the year, at the Indianapolis VMUG UserCon, Keith, Mark May, Tim Smith, and I recorded part of an episode chatting about the power of community in our day-to-day technical lives.

One of the more interesting podcasts I got to be a part of happened on the exhibit floor at Microsoft Ignite 2017 in Orlando.  Clint Wyckoff had me on as a guest for the Veeam Community Podcast, where we talked about Microsoft Azure Stack, DevOps, and many other topics.  Clint and I have run into each other at plenty of technical events; however, this was the first time we were able to actually record anything.  It was nice to talk about things well outside of the VMware-based wheelhouse we knew each other from.

Lastly, my crown jewel appearance was as a guest on the Packet Pushers podcast.  Ethan and Greg are better known for having networkers as guests, but this time, my topic of choice, IT culture, ended up carrying a full episode.  Sprinkle in a little bit about paralleling Nietzsche with ITIL, and we had ourselves a fantastic discussion that probably could have gone on for multiple episodes.  I can’t thank those gentlemen enough for having me as a guest.

The Moment of 2017

In my 2016 recap post, I told a story of how I brought some good ol’ Kentucky bourbon to VMworld 2016 and spent a fun couple of hours just talking with Josh Atwell.  It wasn’t necessarily technical, which is why it stood out the most.  For 2017, I have another good story that doesn’t deviate into the technical realm either.

To set the stage, I traveled to Boston in May 2017.  This was a combination week, as I was attending both OpenStack Summit (my first time) and Tech Field Day 14.  The week turned into a rather hefty logistical nightmare, as I had to get my pass for OpenStack Summit well in advance, along with setting up lodging outside of what the Field Day family arranges for their event.  I scheduled an Airbnb stay about 15 minutes from the convention center where OpenStack Summit was being held.  Unfortunately, as Stephen (Foskett) will be able to tell you, Tech Field Day 14 had some issues with presenting companies.

Originally, Tech Field Day 14 was supposed to be three days, with upwards of seven or eight different companies presenting to the delegate panel.  As the event drew closer, the schedule changed drastically:  the event was whittled down to four presenting companies, and one of the days was dropped from the schedule.  As I had already reserved my Airbnb before all of this happened, I was stuck with a night for which I lacked lodging.  I offered to pay for my own room for that gap night at the hotel we, the delegates, would be staying at.  Stephen informed me that it was all right and that he would pick me up at the conference and take me out there, as he was already going to be chatting with prospective companies for the Field Day events.

Now, all of this setup leads me to my favorite moment of 2017.  As OpenStack Summit was drawing to a close, Stephen asked if I would be interested in heading up to one of the restaurants at the top of the Prudential Tower.  While the company he wanted to talk to escapes me, the view was awesome and the food, drink, and conversations were really great.  Normally, it takes me a while to warm up to these types of conversations, but it felt rather natural to discuss some technical things with the folks hosting the event.

As the evening drew to a close, Stephen wanted to stop by one of the shops in the shopping district around the Prudential Tower to buy something to get the parking discounted, which led us to a sportswear store.  Inside, Stephen, being a huge Boston Red Sox fan, picked up a Red Sox shirt or two.  This led to a rather lengthy discussion about baseball.

You see, baseball is a passion for me.  I’ve been a lifelong Chicago Cubs fan.  I remember the days before Wrigley Field got lights for night games, so all the afternoon games were available to watch on a local channel where I grew up.  At the same time, currently residing in Kansas City, I can easily spend an evening at Kauffman Stadium.  As long as I live, I will always remember October of 2014 (the Royals lost in Game 7 of the World Series to the San Francisco Giants), 2015 (the Royals returned to the World Series, this time winning it in five games), and 2016 (the Cubs finally ended 108 years of futility and won the World Series).

On the drive from downtown Boston out to the hotel in Waltham, we talked each other’s ears off about baseball.  Stephen pointed out how good the Red Sox looked.  I wondered whether they had a bullpen that would be able to get them over the hump and into the World Series again.  We talked about rookies with hot starts.  We even chatted about the experiences we had with our teams getting to, and even winning, the World Series.  It was a great evening of talking with a fellow baseball fan.  It’s something I’m going to remember for quite some time, even if I end up jinxing the Red Sox, Cubs, or Royals out of winning another World Series in either of our lifetimes.

The Next Steps

The next steps are going to be rather interesting ones.  I’m currently at a multi-path crossroads that feels like it’s going in hundreds of different directions.  I’ve started to get involved in more cloud-native efforts, especially in the Microsoft Azure ecosystem (public and Azure Stack).  I’ve gotten on radars for more community programs (like the Microsoft MVP program, although I still haven’t been able to focus on pursuing it like I’d like to).  I’ve been given opportunities to jumpstart user groups for Microsoft Azure in my local area.  I’m even having discussions with VMUG people about putting in another presentation to make the rounds of UserCons in 2018.

The main point is that I’m rather content to keep doing what I’m doing.  Keep interacting with the technical community.  Keep branching out to those I can call friends and having excellent conversations about tech, life, and everything.  At this rate, I might be able to cut and paste this post for 2018!

So, here’s to 2017!  While I may never have met many of you directly, you helped shape my last 12 months.  I can only hope I’m able to return the favor and help shape your next 12 months in the same fashion!


The Power of Community: Job Search Edition, Volume 1

Sitting here in my home office, I’m pondering whether to refer to my career in the present tense or the past tense.  Technically, I’m currently without employment.  It’s been nearly a month since I was let go by my prior employer.  I’ve run through a gamut of emotions since, but I still truly believe it will benefit me in the long term, even if I might need to figure out what the hell COBRA is and how to pay for the benefits of said acronym.

I want to inform everyone that the job search is still ongoing.  It’s been part surprising and many parts painful.  I’ve run through emotions ranging from feeling like I’m going to nail an interview to the sneaking voices in my head telling me afterward that I ranked just above plankton.  I’m still generally positive about this process, but I’m getting antsy.  My family, without going into too many details, needs steady medical coverage, and I would like to ensure paychecks continue to come to my address well after the beginning of June.

Now, I’m going to gush on you, my networked tech communities.  Many of you, some of whom I may not even recognize, have sent me enough opportunities to keep a job search busy for a very long time (though, let’s just say I need to land something, and soon).  You’ve been awesome.  I can’t describe the outpouring of response I got on social media platforms when I announced the implosion of my most recent role.  Even a month later, plenty of you still check in to see if there’s anything else you can do to help.  You all are a very bright spot in something that could have turned very dark during this time.  I can’t even quantify the amount of thanks that is going to be owed (likely converted to a certain amount, or age, of bourbon) the next time we meet face to face.

To others that may, inevitably, end up in the same position that I find myself in, I tell you this:  continue to invest in your tech communities.  If you haven’t started, start laying the foundation.  I never really thought about the impact I might have on the community, even a smaller subset of it, but what I’m seeing now is that I leave a lasting impact on people whenever I speak at an event or even write a LinkedIn or Twitter update.  While I might have sacrificed some time away from my family to contribute to the community, I feel like I’ve turned my network into a very essential insurance policy for my career.

So, without getting too mushy on you, I have nothing but heartfelt thanks for all of you.  I hope that whatever role comes along next is one in which I can continue to give back to the community that has helped me so much in the last month.  You’ve been awesome, and it’s on me to be awesome back to you when the right offer comes along.  Keep up the good work, community!


The Yin Yang of Dell EMC Storage

Chinese philosophy tells us that the concept of yin and yang is one of opposition and combination.  Examples of such opposing combinations are easy to find:  light and dark, bitter and sweet, even order and chaos.  So, why a quick overview of Chinese philosophy?  Recently, I attended a technical event, Tech Field Day 16, and during a two-hour block of time, I was presented with a duality of sorts.  This duality came from one of the old storage guard, Dell EMC.  During this block of time, we got a lesson in how vastly different oppositions can exist even within a single vendor’s technical portfolio.  What I speak of is the tale of the Dell EMC VMAX and the Dell EMC XtremIO.

Enter Order, the VMAX

Boring.  No, this isn’t just me going through my usual collection of swear words when it comes to everything (and I mean everything) I dislike about storage.  Representatives from Dell EMC described the VMAX storage system with that very term.  While the platform name might have changed over the 28-year career of this storage system (you might remember it as the EMC Symmetrix), there hasn’t really been much done to this array over that course of time.  Oh, don’t get me wrong, the system has gone through upgrades and such, but what I speak of is a complete overhaul and redesign from the ground up.

This platform is one that really doesn’t wow you with tons of features, per se.  And honestly, there isn’t much in the way of excitement when talking about this array, especially if you are performing a feature-by-feature analysis against competing systems.  In fact, I liken this device to that American family staple, the minivan.  In no way am I ever going to confuse or even bother to compare a minivan to a sports car, but when I think of the minivan, two terms come to mind:  reliability and capacity.

Forgive the horrible analogy, but the VMAX has been a rock-solid system over its lifetime.  Throughout all the name changes and adaptations (I’m not going to call them architecture changes), the VMAX has been a system that many a Fortune 500 (or even Fortune 100) company has called upon to be a reliable storage platform for Tier One (or even Tier Zero) systems.  You don’t get to build a reputation like that without doing something right, while at the exact same time not rocking the boat, so to speak, when it comes to adapting the architecture over time.

In all seriousness, it feels like all that has happened in the last few years with the VMAX platform is that Dell EMC created an all-flash version of their minivan.  While that certainly helps the platform achieve better performance numbers, I find it equivalent to adding racing fuel to said minivan.  Sure, you might go faster on the freeway, but, again, you didn’t buy the minivan to drag race.  You bought the minivan to protect your precious cargo (your family, in case you forgot) as you move around from Point A to Point B.

Blindsided by Chaos, the XtremIO

If the VMAX was the consummate family vehicle of the Dell EMC portfolio, the XtremIO has had a past that leads one to believe the platform is best described (in vehicular terms) as a racing motorcycle.  With jet engines attached to it.  And maybe even a couple of rockets for good measure.  Without handlebars.

It doesn’t take long, with a few quick Google searches, to see the checkered past of the XtremIO platform.  While not exactly earning Charlie Sheen-esque levels of bad public relations, this platform has had many questioning whether it truly is the Tier One platform Dell EMC had claimed it to be.  Certainly, I would stand on a mountain and shout down to the masses if I wasn’t achieving the expected level of performance, or had to go through a firmware update process that ended up requiring a forklift data migration (twice!) just to use the latest code.

Dell EMC made sure that the tone of discussion around the XtremIO X2 platform was one of calm growth.  I would even say there was an air of maturity to the product.  It certainly felt as if the XtremIO X2 platform had learned the lessons of its past and was making strides toward being a more mature product for the enterprise.

As a father to a four-year-old, I know what it’s like to watch my son struggle with even the most basic tasks, but I also have to temper my expectations about what he’s capable of until he grows and matures.  There’s a part of me that wants to believe the first-generation XtremIO platform was the equivalent of my son.  There have been a lot of tantrums, a lot of yelling and screaming, but at the end of the day, I get a hug every night and peace of mind that my son grew a little more that day.

Maturity Cycles

Honestly, it feels like the XtremIO team took a page out of the VMAX team’s operating guide.  Now, I’m sure there’s still some chaotic nature to the XtremIO platform that needs fine-tuning, but I’m not going to judge it harshly for going through learning curves.  If anything, Dell EMC should have realized the mistakes of rushing a product to market, but I get that they really had no choice given the competition.

That being said, there is something to be said for watching the youngster in your group grow up and start to realize the potential you might have (fairly or unfairly) thrust upon them.  If the VMAX was the example of what Dell EMC could provide to the tried-and-true enterprise, we see that the company is finally making strides to do the same with the XtremIO platform.  Maturity has come to the platform and with it, I hope, a stability that puts the platform right next to the VMAX in the Dell EMC portfolio under “boring reliability”.


Adding More (Red)Fish to Your Diet

Imagine, if you will, you are someone on a server operations team.  As a member of that team, you are expected to keep the multiple layers of each server up to date.  When you have only a handful of servers, this is hardly a monumental task.  However, as the business you work for grows, the server farm grows larger and larger.  Your laissez-faire approach to the upkeep of said servers quickly consumes what limited time you have.  Unfortunately, your vendor (or heaven forbid, multiple vendors) of choice chooses to continue with a proprietary set of technology and tools to perform these needed upgrades.  First, the scale of the task has gotten out of control, and now the tools have become cumbersome as well.  You are really in a bind, and no matter the sheer amount of screaming you do, it is not going to get better.  Or is it?

The Way We’ve Always Done It

For decades, server maintainers have had the unfortunate pleasure of being presented with IPMI (Intelligent Platform Management Interface) as their primary interface for interacting with their servers in an out-of-band fashion.  This has led to the rise of the BMC (Baseboard Management Controller) in many of the servers we see in our data centers today.  If you’ve ever connected to a DRAC/iDRAC, HPE iLO, IBM Remote Supervisor Adapter, or Cisco IMC device, you’ve had the unfortunate pleasure of interacting at this level.

Now, the problem with these systems wasn’t IPMI itself.  Standards are generally a good thing (well, unless you have a bad standard to start from).  The problem was that each of the companies listed above did their own interpretation and implementation of those standards.  The approach Dell EMC used greatly differed from competitors in that very same space, like Cisco, HPE, or Lenovo.  For each server brand, there was a completely different and unique set of tools for interacting with the IPMI standards on that device.  If you have a large datacenter with multiple vendors, the last thing you ever look forward to is MORE TOOLS to manage it!

Enough is Enough

Somewhere along the line, I believe the server vendors realized that their own proprietary methods were causing entirely too much strife in their customer bases.  Beginning in late 2015, the DMTF (Distributed Management Task Force), with considerable help from chairpersons from Dell, began the process of creating and ratifying a new standard called Redfish.  This standard was to drive a common (RESTful) API mechanism that could be used to interface with any vendor’s server and perform many of the rudimentary tasks that had become so proprietary.  Personally, I had heard of Redfish and its recent adoption; however, I was unaware of the history of the standard and how influential Dell (and Dell EMC) has been to it.
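To give a flavor of what that common mechanism looks like, here’s a minimal sketch of pulling basic inventory from a Redfish-capable BMC with PowerShell.  The BMC address and credentials are hypothetical, the -SkipCertificateCheck switch requires PowerShell 6 or later (most BMCs ship self-signed certificates), and the exact properties returned will vary by vendor and firmware:

# Hypothetical BMC address; any Redfish-compliant controller exposes /redfish/v1/
$cred = Get-Credential
$bmc = 'https://bmc01.example.org'

# The service root advertises where the rest of the resources live
$root = Invoke-RestMethod -Uri "$bmc/redfish/v1/" -Credential $cred -SkipCertificateCheck

# Walk the Systems collection and report basic inventory for each member
$systems = Invoke-RestMethod -Uri ($bmc + $root.Systems.'@odata.id') -Credential $cred -SkipCertificateCheck
foreach ($member in $systems.Members) {
    $system = Invoke-RestMethod -Uri ($bmc + $member.'@odata.id') -Credential $cred -SkipCertificateCheck
    [PSCustomObject]@{
        Model      = $system.Model
        Serial     = $system.SerialNumber
        PowerState = $system.PowerState
    }
}

The same few lines of logic work whether the BMC is an iDRAC, an iLO, or anything else that implements the standard, and that is precisely the point.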

While recently attending Tech Field Day 16, a very important question was asked of Dell EMC:  why did this take so long to become a reality?  Honestly, this question is likely very complex to answer.  Let’s be frank about all vendors here.  All vendors LOVE their unique ways of approaching complex problems.  Many of them pride themselves on their intellectual property.  There’s a level of inventiveness and creativity to some of the vendor approaches to using the IPMI “standard”.  Unfortunately for them, what a vendor wants always takes a back seat to where their users are trending.  The users spoke, and they wanted fewer nerd knobs and more shared experiences from vendor to vendor.

Meltdown and Spectre

As if server technicians weren’t already under the gun trying to keep their growing server farms up to date, along came a double whammy.  There’s no need to go into the details of these two vulnerabilities here.  Instead, we’ll go into what they mean for a server operations staff in a large enterprise environment.  They mean firmware updates, and many variants of them.

Now, while not every large enterprise had the wherewithal to keep up with the necessary patching before these vulnerabilities came to light, this forced everyone to get up to speed on their processes and procedures for updating all their servers.  Any potential “set it and forget it” stance on firmware quickly went up in flames and, hopefully, will never be heard from again.  Many of these organizations finally came face to face with a cold, hard fact:  firmware updating a large server farm is the absolute worst of the worst!

So Long and Thanks for All the Fish?

Now, from a personal perspective, I have vivid recollections of having to roll multiple firmware updates across server farms numbering in the thousands of devices.  It was not uncommon for my team and me to spend inordinate amounts of time just working with firmware updating tools that felt half-baked and required much handholding to perform their documented task.  Many hours of productivity were lost, and it felt as if you were drowning in firmware updates in that environment.  It’s very unfortunate that it took this long for the Redfish API standards to appear.

Now, if there is a good note about the development of the Redfish API standard, it’s that it’s going to have siblings.  Dell EMC is continuing to work with the DMTF to drive development of other API standards for the datacenter.  Keep an eye out, as you might see APIs coming for shared storage (“Swordfish”), network switches, power, HVAC, and security systems.

While these new standards may not set the world alight from a technical perspective, they are something to pay attention to.  Complexity at scale is something that turns a rudimentary operation into a monumental nightmare.  Anything, and I mean anything, is better than the vendor-specific implementations we have on these platforms today.  Kudos to Dell (and now Dell EMC) for continuing the drive toward common APIs to lessen this pain.


Harnessing the Power of PowerShell Advanced Functions

Recently, I published a community-based PowerShell module (https://github.com/snoopj123/NXAPI) so that PowerShell aficionados could interact with Cisco NX-OS switches (specifically the Nexus 5000 and 7000 families) running an API package called NX-API.  This API package allows for sending NX-OS CLI commands to these switches, but instead of forcing a telnet or SSH session, you can do so over HTTP or HTTPS.  The module shows how to initialize the connection, including building the right HTTP(S) headers, body, and URI (uniform resource identifier) for the switch endpoint.

I built this library because I was tired of some of the techniques Cisco had deployed within the automation and orchestration framework of Cisco UCS Director.  For the past four years, interaction with NX-OS was done through Java libraries, built by Cisco, that encapsulated SSH connectivity and then screen-scraped the responses from the SSH session as returns, whether as success/fail criteria or as inventory information to update Cisco UCS Director’s database.  Overall, these components added massive overhead to the process, especially when you consider the multiple switches you have to communicate with in a large-scale fabric.

So, the final goal of this project was to rip away UCS Director’s overhead and get back to what we wanted done:  a way to touch multiple switches in as little time as possible.
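For the curious, the request itself is nothing exotic.  Here’s a rough sketch, not the module’s exact internals, of what a single NX-API call looks like in PowerShell; the switch name is a placeholder and the CLI command is just an example:

# NX-API expects a JSON payload describing the CLI to run, POSTed to the /ins endpoint
$cred = Get-Credential
$body = @{
    ins_api = @{
        version       = '1.0'
        type          = 'cli_show'
        chunk         = '0'
        sid           = '1'
        input         = 'show vlan brief'
        output_format = 'json'
    }
} | ConvertTo-Json

$response = Invoke-RestMethod -Uri 'https://myswitch.domain.org/ins' -Method Post -Credential $cred -ContentType 'application/json' -Body $body

# Structured data comes back; no screen scraping required
$response.ins_api.outputs.output.body

Compare that to spinning up an SSH session, parsing terminal output, and hoping the prompt didn’t change, and the appeal becomes obvious.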

What Does This Have to Do with PowerShell?

Well, PowerShell is my scripting language of choice.  This project also forced me to get much more intimate with advanced function techniques, along with getting more proficient with the Invoke-RestMethod and Invoke-WebRequest cmdlets.  For the sake of this post, we are going to focus on some of the techniques used in crafting a function I will be using regularly (Add-NXAPIVlan).  Let’s go through the code.

Let’s start with one of the first lines of code in the function:

[CmdletBinding()][OutputType('PSCustomObject')]

What exactly is this small block of code trying to convey?  The CmdletBinding() declaration is what tells PowerShell that this is an advanced function.  We are able to start using the common parameters, like -Verbose and -ErrorAction (and, if we add SupportsShouldProcess, even -WhatIf and -Confirm), which treats the function almost like a full-fledged cmdlet.  It’s simply required for advanced function capabilities.

Now, the OutputType() declaration is more of a cosmetic declaration.  It is used, at the beginning of a PowerShell function, to declare the expected type of the object the function will return.  However, the return type is not actually enforced or validated by this declaration.  In this example, we are cosmetically declaring that we are returning the .NET object type PSCustomObject (the PowerShell custom object).
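Put together, a bare-bones shell of the advanced function looks something like this (a minimal sketch; the real Add-NXAPIVlan carries the parameters and logic we’ll walk through below):

function Add-NXAPIVlan {
    [CmdletBinding()]
    [OutputType('PSCustomObject')]
    param(
        # parameter declarations go here (covered in the next section)
    )
    # function body goes here
}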

Working with Parameters

Moving on, we see the param() section of the code.  I won’t list all of the parameters, but here are some of the better examples of the advanced techniques within:

param(

[parameter(Mandatory = $true, ValueFromPipeline = $true)]
[ValidateNotNullOrEmpty()]
[string]$Switch,

[parameter(Mandatory = $true)]
[ValidateNotNullOrEmpty()]
[ValidateRange(1, 4094)]
[int]$VLANID,

[parameter(Mandatory = $true)]
[ValidateNotNullOrEmpty()]
[ValidateLength(1, 32)]
[ValidateScript( {$_ -match "^[a-zA-Z0-9_]+$"})]
[string]$VLANName

)

Inside the param() section, you’ll see a list of multiple declared parameters for this function.  Each has been given specific validation attributes to be checked against.  Let’s look at the first parameter, Switch.

[parameter(Mandatory = $true, ValueFromPipeline = $true)]
[ValidateNotNullOrEmpty()]
[string]$Switch,

For this specific parameter, we’ve added two conditions to the parameter itself in the form of Mandatory and ValueFromPipeline.  Mandatory is there to ensure that the parameter is always present when calling this function.  Without a value for it, PowerShell will prompt for one (or throw an error in non-interactive sessions), and the body of the function will never run.  As for ValueFromPipeline, this means we are declaring that a string object can be passed to this function via the PowerShell pipeline.  Here’s an example:

$switch = @("myswitch.domain.org","myswitch2.domain.org","myswitch3.domain.org")
$switch | Add-NXAPIVlan -VLANID 1001 -VLANName TestVLAN -Username admin -Password $password

Notice that I did not need to explicitly declare the Switch parameter.  The reason is ValueFromPipeline.  By using the pipeline, each string sent down it is bound to the Switch parameter automatically.

Lastly, we have the ValidateNotNullOrEmpty declaration.  This is a quick validation to make sure that the object being passed is not $null and actually has a value associated with it.  There’s no point in processing the function if the parameter has no value!

Later in the param() section, you’ll notice a few more validation declarations.  ValidateRange allows the function author to set a range within which the object’s value must fall.  In the case of this function, we are stating that the integer for VLANID must be between 1 and 4094.  Any attempt to provide a value outside of this range will result in the function returning an error.  The same goes for ValidateLength; however, this one is used to specify the minimum and maximum character length the parameter VLANName can have.  Lastly, there’s a ValidateScript declaration.  This declaration allows authors to provide their own validation script.  In this example, we are checking the characters in VLANName against an approved list of character values, specified as a regex.  Each character must be an upper-case letter (A-Z), a lower-case letter (a-z), a numeric digit (0-9), or an underscore.  All other characters are considered invalid to this function.
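The nice part is that all of this policing happens before a single line of the function body ever runs.  As a quick illustration, both of these hypothetical calls are rejected by the parameter binder itself (any other parameters the full function requires are omitted for brevity):

# Fails ValidateRange:  5000 falls outside 1-4094
Add-NXAPIVlan -Switch 'myswitch.domain.org' -VLANID 5000 -VLANName 'TestVLAN'

# Fails ValidateScript:  the hyphen isn't in the approved character set
Add-NXAPIVlan -Switch 'myswitch.domain.org' -VLANID 1001 -VLANName 'Test-VLAN'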

You might notice that there are some other parameters where I’ve specifically set the Mandatory declaration to $false.  This is because I want those parameters to be optional.  In the overall function, they exist for very specific purposes, whether verbose logging or optional functionality that I do not want executed by default.

Begin/Process/End

Lastly, you may notice that there’s a particular form to the actual meat of the advanced function.  If you’ve worked with a Try/Catch/Finally error-handling block, you can get a rough idea of what Begin/Process/End is all about.  The Begin/Process/End structure is a requirement for working with arrays or multiple objects coming into the function.  The reasoning will become apparent further into the explanation.

A Begin block is used for a very specific purpose.  In the event that you are going to be handling multiple objects (for example, from the pipeline), this block of code is executed exactly once before the main body of code is processed.  As an example, I include a lot of EnableVerbose parameters on these functions.  In my Begin block, I’ll check to see if the parameter has been passed and set the VerbosePreference for the entire execution of the function.  Having that setting run in the Process block for every object being passed is a waste of execution time and resources.

A Process block is used to specify the code you want executed for every single object passed to the function.  Not much really needs to be explained about this section.  Your biggest hurdle might be determining what code should go in the Begin or End blocks instead of continually performing that operation on every object, especially if you plan on sending quite a few objects to this function.

Lastly, we have the End block.  Similar to the Begin block, you get a one-time run of the code contained within when the function is complete.  If I’m setting the VerbosePreference in the Begin block, then here I’m setting the value back to what it was.  Please note that if you break out of the function for any reason or hit a critical stop somewhere in the code, the End block will not process.  This deviates from the Try/Catch/Finally block, where Finally is always processed.

Now, why do we use this structure?  Because you want to get multiple returns from your function!  If you do not use the Begin/Process/End structure, the function only returns information on the last object processed.  If you want success or fail criteria for all the objects you sent through the pipeline, you will be sorely disappointed when all you receive is the last object.
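To tie the whole pattern together, here’s a stripped-down sketch of the shape described above.  Only the Switch and EnableVerbose parameters are shown, and the real work is reduced to comments:

function Add-NXAPIVlan {
    [CmdletBinding()]
    [OutputType('PSCustomObject')]
    param(
        [parameter(Mandatory = $true, ValueFromPipeline = $true)]
        [ValidateNotNullOrEmpty()]
        [string]$Switch,

        [parameter(Mandatory = $false)]
        [switch]$EnableVerbose
    )

    begin {
        # Runs exactly once, before the first pipeline object arrives
        if ($EnableVerbose) {
            $oldVerbosePreference = $VerbosePreference
            $VerbosePreference = 'Continue'
        }
    }

    process {
        # Runs once per pipeline object; $Switch holds the current switch
        Write-Verbose "Adding VLAN on $Switch"
        # ... build the headers/body and POST to the switch here ...
        # Emitting an object here produces one return per switch
    }

    end {
        # Runs exactly once after the last object (skipped on a critical stop)
        if ($EnableVerbose) {
            $VerbosePreference = $oldVerbosePreference
        }
    }
}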

Conclusion

This was a fun project on multiple fronts.  I feel like I got a greater appreciation of what advanced functions within PowerShell are capable of.  I also feel I’ve grown better at identifying how to carve up my code for single versus per-object execution within an advanced function.  I can’t wait to learn more!


The Dichotomy of Mentoring

I can’t help but notice a certain amount of chatter on my social media timelines about “mentoring”.  In our industry, we tend to associate this with career advancement.  Someone with more experience (usually a “senior”) takes someone with less experience (a “junior”) and provides guidance and advice so that the pitfalls of personal experience do not become long-term roadblocks.  However, I’m starting to notice a disturbing trend in some of these discussions.  That trend is the idea that you need to be a “senior” to be able to transfer this wisdom down to a “junior”.  I’m sorry, but mentoring isn’t a one-way street.

I get that someone who earned a title with the term “senior” in it has likely accumulated quite a bit of experience.  I’m not trying to discredit the notion that a senior should pass down information to a junior.  I’m discrediting the notion that it’s the only direction that matters.  Look, every single one of us learns in our own unique way.  As someone who’s been told they do a decent job of mentoring others, I can tell you that it works both ways.  The amount of learning I’ve been able to do from those who call themselves “juniors” has been just as important as from those I consider my “seniors”.

The main point I’m trying to make here is that for you to really succeed in mentoring, you have to be willing to be mentored.  If you aren’t receptive to being mentored, regardless of your “status”, you are soon going to be left behind in this industry.  An industry, I’ll remind you, that evolves at a highly accelerated pace.  If you aren’t learning, from EVERYWHERE, in this industry, you’ve failed.  Harsh?  Yep.  The time for tough love is upon us.

Now that I’ve likely pissed off the “senior” crowd, let’s go back to some of the “juniors” out there.  I want to tell you something very important.  There are plenty of us “veterans” in this industry that need a good shake-up.  Keep asking questions and pointing out that maybe, just maybe, the way we veterans do things isn’t always the best way.  It’s going to continue to force EVERYONE to keep learning.  Sitting on laurels is an immediate off-ramp to irrelevancy.

Mentoring isn’t a single direction.  Like DevOps, mentoring is a feedback loop.  The more feedback you give and get, the better everyone involved in that loop becomes.  As far as I’m concerned, if you are involved in that loop, your relationships should be more classified as Any to Any.  You give advice to everyone and you get advice from everyone.  That’s how it should work.  The expectation that some great oracle on high is going to pass down wisdom and should be your only source is, well, bunk.

Last point, I promise.  George Bernard Shaw gave us the infamous quote, “He who can, does; he who cannot, teaches.”  Many people still believe in this quote.  Sorry to burst a bubble here, but the quote is bullshit.  If you believe that every action you take is a teachable moment, then by Shaw’s logic, your every action would dictate that you cannot do.  Every single one of us is a teacher.  Every single one of us, as it would happen, is also a student.  So, every single one of you:  keep teaching AND keep learning.


When “Culture” Really Isn’t Culture

As we approach the end of the calendar year, many of us in the information technology field use this time to reflect on the prior year and may even use that reflection period to consider whether it’s the right time for a change.  The marketing machine known as human resources is in full swing at many companies, ready to sell you on how working for them is the best thing since sliced bread.  However, I want you to be wary of misuses of terminology that many get caught up in.  These misuses end up selling you on something that either doesn’t exist or just isn’t up to the descriptions so elegantly laid out in Glassdoor or LinkedIn job postings.

Culture:  the set of shared attitudes, values, goals, and practices that characterizes an institution or organization

One of the main points that now seems to be prevalent throughout job descriptions (or company descriptions) is the term culture.  Now, what ends up happening is a twisting of this term to highlight certain perks of the company.  The last I checked, perks != (that’s DOES NOT EQUAL for some of our programming-illiterate friends) culture.  No doubt you’ll be bombarded by pictures of fancy break rooms, stocked full of all sorts of beverages and snacks (not to mention the term FREE).  You might even see pictures of a game room, complete with a ping pong table (and if you are lucky, an arcade machine or two).  Excellent!  Fantastic!  However, what this has to do with culture is beyond me.

The last I checked, most businesses aren’t in business to field professional ping pong teams or compete in e-sports.  So, why is it that we see, far too often, these sorts of things associated with culture?  Personally, I get the fact that we need avenues to blow off some steam from a hard bit of project work or in-the-trenches support work.  Some of these perks give a great impression of reward to the people putting in the hours to accomplish said business goals.  I’m just wondering why THIS tends to be the definition of an organization’s culture.  What happened to actually describing how human interactions are going to occur, from the intra-team level to the inter-team level?

Maybe this harkens back to our collective failure to ask really good questions during the interview process.  Personally, I’ve only had to participate in fewer than a handful of interviews.  I know most of my time was spent trying to show someone my technical acumen, as if that was the only thing that mattered to the individual across the table from me.  Too often, when it comes time for the interviewee to ask a question, blanks are drawn.

To arm yourself, especially to understand the culture of the organization you are trying to get into, maybe it’s time to start asking some hard questions that can’t be answered with human resources marketing material.  “How are mistakes handled within the organization?”  “How are teams typically structured (as in, do you have senior members who make all the decisions while juniors are expected to fall in rank and not question anything)?”  “How receptive is the organization to a diversity of opinion?”  “Is there a clear path of career growth within the organization?”  “Does the organization have any respect for personal time being used to better one’s self through education opportunities?”

To balance this out, I know I’ve also been on the opposite end of the spectrum, as the one asking the interviewee questions.  I try to keep an open mind about the person in question and try to bring up some of these topics.  Many times, the interviewee is surprised that I would ask any sort of question about how they like to be heard or how they advance through an organization.  I also know that, as the interviewer, I need to drop any sort of bias that would make me ask questions about things that don’t really matter to the role.  I get that we like to ask questions to see if someone is a “fit” for our teams, but at what point are we asking questions for the sake of finding a great teammate instead of asking questions to find a drinking buddy?

So, I think it’s on us to start challenging the interview process and asking the questions about culture that actually matter.  We aren’t ever going to find out how human interaction is expected to work if we don’t open our mouths.  I know the Glassdoor pictures of the new offices and fancy drink machines are nice, but in the end, you want to fit in and you want to do it on your terms.  You need to take the initiative and ask the right questions before it becomes too late.  I bet those free drinks and food are going to taste a whole lot better when you find the right culture to exist (and thrive) in, instead of immediately realizing you made a mistake and looking for a new position before you’ve even hit triple digits in days employed.  Do yourself a favor:  ask good, hard culture questions during your interview process.
