Cisco UCS Director and the PowerShell Agent – Part 1

If you’ve installed Cisco UCS Director before, you know there is a small component that can be installed on a Windows device to allow remote execution of PowerShell scripts.  These scripts can be harnessed to add automation and orchestration functionality to UCS Director in areas where native integration doesn’t already exist.  This series of posts explains the use cases for the PowerShell Agent, what the agent actually does, and how you can use PowerShell for some advanced techniques within the Cisco UCS Director platform.

Installation and Configuration

I will not go into the full details of the base installation and configuration of the agent on a Windows server in your UCS Director environment.  I will, however, make sure to mention these key details:

  1. The default communication port between the Cisco UCS Director and PowerShell Agent services is 43981/tcp.
  2. An access key is created by the main UCS Director services and is used for secure communication on that port. Make sure to copy that key from the main UCS Director interface and enter it into the PowerShell Agent configuration on the Windows server where the Agent was installed.
  3. Consider the third-party modules you plan on installing and check their support matrices to see the maximum versions of the .NET Framework and PowerShell you can install on the Agent. I would recommend going as high as you can to start.  (Special note: at the time of writing this post, PowerShell 6.0 was still in alpha and I would not consider that version ready for production use, especially due to the many issues I’ve personally had with opening remote PowerShell sessions with it.)
  4. You will likely have to configure WinRM on any device you plan to have the agent server communicate with. By default, a WinRM client trusts no hosts at all (the TrustedHosts list is empty), so you will need to add your targets to it. A minimal sketch of checking and amending that list follows this list.
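
As promised above, here is a minimal sketch of working with TrustedHosts from an elevated PowerShell prompt on the agent server. The host name is a placeholder, and your security team may prefer something tighter than a broad entry:

# Ensure the WinRM service itself is configured and listening
winrm quickconfig

# See what, if anything, is currently trusted (empty by default)
Get-Item WSMan:\localhost\Client\TrustedHosts

# Append a specific target host rather than trusting everything with "*"
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "*target IP/FQDN*" -Concatenate -Force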

The Flow of Communication

One of the misconceptions about the PowerShell Agent (or at least what I thought for the longest time) was that, by default, it processes all PowerShell requests locally.  That turns out to be quite incorrect.  The PowerShell Agent initiates a remote PowerShell session (PSSession) to whatever device you point it at, provided that device is covered by your WinRM configuration and reachable on the default WinRM port.
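
A quick way to see this for yourself, assuming you can log onto the agent server, is to verify WinRM connectivity the same way the agent will. The target name below is a placeholder:

# Confirm the target is listening for WinRM traffic (default HTTP port 5985/tcp)
Test-WSMan -ComputerName *target IP/FQDN*

# Open the same style of remote session the PowerShell Agent creates
$session = New-PSSession -ComputerName *target IP/FQDN* -Credential (Get-Credential)
Invoke-Command -Session $session -ScriptBlock { $env:COMPUTERNAME }
Remove-PSSession $session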

Breaking through that wall was key to understanding how to troubleshoot some potential problems with the PowerShell Agent and certain PowerShell cmdlets.  You can read more about that in another blog post I wrote here:  Using Invoke-WebRequest with the Cisco PowerShell Agent

This comes in handy, especially when certain PowerShell modules will not function correctly unless they run on a host with specific Windows features installed.  During many of the test sessions I’ve opened with various devices in my lab, I came across a unique case in which the Windows Active Directory cmdlets could not be executed unless run from a device that had a specific Active Directory role installed on it.

Testing Communication

You can easily test communication with a PowerShell Agent in the UCS Director interface.  As an administrator of UCS Director, navigate to the following location:  Administration > Virtual Accounts.  Select the PowerShell Agents tab.  You should see the PowerShell Agent that you registered with your Director instance.  If you select it, two new task options appear in the bar above your list.  Select Test Connection if you need to test simple network communication to the device.  Select Execute Command if you would like to initiate communication with the PowerShell Agent service and get information back from PowerShell to UCS Director.

[Screenshot: the Execute Command dialog for the PowerShell Agent]

Above is the screen you are presented with.  You must provide these five fields whenever communicating with the PowerShell Agent.  Most of them are self-explanatory, so I won’t go into basic details, but I will state that there are some nuances to the Commands/Script field.  Certain character sequences, like “/” and “$”, may cause problems.  The reason is that the PowerShell Agent is essentially running an Invoke-Command cmdlet against the remote PSSession, and there are some nuances to sending special characters to that cmdlet that must be taken into consideration, or your syntax is going to be off when the command is executed remotely.
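
To illustrate the kind of nuance I mean, here is a simplified, hypothetical example of the escaping problem (not the agent’s actual internal code), assuming $session is an open PSSession to the target: if a command is built as a plain string before being handed to Invoke-Command, an unescaped “$” gets expanded before the command ever reaches the remote session.

# Expanded locally: the agent host's own COMPUTERNAME gets baked into the string
$command = "Write-Output $env:COMPUTERNAME"

# Escaped with a backtick, the variable survives and is evaluated on the remote host
$commandEscaped = "Write-Output `$env:COMPUTERNAME"

Invoke-Command -Session $session -ScriptBlock ([ScriptBlock]::Create($commandEscaped))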

What I’m going to do here is show a quick example of how information is returned from the PowerShell Agent.  In my lab, I have a device called UCSD-PowerShell against which I’m going to run the simple cmdlet Get-Host.  This screen shows what I filled out:

[Screenshot: the Execute Command form filled out for the UCSD-PowerShell target]

After clicking on the Execute button, I am told that my command completed successfully.

[Screenshot: the “command completed successfully” status message]

If I scroll down, I can see some formatted output of the response:

[Screenshot: formatted output of the Get-Host response]

This appears to be the object information I would get from a Get-Host, with some of the PowerShell session information sprinkled in there.
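
For comparison, this is roughly what the agent is doing on my behalf behind the scenes (a sketch, with a placeholder credential prompt); the deserialized object that comes back carries the same host data you see formatted above:

# Prompt for credentials and run Get-Host on the remote device
$cred = Get-Credential
$result = Invoke-Command -ComputerName UCSD-PowerShell -Credential $cred -ScriptBlock { Get-Host }

# The returned object still carries properties like Name, Version, and CurrentCulture
$result | Format-List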

To Be Continued

In the next post in this series, we will use these building blocks to create a small UCS Director workflow that uses the Execute PowerShell Command task, and we will look at what we can do with the response so that the returned PowerShell object information can be used in other UCS Director workflow tasks.


Using Invoke-WebRequest with the Cisco PowerShell Agent

I’ve had a major initiative at the Day Job to overhaul some of our existing Cisco UCS Director workflows, to squeeze out more efficiency and reduce the potential critical stopping points in them as we continue to evolve our technical processes.  While Cisco UCS Director has some great visual tools for mapping out workflows, the tasks within them are often very rigid, and the way task flow happens tends to force a single-task focus.  This doesn’t bode well when trying to reduce the overall execution time of a workflow, as you end up working in a pattern in which tasks can’t be executed independently of each other.  As an example, to execute the fourth task, which has no dependency upon tasks one through three, you have to wait for the execution of tasks one through three.  In my opinion, as long as I understand how my workflow is going to function, this is highly inefficient.

To try to curb this issue, I decided to research the idea of parallel processing with UCS Director.  There are some examples using the tool’s native JavaScript implementation (UCSD Parallel Workflow Execution Example), but I was much more interested in flexing my muscle with PowerShell, as that’s my language of choice.  Cisco ships a tool for remote execution of PowerShell code in the form of an agent that can be installed on a Windows device and added to the configuration of Cisco UCS Director.  From there, you can specify the account information and the script/code block you wish to execute.  A response is sent back to UCS Director, and you can use some XML parsing techniques on it, which can be very handy if you need variables back for other parts of your workflows.

To make this happen, I realized that we would need to call some of Cisco UCS Director’s REST APIs to be able to launch workflows from within my script.  In PowerShell, this usually means pulling out the Invoke-WebRequest cmdlet.  In the case of this cmdlet and Cisco UCS Director, you will typically need three things to make calls to the REST APIs:  the URI, the header (including your X-Cloupia-Request-Key name/value pair in the form of a hashtable), and the method type.
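
As a minimal sketch of that call, with placeholders for the address, key, and API operation (check the UCS Director REST API documentation for the exact URI format of the operation you need):

# Header hashtable carrying the REST access key
$headers = @{ "X-Cloupia-Request-Key" = "*your UCS Director REST access key*" }

# URI pointing at the UCS Director REST endpoint and the operation to run
$uri = "https://*UCSD IP/FQDN*/app/api/rest?formatType=json&opName=*operation name*"

$response = Invoke-WebRequest -Uri $uri -Headers $headers -Method Get
$response.Content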

Unfortunately, this didn’t work as easily as advertised.  When I started to trace exactly what the Cisco PowerShell Agent does, I found that the service really does nothing more than create a remote PowerShell session to the target you specify.  In my case, I tend to point the PSA at itself, as I have my modules and scripts easily accessible from that device.  When trying to execute an Invoke-WebRequest cmdlet through this created session, I received the following error:

[Screenshot: the error returned when running Invoke-WebRequest through the agent]

Looking through the Cisco PowerShell Agent log file, we find that an error 3 is a pretty generic error; any sort of PowerShell error will trigger the PSA to report it.  The log file includes some of the error message, so I was able to find that the specific error was “Object reference not set to an instance of an object.”  Anyone who’s done enough PowerShell authoring knows this response well: typically, one of your arguments is either null or of the wrong type.  So, I decided to try a couple of troubleshooting techniques to see what the issue was with Invoke-WebRequest.  I first tried to nest the call in a try..catch sequence.  This way, I could potentially get a look at the error in question.  Unfortunately, the same problem occurred, and I was not presented with any sort of error.
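
For reference, the try..catch attempt looked roughly like this (same placeholders as before); in a normal session the catch block would surface the underlying exception text, but through the PSA nothing came back:

try {
    $headers = @{ "X-Cloupia-Request-Key" = "*your UCS Director REST access key*" }
    $response = Invoke-WebRequest -Uri "https://*UCSD IP/FQDN*/app/api/rest?formatType=json&opName=*operation name*" -Headers $headers -Method Get
    Write-Output $response.Content
}
catch {
    # Never reached through the PSA in my testing -- no error object came back
    Write-Output $_.Exception.Message
}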

I felt this was very odd, as it meant that something was happening at the level of the remote PowerShell session the Cisco PSA creates.  Armed with the idea that the error message might mean something was up with my arguments to the cmdlet, I decided to look into some of the Invoke-WebRequest parameters.  I found the -UseBasicParsing parameter and decided to give that a whirl.  As you can see by the results below, it worked.

[Screenshot: successful Invoke-WebRequest results when using -UseBasicParsing]

UseBasicParsing isn’t a required parameter of the cmdlet, though.  Next, I wanted to make sure I could actually catch an error message within this remote PowerShell session, so I found another parameter in -Proxy and fed a dummy domain name and port to it.  This was the response.

[Screenshot: the error object returned when a dummy -Proxy value is supplied]

Now that’s more like it!  That’s the type of error object I was expecting.  For this test, I only supplied the Proxy parameter and did not apply the UseBasicParsing parameter.  At this point, I was really starting to think there was something going on between the Cisco PSA and my cmdlet.  To rule out any sort of remote PowerShell session issue, I wrote a quick script (below) that created a new remote PowerShell session (to the same server) and launched it through the Cisco PSA.  Again, I did not use the UseBasicParsing parameter on the cmdlet.

Script:


# Build a credential object for the remote session (placeholders left as-is)
$username = "*username*"
$the_password = ConvertTo-SecureString -String "*password*" -AsPlainText -Force
$the_cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $username, $the_password

# Open a nested remote session back to the PSA host and run the web request inside it
$the_session = New-PSSession -ComputerName *PSA IP/FQDN* -Credential $the_cred
$response = Invoke-Command -Session $the_session -ScriptBlock { Invoke-WebRequest -Uri *HTTP/HTTPS URI* }

# Tear the session down and hand the result back to UCS Director
$the_session | Disconnect-PSSession | Remove-PSSession
return $response

Result:

 

[Screenshot: the response returned by the nested remote session script]

From the response, I got what I needed: a Content property, along with the properties of the remote PSSession that was created to get this response.

I have passed this information along to the Cisco UCS Director folks and (long story) once I get my support contract renewed, I may run this through Cisco TAC to see if it can be logged as a potential defect (due to my belief that Invoke-WebRequest isn’t being handled correctly by the PSA).

So, a forewarning for those trying to use PowerShell and Invoke-WebRequest (and, to some degree, Invoke-RestMethod): be wary of some weird issues with the session, and remember to potentially use the UseBasicParsing parameter on the cmdlet OR resort to nesting remote PowerShell sessions.  In my case, I stuck with the remote PowerShell session nesting.  I’ll provide an update when/if I get this case to TAC.


Here’s to 2016 (A year in the life…)

While 2015 was a year of many great accomplishments, I ended the year on a personal downer.  While most people who get invited as a delegate to the Tech Field Day events consider it a great accomplishment, I went away from my first experience not exactly feeling a great deal of that accomplishment.  Anyone close to me will recognize that when I do this, I’m suffering greatly from imposter syndrome.  We all suffer from it from time to time, but in my case, I felt weighed down by it.  2015 was the year in which I decided to put myself out there and try to establish myself in the big, big world of technical communities, yet I felt even more weighed down by the prospect of what I had done.  I realized that while I wanted to be known, I was struggling to know what I would be known for.  Essentially, I had decided to stand in front of everyone, but I forgot to write a speech.

So, 2016 became all about establishing my voice.  When I returned from my first Tech Field Day event, I felt very overwhelmed by what I had seen and heard while out there.  Those who attended that event (Virtualization Field Day 6) will probably remember me as the guy who didn’t say a single thing on camera.  This was completely out of fear…not of the camera, mind you, but of just using my voice.  It also didn’t help that I was extremely intimidated by who I was around at that event.  There are many things that can knock you out of your best mindset, and feeling intimidated by the brain trust in the room is one of them.  I basically spent most of the event in the shadows, not saying much, but not doing much either.  I could author an excuse about how I was just feeling things out and that I have the type of personality that takes a while to come out in a more intimate setting; however, it would be just that…an excuse.  I’ve given multiple presentations in rooms full of strangers and could drive authority into my message and hopefully provide information to those seeking it.  Not in this case…

This leads me into 2016 (again).  Personally, I thought I had given such a bad impression at Virtualization Field Day 6 that there was no way Stephen and his crew would ever invite me back.  I was greatly surprised to be invited to Tech Field Day 11 in mid-June.  Not only did I break my streak of not asking things on camera, but I had a lot of great personal conversations with individuals at this event.  I continued with this momentum and was invited to two more Field Day events (Tech Field Day Extra @ VMworld 2016 and Tech Field Day 12 in California).  Each time, I felt myself growing and gaining more confidence, especially in my area of expertise.

Why was this?  Why did I clam up so badly in 2015 and seem to finally figure it out in 2016?  Through 2016, I was presented with some new and interesting opportunities that I feel contributed.  I found some new angles to become a student of and started using those topics as my way to contribute back to the technical communities I’ve enjoyed for so long.

I’ve made quite a few great friends and contacts through VMUGs, especially those close to my area.  I make an effort to always ask the nearby VMUG leaders whether they need someone to fill a time slot for their UserCons.  This year, I had the pleasure of speaking at the St. Louis VMUG UserCon (mid-March 2016) and the Minneapolis VMUG UserCon (early June 2016).  In each case, I chose to talk about DevOps and what it might mean to many of the technical individuals at the UserCons.  While the St. Louis presentation was short, I did end up talking with a good number of people afterward who wanted more information about DevOps.  I parlayed this into a much longer presentation for the Minneapolis VMUG UserCon.  It was a fantastic opportunity and, from what I gathered, a highly rated session for the UserCon.  There is nothing that gives you more confidence about your material than excellent feedback (positive or negative)!

By the time major conference season hit, at least for me, I was ready to start tackling some of my insecurity through conversations with many good, yet random, people at VMworld US 2016.  Much like my Cisco Live 2015 attendance, I was not really expecting to attend this conference.  Personally, VMware-based technology was no longer my bread and butter, as all I used it for these days was simple virtualization.  My employer was in the process of making some major decisions about the technology we were going to have in our datacenters moving forward, and VMware was just no longer going to be a crown jewel in our infrastructure.  However, I had won a pass to attend through the vExpert program as an official blogger, so I made sure to attend.

I made sure to interface with just about everyone I could, especially under the premise (see, this is how you use this word, for the record) that this would likely be the last VMworld I would be attending, barring any sort of major employment change.  I reached out to a good many people I knew were attending and made sure to give them some time and offer some thoughts on a bevy of topics.  However, one interaction stood out amongst the rest.  It was really a series of interactions, but it was how it came to be that’s of interest.  I’ve known Mr. Atwell for a while now.  In most cases, it was just a Twitter thing or a VMware communities site thing in the PowerCLI forum.  We had known of each other, and just by happenstance we ended up on the same panel about automation at the Opening Acts sessions in 2015 (which, I’ll mention, I repeated in 2016, FWIW).  I had taken some friendly shots at the man over the last 12-18 months, but he was always one who was willing to answer a question, regardless of when and where I asked it.  I remember he was lamenting multiple papers that were rejected during VMworld 2016’s call for papers.  Initially, as a joke, I tweeted at him about bringing a bottle of bourbon to VMworld to drown his sorrows over the rejection.  What transpired over the next month or two leading up to the conference was more seriousness about actually doing this, and me finding a bottle of something he could not obtain but that felt like it would be good to drink at the event.

Now, with the bourbon selected and the timeframe set, all we needed to do was find the time at the conference to make it happen.  I found some of the guys from my local area, and we were invited up to a suite to enjoy some bourbon.  I’m going to be honest with you here: a bunch of nerds gathering around a bottle of bourbon doesn’t seem like a very good story to the masses, but what transpired in that suite over a couple of hours was a ton of laughs, a whole lot of ribbing, and a deepening of bonds within my own personal community.  To be frank, it was one of the best times I’ve had at a major technical conference in a very long time.  I can’t thank Josh enough for letting me and a few of my local pals invade some space and enjoy some drink.

Now, why was this important?  It wasn’t this that made my conference; it was what happened later in the event.  I wanted to have a less boozy discussion about issues I was having with my day job and with trying to be a mentor and a technical leader to those in my day-to-day circle.  Josh was a trooper (it was an early breakfast, in Vegas, mind you) and I went away from that conversation better prepared for the challenges that faced me back in the office.  It also laid the groundwork for investment in DevOps (as the cultural movement, not the technical one).

Since that conversation, I’ve become a student of some of the less tangible things in IT: business interaction, culture, team interactions.  I even went so far as to agree to upgrade my vBrownBag sticker on my laptop from just a fan of the group to a presenter.  As it’s one of the freshest things in my mind from this last year, I can tell you that I had no idea what I was originally getting myself into, but I thank the vBrownBag crew for letting me do it.  Somehow, I put on a (nearly) hour-long presentation on something that contained little to no technical information.  Not only that, I got a ton of positive feedback from the session.

Why this entire story?  Well, when I started this year, I felt unsure of myself.  I struggled for a very long time to figure out that I just needed to rein in how to use my voice.  I ended the year putting all the puzzle pieces together and feeling the most confident I’ve felt in this industry in a very long time.  As we close 2016, I just want to thank everyone I’ve had the pleasure of interacting with over this last calendar year.  Each one of you has had a part to play in my successes and maturity during 2016.  I can’t wait to see what 2017 has in store, and I hope I’m able to return the favor that so many of you have provided for me.

Salut!

 


Docker?  On Windows? Yep!

There’s a major timeline distinction forming between those of us in the IT industry: those who grew up in the industry with Microsoft as the evil empire, and those coming into IT who see Microsoft as a company embracing the very thing it said it never would and, dare I say, championing the use of open source technologies.  I know I fall into the camp of having dealt with Microsoft at its very peak of being a closed company.  Back in the days of the domination of Win32-based applications, I never could have imagined what has transpired with Microsoft in the past few years (and specifically in the last 12 months).

This brings me to a recent event I attended (Tech Field Day 12; read more about the Field Day series of events at Tech Field Day).  Docker was presenting, and they focused a good section of their presentation on their integration of Docker into the Windows Server 2016 operating system.  This isn’t a pseudo version of Docker embedded within a virtual machine running on Windows Server 2016; this is a Windows-native application!  Along with it comes full support for isolating parts of Windows we thought nearly impossible to containerize, like the Windows registry.

When you ponder the common Windows application of (what should be) a bygone era, you will likely think of an application that seems bloated and has a GUI-driven look and feel.  Over the last few years, Windows applications have been going through a metamorphosis of sorts.  Similar to some of their Linux counterparts, Windows applications are starting to have their monolithic parts broken off into smaller microservices.  This is starting to allow for the very same scale capabilities we’ve been hearing about from Docker since containerization came out of the woodwork.

However, to pull this off, one would think you’d have to break away from the GUI that seems to dominate what most people consider a “Windows application”.  Docker on Windows will work with two versions of Windows Server 2016:  Windows Server Core and Nano Server.  Windows Server Core is the full version of Windows Server that is essentially missing the GUI.  You can still use some sort of GUI mechanism with a Server Core installation (whether that’s VNC or Remote Desktop Protocol [RDP]), which gives you a way to interact with the applications installed on that instance.  You could easily install your tried-and-true SQL Server instance onto it and manage it just as it’s been managed for many years (RDP into the instance, use local MMC components for application management).

However, I believe the true magic with Windows and containerization is going to come in the form of Nano Server.  If you haven’t been paying attention to Microsoft technologies in the last year or two, Nano Server is a heavily refactored version of Windows Server.  It has a very small footprint and can only be managed remotely (the installation is stripped of GUI capabilities), and only the components necessary to the installation are installed.  When focusing on applications in this realm, this is where .NET Core (along with PowerShell Core for remote management) comes into the equation.  By writing applications to take advantage of these new layers, you can start to see Microsoft’s vision of new application development mirror what Docker is trying to provide on other operating systems.  The only unfortunate side effect is that Microsoft containers can only be run on Windows Server 2016, so portability across Docker Engines, regardless of operating system, is going to be impossible for the moment.
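
For a quick taste of what this looks like on a Windows Server 2016 host with the containers feature and the Docker engine installed (the image names below are the Microsoft-published base OS images as of this writing; treat this as a sketch rather than an install guide):

# Pull the two Microsoft-provided base images discussed above
docker pull microsoft/windowsservercore
docker pull microsoft/nanoserver

# Start an interactive PowerShell prompt inside a Windows Server Core container
docker run -it microsoft/windowsservercore powershell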

Again, if you haven’t been paying attention to Microsoft in the last year or two, this may come as a shock.  I’m extremely excited to see how this plays out, especially with this partnership with Docker.  Including Docker in the core of Windows Server 2016 is something I never expected, but then again, maybe I shouldn’t be applying any legacy thinking to Microsoft these days, especially in regard to cloud technologies.


Why REST APIs are Not Enough

Automation is a very hot topic these days.  Actually, that’s probably one of the understatements of the current state of IT.  Everywhere you turn, you get some sort of message about how important automation is.  Unfortunately, due to the sad state of IT up until “right now”, very few people have been able to devote the cycles necessary to understand automation and the processes automation is supposed to represent.

Back at VMworld US 2016, I was privileged to be a panelist for an Opening Acts panel on automation and DevOps (although we didn’t even touch DevOps, much to my dismay).  One of the opening questions was about barriers to automation, and I piped up about the fact that many Operations folks are just not versed in programming/scripting skills.  I was quickly drowned out by others bringing up that process was the biggest barrier to automation within existing IT shops.

I’m going to wholeheartedly disagree with some of my fellow panelists.  Even in my current day job, many of our Operations personnel have their processes defined, per specific industry certifications.  Documentation about these processes is constantly being updated and kept relatively current.  What my Operations team lacks is the programming knowledge to interface with all these disparate systems.

Internally, we’ve specifically targeted initiatives to teach PowerShell (both internally and externally) to our Operations personnel.  We’ve identified that many of our systems come with PowerShell modules, making it easy to create multifaceted scripts that touch many systems within a single script or line of code.  My goal is to get my Operations team up to speed on what I’ve personally done with PowerShell and its integration into our automation/orchestration system, Cisco UCS Director.  Unfortunately, they face a steep learning curve with some vendors in the infrastructure space.

Why is this?  It comes down to some companies feeling that just having a RESTful API is “good enough” for the integrators out there.  For administrators who are still learning the ways of programming, a RESTful API call can look a little daunting, considering some of the languages you have to wrap that request in.

I’m going to go back to a presentation I sat through from Zerto back at Tech Field Day 11.  The presenter had many lines of PowerShell code up on the screen (somewhere in the 300+ line category).  I asked whether Zerto had considered wrapping all those Invoke-WebRequest and Invoke-RestMethod calls into their own PowerShell cmdlets, and I was met with a response that seemed to indicate that maybe they hadn’t considered it.

It’s going to feel like I’m picking on Zerto here, but when you dig into their architecture and what they were specifically trying to show us in that demonstration, nearly all the endpoints they were touching had PowerShell modules available, so all the calls could have been integrated into a single script.  Microsoft Azure has many PowerShell modules for accessing subscription information and provisioning virtual machines; VMware has its PowerCLI modules that could be leveraged for the on-premises virtual machines Zerto was trying to replicate out to public cloud resources; AWS even has a set of official and community-created modules for accessing EC2 instances.

The point being that much of the system administration community is learning how to automate their environments in the form of very human-readable cmdlets within PowerShell.  If you, as a company working on enabling APIs for your user base, haven’t considered wrapping them up in a much easier format, maybe you should.  That community is not, and will likely never be, full-fledged integrators.  It’s time to start making their lives a little easier by creating better tools that wrap the RESTful APIs in a more system administrator/beginner-scripter-friendly format.  I highly suggest these companies do so with PowerShell, especially considering its now open source nature.
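
To illustrate, here is a purely hypothetical example (the function name, endpoint, and header are all made up) of the kind of wrapper I mean: the raw REST plumbing tucked behind a discoverable, readable cmdlet-style function.

function Get-AcmeReplicationStatus {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory)][string]$Server,
        [Parameter(Mandatory)][string]$ApiKey
    )
    # All of the raw REST plumbing lives in here, out of sight of the operator
    $headers = @{ "x-api-key" = $ApiKey }
    $uri = "https://$Server/api/v1/replication/status"
    Invoke-RestMethod -Uri $uri -Headers $headers -Method Get
}

# Usage: one readable line instead of hand-built headers and URIs
Get-AcmeReplicationStatus -Server "zvm.example.com" -ApiKey "*key*"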


Ready…or not.

I’m going to preface this blog post with the fact that I’ve only completed the 1st day of the Microsoft Ignite conference (2016 edition), so there might actually be more substance to some of the things that I was made aware of through refreshed chats and multiple large keynotes.  I will also say to please note that

I had the privilege of attending VMworld 2016 via the vExpert blogger pass.  Now, before I go and seemingly eviscerate VMware, I will say that I still have a lot of faith in their product portfolio, though that really comes from the fact that I know many large enterprise entities depend upon VMware-based products to run their datacenters.  I have zero problem with this premise (note the correct usage of the term).  What my beef with VMware is about is their flip-flop nature on all things cloud.

In 2015, we were treated to a VMworld theme/slogan of “Ready For Any”.  It only took a single calendar year for that message to seemingly be tossed out the window and replaced with one that felt disjointed and like a major retreat from the idea of being ready for anything.  Those of us who think in a more cloudy nature were treated in 2016 to a batch of solutions that felt more like nervous toe dips into the pool of cloud than the prior year’s message of being immersed.

Contrast that with this year’s Microsoft Ignite.  As my employer has only recently started partnering with Microsoft on some major initiatives, I was not privy enough to attend Ignite in 2015.  However, during some of the lead-ups and into the first keynote of the day, it was easy to see that Microsoft has truly established a very good path to the cloud, both for those using many of their applications and for those still seeking forms of IaaS.  Azure is woven through just about every fiber of Microsoft’s being right now, and that’s awfully exciting to see.

Also, they’ve managed to do something I never thought a big public cloud player would do: they are working at taking a scaled-out architecture and reducing it down to smaller bits for the rest of us to run in our datacenters.  This isn’t a simple bolt-on orchestration and/or user interface tool to manipulate IaaS.  I will admit that I’m awfully giddy about Azure Stack and how it’s going to be used in the service provider space.  We may finally have a tool at our disposal for truly building hybrid clouds within our own datacenters.

My take

I went back to a Twitter conversation I was having with Tim Carr (@timmycarr) shortly after VMworld 2016 and realized that I had a near-perfect tweet describing my issue with VMware and how best to portray it.

I have thoroughly enjoyed transforming various datacenters with VMware-based products.  I will still remember my first vMotion and Storage vMotion fondly.  However, the harsh reality is that I’m not using any of their product portfolio beyond ESXi.  VSAN and NSX, to me, are just another way for VMware to extend a legacy mindset in large enterprises that haven’t really worked toward making the application the king of their datacenters.  They still drive their datacenters infrastructure-first and will continue to do so until the economics show a true financial reason for refactoring or rewriting that major application they have.

I really wanted VMware to continue pushing out toward the cloud.  I really wanted them to figure out containers and microservices.  Instead, I’m left with a wide range of expectations that, as of right now, feel like they will never be met.  I realize they are just my expectations, but after what I’ve now seen from other angles (mainly Microsoft’s), I can’t help but feel that VMware’s time as an innovator is waning.  They feel and seem like they have the anchor of the term “legacy” attached to them now.  It’s a shame, really…a damn shame.


Is DevOps a Load of BS?

So, I tuned in at nearly the right moment to hear an extra snippet of discussion at Cloud Field Day 1 (CFD1), and it was a topic that gets my brain going: whether or not DevOps is bullshit.  I’ve given this quite a bit of thought and come to my own conclusion that there’s a very good chance DevOps is pure BS.

I’m going to point to a blog post by Tom Hollingsworth (@NetworkingNerd) that gets into specifics on the Ops side (specifically the networking angle).  What I’ve found very interesting in most of my discussions with people about DevOps is that you rarely hear from the Ops side of the equation.  That makes some sense, but the Ops side is awfully important to making DevOps, well, DevOps.  Continuous integration and continuous deployment NEED a static environment in which to function.  Without that highly static environment, you aren’t really changing much when it comes to running production applications in your environment.

The more I pondered why DevOps is so hard to grasp from an operational perspective, the more I started to realize that in most highly successful DevOps-enabled organizations, there is a trend (not an absolute) of having tackled the operational problems with greenfield environments.  Not every component may have been completely greenfield, but the point remains that new technologies were brought in (from an Operations perspective) to address the issue.  If you think about it, this is what the term “shadow IT” was all about: they just greenfielded the application into an area in which the infrastructure was statically presented.  They removed existing operational problems from their equation and got on with building and deploying new applications in a more rapid fashion.

Why is that?  I think we all know the answer.  We (those of us in operational roles) are expected to be everything to everyone with our subject matter.  Whether it’s network, storage, or virtualization, we have to appease every person in the business and their unique needs.  This makes it very hard to start transforming operations internally.  How is one supposed to automate their environment when they are being presented with 25 different business use cases to automate in a single silo (yes, I used the word silo, but let’s be honest, silos are still everywhere)?  We have tenants other than developers that we need to keep happy in the business.

I’ve read through “The Phoenix Project” multiple times, and this is my main problem with it: it glosses over this very detail.  You never really get to understand how they kept the rest of their consumer base pleased with the levels of service needed to run the business.  The book only comes from the perspective of making the developers the de facto tenant, and to hell with everyone else.  I can see now why so many in Ops roles have no clearly defined methods for making this work: they aren’t likely to be able to focus solely on the applications internal developers are pushing out.  There is a delicate balancing act that goes on within an IT organization, and only a finite number of hours in which to accomplish tasks.

I’ll end this by stating that I’m a huge fan of what the concept of DevOps can do for an organization.  At its most simplistic, it’s trying to do what we’ve been saying for years: get the applications and the operators closer together and make them work together to eliminate the friction between the two groups.  Unfortunately, there’s been too much focus on technology and not enough on the people and process side of the equation.  Many organizations could really use a business analyst to come in and get some of these people and process issues fixed before proceeding down a path in which all you’ve done is amputate a broken arm rather than giving it the care it actually deserves.
