Cisco UCS Director and the PowerShell Agent – Part 3

In this blog post, we will discuss how to use Cisco UCS Director macros (also known as variables) from either workflow inputs or task outputs.  We will also show how these macros can be passed as arguments to our PowerShell scripts through the PowerShell Agent.

UCS Director Macros

Cisco UCS Director uses a variable system that lets the inputs or outputs of one task be used in subsequent tasks.  The Cisco documentation calls these macro variables, or “macros” for short.  Not only can these macros come from workflow tasks, but there is also a slew of system macros available.  Within UCS Director orchestration, you can use these macros not only for workflow inputs and task outputs, but also for many virtual machine-level annotations.

When you create a workflow, you can define the inputs that you wish to either be entered by the person running the workflow or be defined as admin-level inputs that require no manual entry.  You can access this functionality by logging in to UCS Director as an administrator and navigating to Policies > Orchestration.  From the Workflows tab, you can click on the +Add Workflow button to create a brand-new workflow.

screen-shot-2017-02-20-at-2-54-12-pm

 

Above is the first screen given to you.  From here, you can set some of the workflow settings, like name, description, and context.  You can also select some default behaviors of the workflow, like whether you want to set default email notifications to the initiating user of the workflow.  For the sake of this example, we’ll just fill out the bare basics (Workflow Name, Folder to place the Workflow).  Keep in mind that a workflow name CANNOT be duplicated, regardless of folder placement.  You will need unique names for all workflows!  Click on Next to advance.

screen-shot-2017-02-20-at-2-58-00-pm

This next screen is where the workflow input magic happens.  By clicking on the + button below the “Associate to Activity” section, you will begin the process of adding a new workflow input to the workflow.

screen-shot-2017-02-20-at-2-59-15-pm

At a bare minimum, the only things UCS Director requires to enable an input are a label (extremely important for later!) and an input type.  The input type comes in handy when dealing with task inputs that require the value to be in a specific, preformatted type.  We will show examples of this later.  For this post, we will just show creating a Generic Text Input type.

I put in a label of Test and clicked on the Select button.  From there, a listing of all input types is available to be searched through.  In the upper right hand corner of that screen, enter in “generic” and it will filter the listing and look like this:

screen-shot-2017-02-20-at-3-01-14-pm

Clicking on the checkbox for Generic Text Input will highlight the option.  If we click on the Select button, we will see that the workflow input should look like this:

screen-shot-2017-02-20-at-3-03-11-pm

You’ll notice there are a couple of other checkboxes now available.  The first is the Multiline/MultiValue Input checkbox.  This option allows you to comma-separate multiple inputs and can be extremely useful when processing multiple values with a task that accepts a multivalue input.  Otherwise, you can process the list in a Start…End loop in the UCS Director Workflow Designer.  We will get into looping in a workflow in a future blog post.

The last option that is available is the Admin Input checkbox.  By checking this box, the admin can either select the object from the UCS Director database or enter a hardcoded value for this variable that the workflow executor cannot change.

If neither of these checkboxes is selected, the person executing the workflow will be presented with a field in which they must enter their own text string.  Clicking on the Submit button adds this macro to the workflow.  You can then finish the workflow creation by clicking the Next button to advance to the User Outputs screen (this comes in handy once you start implementing the concept of parent/child workflows, covered in later blog posts).  Lastly, click on Submit to save the workflow.

Using the Macro in a Workflow Task

Now that we’ve registered a workflow input, we can pass it to any task that accepts an input of the Generic Text Input type.  Open the Workflow Designer on this new workflow we’ve created.  Let’s drag a test task over to show this.  Since I like the Execute PowerShell Command task, I’ve dragged that over, begun filling out the task, and advanced the screens to the User Input Mapping section.

screen-shot-2017-02-20-at-3-11-13-pm

In this example, we can see that the PowerShell Agent field takes an input type of Generic Text Input.  You can click on the Map to User Input checkbox, and the User Input drop-down will list all Generic Text Input macros available from either workflow inputs or other task outputs.  Since we have no other task outputs right now, the only macro to choose from is our previously created Test macro.

We can also use this macro as an inline macro for a text field.  If we click on Next, we can advance to the Task Inputs screen.  You can put the value inline by referencing the macro in the following format:  ${<macro name>}

In this case, we will place ${Test} into one of the fields.

screen-shot-2017-02-20-at-3-15-55-pm

The Label field will now automatically use whatever value is entered by the workflow executor.

Passing Macro Information to PowerShell

Now that we’ve shown that you can put the macro value into inline fields, we can use this capability to pass arguments into PowerShell scripts.  From this same task, let’s say that there is a PowerShell script called “HelloWorld.ps1”.  I need to pass the Test macro to it for processing.  In the Command/Script field, I would put the following:

screen-shot-2017-02-20-at-3-19-06-pm
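As a rough approximation of what that field contained (the script path is hypothetical, and ${Test} is replaced with the workflow input value before the script is sent to the Agent):

# The script path is a placeholder; ${Test} is substituted by UCS Director before execution
C:\Scripts\HelloWorld.ps1 ${Test}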

This is a very primitive way to pass arguments to a PowerShell script.  Inside the script, you can easily capture this value with a single line using the automatic $args array, like so:

screen-shot-2017-02-20-at-3-22-39-pm
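For reference, here is a minimal sketch of what the inside of HelloWorld.ps1 could look like (the variable name and output are hypothetical; $args is PowerShell’s automatic array of unnamed arguments):

# HelloWorld.ps1 - minimal sketch
$TestValue = $args[0]              # first positional argument: the substituted ${Test} macro value
Write-Output "Hello, $TestValue"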

You can pass many more macros this way; just remember the position in which you placed each argument.  From there, you can take the information in those macros and use any of the PowerShell capabilities at your disposal.

To Be Continued…

In the next blog post, we will explore some more advanced techniques using PowerShell.  One of the use cases I’ve found particularly useful is returning multiple values from a PowerShell script and storing them in multiple task outputs for later use, using PowerShell hash tables and UCS Director CloupiaScript XML parsing in the form of a custom task.


Cisco UCS Director and the PowerShell Agent – Part 2

In this blog post, we will be discussing how to utilize the Cisco PowerShell Agent and the provided Cisco UCS Director task, Execute PowerShell Command.  We will also go over what it takes to parse the response of this task and retrieve information to be used as Cisco UCS Director variables for other tasks in our workflow.

Execute PowerShell Command

First things first, we need to create a new workflow to begin using this task.  You can easily navigate to the Workflow Designer by using the menus while logged in as an administrator.  Navigate to the following location:  Policies > Orchestration.  Make sure you are on the Workflows tab.  Create a new workflow from the menu options in these screens.

Once you’ve created the workflow, enter the Workflow Designer.  Along the left side of the window, you should be able to see what sort of tasks are available to be placed in the designer.  In the text entry field near the top, go ahead and enter the word “PowerShell”.  You will find the Cisco-created task under the Cloupia Tasks > General Tasks folder.  Click on the task and drag it to the designer layout portion of the screen.  Once you’ve done that, double-click on the task to begin editing it.

You can proceed right through the section for User Input Mapping, as we don’t have any sort of inputs we are assigning to required values of the task.  Proceed to the “Task Inputs” section of the task edit process.  You should see something like this:

screen-shot-2017-02-10-at-3-42-29-pm

As you can see, I have already entered some of the values for this task.  This looks very much like what we entered in the last blog post (Cisco UCS Director and the PowerShell Agent – Part 1).  The only major difference is that there is a PowerShell Agent selection box, which is populated with the different PowerShell Agents we’ve registered with UCS Director.

Another major difference is that the screen has a rather lengthy scrollbar.  Scrolling down, we can see that there are some other entries that can be made.  For instance, you can specify a rollback for this task in the form of another script to call.  This comes in handy for cleaning up whatever was added or changed in your environment.  For example, if you use PowerShell to perform operations in Cisco UCS Manager, rolling back the workflow should undo the changes you just made: if you created a service profile and associated it with a blade server, you’d want to disassociate and delete that service profile when the service is no longer necessary.  A rough sketch of what such a rollback script could look like follows the screenshot below.

screen-shot-2017-02-10-at-3-45-46-pm
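As a hedged illustration only (the Cisco UCS PowerTool cmdlets shown are real, but the module name can vary by PowerTool version, and the UCS Manager address, credentials, and service profile name are placeholders), a rollback script might look roughly like this:

# Hypothetical rollback sketch using Cisco UCS PowerTool (all names are placeholders)
Import-Module Cisco.UCSManager                      # module name may differ by PowerTool version
$ucsCred = Get-Credential
$handle = Connect-Ucs -Name "ucsm.example.local" -Credential $ucsCred
# Disassociation from the blade would be handled here, then the service profile removed
Get-UcsServiceProfile -Name "SP-Demo" | Remove-UcsServiceProfile -Force
Disconnect-Ucs -Ucs $handle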

Other key parts of the task inputs include these last options:

screen-shot-2017-02-10-at-3-50-03-pm

The task lets you specify how you want to handle the task output.  Until recently, the only available output format was XML.  With the release of UCS Director 6.0, the option to return the output in JSON format was introduced.  The Depth option comes in handy for the JSON format.  The last component is the Maximum Wait Time.  This is very important in determining how long you want UCS Director to keep an eye on this task before it automatically stops checking on it.  Before setting up this task, it’s highly recommended to find out how long you expect the script to take and account for some extra time.

Lastly, pay attention to this final output variable:

screen-shot-2017-02-10-at-3-53-03-pm

When it comes to parsing the output of the script, this is the value we need to pass to a parsing task to retrieve information for other UCS Director tasks in our workflow.  Note that this comes back as UCS Director’s implementation of a generic text input object.

Parsing the Response

As of UCS Director 6.0, a new Cisco-created task called Parse PowerShell Output is included in UCS Director.  This task is relatively decent at retrieving simple values from the returned text and creating a single UCS Director variable.  To work with the task, drag it into the Workflow Designer.  Upon getting to the User Input Mapping section of the task, we need to map its input to the output of our Execute PowerShell Command task.  You should be able to find it in the drop-down menu when you select that you want to map this object to user input.  It should look something like this:

screen-shot-2017-02-10-at-3-59-31-pm

In the output section, you’ll see the following values that should be available after processing the text we are giving to this task:

screen-shot-2017-02-10-at-4-01-45-pm

These variables will store parsed information from our PowerShell script and allow for us to use these values as inputs into other UCS Director tasks.

Caveats

If you’ve worked with PowerShell, you can easily see that this task only handles a single key/value pair.  If you are attempting to return many pieces of information, this is going to be a problem (the sketch below shows the kind of multi-value return that causes trouble).  This is where some custom task authoring is going to come in handy.  I would highly suggest looking at some examples from the UCS Director communities site (UCSD Workflow INDEX).  Armed with some of these older workflows, you can go through the CloupiaScript/JavaScript code to see how the XML return can be parsed and all values returned, especially if you are returning a PowerShell hashtable.
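For context, here is a hedged sketch of a script ending that returns several values at once as a hashtable (all names and values are hypothetical); the built-in parse task can only pull a single key/value pair out of a return like this:

# Hypothetical script ending: several values returned at once
$result = @{
    VMName    = "demo-vm-01"
    IPAddress = "192.0.2.10"
    Cluster   = "Lab-Cluster"
}
return $result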

To be continued…

In the next blog post, we continue the discussion of how to send arguments to your PowerShell scripts…


Cisco UCS Director and the PowerShell Agent – Part 1

If you’ve installed Cisco UCS Director before, you know that there is a small component that can be installed onto a Windows device to allow remote execution of PowerShell scripts.  These scripts can be harnessed to add automation and orchestration functionality to UCS Director where native integration may not already exist.  This series of posts explains what the use cases for the PowerShell Agent are, what the PowerShell Agent does, and how you can utilize PowerShell for some advanced techniques within the Cisco UCS Director platform.

Installation and Configuration

I will not go into the full details of the base installation and configuration of the agent onto a Windows server in your UCS Director environment.  I will make sure to mention, however, that you should remember these key details:

  1. Cisco UCS Director and the PowerShell Agent services default to communicating on port 43981/tcp
  2. An access key is created by the main UCS Director services that will be used for secure communication on that port. Make sure to copy that key from the main UCS Director interface and enter it into the PowerShell Agent configuration on the Windows server where the Agent was installed
  3. Consider the third-party modules you plan on installing and check their support matrices to see what maximum versions of the .NET Framework and PowerShell you can install on the Agent. I would recommend going as high as you can to start.  (Special Note:  At the time of writing this post, PowerShell 6.0 was still in alpha and I would not consider that version ready for production use, especially due to many issues I’ve personally had with opening remote PowerShell sessions).
  4. You will likely have to configure WinRM on any device you plan to have the Agent server communicate with. Most WinRM configurations, by default, have an empty TrustedHosts list, which disallows remote hosts (a sketch of one way to adjust this follows this list)
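As a minimal, hedged sketch (run on the PowerShell Agent server; the host names are placeholders for your own environment), enabling remoting and populating TrustedHosts can look like this:

# Enable PowerShell remoting on the Agent server, if it isn't already enabled
Enable-PSRemoting -Force
# Allow the Agent server to open WinRM sessions to the hosts it will manage (placeholder names)
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "server1.example.local,server2.example.local" -Force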

The Flow of Communication

One misconception about the PowerShell Agent (or at least what I thought for the longest time) was that, by default, it processes all PowerShell requests locally.  This proved to be quite incorrect.  The PowerShell Agent initiates a remote PowerShell session (PSSession) to whichever target device you specify, provided that device is included in the WinRM configuration and reachable on the default WinRM port.  Conceptually, the Agent is doing something like the sketch below for each request.
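This is not the Agent’s actual code, just a hedged, conceptual equivalent of what happens per request (target name, credential, and script block are placeholders):

# Conceptual equivalent of what the PowerShell Agent does for each request (placeholders throughout)
$cred = Get-Credential
$session = New-PSSession -ComputerName "target.example.local" -Credential $cred
Invoke-Command -Session $session -ScriptBlock { Get-Host }   # the command/script supplied by UCS Director
Remove-PSSession -Session $session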

Breaking through this wall was key to start understanding exactly how to troubleshoot some potential problems with the PowerShell Agent and some key PowerShell cmdlets.  You can read more about that in another blog post I wrote here:  Using Invoke-WebRequest with the Cisco PowerShell Agent

This idea comes in handy, especially when you have certain PowerShell modules that may not function correctly unless run on a host with certain Windows features.  During many of the test sessions I’ve opened with various devices in my lab, I did come across a unique case in which Windows Active Directory cmdlets could not be executed unless run from a device that had a specific Windows Active Directory role associated with it.

Testing Communication

You can easily test communication with a PowerShell Agent in the UCS Director interface.  As an administrator of UCS Director, navigate to the following location:  Administration > Virtual Accounts.  Select the PowerShell Agents tab.  You should see the PowerShell Agent that you registered with your Director instance.  If you select your instance, two new task options appear in the bar above your list.  You can select Test Connection if you need to test just simple network communication to the device.  You can select Execute Command if you would like to initiate communication with the PowerShell Agent service and get information back from PowerShell to UCS Director.

screen-shot-2017-02-10-at-11-32-25-am

Above is the screen that you are presented with.  You must provide these five objects whenever communicating with the PowerShell Agent.  As many of these are self-explanatory, I won’t go into basic details, but I will state that there are some nuances to the Commands/Script field.  There are certain character sequences that you may have problems with, like “/” and “$”.  The reason is that the PowerShell Agent is essentially running an Invoke-Command cmdlet against the remote PSSession, and special characters sent to that cmdlet must be handled carefully or your syntax is going to be off when the command is executed remotely.

What I’m going to do here is show a quick example of how information is returned from the PowerShell Agent.  In my lab, I have a device called UCSD-PowerShell against which I’m going to run the simple Get-Host cmdlet.  This screen will show you what I filled out:

screen-shot-2017-02-10-at-12-51-22-pm

After clicking on the Execute button, I am told that my command completed successfully.

screen-shot-2017-02-10-at-12-52-09-pm

If I scroll down, I can see some formatted output of the response:

screen-shot-2017-02-10-at-12-52-52-pm

This appears to be the object information I would get from a Get-Host, with some of the PowerShell session information sprinkled in there.

To Be Continued

In the next post in this series, we will use these building blocks to create a small UCS Director workflow that uses the Execute PowerShell Command task, and we will see what we can do with the response to use returned PowerShell object information in other UCS Director workflow tasks.


Using Invoke-WebRequest with the Cisco PowerShell Agent

I’ve had a major initiative at the Day Job to overhaul some of our existing Cisco UCS Director workflows to squeeze out more efficiency and reduce the potential critical stopping points in them, as we’ve continued to evolve our technical processes.  I decided that while Cisco UCS Director has some great visual tools for mapping out workflows, sometimes the tasks within are very rigid, and the way task flow happens tends to force a single-task-at-a-time focus.  This doesn’t bode well when trying to reduce the overall execution time of the workflow, as you end up having to work in a pattern in which tasks can’t be executed independently from each other.  As an example, to execute the fourth task, which has no dependency upon tasks one through three, you have to wait for the execution of tasks one through three.  In my opinion, as long as I understand how my workflow is going to function, this is highly inefficient.

To try to curb this issue, I decided to research some more into the idea of parallel processing with UCS Director.  There are some examples using the native JavaScript implementation of the tool (UCSD Parallel Workflow Execution Example), but I was much more interested in flexing my muscle with PowerShell, as that’s my language of choice.  Cisco ships a tool for remote execution of PowerShell code in the form of an agent that can be installed on a Windows device and added to the configuration of Cisco UCS Director.  From there, you can specify the account information and the script/code block you wish to execute.  A response is sent back to UCS Director, and you can use some XML parsing techniques on it, which can be very handy if you need variables back for other parts of your workflows.

To make this happen, I realized that we would need to execute some of Cisco UCS Director’s REST APIs to be able to launch workflows within my script.  In PowerShell, this usually means pulling out the Invoke-WebRequest cmdlet.  In the case of this cmdlet and Cisco UCS Director, you will typically need three things to make calls to some of the REST APIs:  the URI, the header (including your X-Cloupia-Request-Key name/value pair in the form of a hashtable), and the method type.
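Here is a minimal sketch of such a call, assuming a hypothetical UCS Director hostname and a placeholder access key (the opName shown is just an example; the exact operation and parameters depend on which API you are calling):

# Hypothetical UCS Director REST call from PowerShell (hostname, key, and opName are placeholders)
$headers = @{ "X-Cloupia-Request-Key" = "<your UCS Director REST access key>" }
$uri = "https://ucsd.example.local/app/api/rest?formatType=json&opName=userAPIGetMyLoginProfile&opData={}"
$response = Invoke-WebRequest -Uri $uri -Headers $headers -Method Get
$response.Content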

Unfortunately, this didn’t work as easily as advertised.  When starting to trace exactly what the Cisco PowerShell Agent does, I found that the service really does nothing more than create a remote PowerShell session to the target you specify.  In my case, I tend to point the PSA at itself, as I have my modules and scripts easily accessible from that device.  When trying to execute an Invoke-WebRequest cmdlet through this created session, I receive the following error:

screen-shot-2017-01-18-at-2-28-30-pm

Looking through the Cisco PowerShell Agent log file, we find that an error 3 is a pretty generic error.  Any sort of PowerShell error will trigger the PSA to report this back.  The log file includes some of the error message, so I was able to find that the specific error was an “Object reference not set to an instance of an object.”  Anyone who’s done enough PowerShell authoring knows this response well.  Typically, one of your arguments is either null or of the wrong type.  So, I decided to try a couple of troubleshooting techniques to see what the issue was with Invoke-WebRequest.  I first tried to nest this in a try..catch sequence.  This way, I could potentially get a look at the error in question.  Unfortunately, the same problem occurred, but I was not presented with any sort of error.

I felt this was very odd, as it meant that something was happening at the level of the remote PowerShell session created by the Cisco PSA.  Armed with the idea that the error message being reported meant something might be up with my arguments to the cmdlet, I decided to look into some of the Invoke-WebRequest parameters.  I found the –UseBasicParsing parameter and decided to give that a whirl.  As you can see by the results below, it worked.

screen-shot-2017-01-18-at-2-32-26-pm
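For reference, the working call looked roughly like this (the URI and headers are the placeholders from the earlier sketch):

# Adding -UseBasicParsing avoided the null reference error inside the remote session
Invoke-WebRequest -Uri $uri -Headers $headers -Method Get -UseBasicParsing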

UseBasicParsing isn’t a required parameter of the cmdlet, though.  Next, I wanted to make sure that I could actually catch an error message within this remote PowerShell session, so I took another parameter, –Proxy, and fed a dummy domain name and port to it.  This was the response.

screen-shot-2017-01-18-at-2-34-30-pm

Now that’s more like it!  That’s the type of error object I was expecting.  For this test, I only supplied the Proxy parameter and did not apply the UseBasicParsing parameter.  At this point, I’m really starting to think there is something going on between the Cisco PSA and my cmdlet here.  To rule out any sort of remote PowerShell session issues, I wrote a quick script (below) that created a new remote PowerShell session (to the same server) and tried launching it through the Cisco PSA.  Again, I did not use the UseBasicParsing parameter on the cmdlet.

Script:


# Build a credential object for the nested remote session (placeholders to be replaced)
$username = "*username*"
$the_password = ConvertTo-SecureString -String "*password*" -AsPlainText -Force
$the_cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $username, $the_password
# Open a second PSSession back to the PSA server itself and run Invoke-WebRequest inside it
$the_session = New-PSSession -ComputerName *PSA IP/FQDN* -Credential $the_cred
$response = Invoke-Command -Session $the_session -ScriptBlock { Invoke-WebRequest -Uri *HTTP/HTTPS URI* }
# Tear down the nested session and hand the result back to the PSA
$the_session | Disconnect-PSSession | Remove-PSSession
return $response

Result:

 

screen-shot-2017-01-18-at-2-40-00-pm

From the response, I got what I needed.  I have a Content property, along with the properties of the remote PSSession that was created to get this response.

I have passed this information on to the Cisco UCS Director people, and (long story) once I get my support contract renewed, I may run this through Cisco TAC to see if it can be logged as a potential defect (due to my belief that Invoke-WebRequest isn’t being handled correctly by the PSA).

So, a forewarning for those trying to use PowerShell and Invoke-WebRequest (and to some degree Invoke-RestMethod): be wary of some weird issues with the session, and remember to potentially use the UseBasicParsing parameter on the cmdlet OR resort to nesting remote PowerShell sessions.  In my case, I stuck with the remote PowerShell session nesting.  I’ll provide an update when/if I get this case to TAC.


Here’s to 2016 (A year in the life…)

While 2015 was a year of many great accomplishments, I ended the year on a personal downer.  While most people that get invited as a delegate to the Tech Field Day events consider it a great accomplishment, I went away from my first experience not exactly feeling a great deal of that said accomplishment.  Anyone close to me will recognize that when I do this, I’m suffering greatly from imposter syndrome.  We all suffer from it from time to time, but in my case, I felt weighed down by it.  2015 was the year in which I decided to put myself out there and try to establish myself in the big, big world of technical communities, yet I felt even more weighed down by the prospect of what I had done.  I realized that while I wanted to be known, I was struggling to know what I would be known for.  Essentially, I had decided to stand in front of everyone, but I forgot to write a speech.

So, 2016 became all about establishing my voice.  When I returned from my first Tech Field Day event, I felt very overwhelmed about what I had seen and heard while out there.  Those who attended that event (Virtualization Field Day 6) will probably remember me as the guy who didn’t say a single thing on camera.  This was completely out of fear…not of the camera, mind you, but of just using my voice.  It also didn’t help that I was extremely intimidated by who I was around at that event.  There are many things that can knock you out of your best mindset and feeling intimidated by the brain trust in the room is one of them.  I basically spent most of the event in the shadows, not saying much, but not doing much either.  I could author an excuse about how I was just feeling things out and that I have the type of personality that takes a while to come out in a more intimate setting, however, it would be just that…an excuse.  I’ve given multiple presentations in rooms full of strangers and could drive authority into my message and hopefully provide information to those seeking it.  Not in this case…

This leads me into 2016 (again).  Personally, I thought I gave such a bad impression at Virtualization Field Day 6 that there was no way Stephen and his crew would ever invite me back.  I was greatly surprised to be invited to Tech Field Day 11 in mid-June.  Not only did I break my streak of not asking things on camera, but I had a lot of great personal conversations with individuals at this event.  I continued with this momentum and was invited to two more Field Day events (Tech Field Day Extra @ VMworld 2016 and Tech Field Day 12 in California).  Each time, I felt myself growing more and gaining more confidence, especially in my area of expertise.

Why was this?  Why did I clam up so badly in 2015 and seem to finally figure it out in 2016?  Through 2016, I was presented with some new and interesting opportunities that I feel contributed.  I found some new angles to become a student of and started using those topics as my way to contribute back to the technical communities I’ve enjoyed for so long.

I’ve made quite a few great friends and contacts through VMUGs, especially those close to my area.  I make an effort to always ask the nearby VMUG leaders whether they need someone who can fill a time slot for their UserCons.  This year, I had the pleasure of speaking at the St. Louis VMUG UserCon (mid-March 2016) and the Minneapolis VMUG UserCon (early June 2016).  In each case, I chose to talk about DevOps and what that might mean to many of the technical individuals at the UserCons.  While the St. Louis presentation was short in nature, I did end up talking with a good number of people afterward who wanted more information about DevOps.  I parlayed this into a much longer presentation for the Minneapolis VMUG UserCon.  It was a fantastic opportunity and, from what I gathered, it was a highly rated session for the UserCon.  There is nothing that gives you more confidence about your material than excellent feedback (positive or negative)!

By the time major conference season hit, at least for me, I was ready to start tackling some of my insecurity through conversations with many good, yet random, people at VMworld US 2016.  Very similar to my Cisco Live 2015 attendance, I was not really expecting to attend this conference.  Personally, VMware-based technology was no longer my bread and butter, as all I used it for these days was simple virtualization.  My employer was in the process of making some major decisions based on the technology we were going to have in our datacenters moving forward, and VMware was just no longer going to be a crown jewel in our infrastructure.  However, I had won a pass to attend through the vExpert program as an official blogger, so I made sure to attend.

I made sure to interface with just about everyone I could, especially under the premise (see, this is how you use this word, for the record) that this would likely be the last VMworld I would be attending, barring any sort of major employment change.  I reached out to a good many people that I knew were attending and made sure to give them some time and offer some thoughts on a bevy of topics.  However, one interaction stood out amongst the rest.  It was a series of interactions, really, but it was how it came to be that’s of interest.  I’ve known Mr. Atwell for a while now.  In most cases, it was just a Twitter thing or a VMware communities site thing in the PowerCLI forum.  We had known of each other, and just by happenstance we ended up on the same panel about automation at the Opening Acts sessions in 2015 (which, I’ll mention, I repeated in 2016, FWIW).  I had taken some friendly shots at the man over the last 12-18 months, but he was always one that was willing to answer a question, regardless of when and where I’d asked it.  I remember that he was lamenting about multiple papers that were rejected during VMworld 2016’s call for papers.  Initially, as a joke, I tweeted at him about bringing a bottle of bourbon to VMworld to drown his sorrows over the rejection.  What transpired over the next month or two leading up to the conference was more seriousness about actually doing this, and me finding a bottle of something he could not obtain but that would be good to drink at the event.

Now, with the bourbon selected and the timeframe set, all we needed to do was find the time at the conference to make it happen.  I found some of the guys from my local area, and we were invited up to a suite to enjoy some bourbon.  I’m going to be honest with you here.  A bunch of nerds getting around a bottle of bourbon doesn’t seem like a very good story to the masses, but what transpired in that suite over a couple of hours was a ton of laughs, a whole lot of ribbing, and a deepening of bonds within my own personal community.  To be frank, it was one of the best times I’ve had at a major technical conference in a very long time.  I can’t thank Josh enough for letting me and a few of my local pals invade some space and enjoy some drink.

Now, why was this important?  It wasn’t this that made my conference, it was what happened later in the event.  I wanted to have a less-boozy discussion about issues I was having with my day job and trying to be a mentor and a technical leader with those in my day-to-day circle.  Josh was a trooper (it was an early breakfast, in Vegas, mind you) and I went away from that conversation better prepared for the challenges that faced me back in the office.  It also laid the groundwork for investment into DevOps (as the cultural movement, not as the technical one).

Since that conversation, I’ve become a student of some of the less tangible things of IT.  Business interaction.  Culture.  Team interactions.  I even went so far as to agree to upgrade my vBrownBag sticker on my laptop from just a fan of the group to a presenter.  As it’s one of the freshest things in my mind from this last year, I can tell you that I had no idea what I was originally getting myself into, but I thank the vBrownBag crew for letting me do it.  Somehow, I put on a (nearly) hour-long presentation on something that contained little to no technical information.  Not only that, I got a ton of positive feedback from this session.

Why this entire story?  Well, when I started this year, I felt unsure of myself.  I struggled for a very long time before figuring out that I simply needed to rein in and learn how to use my voice.  I ended the year putting all the puzzle pieces together and feeling the most confident I’ve felt in this industry in a very long time.  As we close 2016, I just wanted to thank everyone that I’ve had the pleasure of interacting with over this last calendar year.  Each one of you has had a part to play in my successes and maturity during 2016.  I can’t wait to see what 2017 has in store, and I hope that I’m able to return the favor that so many of you have provided for me.

Salut!

 


Docker?  On Windows? Yep!

A major timeline distinction is starting to emerge between those of us in the IT industry.  There are those of us who grew up in the industry with Microsoft being the evil empire, and those who are coming into IT and seeing Microsoft as a company embracing the very thing it said it never would and, dare I say, championing the use of open source technologies.  I know I fall into the camp of having dealt with Microsoft at its very peak of being a closed company.  Back in the days of the domination of Win32-based applications, I never could have imagined what has transpired with Microsoft in the past few years (and specifically in the last 12 months).

This brings me to a recent event I attended (Tech Field Day 12; read more about the Field Day series of events at Tech Field Day).  Docker was presenting, and they focused a good section of their presentation on their integration of Docker into the Windows Server 2016 operating system.  This isn’t a pseudo version of Docker embedded within a virtual machine running on Windows Server 2016; this is a Windows-native application!  Along with this comes full support for isolating parts of Windows we nearly thought impossible to containerize, like the Windows registry.

When you ponder the common Windows application of (what should be) a bygone era, you will likely think of an application that seems bloated and has a GUI-driven look and feel to it.  Over the last few years, Windows applications have been going through a metamorphosis of sorts.  Similarly to some Linux counterparts, Windows applications are starting to have their monolithic parts broken off into smaller microservices.  This is starting to allow for the very same scale capabilities that we’ve been hearing about from Docker since containerization came out of the woodwork.

However, to pull this off, one would think you’d have to break away from the GUI that seems to dominate what most people consider a “Windows application”.  Docker on Windows will work with two versions of Windows Server 2016:  Windows Server Core and Nano Server.  Windows Server Core is the full version of Windows Server that is essentially missing the GUI.  You can still install some sort of GUI mechanism on a Server Core installation (whether that’s VNC or Remote Desktop Protocol [RDP]).  This will provide you a GUI interface to interact with the applications installed onto that instance.  You could easily install your tried and true SQL Server instance onto it and manage it just like it’s been managed for many years (RDP into the instance, use local MMC components for application management).
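As a hedged illustration of how familiar the workflow is (run from a PowerShell prompt on the container host; the image name is an assumption for the Windows Server 2016 timeframe and may differ in current registries), pulling and running a Windows Server Core container looks roughly like this:

# Pull and run a Windows Server Core container image (image name is a period-appropriate assumption)
docker pull microsoft/windowsservercore
docker run -it microsoft/windowsservercore cmd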

However, I believe the true magic with Windows and containerization is going to come in the form of Nano Server.  If you, the reader, haven’t been paying attention to Microsoft technologies in the last year or two, Nano Server is a heavily refactored version of Windows Server.  It has a very small footprint and can only be managed remotely (the installation is stripped of GUI capabilities).  Also, only the components that are necessary to the installation will be installed.  When focusing on the applications in this realm, this is where Microsoft and .NET Core (along with PowerShell Core for remote management) come into the equation.  By writing applications to take advantage of these new layers, you can start to see Microsoft’s vision of new application development mirror what Docker is trying to provide with other operating systems.  The only unfortunate side effect is that Microsoft containers can only be run on Windows Server 2016.  So, portability across Docker Engines, regardless of operating system, is going to be impossible for the moment.

Again, if you haven’t been paying attention to Microsoft in the last 1-2 years, this may come as a shock.  I’m extremely excited to see how this plays out, especially with this partnership with Docker.  Including Docker into the core of Windows Server 2016 is something that I never expected, but then again, maybe I shouldn’t be applying any legacy thinking to Microsoft these days, especially in regards to cloud technologies.


Why REST APIs are Not Enough

Automation is a very hot topic these days.  Actually, that’s probably one of the understatements of the current state of IT.  Everywhere you turn, you get some sort of message about how important automation is.  Unfortunately, due to the sad state of IT up until “right now”, very few people have been able to devote the cycles necessary to understand automation and the processes automation is supposed to represent.

Back at VMworld US 2016, I was privileged to be a panelist for an Opening Acts panel that had automation and DevOps (although we didn’t even touch DevOps, much to my dismay) as the topics.  One of the opening questions was about barriers to automation, and I piped up about the fact that many Operations folk are just not versed in programming/scripting skills.  I was quickly drowned out by others bringing up that process was the biggest barrier to automation within existing IT shops.

I’m going to wholeheartedly disagree with some of my panelists.  Even in my current day job, many of our Operations personnel have the processes defined, as per specific industry certifications.  Documentation is constantly being updated about these processes and kept relatively up to date.  What my Operations team lacks is the programming specific knowledge to interface with all these disparate systems.

Internally, we’ve specifically targeted initiatives to teach (both internally and externally) PowerShell to our Operations personnel.  We’ve identified that many of our systems come with PowerShell modules to easily create multifaceted scripts to touch many systems within a single script or line of code.  My goal is to get my Operations team up to speed on what I’ve personally done with PowerShell and integration in our automation/orchestration system in Cisco UCS Director.  Unfortunately, they have a steep learning curve with some vendors in the infrastructure space.

Why is this?  It comes down to some companies feeling that just having a RESTful API is “good enough” for the integrators out there.  For those administrators who are learning the ways of programming, a RESTful API call can look a little daunting, considering some of the languages you actually wrap that request in.

I’m going to go back to a presentation I sat through from Zerto back at Tech Field Day 11.  The presenter had many lines of PowerShell code up on the screen (somewhere in the 300+ lines category).  I asked the question of whether Zerto had considered wrapping all those Invoke-WebRequest and Invoke-RestMethod calls into their own specific PowerShell cmdlets, and I was met with a response that seemed to indicate that maybe they hadn’t considered it.

It’s going to feel like I’m picking on Zerto here, but when you dig into their architecture and what they were specifically trying to show us in that demonstration, nearly all the endpoints they were touching had PowerShell modules available, so all the calls could have been integrated into a single script.  Microsoft Azure has many PowerShell modules for accessing subscription information and provisioning virtual machines; VMware has their PowerCLI modules that could be leveraged for the on-premises virtual machines that Zerto was trying to replicate out to public cloud resources; AWS even has a set of official or user/community-created modules for accessing EC2 instances.

The point being that much of the system administration community is learning how to automate their environments in the form of very human-readable cmdlets within PowerShell.  If you, as a company working on enabling APIs for your user base, haven’t considered wrapping these up into a much easier format for use, maybe you should.  That community is not, and will likely never be, full-fledged integrators.  It’s time to start making their lives a little easier by creating better tools that wrap the RESTful APIs up in a more system administrator/beginner scripter-friendly format.  I highly suggest these companies do so with PowerShell, especially considering the now open source nature of PowerShell.  A sketch of what that could look like follows.
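To illustrate the idea, here is a purely hypothetical sketch, not any actual vendor module; the function name, URI, and parameters are invented.  A vendor could hide the raw REST plumbing behind a readable, cmdlet-style function like this:

# Hypothetical wrapper: the administrator sees a friendly cmdlet; the REST details stay inside the module
function Get-WidgetStatus {
    param (
        [Parameter(Mandatory)] [string] $Server,
        [Parameter(Mandatory)] [string] $ApiKey,
        [Parameter(Mandatory)] [string] $WidgetName
    )
    $headers = @{ "X-Api-Key" = $ApiKey }
    $uri = "https://$Server/api/widgets/$WidgetName/status"
    Invoke-RestMethod -Uri $uri -Headers $headers -Method Get
}

# Usage: a single readable line instead of hand-built headers and URIs
Get-WidgetStatus -Server "mgmt.example.local" -ApiKey "<api key>" -WidgetName "vm01"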
