I've got a new Azure DevOps extension published to the Azure DevOps Marketplace: the Wiki Age Report. This came about as a need for a client I have been working with. They were looking for a way to keep track of potentially out-of-date documentation in their Azure DevOps wiki. They wanted a report of the wiki pages that had not been updated in a long time, so that they could assign work to the wiki page owners to go review those pages and make sure they are still correct and haven't become out-of-date.

There's nothing worse than going out to the nice wiki documentation in your project and finding that wiki page that tells you JUST what you needed to do, only to discover that things have changed since the page was written. Often we document processes and procedures that shift over time because of our own changes, BUT we also might be documenting how to "do that thing in Azure" that we need to do... and we all know how quickly Azure and the Azure Portal change. If that document was written a year ago, with screenshots and flows of how to accomplish something in Azure, chances are pretty strong that it's no longer correct. These kinds of changes to documentation can come from all kinds of sources, so it's important to plan to keep an eye on that documentation and make sure it's still correct.

That's where this extension comes in. It will show you a list of all the pages in your wiki along with information about when each was last updated. You can choose your time threshold to get a quick visual Red-Orange-Green indication of what might be out of date.
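To give a feel for that Red-Orange-Green idea, here is a minimal sketch of the age bucketing in TypeScript; the page shape, names, and threshold math are my own illustration, not the extension's actual code.

interface WikiPage {
  path: string;
  lastUpdated: Date; // date of the page's most recent edit
}

type AgeStatus = "Green" | "Orange" | "Red";

// Classify a page by how many days it has gone without an update,
// relative to a user-chosen threshold.
function ageStatus(page: WikiPage, thresholdDays: number, now = new Date()): AgeStatus {
  const ageDays = (now.getTime() - page.lastUpdated.getTime()) / (1000 * 60 * 60 * 24);
  if (ageDays <= thresholdDays) return "Green";      // recently reviewed
  if (ageDays <= thresholdDays * 2) return "Orange"; // getting stale
  return "Red";                                      // likely out of date
}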
Now, knowing what's "old" in your wiki is a great step forward, but how about doing the work to review it and update it if necessary? That's going to take some time, and probably needs to be accounted for in your team's planning activities. So this report will also let you create a new work item to assign and track that work. The report looks at your project's configuration, figures out whether you're using User Stories, Product Backlog Items, or Requirements as your backlog requirement work item type, and creates the work item for you.
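That detection lines up with the standard process templates; here is a hedged sketch of the idea (the helper below is my own illustration, and a real inherited or custom process would need an actual lookup against the project's backlog configuration):

// The three standard Azure DevOps process templates each use a different
// requirement-level work item type.
function requirementTypeFor(processTemplate: "Agile" | "Scrum" | "CMMI"): string {
  switch (processTemplate) {
    case "Agile": return "User Story";
    case "Scrum": return "Product Backlog Item";
    case "CMMI":  return "Requirement";
  }
}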
Recently I've done some work with one of the amazing Agile Coaches we have at Polaris Solutions, Sara Caldwell, to come up with a tool to help her get more work item state time data out of Azure DevOps. The result is that I've published a new extension for Azure DevOps, focused on giving teams, scrum masters, agile coaches, and whoever else might be interested the ability to get a little more data on how their work items are progressing through their process, by measuring Flow Efficiency. You can go read lots of things written by people smarter than me about how to apply Flow Efficiency and its benefits, but the short story is that it helps scrum and kanban teams get an idea of how work progresses from start to finish, with a focus on tracking the time things are actively being worked on versus the time things are waiting. This metric is interesting to track because when teams are looking at things like Lead Time and how to improve it, focusing on the time work spends "waiting" can help measure the improvements a team makes.
Azure DevOps doesn't do a good job of reporting how long work spends in different states or board columns, and that is the key to being able to measure Flow Efficiency. So that's where this extension comes in. It looks back into the history of your work items, grabs some of the data hiding within Azure DevOps, and gives it back to you. Users can pick the team they want to work with and the backlog level (stories, features, epics...) they are interested in. From there we can see which work items have been closed (and are still closed) in a given timeframe, figure out which board columns those work items spent time in, and see how long things spend in those columns on average.
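The raw material for that is the work item revision history; here is a rough sketch of deriving time-in-column from it (my own illustration, not the extension's code). Each revision carries the board column and the date it changed, so the gap between consecutive revisions is time spent in the earlier revision's column.

interface Revision {
  boardColumn: string;
  changedDate: Date;
}

// Sum up the milliseconds a work item spent in each board column.
function timeInColumns(revisions: Revision[]): Map<string, number> {
  const totals = new Map<string, number>(); // column -> milliseconds
  for (let i = 0; i < revisions.length - 1; i++) {
    const column = revisions[i].boardColumn;
    const ms = revisions[i + 1].changedDate.getTime() - revisions[i].changedDate.getTime();
    totals.set(column, (totals.get(column) ?? 0) + ms);
  }
  return totals;
}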
Now we have some data about where work items spend time! Next, to calculate flow efficiency, the user can categorize the board columns as either "Work" time or "Wait" time. Once those categorizations are done, we can begin to get a sense of work vs. wait, as well as a flow efficiency percentage.
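The calculation itself is the standard flow efficiency formula, work time divided by total time; a minimal sketch (the names here are mine):

// Flow Efficiency = work time / (work time + wait time).
function flowEfficiency(
  columnMs: Map<string, number>,          // column -> total milliseconds, as above
  category: Map<string, "Work" | "Wait">  // the user's categorization of each column
): number {
  let work = 0, wait = 0;
  for (const [column, ms] of columnMs) {
    if (category.get(column) === "Work") work += ms;
    else wait += ms;
  }
  return work + wait === 0 ? 0 : work / (work + wait); // e.g. 0.25 means 25% efficient
}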
Right now you can choose the date range you want to track and use that to see improvements over time. The extension does not yet calculate the time slices necessary to trend things itself, so you would have to track that yourself... but trending these numbers is next on my list of improvements, so if you install this now, you'll see improvements to this as I get things done.

So that's out there in the Marketplace for you to add to your Azure DevOps instance! Hopefully it helps you and your team get more insight into how things are working!
I've published a new Azure DevOps extension to the Marketplace called the Pull Request Completion Report. This extension adds a new hub to your Repos section. It is designed to give you a little insight into the pull request process on your repo. It reports stats on completed pull requests so you can answer a few questions you maybe weren't clear on before.

How long does it take a pull request to get completed?
This report shows you the average time a pull request takes to go from created to completed. This metric is interesting from a couple of angles. If the average time is really short, something in minutes maybe, then you may start to question whether your PR reviewers are actually doing a review. What are they doing to approve this pull request? Are they just rubber-stamping things? On the flip side, if the average time is really long, something in days maybe, then you may start to question whether your PR process and reviews are acting as a bottleneck in your team, slowing their velocity. Either way, this helps give you a little insight into how this repository is going.
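Behind that average is a simple calculation; a minimal sketch of it (my own illustration, not the extension's actual source):

interface CompletedPullRequest {
  creationDate: Date;
  closedDate: Date;
}

// Average hours from PR creation to completion across completed PRs.
function averageCompletionHours(prs: CompletedPullRequest[]): number {
  if (prs.length === 0) return 0;
  const totalMs = prs.reduce(
    (sum, pr) => sum + (pr.closedDate.getTime() - pr.creationDate.getTime()), 0);
  return totalMs / prs.length / (1000 * 60 * 60);
}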
What branches are pull requests happening for?

Depending on your branching strategy, this report will give you a little vision into how your team is working with those branches. If you're following a gitflow branching strategy, you'd likely expect a larger number of pull requests against your develop branch. So what really is the ratio of PRs to develop vs. master? How many updates are you generally doing before you release something? This graph groups branches by folder, so if you're organizing your branches into things like "release" and "hotfix", you can start to see just how many bug-fix PRs are happening against your release branches in general. Does this repository have a much higher number of those types of PRs than other repositories? Do we never PR bug fixes into our release? Is this repository only ever following a pull request process on code to the master branch? Any of those behaviors may be fine and may be the process you intend, but this chart will start to confirm whether that behavior is actually what your team is following.

Who is approving all of the pull requests?
Ever want to see who is doing all of your pull request reviews? Or how many pull requests are being completed without an approver assigned? Do you have a couple of lead developers charged with doing the bulk of your reviews? Are they actually doing them? Are they overloaded? Is your team taking time and doing peer reviews? Are things spread out as you expect? This report helps you see what's happening, who's doing the reviews, and how many are going through without a review, giving you quick insight into that activity.

How are your optional reviewers being used?
This report graphs the review stats for the people assigned to your pull requests. If you had optional reviewers added, are they all given time to review things? Is the team moving forward without all the reviews? Or are you adding too many optional reviewers? This chart shows how the approvals are going. If you start to see large trends in "Did Not Vote", maybe you start asking why PRs are being completed without giving everyone time to review, or maybe you ask if your team is adding people unnecessarily. However this is going on your repo, you now have some stats to start looking into how the team is dealing with pull request reviews, and how you might improve either the quality of the reviews or the quantity of things your team is asking of each other.

Are the groups you expect to review things getting to their reviews?
If you have some groups or teams assigned as your reviewers, you may want a view of whether those teams are doing their reviews. This report is like the individual report above, but at the team or group level. So if your pull request process assigns groups, either required or optional, to your pull requests, now you can see what's happening there. Are the optional groups reviewing things? How often? At my client, we have assigned the Application Security team as an optional reviewer on all pull requests, and that team started wanting to see how many reviews they actually participated in. They didn't really have any good idea of that before, but with this chart they will be able to see just how many they were a part of.

So that's it, that's the new extension. I hope you see some value there, and I hope maybe this can help answer questions and give you a little insight into how the team is dealing with code. I think there is some good power in the data. It may confirm that your team is doing the things you expect, and that's great to have the power to confirm. Or it may start to give you some ideas on where you can tweak your process to give you some better outcomes. Hope you find it useful.
At my current client, I have been working with a Scrum Master who has begun working with an established team that has been using a physical Kanban board, Post-it Notes on a whiteboard, to track their work. This has been working well for them, so the Scrum Master doesn't want to disrupt their process. However, there is a need to track that work inside TFS/Azure DevOps. We wanted to enter the work items in Azure DevOps, and we were even able to replicate their physical board inside Azure DevOps Boards pretty faithfully. But that big physical board is still a great way to view what's happening, and the team wants to keep using it. SO I was asked if we could print out "cards" for those work items so that they can be put up on the physical board...

Now, there was no good out-of-the-box solution, and going through the Marketplace led to one promising extension named "Pretty Cards" that promised to enable printing of cards. However, we found that the work item support in that extension was lacking, and the fields and card layout that extension used were not really what we were looking for. I went to the GitHub repo for that extension to see if I could request changes, or what they were doing with pull requests. I found other people submitting requests similar to what I would submit who had not gotten any responses in close to a year, and there were also old pull requests that were not getting anywhere, so I felt that the repository was pretty dead. So I had a need, and I had a dead extension that was "close"... My next-best course of action was to fork that repository and make the changes I needed!

So I have published a new Azure DevOps extension to the Marketplace to enable users to print Kanban-style cards for their work items. The extension can be found here: Print Physical Cards. This extension improves upon Pretty Cards in that it will print any work item type; Pretty Cards could only print User Story, Task, or Bug, which left people who were not using the Agile process template high and dry. My new extension also prints a more consistent set of fields. I include things like Title, Assigned To, and an estimate value (Story Points for User Stories, Business Value for things like Feature and Epic, Original Estimate for Tasks, etc.). Printing Tags is also supported, which was something we really can use, as Tags are leveraged on work items here. You can also multi-select work items and print multiple cards at once! It fits three cards to a page and will page-break appropriately, so you shouldn't get any cards split across pages.
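For the estimate value, the choice of field boils down to a small mapping per work item type; here is a sketch of the idea using the standard field reference names (my own illustration, not the extension's exact code):

// Pick which "estimate" field to print for a given work item type.
// Types not listed simply print no estimate.
function estimateFieldFor(workItemType: string): string | undefined {
  switch (workItemType) {
    case "User Story":           return "Microsoft.VSTS.Scheduling.StoryPoints";
    case "Product Backlog Item": return "Microsoft.VSTS.Scheduling.Effort";
    case "Feature":
    case "Epic":                 return "Microsoft.VSTS.Common.BusinessValue";
    case "Task":                 return "Microsoft.VSTS.Scheduling.OriginalEstimate";
    default:                     return undefined;
  }
}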
So! That's my story. I hope there is some good value here for somebody who wants to keep using a physical board while still tracking their work items inside Azure DevOps!
API development is everywhere these days. With microservice architectures taking off, and JavaScript frameworks needing a way to talk to their servers, writing server-side code to be exposed through an API is ubiquitous. So when it comes to testing all these APIs to validate that they are running, you need a tool that will easily let you test an API without having to code a specific client to call each one. Postman and its command-line partner Newman are just such tools, and they have become popular to have around for just such tasks.
Now, writing scripts in Postman to test out your API is all well and good. It can give you a great way to validate that the API behavior you expect is what you are getting, and it can also serve as a way to document what your APIs are doing. BUT how do I run those scripts as part of my release pipeline so that I can take advantage of that automation goodness? And while Postman Enterprise accounts will store your scripts and help you organize them into workspaces, they don't really do a wonderful job of keeping history for you...
For Azure DevOps, I found a nice little task out in the Marketplace to run those Postman scripts through Newman in your build or release pipeline:
Newman CLI Task for Azure DevOps
This task is great and runs our scripts just as needed. It gave us a great way to run scripts that we had downloaded from Postman. But while we utilize the Postman Enterprise account, we didn't have a way to run scripts from the Postman Enterprise account directly, which meant we needed a method to pull the scripts down from our Postman account...
So I've created a new task, now in the Marketplace:
Get Postman Scripts
This task utilizes the Postman API to retrieve the scripts your account has access to. This solves two problems for us. We can now pull down all the scripts that the QA team is building in the Postman Enterprise workspaces so that we have them locally, which means we can easily utilize the Newman CLI task to run our scripts while still giving the QA team the ability to use their Postman Enterprise account. AND it gives us the added benefit of a way to store those scripts in Git so that we have a good history of them. That means if a script worked against a particular version of an API, and later scripts seem to be incompatible with that API, we can go back! All the benefits of source-control history for our Postman scripts can now be leveraged.
My Get Postman Scripts task will require you to go into Postman and generate a Postman API key. This will require you to log in to your team's Postman workspaces on the Postman website (I haven't found a way to do this directly through the Postman client). Then choose the Integrations tab, then Browse Integrations, and finally choose the Postman API; from there you can choose to generate an API key. Keep that key safe and treat it like a password: it's your access to the Postman API under your credentials. Once you have that key, you can enter it into the Get Postman Scripts task in your build pipeline, and it should start pulling down scripts for you!
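Under the hood, this boils down to calling the Postman API with that key; here is a minimal sketch of the kind of call involved (illustrative, not the task's actual source; it assumes a Node runtime with a global fetch):

// List the collections visible to the given API key. The X-Api-Key header
// is how the Postman API authenticates requests.
async function getCollections(apiKey: string): Promise<unknown[]> {
  const response = await fetch("https://api.getpostman.com/collections", {
    headers: { "X-Api-Key": apiKey },
  });
  if (!response.ok) {
    throw new Error(`Postman API returned ${response.status}`);
  }
  const body = (await response.json()) as { collections: unknown[] };
  return body.collections; // each entry carries an id used to download the full collection
}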
How I'm using this at my client today:
I have created a Git repo to hold our Postman scripts, and I have a pipeline build created and scheduled to run weekly for now; we will move it to run more frequently as use of Postman becomes more widespread. That build utilizes the Get Postman Scripts task to download the team's tests from Postman. The build also issues some Git commands to keep our repo up to date with the latest scripts. Then in our release pipeline for our APIs, we utilize the Newman CLI task to execute the appropriate Postman tests against the API we released! Boom! Automated API tests through Postman and Newman.
So once again this year the Przylucki clan loaded up and made the drive up to Wisconsin to attend That Conference. This was our fifth year going up for a few days to spend some time at Summer Camp for geeks.
So you can go back and read some of my other blog posts from past years (and you can probably guess from the fact that this is our fifth year) to know that I think attending That Conference is really worthwhile. I know, I know, it's at the Kalahari, in the Wisconsin Dells, so yes, I rode many waterslides... but that is not the draw for me to keep returning (though choosing THAT Conference is certainly influenced by such family-friendliness). Attending That Conference is my attempt to take some dedicated time to continuously be learning. I feel like I am learning something 365 days a year. There is always something up-and-coming, always something I need to dig deeper on, always something I can get exposed to that will make me a better consultant, better employee, better co-worker. Attending a conference, I have found, has been a wonderful way to gain a lot of knowledge in one big dedicated chunk. It allows me to get out and get exposed to things that I might not get exposed to otherwise. It gives me a chance to learn things. It makes me get out and meet people. It (in the case of That Conference) allows me to eat lots and lots of bacon.
This year I was really happy with my That Conference experience. For me the Keynotes every morning are always a highlight, and this year they were really strong. Lots of strong motivation to go out and be the best "me" I can. To work together with a team. To focus on more than just the tech. To embrace the sense of adventure. It was really strong this year.. But I also found so many great sessions. I ran out to grab all the Docker & DevOps related sessions I could find. I am in the middle of working with a client that is looking to begin adopting Docker with some of their new projects, so this was PERFECT for me to get a ton of real-world knowledge and experience from the stories and ideas shared in those sessions. Could not have come at a better time for me! I also found a few challenging "Soft Skills" sessions to help me focus on things like team communication and goal setting. Good stuff. I have a lot to work on.
Another thing that can not be lost is the interaction with other people in the community. Every day I meet new people. Every day I get to hear stories about what people are doing and the problems they are facing and the solutions they are working on to get through those problems. I've come a long way in the last five years, first at embracing that interaction, and now also at recognizing the value there. I was listening to the Out From the Cube podcast the other day, and George was talking about the value and importance of even simply starting out with "Hi my name is _______" .. And it struck a chord with me because I spent 3-4 days doing that. Every meal, sitting down at a new table of people.. "Hi, my name is Jeff".. walking around the booths talking to sponsors "Hi my name is Jeff".. talking to people in sessions "Hi, my name is Jeff". That was not natural to me the first year I was doing this, but man, it was so natural now, and it just opens things up, having conversations, meeting people and doing a little networking.
So now I am back at the office, working on my day-to-day. BUT now I have more knowledge than I had two weeks ago. I have more confidence in what I need to do on the Docker front. I have been exposed to new ideas and heard about new trends. I feel I am stronger for it.
Creating build definitions and release definitions in TFS and VSTS can become a real chore. Have you ever wanted to give a build or a release a little more flexibility, so you can handle some situations based on a condition, but didn't have a great way to give the developers control of that without meddling with manually setting variables all the time? Maybe you wanted to handle creating an "alpha" or "beta" version of a NuGet package? Or perhaps you wanted to insert some control over how things deploy? Having variables to control these things is great, and can rescue you from having to clone your build and release to handle the various situations you find yourself in. BUT now you've just given yourself more variables to maintain, and more things to remember to set at build or release time. I've run into a few situations in some recent work where it just makes the most sense to give control of what happens when code is committed to the developers that own the code. One of the most direct ways to handle that is to place some of that variable information in the developers' hands to begin with, and one of the easiest ways to do that is to give them some JSON to hold it.

Some of my current work involves creating a deploy pipeline and workflow to help the various development teams easily deploy their code to the various "dev" environments that many of the teams use. The development teams had set up a series of web application paths on the dev server to hold various instances of their applications, so that one team can deploy without stepping on another team's workflow. Now team A can deploy to dev server app space "F3" while team B deploys to app space "F5", and they won't interfere with each other; later they can both deploy to app space "Integration" and all play nicely together. This is all great, except how can we easily get team B to deploy over to team A's app space without making sure they update all the right spots in the right release definitions?

Well, I suggested we find a way to standardize a "deploy" JSON file that could be used to control some of those things. This isn't the first time I've had to utilize a JSON file in the source code to help customize the process flow of a build or release, so instead of trying to craft yet another one-off custom task, I decided to create a more flexible task that reads a JSON file and simply creates variables in the build and release for subsequent tasks to utilize. This way, I can feed other tasks the information they need, and I never (hopefully) will have to write something to pull data out of a JSON file for my build and release processes again. Here's what I've got. It's in the Marketplace as "Json to Variable", and it will parse through a JSON file and generate variables from the data it finds. So some JSON like this:
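As a hypothetical example (the field names here are mine; the shape of the file is up to you):

{
  "deployTarget": "F3",
  "appPool": "DevPool",
  "settings": {
    "region": "us-east",
    "tier": "dev"
  }
}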
This gives me four new variables in the process for other tasks to consume.
This just gives me one more tool to help make my build and release pipeline a bit more flexible. The source code is in GitHub, and it is written in TypeScript to run as a Node task in TFS and VSTS. It handles JSON objects and arrays. Really, the goal of this is not to handle some super complex structure, and a simple JSON file should suffice for where I envision this working.
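Under the hood, a Node task surfaces new variables by writing Azure Pipelines logging commands to standard output; here is a minimal sketch of that flattening idea (my own illustration; the real task's naming rules for nested values may differ):

// Walk a parsed JSON value and emit one ##vso[task.setvariable] logging
// command per leaf value. The agent picks these up from stdout and turns
// them into variables for subsequent tasks.
function emitVariables(value: unknown, prefix = ""): void {
  if (value !== null && typeof value === "object") {
    for (const [key, child] of Object.entries(value)) {
      emitVariables(child, prefix ? `${prefix}.${key}` : key);
    }
  } else {
    console.log(`##vso[task.setvariable variable=${prefix}]${String(value)}`);
  }
}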
For now, this will get me through some of the deploy and build versioning things that I'm working on with the development teams, and I think I will be able to use it again in the future. If you can use it, please pull it down out of the Marketplace and give it a spin!
Automated builds and deployments are great, right!? Write some code, commit your changes, a CI build fires, and a CD release goes off and deploys that shiny new code out to your web server! Magic! BUT often I find myself working on a release pipeline that could benefit from a quick sanity check that the website we deployed is ACTUALLY available and running after the deploy steps say they succeeded. I mean, what good is a release that shows up nice and green in your release pipeline,
only to then go to your application and find that it is not running, or is unavailable?
Yuck! I want to see if the website I say "deployed" is actually a "good" deploy! Enter the Release Web Smoke Test release task for TFS and VSTS. This task will run against a specified set of URLs and validate that the response received is the expected result. You can set the expected return value in the advanced settings, along with a retry count. So if you're deploying to a web farm, you can specify the individual server URLs and test that each server is running well. And if you have a site with a health check API that returns a code other than 200, you can specify something other than 200.
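The core of the idea is simple enough; here is a minimal sketch of it (my own illustration, not the task's actual source; it assumes a Node runtime with a global fetch):

// Request each URL, compare the HTTP status to the expected value, and
// retry a few times before declaring the deployment bad.
async function smokeTest(urls: string[], expectedStatus = 200, retries = 3): Promise<boolean> {
  for (const url of urls) {
    let passed = false;
    for (let attempt = 1; attempt <= retries && !passed; attempt++) {
      try {
        const response = await fetch(url);
        passed = response.status === expectedStatus;
      } catch {
        // network error -- count it as a failed attempt and retry
      }
    }
    if (!passed) {
      console.error(`Smoke test failed for ${url}`);
      return false; // fail the release step
    }
  }
  return true;
}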
This can be used in a stand-alone manner to let you see if the release is functional, or even better, you could use a failure to trigger a rollback in your release, creating a more robust release pipeline than just firing away and forgetting about it.
I tried to make it somewhat flexible, but not overly so... If there is a case you need that isn't covered here, let me know! The code is up on GitHub so you can see what's going on and extend the task as needed, or even better, share your updates with me and I can publish them up to the Marketplace for everybody to enjoy.
Well, the Przylucki clan has returned from another fun, entertaining, informative, and enlightening excursion to Wisconsin for That Conference. I wanted to put down some of my thoughts and what I took away from the three days here while it was still fresh in my head.
One of the big things I traditionally take away from attending That Conference has been spotting trends in the industry. What is the coolest, hottest, slickest new thing that everybody is doing? It has been fairly easy to spot most years... Microservices? Docker? Git? Cloud computing? Xamarin Forms? Node.js? Yup, I've seen those things come in from year to year and have a strong presence on the schedules of many of the past years. I've also learned to pay attention to those things, and even make myself attend talks on those topics, even when they may not have been on my radar previously. While it may not be something I'm doing today, it's a fair bet that it's something I will be better off having some familiarity with in the not-too-distant future. Git? Yup, I'm working on that now. Node? Sure, while I'm not currently doing Node, a little background knowledge is helping me as my current client starts to add it, and we have to create builds and releases for it... Things like Docker and microservice architecture are not something I've gotten my hands into yet, but I know it's coming; just a matter of time...
So going into this year, I was looking for "that thing" I needed to go get my head wrapped around. This year, however, I had a hard time finding "that thing". Not sure if there was just not one overriding thing, or if the That Conference organizers had made a more concerted effort to avoid overlap in topics, but going in I just didn't see it. There were topics like AI, programming Alexa skills, or Docker that had multiple sessions, but nothing was hitting me in the face as the thing to go do. So I went in with a looser session selection this year than in the past, intent on picking what I wanted to see more in the moment rather than as rigidly planned out as I had before. Looking back now, especially after seeing all of the Open Spaces sessions that were added during the week, I think that "thing" this year may have been React... and I didn't react fast enough to make it to a React session.
As always, I took away a good deal from the daily keynotes, and this year in particular, I really enjoyed Brian Hogan's talk on Combatting Fear. The ideas on stepping up and recognizing where you're being held back by fear, and then leading and fostering a community that is capable of breaking through fear was energizing and inspiring.
Some of the other top sessions that pop to the top of my head are:
- Chris Powers' talk "TDD Like You Mean It" - talking through TDD while actually doing it was a great way to drive home the point of making TDD your default programming pattern. This is something I really need to practice and work on, and even though this was not my first exposure to TDD ideas, this talk was super helpful.
- Angela Dugan's talk on agile teams and how to get them unstuck was a great look in to common pitfalls and how to identify them, and strategies for getting through them. Always a great topic for me to hear, helpful info, lots of learning for me.
- Cecil Phillip's talk on Microservices discovery patterns was super helpful and informative. As somebody that's not really working in a Microservices architecture right now, I left this session feeling like I made a huge leap in understanding and knowledge on how it practically works and can be leveraged in an enterprise.
- Scott Davis' talk entitled "He is the most Paranoid Developer in the World" lived up to its name, and was both super informative, and fascinating. The lengths he has had to go to in order to properly safeguard and secure his mobile game showed us all just what vulnerabilities are out there, especially when building mobile apps, and gave a lot of food for thought on how to think through how and what you want to keep secret and secure.
Plenty of great takeaways from all of those, as well as from the other sessions I attended on DevOps, Alexa, JSON Web Tokens, and Waterfall-to-Agile transformations. Loved it all. I came out the other end better prepared for life as a consultant than I went in.
And of course, it is That Conference, and the family aspect of this is great. Over the years, my boys have loved intro-to-programming courses, Minecraft hacking sessions, internet security talks, and on and on. Now I find, after four years of going, that my now-teenage boys are finding less and less that interests them. This year the big hit for my 13-year-old son was "Science around the Campfire", put on by Sage Wheeler. He saw that Oobleck was going to be in play there, and he was sold... and he came out of that session smiling, a happy boy who'd had a chance to geek out playing with science.
So plenty of learning, plenty of growing, great stuff for the family, all wins, but the fun side of That Conference is also very special, whether it was running That5K, or enjoying time at ThatWaterparkParty, or playing with my boys at ThatGameNight, or just eating all the bacon, lots of great time for the week. Had all of ThatFun!
I've been working with a client on an integration between their current requirements management system in Caliber and TFS. We recently started receiving the TF20507 error after upgrading to TFS2015. Our integration work used the TFS API, and it would go after User data in TFS as it was trying to insert records. The User Retrieval logic started throwing the TF20507, but the error doesn't tell you which user, and it doesn't tell you which field in the user record is causing the issue. It only tells you the invalid character (U+0009, or a TAB character in this case).
Doing some searching around the interwebs for that error message returned some example code and methods for tracking down the offending user. My trouble was that the code examples I ran across were all using the TFS API for TFS 2010, and the methods they used were not working with the newer TFS APIs.
So I ended up working through the code and converting things over in a manner that would give me the SID of the user causing the issue, only now using a TFS API that works with newer versions. I ended up with the code below, which loops through all of the users in the Project Collection Valid Users group and calls the IIdentityManagementService.ReadIdentities method for each SID. This lets me find which user SIDs throw the TF20507 error.
// Requires: using Microsoft.TeamFoundation.Framework.Client;
//           using Microsoft.TeamFoundation.Framework.Common;

// Get the identity management service from the TFS connection
var identityService = tfs.GetService<IIdentityManagementService>();

// Read the expanded membership of the "Project Collection Valid Users" group
TeamFoundationIdentity validUsers = identityService.ReadIdentity(
    IdentitySearchFactor.General,
    "Project Collection Valid Users",
    MembershipQuery.Expanded,
    ReadIdentityOptions.None);

foreach (var member in validUsers.Members)
{
    if (member.IdentityType == "System.Security.Principal.WindowsIdentity")
    {
        string[] memberIds = { member.Identifier };
        try
        {
            // ReadIdentities throws TF20507 for any identity whose
            // properties contain an invalid character
            TeamFoundationIdentity[][] identities = identityService.ReadIdentities(
                IdentitySearchFactor.Identifier,
                memberIds,
                MembershipQuery.Expanded,
                ReadIdentityOptions.ExtendedProperties);
            // Console.WriteLine(identities[0][0].UniqueName);
        }
        catch (System.Exception ex)
        {
            Console.WriteLine("Error : " + ex.GetBaseException().Message);
            Console.WriteLine("for sid : " + member.Identifier);
        }
    }
}
This code ends up spitting out the TF20507 Error and the SID for each User that is throwing the exception/causing my troubles.
That's great and all, but knowing the SID is only half the battle. I then needed to find which user IDs were the culprits... For that, we took a peek into the TFS configuration database to find the SIDs. (NOTE: we are only SELECTING data here to find the culprit. DON'T try to edit data in the TFS configuration database! It won't end well for you unless you REALLY, REALLY know what you're doing...)
SELECT *
FROM [Tfs_Configuration].[dbo].[tbl_Identity]
WHERE Sid = 'put in your SID here'
Once we found the user IDs of the users causing the trouble, we were able to remove them from TFS. Working with the security folks who manage Active Directory, we found that the users had TAB characters in a description field in Active Directory. After having the Active Directory accounts updated to remove all the TABs, we re-added the users to TFS... and no more issue!