All I wanted was a huge TV hanging above our workspace displaying the build server status page. Too much to ask from the enormous IT market in 2015? Apparently.
We bought a 50” Panasonic LED TV (thankfully, with a generous corporate discount). The facilities folks eventually hung it on the wall and provided a quad power outlet and a network drop for whatever device we were going to tuck up there.
I thought we would just plug in a Chromecast and browse to our build status page. Haha, you have to be kidding me! You can’t just run a browser. Neither can the Amazon Fire TV Stick, Roku, or Apple TV. The Amazon Fire TV supports a browser, but only if you enable developer side-loading and want to run Mozilla Firefox unsupported.
Don’t forget, you might need to authenticate to your wifi network via captive portal. The Amazon Fire TV has that feature now and some of the other boxes will be getting support later.
You need a third device “casting” to the dongle full-time!
- Chromecast (or similar)
- The device that’s actually doing the work :(
There are a variety of reasons not to do that…
- Don’t want to remember to login to see the dashboard
- Don’t want to leave a desktop session live and running all day (security)
- Don’t want a reboot for Windows updates to take down the dashboard
The “set-top” box category of devices is letting us down. So, we’re looking for some kind of full-blown PC in a mini-box. Maybe the Asus Chromebox? Nope, it’s running ChromeOS, which can’t be authorized to connect to the corporate network.
- The corporate network requires domain authentication and/or some kind of device registration that requires meeting some arcane security measures.
- The guest network doesn’t have permission to access the dashboard without a firewall rule exception request.
So, now we have a $1000 black rectangle hanging on the wall. I’ll let you know what ends up working.
The developer in this story had been attending status meetings. These were good meetings, they were attempts at transparency and cross-functional communication. They were snappy and there was always time for questions.
For weeks, the project management team had been presenting the general status of the many releases that were in various stages of the software development lifecycle. Yes, there were released versions in support, receiving patches and hotfixes, versions in active development, and versions being planned and experimented on for future release.
The statuses were like:
- Bad, we’re going to miss a deadline or slip a feature. (-)
- In trouble, we’re trending downwards. (~)
- Good! Everything’s on track. (+)
And the presentation of these releases and statuses often looked like this. Let’s say that 1.0 is the current released version, 2.0 and 2.1 are in various stages of development or testing, and 3.0 is in planning.
| 1.0 | 2.0 | 2.1 | 3.0 |
|-----|-----|-----|-----|
| -   | ~   | ~   | +   |
Of course, we’re “trying hard” to meet that deadline or to get that feature in. But, are we reorganizing the teams around the most important goals? Are we communicating hard messages to customers about that future service pack so that we can meet the 1.0 goals? It’s not really clear sometimes.
Let’s skip ahead and look at the reported progress as old releases rolled out and new ones popped in.
| 2.0 | 2.1 | 3.0 | 4.0 |
|-----|-----|-----|-----|
| -   | ~   | ~   | +   |
Wait a minute… 2.0 isn’t improving and 2.1 is still in trouble. But, look, 4.0 is doing great! The key understanding was a quote(ish) from a project manager…
> Of course, 4.0 is looking good because we haven’t started working on it yet! (chuckles)
That is, if you understand that “working on it” means “starting the development phase”. The problems start when the damn developers get their hands on the perfect requirements. You’ll see, over time, that the statuses are fixed. They’ll always appear in that state, in that order. The releases simply roll over them, like a wave.
| 1.0 | 2.0 | 2.1 | 3.0 | 4.0 | 4.1 |
|-----|-----|-----|-----|-----|-----|
| -   | ~   | ~   | +   |     |     |
|     | -   | ~   | ~   | +   |     |
|     |     | -   | ~   | ~   | +   |
I believe that the feedback loops between the teams are broken. The organization is not collaborating across the teams towards a shared goal. There is no real change, so the statuses will never change.
Though we don’t readily admit it, the team I’m on is part of a remote organization. We have offices in multiple time zones, work-from-home folks in different states, and offshore partners. We are trying to build a culture of remote/async work even when a lot of meetings happen in this office.
One part of that is encouraging collective ownership of meetings. For example, we started holding “lightning talks”. Mostly around technical topics, by developers for developers. But, everyone is invited and we’ve had QA folks and IT people come by and talk, too. Even managers got on board and started presenting softer topics!
I’ve been scheduling them, hosting the audio conference, starting the screen share, etc. But, that makes me a bottleneck. I want anyone to schedule these and for a quorum of participants to be able to start the meeting without waiting for some “official” meeting host.
Most enterprise email/calendaring systems don’t make this easy because you can’t transfer ownership of a meeting or assign multiple owners. I’m looking at you, Outlook. So, I quit using recurring meetings. A recurrence makes it “mine” instead of “ours”. I developed a nice template that anyone can copy, edit, and deploy for their instance of a lightning talk.
Your audio conference line won’t start without the host. All the participants can be on the line, unable to chat, listening to muzak. Ugh. Not to mention, we have to “order” a conference line from our Corporate IT catalog! Virtual meeting rooms and screen sharing tools (like WebEx) have the same strong idea of single ownership. The host starts and controls the meeting. Without the host, you have no meeting, no sharing, no chat, no video, no recording.
So, here’s what we do to enable and encourage collective meeting ownership (some of these are specific to our tools, but you’ll get it):
Schedule the virtual meeting “room” for every day of the week for the longest possible period. WebEx doesn’t allow a 24-hour room, so we schedule two 12-hour meetings. This allows anyone anywhere at any time to start or join a virtual meeting. WebEx now has the concept of “personal” rooms that are always available. This is another option! But, a “permanent” room allows some kind of branding or titling to make the purpose more obvious.
Configure the virtual meeting room so that the first participant to sign in is the “presenter”. We don’t want to wait for a “host” to assign presenter permissions. This also lets the presenter get started and set up their rig early.
Provide the host key/code in the meeting agenda so that any participant can claim the host role and start a recording or configure this instance of the meeting. Again, the goal is not waiting on a single designated host!
Similarly, provide the host audio conference key/code to all participants so that the first participant actually starts the audio conference and they’re not stuck listening to hold music. It also helps to disable entry tones (the “bing” or required “say your name” part). We need to stop saying “Who’s on the line?” every time!
Allow the participants to use as many features as possible: chat, video, notes, file transfer, etc.
Be sure to record the meeting and provide a streaming and download link, a transcript (or minutes, etc.), and any relevant notes on a wiki and as a reply to the meeting invite, so that all the participants have access.
If you must require a code to dial in to the audio line, most phones should support commas (a soft pause, about 2 seconds) and semicolons (a hard pause that waits for you to tap a button) between the conference number and the code. For example:
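With a made-up conference number and access code (the commas give the bridge a few seconds to answer before the code is sent):

```
1-800-555-0123,,,1234567#
```

Put the full dial string in the invite so joining is one tap instead of a scramble for the code.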
You should provide the link to the downloadable recording, because who wants to stream the recording when WebEx’s plugin asks for these permissions?
If you’re awesome, you’ll convert the recordings to a normal format (like MP4) using the WebEx recording editor.
I participated in an online panel on Build Automation: Quality and Velocity at Scale as part of Continuous Discussions (#c9d9), a series of community panels about Agile, Continuous Delivery, and DevOps. Automating the build pipeline has many challenges, including third-party dependencies, consistency and standardization, and testing.
Continuous Discussions is a community initiative by Electric Cloud, which powers Continuous Delivery at businesses like SpaceX, Cisco, GE and E*TRADE by automating their build, test and deployment processes.
Below are a few insights from my contribution to the panel:
What do build bottlenecks mean for your pipeline?
I work on a very large legacy application. We have a thick client run by retail pharmacies over dial-up lines. There is an enormous centralized database in the backend. Recently I am hearing a lot about build automation. But the build is just one small piece of building value to the customers. I am thinking about my process throughout the pipeline and all the teams I am supporting; starting from when the pharmacy owner has a problem and we develop a solution, test it, and develop documentation and training materials. They need to get all their pharmacists together to learn the new solution. And then we finally can deploy a new version. This pipeline spans 18 months. And that’s not good. I wish it was a lot faster. But I am thinking, what good does it do if I speed up this one small part of the build?
If we can commit faster, then there’s more in a branch that QA needs to test and that needs documentation and training. So the customers are less likely to take builds, because they have to get their work done. I am trying to think about making our build better. But honestly, I wish these guys would slow down. I wish that they were given some slack, because development isn’t even our bottleneck, if you look at the full process. So that’s the conflict I’ve got professionally. There’s the duality that I really do want to have great tools, builds, pipelines, and good feedback. But it doesn’t matter if we can’t get stuff out for 18 months.
I’ll try not to be too much of a nay-sayer about build automation because I really am excited by it. But I am worried that we don’t focus enough on full system effectiveness.
I wonder if it’s a chicken and egg problem – I can’t make the people on the dev team more efficient if the people upstream of us can’t keep up. I do not control the entire organization, the trainers and others. But if I can make this team more efficient, maybe we can be a better partner in the overall solution. So I can still get excited about build automation for that reason.
What do you think about consistency and standardization in the process?
My biggest technical problem is with build tools that let you write a build step inside their GUI and all they’re doing is wrapping a shell exec - you write this complicated multi-step, post-condition, pre-condition build and only your build server knows about it.
Then if your build server goes down or if an AWS region goes down, nobody can run a build because it is not actually in source control. So as nice as those features are, they may be easy to set up but aren’t going to be maintainable. Let’s write all of our build steps in some sort of DSL, let’s check it in, and let the build server be some fancy central wrapper. When the teams are excited to do it, we build up a Vagrant development machine.
I don’t care what my build server does or how the agents are configured - as long as they have VirtualBox and Vagrant, they boot up a machine to do whatever. I look at those and offer advice. We share best practices. But honestly, whatever they do inside their Vagrant box is their own. They can fight with Ops later about how it deploys. At least I can take that box and build it on any machine anywhere.
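A minimal sketch of what “build steps checked into source control” can look like, assuming plain shell (the step bodies here are placeholders; a real script would invoke your package restore, compiler, and test runner):

```shell
#!/bin/sh
# build.sh -- hypothetical sketch: every build step lives in the repo,
# so the build server is just a thin wrapper that runs this one file.
set -e  # fail fast: stop at the first broken step

echo "restoring dependencies..."  # a real script would run package restore here
echo "compiling..."               # ...the compiler invocation
echo "running unit tests..."      # ...the test runner
echo "build complete"
```

The build server’s job then shrinks to checking out the repo and running `sh build.sh`, which is exactly what any developer machine (or Vagrant box) can do, too.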
Can integrated tests help speed up the pipeline?
I think integrated testing is a scam. Unit tests should be enormously fast. You should run thousands of them per second. So if you’re writing good unit tests that are running quickly, your product should not be large enough that your unit tests are going to take too long.
With the integrated tests cycle, when you do the math and multiply out the combinations for even a simple screen, if you think you’re writing enough integrated tests to cover all the paths of your system, you probably aren’t. And the cost in time and effort to write and run enough integrated tests is just so mind-boggling – I mean we’re talking about tens and hundreds of thousands of paths. So I think that, instead, if you write a couple of contract tests or do a single smoke test, for example: “Is the app up? Can I log in?”, then I don’t see how that will ever slow you down. But then I haven’t gotten that far in my own application, so it’s all theoretical.
Command-line tools should emphasize the feature that you most likely want to use given the situation (context-sensitive, so to speak). If you run the `foo` command in an empty directory, it should help you out.

```
$ foo
You ran `foo` in an empty directory. You might want to run `foo init` to get started.
```
Someone on a podcast I listened to recently called this “emphasis”.
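Here’s a minimal sketch of that kind of emphasis as a hypothetical shell wrapper (`foo` and its `init` subcommand are made up for illustration):

```shell
#!/bin/sh
# foo -- hypothetical CLI sketch: notice the situation (an empty
# directory) and emphasize the command the user probably wants next.
if [ -z "$(ls -A .)" ]; then
  echo 'You ran `foo` in an empty directory. You might want to run `foo init` to get started.'
  exit 0
fi
echo "normal foo behavior would run here"
```

`ls -A` lists everything except `.` and `..`, so empty output means an empty directory; the tool uses that context to suggest the likely next step instead of failing with a confusing error.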