Tech Talk: "CCR" with Richard Draxelmayr of Frequentis
Hi, welcome to my talk on CCR! So, what is this about? Well, just like you, I would love for this to be about the iconic band CCR – unfortunately, that's not the case. Today I'm going to talk about Collaborative Constructive Review.

So, for the next couple of minutes, I'll be explaining what I've experienced in my work at Frequentis: what constitutes a good review, what makes a review process smooth, and what I think you could apply in your own company.

But first, let me give you a bit of background on why I think I should be talking to you about reviews: I've been a software and systems engineer at Frequentis since 2015 – a company that primarily develops safety-critical communication solutions. My main line of work is developing our automated deployment tool with Ansible. And – as I'm a developer at heart – I'm also tasked with bringing a software-centric mindset to an otherwise software-lean environment.

This means that, whenever I have the chance, I apply automatic quality checks to everything – everything! – that I can. So, among other things I've worked on in the company are web-based user interfaces, automated load testing, customer and internal trainings, and last but not least, a Spring Boot based REST back end.

This range of topics is also why I'd like to talk to you about reviews, rather than give a very fine-grained technical talk.

So, first up: reviews are awesome! Why do I think that? There are actually a couple of reasons from my own experience that make reviews a great tool. In short: ownership transfer; know-how transfer in all directions; a cauldron for smelting and upholding the team's philosophy; an activity stream for your code base; and last but not least, more eyes simply see more deficiencies.

So, first off: ownership transfer. This is important because it moves responsibility from one individual to the whole team. While it's still your contribution, it at least allows your team to go from individual ownership to shared ownership.

Reviews are also a great place to learn, and that goes in all directions. For example: "hey, this generator function here in JavaScript is actually used in a very clever way – I haven't seen that before." Or: "Python has a built-in deep-copy function that you might want to use, otherwise you might be in trouble." These are just corny examples, but they loop back to when I started web front-end development: in reviews I was able to pick up common practices, learn what other people wanted from me, and eventually showcase some things of my own.

Also, wherever people work together, different opinions are just a fact... that's human nature. But reviews allow you to incorporate these different viewpoints into a more homogeneous philosophy, which you can then uphold in the individual reviews.
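The Python remark above can be made concrete. Here is a minimal sketch of the built-in `copy` module; the dictionary and values are just made-up illustration data:

```python
import copy

# A nested structure: the inner list is where shallow copies bite you.
config = {"service": "backend", "ports": [8080, 8443]}

shallow = copy.copy(config)      # new dict, but the nested list is shared
deep = copy.deepcopy(config)     # recursively copies nested structures too

config["ports"].append(9090)

print(shallow["ports"])  # [8080, 8443, 9090] – the shallow copy sees the mutation
print(deep["ports"])     # [8080, 8443] – the deep copy is unaffected
```

This is exactly the kind of tip that travels well in a review comment: small, concrete, and immediately applicable.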

So, it's going to stick around, and people can point to decisions made in the past as still being relevant for the present.

So, an activity stream for your codebase – what does this mean? It means that you are not going to be on top of every change everybody makes. But if your colleagues point out crucial changes, you should at least be able to keep somewhat on track. And I'm pretty sure it beats the alternative of coming back to code, or work, you haven't been able to touch before and finding it totally unrecognizable.

Also, last but not least: while reviews will not find all mistakes, they will at least find the mistakes that are discoverable by reviews – you would miss them otherwise. So I can highly recommend employing at least some sort of review.

To keep at least some veneer of objectivity, I also want to highlight some arguments against reviews. The three that I've found are: reviews take a certain amount of time; peer pressure might hinder some creativity; and an imbalance in review time might cause frustration. It's quite obvious that reviews take time that could otherwise be spent on your main line of work. However, I personally think it is quite worth your while: we are working in a safety-critical environment, and I'd much rather have a second pair of eyes go over my work – especially if I was working on a very gnarly implementation problem.

So, peer pressure may hinder your creativity. This is especially true if you have people with very strong opinions who have been in the field for a very long time. This might be a good or a bad thing, depending on the field you are working in, but be aware that there are things you can do to counteract such problems. You could, for example, run spikes to investigate new technology avenues – or new avenues in general. Or you could schedule a "charisma sprint" where people get to do whatever the heck they like.

Ah yes – who hasn't had times when reviews were the only thing you did for weeks, and your own work felt sort of deserted by others? I would highly recommend that you identify these issues in your teams and address them in some manner, because otherwise both the people and the reviews just become blunt.

So, now that I've hopefully convinced you that reviews are awesome, here are the foundations that I think a good review process should incorporate to make it run smoothly. Tooling is your bread and butter for reviews. Automation allows you to focus on the things that are actually important, so you don't have to repeat yourself as much. And etiquette – because what goes around, comes around.

So first up we have tooling. Personally, I think you can imagine that reviewing in Word is nice and all, but at least after the second round of review no one has any idea what's going on, and traceability into the past is a joke. As a coder I get off easily, because I can employ the rich set of features that come with code review tools. Things that I look for in such tools are: a good diff view, where I can see the changes that are actually being incorporated into my work; comments, where I can see what's going on, or leave remarks that something might need to look different or should be explained to me in greater detail; tasks, where I can require people to change things the way I – or we as a team – decided they should be changed; quick navigation; and, as I said, traceability into the past, so you still have an idea of what's going on for years to come.

In our company we actually employ a couple of tools: we have Bitbucket on premise, we have Crucible, and we have an internal proprietary tool. In our daily work, this is what it looks like, and you can see that it highlights some of the key aspects we talked about previously: a great diff view, comments with tasks, and a good overview of what's going to be changed. You might find yourself in a situation where you are not working with text-based deliverables. However, there is still a way to incorporate some of what we just talked about, by converting through intermediate steps – if at all possible. For example, in our company this means we maintain a contract documentation that illustrates our entire API, written in Markdown and PlantUML, which is then automatically converted into an HTML page for company-wide consumption. This allows us to leverage the whole array of code review tools, making the reviews as smooth as possible while automatically compiling everything into a different product.

Since I'm a developer, I'm used to working with computers – and I'm also quite used to computers telling me what I got wrong. So here is one of the more important insights I've gained: automate everything you can, especially for reviews, because it allows you to focus on the things that are actually important instead of the busy work, and it also helps keep the team in line overall. For example: spaces or tabs? Newlines at the end of the file? Is the copyright header placed where it needs to be? I would highly suggest that you automate these aspects – you don't need to point them out every time they happen, and it keeps the focus on what really needs reviewing.
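As a rough sketch of what automating those three examples could look like – the header string and concrete rules here are invented for illustration, not any real company policy:

```python
# Hypothetical style gate; the rules mirror the examples from the talk.
HEADER = "# Copyright Example Corp"

def style_problems(text: str) -> list[str]:
    """Return a list of style violations, so no reviewer has to point them out."""
    problems = []
    if "\t" in text:
        problems.append("uses tabs instead of spaces")
    if not text.endswith("\n"):
        problems.append("missing newline at end of file")
    if not text.startswith(HEADER + "\n"):
        problems.append("copyright header missing or misplaced")
    return problems

print(style_problems(HEADER + "\nx = 1\n"))  # [] – nothing left for the reviewer
print(style_problems("\tx = 1"))             # all three rules fire
```

A check like this runs in milliseconds on every push, which is exactly why it beats repeating the same review comment by hand.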

Also, automated quality tests are a boon to reviews, because you don't have to go through the tedious work of, for example, checking out the code – or whatever the deliverable is – just to verify certain basic quality metrics, for example unit tests or the like. So use automated tests for that. It keeps your head relatively sane for the other stuff.
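A tiny example of such an automated check, using Python's standard unittest framework – the function under test is invented purely for illustration:

```python
import unittest

def normalize_hostname(name: str) -> str:
    """Toy function under test: trim whitespace and lowercase a hostname."""
    return name.strip().lower()

class NormalizeHostnameTest(unittest.TestCase):
    def test_strips_and_lowercases(self):
        self.assertEqual(normalize_hostname("  Node-01  "), "node-01")

    def test_already_normalized_input_is_unchanged(self):
        self.assertEqual(normalize_hostname("node-01"), "node-01")

if __name__ == "__main__":
    # exit=False reports results without terminating the interpreter
    unittest.main(exit=False)
```

When tests like these run automatically on every push, the reviewer can trust the basics and spend their attention on design instead.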

In our company, whenever I'd like to make some changes, builds are triggered for every commit that I push to a branch. So we can see that we have builds running for every commit, and if I do things wrong, the build tells me I need to do things differently. Note that we actually prevent changes from being incorporated into mainline branches if they do not meet certain quality criteria. There is no arguing with the machine – it just tells you "hey, this does not work." So people don't argue anymore. And whenever we need to, we can change the rules to support our needs.
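The "no arguing with the machine" gate boils down to a simple conjunction of criteria. A hedged sketch – the criteria names and the coverage threshold are invented, not the actual rules used in any real setup:

```python
def merge_allowed(build_passed: bool, coverage: float, open_review_tasks: int) -> bool:
    """All quality criteria must hold before a change may reach a mainline branch."""
    return build_passed and coverage >= 0.80 and open_review_tasks == 0

print(merge_allowed(True, 0.85, 0))   # True – the machine lets it through
print(merge_allowed(True, 0.85, 2))   # False – open review tasks block the merge
```

The point is that the rule lives in one place: when the team's needs change, you adjust the gate once instead of renegotiating it in every individual review.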

Also, I would like to point out that for every commit on the mainline development branch of the automated deployment tool I mentioned previously, we actually trigger a full deployment onto one of our test systems – something that I'm quite proud of.

Ah yes, human communication. I'm pretty sure everybody knows this is not the easiest of topics. But any good communication seminar will probably tell you to formulate feedback from your own perspective. For example, consider this: that's probably not a good way to phrase it, so let's check if we can make it a bit better; well, it's still not quite it. This is a joke, of course, but it also illustrates that this method has its limits. You need to be careful to compose your message in a way that maximizes the informational content, not the emotional content. For example, "this won't work" is a claim to an objective truth, which is probably not what you want, because it makes the room for discussion very narrow. You could rephrase it to something like "I thought this through, and I think it does not work as we intend." This at least leaves some room for discussion. It makes you more relatable and less of a jerk.

Another thing you can do is provide suggestions on how to solve the issue. This is especially helpful if someone is just starting out, is the target of a lot of review comments, and doesn't quite know the ropes yet. So instead of saying "okay, this is just plain wrong" – because then it's up to the author to come up with something you might like better – try to incorporate suggestions into your comments, so it's easy to understand what you want from that person. For example, I can rephrase my message to explicitly point out what I think needs changing, and even provide a suggestion on how to rephrase it appropriately.

Other points in terms of etiquette: leave some room for discussion, as I hinted at earlier. Claiming objective truth is not something you should be doing – it's just not very relatable, and a dubious claim at best.

Also, written communication leaves out a lot of things that usually come across with speech. Intonation is missing, facial expressions are missing, so you might come across a bit colder than you intend. Consider the example from before: how does this message make you feel? Is it neutral, slightly hostile, slightly friendly? Hard to tell.

Let me read this out for you, and you'll see for yourself that the tone actually matters and that you should be careful with your phrasing. For example: "I thought this through, and I think it doesn't work as intended" (positive intonation), versus "I thought this through, and it does not work as intended" (negative intonation).

Also please, for anything sacred and holy, keep your exchanges small!

So, in short: reviews are awesome! Pick tooling that you like working with, and reviews become easier. Automate as much as you can – it helps you keep the focus on what's important without repeating yourself as much. And etiquette matters. It really does.

So, thanks for listening to my talk about CCR. Have a good one!
