Lean/XP Full-Stack Software Engineer with a DevOps mindset. I do backend and frontend development, set up development infrastructure, define software and enterprise architectures, take a keen interest in operations, and coach teams in adopting eXtreme Programming principles.
Over the years I have been a bit of everything: Software Engineer, (Architect), Technical Team Leader, Scrum Master, Product Owner and Agile Technical Coach.
I like to help teams create meaningful software, with a keen eye for code quality and the software delivery process - from customer interaction to continuous delivery. Instead of balancing quality and delivery, I believe and practice that better quality is actually the way to more and better deliveries. To me, feature delivery and code quality go hand in hand.
"How long would it take your organization to deploy a change that involves just one single line of code? Do you do this on a repeatable, reliable basis?"
- Mary Poppendieck, Implementing Lean Software Development
"Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live. Code for readability."
- Anonymous
My primary role is "IT Engineer". I have 20+ years of experience. I still code and I am passionate about it. This is my bread and butter. But I am not a Monkey Coder. I do the whole shebang: from inception through coding, testing and setting up the infrastructure, to releasing into production and monitoring the damned application.
I pay a lot of attention to quality and maintainability because that is what makes the difference between making money or not.
Without these skills, I cannot be a proper IT Delivery Coach. I cannot help organisations achieve a state of Continuous Delivery. Everything I know, I learned through delivering working software in production.
Pricing
850 EUR per day excl. VAT.
I help teams adopt the practices that make Continuous Integration and Continuous Delivery.
Based on 20+ years of experience delivering IT systems that make money, I help teams and organisations reach a state of Continuous Delivery, i.e. being able to release product increments reliably and quickly enough to satisfy customer demand.
This means adopting a series of both technological and organisational changes that will help teams reduce stress and fatigue while delivering higher quality significantly faster at a sustainable pace and at reduced costs.
Which changes to adopt, and in which order, is specific to the unique circumstances and constraints of your organisation. There is simply no fixed adoption roadmap for improving delivery cadence.
This goes beyond automating the delivery. The tooling is the easy part to solve. It is really about a change of organisational mindset.
Pricing
Online ad hoc Continuous Delivery coaching:
225 EUR per hour excl. VAT.
For availability, check my calendar.
Long term coaching starting at one day per week:
1000 - 1250 EUR per day excl. VAT.
For the past six years, I have been managing the security and infrastructure for a fintech to satisfy financial regulation and compliance requirements.
When reviewing the second edition of the book Infrastructure as Code, I realised we had implemented many of the recommended cloud infrastructure patterns that reduce risk and increase delivery throughput, simply by applying common sense and lean thinking.
Based on this experience, I can advise startups and scale-ups on how best to leverage the AWS capabilities while keeping an eye on costs.
Pricing
1000 EUR per day excl. VAT.
Over the years, I have seen what works and what does not. I have had many conversations with fellow coaches and engineers from various communities, as well as with VPs, Heads and Directors of Engineering, CTOs and CIOs. I have read numerous books and have been asked to review books. Finally, I have helped a great many organisations improve their IT delivery. Recently, I have also been asked to run Technology Due Diligence for investors to better understand the strengths and weaknesses of technology organisations. All of that allowed me to build up a body of knowledge on how to organise a technology organisation so that it contributes to making money.
With that background, I am available as an Interim CTO to help a technology organisation in need of a temporary leader to bridge the gap; as a consulting CTO to advise CTOs on how to improve their organisation; or simply as a full-fledged CTO.
In this, it is important to understand that everything is contextual. What works for one organisation does not necessarily work well for another. Therefore, it is important to listen to the people, to understand their problems and to start from there.
Pricing
To be discussed.
The problem with the most commonly accepted way of running code reviews today, using Pull Requests, is that they have the nasty habit of blocking the flow of delivery. They introduce a cost of delay. Any delay reduces feedback. Consequently, quality goes down.
In short, Conway’s Law says any organisation that designs a system will come up with a system design that copies the organisational communication structures.
Continuous Integration is by itself already a practice. It is one of the most critical to adopt to enable the fast flow of work through the value stream. However, many teams believe Continuous Integration is just a tooling problem and then claim they practice Continuous Integration.
Your IT organisation is surrounded by problems preventing it from satisfying market demand on time with the required quality. Where do you start?
The Belgian Federal Pension Service (SFPD) has a large IT department with more than a dozen teams working on the same application. Over the course of a few years, these teams developed an agile way of working at the team level but were struggling to work effectively on a larger scale.
15 teams, 1 shared monolith, 1 release every 6 months, and product demand for 1 release every 2 weeks. How do you know where to start with Continuous Delivery, when you’re surrounded by technology and organisational challenges?
Feature branching is one of the most commonly accepted practices in the IT industry. It is mainly used to control quality and to control feature delivery. However, many times the inverse is true. Branches break the flow of the IT delivery process, reducing both stability and throughput. Unfortunately, oftentimes teams are not aware of this. They truly think they are doing the right thing.
So your organisation wants to implement Continuous Delivery. But is your organisation ready for this? Does it have the right mindset? To be successful with Continuous Delivery, you have to adopt the proper mindset as a whole organisation. Just throwing tools at it will not do the job.
How do you go from a ragtag group of people having no idea what it means to be Agile, stuck in eternal maintenance and operational work, applying none of the basic software engineering practices, to a DevOps team delivering value for their customers in sprints of 2 weeks?
Continuous Delivery brings a lot of value to your organisation. It will allow you to reduce your time to market for new features and bug fixes. It is a significant predictor of company performance.
We read about Continuous Integration. The practice appeals to us. We understand its value and benefits, especially as it unlocks our ability to release confidently at any time. But where to start? Many teams believe Continuous Integration is just a tooling problem, declaring they practice Continuous Integration. Yet, they often do not. Hence, they miss out on the benefits that come along with it. It takes more than tooling alone. So, again, where should we start? After all, there are still 20 practices to implement. Which ones to pick first?
One key principle for continuous improvement is feedback. To create feedback loops, we need to relay information. In the case of Continuous Integration, this requires Broadcasting the Codebase’s Health. But feedback alone is not enough. Teams have to act upon feedback, and for this, teams have to be empowered.
When tests occasionally fail because they are flaky, we can no longer rely on the tests to identify a good release candidate. At that point, we lose the monitoring of the codebase's health. The benefits of Continuous Integration fall flat. On-demand production releases are jeopardised.
One precondition of being in a state of Continuous Integration is to fix a broken build within ten minutes. As long as the build is broken, the team cannot perform on-demand production releases. This effectively incapacitates the organisation's ability to make money.
Continuous Integration is a practice that ensures always working software on mainline and gives us feedback within minutes as to whether a change broke the application or not. To guarantee that the mainline is in a releasable state at all times, we need to verify every single commit. Therefore, every push to the remote mainline triggers an automated build and the execution of all the automated tests.
As long as we have not pushed our local commits to the remote Mainline, no integration has happened. We do not know whether our changes broke the application or not. No communication regarding our local changes has happened with the rest of the team. In effect, the rest of the team is blind to our local changes. We are not working as a team but in isolation. Consequently, it is critical to push our local commits at least once a day into the remote Mainline to achieve Continuous Integration.
To accomplish Agree as a Team to Never Break the Build we have to Run a Local Build and Commit Only on Green. To know we can Commit on Green after Running the Local Build, we must Make the Build Self-Testing. Agree as a Team to Never Break the Build is a cornerstone of Continuous Integration. As a consequence, Making the Build Self-Testing is a necessary condition to realise Continuous Integration.
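To make that concrete, here is a minimal sketch of a self-testing local build, assuming a hypothetical Node/TypeScript project with compile and test npm scripts; the file name and script names are illustrative only, not taken from the original text.

```typescript
// build.ts - a hypothetical self-testing local build.
// If any step fails, the script exits non-zero: the build is red and we do not commit.
import { execSync } from "node:child_process";

const steps = [
  "npm run compile", // assumed script: compiles the sources
  "npm test",        // assumed script: runs the whole automated test suite
];

for (const step of steps) {
  console.log(`Running: ${step}`);
  try {
    execSync(step, { stdio: "inherit" });
  } catch {
    console.error(`Build is RED (failed at: ${step}). Do not commit.`);
    process.exit(1);
  }
}

console.log("Build is GREEN. Safe to commit and push to mainline.");
```

Because the test run is part of the build itself, a green build means both compiled and verified, which is what makes Commit Only on Green a meaningful team agreement.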
Occasionally, I am asked to help out with technology due diligence. Recently, I was involved in a different kind of due diligence. We were asked how the technology organisation can save on costs. After some analysis and many interviews, we suggested a Team Topologies approach to organising teams that came with an impressive estimated cost reduction.
Recently, I was speaking at ScanAgile, the biggest Scandinavian agile conference. ScanAgile is on a mission to increase diversity and attract more new, first-time speakers, which is laudable. So, they asked us, the speakers, for tips and tricks on becoming a successful speaker. Two things about this. First, what does it mean to be a successful speaker? What is that? I do not have the answer to that. I do not think I am one. That makes it a bit harder to give any advice. Second, I am not a fan of tips and tricks. I have often experienced this as patronising. It lacks context. What works for one person does not necessarily work for someone else. In that regard, I prefer sharing what works for me as a speaker and what I look for as a conference reviewer. Your mileage may vary. Note that, for now, I stopped being a reviewer for personal reasons.
Now and then, I rant about Pull Requests on social media. The rants are about the inefficiencies and the fairly low value of Pull Requests compared to their cost. The friends who understand lean principles greet my rants with applause and cheering. On the other hand, there is a fair share of somewhat offended people who feel the need to belittle and cry unprofessionalism. I guess it is time to provide some deeper argumentation than social media allows.
During this whole article series - On the Evilness of Feature Branching - I have not once mentioned anything evil about feature branching. So, where is the evilness? Is it the problems they introduce? Or the reasons teams use them for? Or the compliance reasons that push teams into using feature branches? In all truthfulness, it is none of these. But something else.
Months ago, I made the observation that engineers seem to enjoy administrative tasks, given how much affection they show for the Pull Request. Malik reacted to this with “Show me a different process that guarantees a green mainline”. Manifestly, the answer to that is: Agree as a Team to Never Break the Build. To this, Malik replied: “‘agreeing to never break the build’ is like agreeing to never produce a bug… It’s nonsensical, why not prevent the issue in the first place instead of playing a blame game where the developer is bound to fail at some point?”. In all honesty, I appreciate Malik. We do not often agree online. But we are somewhat aligned on the outcomes, i.e. having a green mainline. We just use different techniques to get there. Having said that, I decidedly disagree with Malik.
Every time I suggest the adoption of trunk-based development, I always get that one, single, same question asked: What about Code Reviews? How do we do Code Reviews when we do not have branches anymore? Of course, this assumes code reviews can only happen with Pull Requests.
This article describes a personal case study on how my wife and I used a lean principle to avoid a decision with somewhat dramatic consequences.
The usual way to achieve fast Continuous Code Reviews is through Pair Programming or Ensemble Programming. In this article, I will share a less common approach to Continuous Code Reviews using Non-Blocking Reviews.
Recently, as a reaction to the Practices that make Continuous Integration, someone on LinkedIn suggested that a Pull Request based approach without formal reviews could supersede Agree as a Team to Never Break the Build. The benefit would be that it removes the need to rely on team agreements to avoid broken builds. Instead, they would now be guaranteed by the Pull Request build. As attractive as this looks, it ignores everything that emanates from Agree as a Team to Never Break the Build.
In part 1 - Team Working for Continuous Integration - we looked into the team practices that make Continuous Integration. In part 2 - Coding for Continuous Integration - we explored the engineering practices for Continuous Integration. In this last part, we investigate the build practices required for a team to succeed with Continuous Integration, i.e. which practices to adopt and how to improve builds to support the team practices - especially Agree as a Team to Never Break the Build - and the coding practices - in particular Make Changes in Small Increments and Commit Frequently.
In part 1 - Team Working for Continuous Integration - we looked at all the necessary practices to achieve teamwork around Continuous Integration. Now, we investigate the critical engineering practices individuals, pairs or ensembles should adopt to attain Continuous Integration as a team.
The first part of this series investigates the practices that enable teamwork, which in turn enables Continuous Integration. Continuous Integration is a Team Practice. We achieve it as a team and not as a set of individuals. Most of the time, practices are defined for individuals. When most team members apply them, the team does well. With Continuous Integration, however, most team members have to adopt additional practices before the team can say it practices Continuous Integration.
Honestly, it feels a bit awkward to still write about Continuous Integration. It has been over 20 years since Kent Beck introduced Continuous Integration in his book Extreme Programming Explained. But, when looking around me, I realise teams and organisations still struggle with adopting Continuous Integration.
In part 4 of this series - The Problems - I dived deep into the problems introduced by feature branching. But what can we do about this? How can we avoid the problems introduced by feature branching altogether?
In part 2 of this series - Why do Teams use Feature Branches? - and part 3 - But Compliance!? - I went into all the reasons teams themselves mention for why they use feature branches. This time I want to go deeper into the problems introduced by the use of feature branches.
Is pursuing 100% code coverage a good or a bad thing? Code coverage is an interesting metric. However, 100% code coverage is a crappy target. It encourages gaming.
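As a hedged illustration of that gaming, here is a hypothetical TypeScript example (a Jest-style test runner assumed): a test that executes every line of a function without verifying anything, so coverage reaches 100% while confidence stays at zero. The function and file names are invented for the example.

```typescript
// discount.ts - hypothetical function under "test"
export function applyDiscount(price: number, percentage: number): number {
  if (percentage < 0 || percentage > 100) {
    throw new Error("percentage must be between 0 and 100");
  }
  return price - (price * percentage) / 100;
}

// discount.test.ts - drives coverage to 100% while asserting nothing
import { applyDiscount } from "./discount";

test("applyDiscount runs", () => {
  applyDiscount(100, 10); // happy path executed, result never checked
  try {
    applyDiscount(100, 200); // error path executed...
  } catch {
    // ...and the error silently swallowed, so this branch counts as "covered" too
  }
});
```

The metric looks perfect, yet this test would keep passing even if the discount calculation were completely wrong.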
In part 2 of this series - Why do Teams use Feature Branches? - I examined all the possible reasons teams mention for why they use feature branches. There was, however, one reason I did not mention, the one people reference as the ultimate reason: “We use feature branches and pull requests to comply with regulations”. I would like to explore this and show there are other options to comply that do not have the same drawbacks.
In part 1 of this series - a Tale of Two Teams - I introduced two quite different teams. One novice team practising trunk-based development, the other experienced but being used by GitFlow. Now I would like to explore why teams use feature branches. What are their reasons? What problems are they trying to solve with long-running branches?
On the experience of working with two totally different teams: one novice practising trunk-based development, the other very experienced being used by GitFlow.
Pulumi is the new kid on the block in the cloud infrastructure as code arena. I was relieved to find Pulumi. Finally, we have testable Infrastructure as Code. We can write fast unit tests that we can execute locally without needing the cloud. However, I was a bit disappointed. Pulumi does not have a full representation of IAM Policy documents. Fortunately, it was relatively easy to build a library that did this!
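For context, this is roughly what such a locally runnable Pulumi unit test looks like in TypeScript using pulumi.runtime.setMocks, with a Mocha-style test runner assumed; the infrastructure module and the exported bucket are hypothetical names for the example, not part of the Pulumi API.

```typescript
import * as pulumi from "@pulumi/pulumi";

// Mock the Pulumi engine before importing the stack code, so no cloud calls are made.
pulumi.runtime.setMocks({
  newResource: (args: pulumi.runtime.MockResourceArgs) => ({
    id: `${args.name}-id`, // fake physical id
    state: args.inputs,    // echo the declared inputs back as resource state
  }),
  call: (args: pulumi.runtime.MockCallArgs) => args.inputs,
});

describe("infrastructure", () => {
  let infra: typeof import("./infrastructure"); // hypothetical module declaring the resources

  before(async () => {
    infra = await import("./infrastructure");
  });

  it("does not expose the bucket publicly", (done) => {
    infra.bucket.acl.apply((acl) => { // hypothetical exported S3 bucket
      if (acl === "public-read") {
        done(new Error("bucket must not be publicly readable"));
      } else {
        done();
      }
    });
  });
});
```

Because the engine is mocked before the stack code is imported, the test runs in milliseconds on a laptop, without credentials or a cloud account.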
A set of questions I use on the occasions when I have to run an interview to hire engineers into an agile team.
Over time, different people have articulated Conway’s Law in various ways. This is an overview of the variations I found when recently going over the Conway’s Law literature.
Branch creation became very easy with the advent of Distributed Version Control Systems. However, it comes at an unquestionable cost. Long living branches break the flow of the IT delivery process, impacting both stability and throughput.
It has been five years since I started with public speaking. I wanted to share my journey into public speaking. My highs and my lows. Maybe my experience as a shy, introverted person can help others.
Back in October 2019, Vasco Duarte wanted to run a bonus series on Continuous Delivery for the Scrum Master Toolbox Podcast. By chance, I got to be part of this series.
There is this commonly accepted, ingrained belief in the software industry: by dropping a build server into a team, they get Continuous Integration magically for free. This belief is further incentivised by the marketing of build server vendors.
In this post, Leena interviews Thierry de Pauw, a Software Engineer who coaches on Continuous Delivery and other Software Engineering Practices. His focus is helping teams to improve the flow of software delivery.
We asked Aino for an abstract and she said “no”.
Code quality is an abstract concept that fails to get traction at the business level. Consequently, software companies keep trading code quality for new features.
Scrum, Kanban, ShapeUp, and their like are comfortable prisons that make it easy to be mediocre. They promote an illusion of productivity and a mirage of predictable delivery. Just relax and follow the script, and you’re sure to get results–the same lackadaisical results as everyone else. It’s no accident that one of them is called “SAFe”, because no one ever got fired for following a framework.
I’ve often heard people say “But how do you know you’ve tested everything?” and the answer is “I know I haven’t”. We must walk a fine line when working in high-speed delivery environments. There is a delicate balance between efficiency and thoroughness, between delivering something to the customer and delivering something of value to the customer.
I’ve done testing in various forms throughout my career. Automated, exploratory, scripted, you name it! Over the years though, I’ve been through situations where even with all the testing in the world we couldn’t quite avoid serious quality problems with the software we were releasing. Have you ever been in such a situation as well? Or is it something you haven’t considered before?
Quality is complex. Quality is subjective. Even if we all talk about the same system and have the same professional background, our assessment of a system’s quality might wildly differ. So, when I got the request to detect the quality gaps among multiple teams, I knew I needed a uniform way to scan and assess in order to provide this information.
Leadership is one of the most significant challenges to business agility adoption faced by organizations. It is a key factor: individuals who welcome complexity and know how to leverage influence, culture, and organizational design to align widely distributed teams are integral to success.
Let’s face it: test automation is hard. Teams across the industry continue to struggle with the same old problems again and again: flaky tests, poor coverage, and never enough time to develop automation. While many teams have reached success, many others feel left behind.
Scanning content from the testing community and exploring conference programs, I started to see a pattern in automation content: it is all framed around how to do automation successfully. You should do A, followed by B, and magic will happen. Those talks are insightful and often contain great advice, but what if a talk switched the framing?
As software testing becomes increasingly complex, companies are turning to artificial intelligence (AI) to optimize their testing processes.
Everyone at the Agile Testing Days Conference has heard the term “debugging.” You may also know the story of the first computer bug. In 1947, early in her career with the US Navy, Dr. Grace Hopper was helping prepare a Mark I computer to show to a visiting Navy Admiral. Something was wrong. As a good Software Tester she finally isolated the problem to mechanical relay #70. (Back then, mechanical relays opening and shutting served as 1s and 0s.) The arms of Relay #70 had smashed a moth. Dr. Hopper scraped off the moth and taped it in her lab journal: the world’s first computer bug!
Does it always feel like your team is chasing after quality? Are you taking steps to improve quality, but it never feels like it’s moving the needle?
There are things you don’t talk about with your colleagues - even less so with your boss. Mental health issues are certainly a big no-no. When I first started working as an agile tester, I kept my history with mental illness secret. As a result, I couldn’t speak openly about topics that are close to my heart: mental health and self-care. In the Agile World however, we value respect, courage, and openness. How do you reconcile this with these taboos?
“Kanban doesn’t work for us because we don’t have items of the same size” - If I had a dollar for every time I heard this statement, I could afford to take my entire family on a month-long, all-inclusive, five-star tropical vacation. And I have a large family!
Mob programming is when the whole development team works on the same thing, at the same time, in the same space and with one shared computer, screen and keyboard. I’ve been working as a software engineer in a development team where we’ve been mob programming for well over a year, every day, without exceptions. We’ve noticed an enormous boost in productivity and we really feel that we make the most use of the team’s overall brain capacity to solve problems and to ship high quality software to our users. But it’s not always that easy. We’ve learned to master this way of working the hard way, by continuously improving on our processes.
Data Warehouses are normally seen as big, expensive, lengthy, waterfall projects, using complex and costly tools. They don’t need to be. As someone who has helped several teams to build successful data warehouses, I know that an Agile approach is much better, and in this talk I will explain how we do this. We can deliver incremental, useful reports and applications within weeks of starting the project and can continue to consolidate an organisation’s data, improving its quality and the meaningful insights it can deliver. At the same time we develop wider understanding of the data, often leading to improved business processes.
This talk explains that D&I is the main asset organisations have, to thrive in a complex world. It explains how D&I is a real catalyst to achieving Agility, and that to solve complexity, Diversity and Agility must go hand in hand.
We’ll discuss what debt looks like, and how easy it is to get into debt in the first place. Then we’ll put a plan into action.
The world has turned itself upside down the past two years - there’s no doubt about that. With it, our relationship with work has shifted, team dynamics have changed and how and where we work is up for constant debate. With that in mind, Vimla will be taking the stage to speak about conscious culture and intentional team design - how can we help our people be their best selves so that they design and deliver products and services that are fit for everyone. Sounds easy right? It’s an ambition that most companies strive for, but very few achieve so join Vimla’s talk to see what role you can play in changing the world.
Great products come from great passion, not technical excellence. The products with the best market fit, beloved of users, will often be held together with love and damp string. In their murky codebases you’ll find every example of bad practice. These products of passion grow quickly, but often fail when the technical limits of their poor implementation start biting. Exceptionally high running costs, high costs of change, and increasing fragility can damage the customer relationship.
This session will explore the conditions needed to facilitate developer autonomy and create an awesome DevEx. What makes a good developer experience? What are the common stumbling blocks? How to architect to manage risk, governance and autonomy? How to build modern team topologies that don’t hinder progress and ensure long term sustainability? How to build a culture that is transparent and breaks down silos?
A quote from a recent client: “We don’t do the important stuff…because we won’t do anything that is NOT our OKRs”. Also: “OKRs give me performance anxiety :-(”
Since You Build It You Run It was outlined in 2006, on-call product teams as an operating model has gone from being a controversial idea… to being a controversial idea. Enterprise organisations don’t do it, but they do talk about why they don’t do it.
If you are a developer you are probably working on a large and complicated codebase. Unfortunately a lot of existing code lacks automated tests and adding them can be challenging, particularly if the code is old or poorly structured. Testability has always been an aspect of architecture that people have said is important but all too often I see this aspect ignored. Approval testing is a technique that helps you to get a difficult codebase under test and begin to control your technical debt. Approval testing works best on larger pieces of code where you want to test for multiple things. Because of this, the architecture of the system is really important for success with this testing technique.
Let’s put aside the ‘bubblegum and unicorns’ of the Spotify Engineering Culture videos and talk about what doesn’t quite work at Spotify, and how we’re trying to solve it. This is a failure/learning report intended for coaches and other change agents who need encouragement that it’s always hard AND it’s always possible to improve.
Over the last 7 years I have heard lots of reasons why people can’t do Continuous Delivery. This is my summary of these reasons and why they are wrong.
Clarke Ching organised the online conference ToC Down-Under Summit 2021 at Sydney time. I was able to attend the first two presentations of each day. These are my notes of the keynote of day 2 given by Ian Larsen.
Clarke Ching organised the online conference ToC Down-Under Summit 2021 at Sydney time. I was able to attend the first two presentations of each day. These are my notes of the keynote of day 1 given by Justin Roff-Marsh.
Most of us know about Conway’s adage “Any organization will produce a design which is a copy of the organization’s communication structure.” But Conway coined four laws in his 1968 paper “How Do Committees Invent?” What are the other ones? Why are we not talking about them? And what do they tell us about optimizing teams in a distributed world? – Mike Amundsen
This is the paper behind Conway’s Law. I’ve assembled some snippets from the paper that triggered me and added some thoughts.
Cloud native is the perfect recipe for innovation, adaptability and engineering excellence – when it goes right. When it’s not right, it can be a monster spaghetti, a quality headache, and frustratingly inflexible. Why so negative?
Who designs the architecture of your software systems? Conway’s Law suggests that HR may be strongly shaping software architecture by deciding how teams are composed and interrelate. Do you want HR designing your software architecture?
One of the issues with the maturity models we often use to assess teams is that they are context-free. They don’t take the environment into account, missing the challenges and the specific needs we have in our context. We will explore an alternative: Maturity Mapping. How can Wardley Mapping, Social Practice Theory and Cynefin be applied together to develop situational awareness and build a shared understanding of the practices you use and the unique challenges you face?
Thinking about flow from some new perspectives. Find opportunities to think about flow. Talk about turbulence: how flow is both good and bad.
Estimating with story points is a common practice. But is it worth the effort?
From TOIL to Continuous Delivery of Infrastructure: our tale of migrating our existing Infrastructure as Code tools & wrappers so that they can be used in a CD system, but with all of the control grey-beards, enterprises & governments expect.
The only sure thing about forecasts is that they are WRONG – James P. Womack and Daniel T. Jones.
How ThoughtWorks UK exceeded 40% women and non-binary people in tech roles
We are trying to tell something positive about #NoProjects instead of just saying No.
My notes from Anand Bagmar’s Analytics workshop that I followed at AADays 2018 in Poland.
My notes from a performance testing workshop I followed at AADays 2018 in Poland. It was my very first encounter with performance testing.
Human beings have an astounding ability to see patterns and apply them in new contexts…but how often do we see patterns that don’t truly exist, and what happens when those patterns are misapplied? In a complex domain it’s only in retrospect that we can understand how outcomes emerged, and we don’t get much more complex than human systems.
Blog post: https://lizkeogh.com/2012/09/21/the-deliberate-discovery-workshop/
Complexity framework: a sense-making framework, patterns emerge from the data
Complex adaptive systems: no linear cause, but instead we have dispositional state, a set of possibilities and plausibilities, in which future state cannot be predicted.
If you want to go in one direction, the best route may involve going in the other. Goals are more likely to be achieved when pursued indirectly.
This is a full transcript of Allan Kelly’s “Continuous Delivery and Conway’s Law” at the LondonCD Meetup. It gives an in-depth overview of what Conway’s Law is, how it impacts organisational and system design and how it relates to Continuous Delivery.
Why did I get interested in microservices? Because from my early days at ThoughtWorks I was actually helping people ship software more quickly. I spent lots of time looking at Continuous Integration, Continuous Delivery, cloud automation, infrastructure automation, automated tests, and all these sorts of things. And realised that actually it was the architecture of these systems that made it hard to ship software more quickly.
These are my notes on Maike Goldkuhle’s session at XP2017 on HR and Agile. I wanted to attend this session because there is very little said about the role of HR in agile transitions.
This is a full write out, almost word for word, of Steve Smith’s presentation ‘Measuring Continuous Delivery’ at Pipeline Conf 2017. I’ve written this presentation down, together with the first version of this presentation at the LondonCD meetup, to be better prepared to review Steve’s book Measuring Continuous Delivery.
Dave Farley describes approaches to acceptance testing that allow teams to work quickly and effectively, build functional coverage tests and maintain those tests throughout change.
Your branching strategy is an extremely important choice to make. In this talk I hope to show how a change of branching strategy can actually change your team’s mindset. Specifically, I look at how a shift from a feature branching strategy to a trunk-based strategy affected the team. In my view these changes were for the better and I guess most at PIPELINE will agree, but I leave that for others to decide on this occasion.
For millennia, human beings have survived by learning, then applying our learning to different contexts. We’re so good at it that we’re driven to find those patterns, even when they don’t exist. Our desire for the predictable suffuses everything we do; our beliefs, our behaviour and even our identity. From cognitive bias to the metaphors that underlie our language, we create constructs of words and imagination that keep us from innovating… and yet, they’re the same constructs that help us move forward in uncertainty. Without them, we’d be unable to make decisions at all! In this talk we look at how our language and perceptions can hold us back, and how changing the things we say and the way we look at the world might help us become more resilient, happy and innovative.
Les Hazlewood, CTO @ Stormpath gives an in-depth overview of what makes a RESTful API. The original video on which these notes are based is not available any more. I fell back on the video of JAXConf.
consulting CTO - IT Delivery Consultant - IT Engineer