Cyclomatic Complexity (as Reported in SonarQube) - Is This an Acceptable Solution? Requesting Opinions
We have been using SonarQube on our code, and it flags code complexity issues by attaching a complexity score to each function. One type of issue that SQ reports is called "Cyclomatic Complexity". It is essentially a count of the branching in a function (conditional constructs such as if/else, switch, and ternary operators). As an example, here is a function that is considered to have high cyclomatic complexity:
function foo(a, b, c, d) {
    var result = 0;
    if (a === b) { result = 1; }
    else {
        if (b > c) {
            result = 2;
        }
        else {
            result = c > d ? 3 : 4;
        }
        if (a < 0) { result = b; }
    }
    switch (result) {
        case 1:
            result = result + (a < 0 ? 2 : 3);
            break;
        case 2:
            result = result * 2;
            break;
        default:
            result = 0;
    }
    return result;
}
Please indulge me and treat the function above as a real use case, as contrived as the logic may look. SonarQube will probably report a complexity score of 10+ points for it.
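For what it's worth, here is a rough decision-point tally for the function above, using the classic "one point per branch, plus one for the function itself" rule. SonarQube's exact counting rules differ by version and language plugin, so take this as an approximation rather than the score SQ actually reports:

// function foo(...)   +1  (the function itself)
// if (a === b)        +1
// if (b > c)          +1
// c > d ? 3 : 4       +1
// if (a < 0)          +1
// case 1:             +1
// case 2:             +1
// a < 0 ? 2 : 3       +1
// -------------------------
// rough total          8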
We have been tasked with refactoring so that the SQ complexity score falls below a certain number of points. To that end, and for lack of other solutions, we refactored the function above by extracting each if-statement or ternary operator into its own function and calling those functions from the main one. The idea is that the encapsulated if-statements and ternary operators no longer add to the complexity score of the main function. For example:
function ternarize(x, y, z) {
    return x ? y : z;
}

function equalize(x, y, z) {
    if (x === y) { return z; }
}

function foo(a, b, c, d) {
    var result = 0;
    result = equalize(a, b, 1);
    if (result !== 1) {
        if (b > c) {
            result = 2;
        }
        else {
            result = ternarize(c > d, 3, 4);
        }
        if (a < 0) { result = b; }
    }
    switch (result) {
        case 1:
            result = result + ternarize(a < 0, 2, 3);
            break;
        case 2:
            result = result * 2;
            break;
        default:
            result = 0;
    }
    return result;
}
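Applying the same rough rule to the refactored version (again only an approximation of whatever SonarQube actually counts), the main function's score drops because the two ternaries have moved out of its body (the extracted equality check is offset by the new if (result !== 1)), while each helper picks up a small score of its own:

// foo:        +1 function, +1 if (result !== 1), +1 if (b > c),
//             +1 if (a < 0), +1 case 1, +1 case 2        = 6
// ternarize:  +1 function, +1 ternary                    = 2
// equalize:   +1 function, +1 if                         = 2
//
// Each individual function scores lower than the original foo,
// but the total across all three (10) is higher than before (8).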
What's your opinion on the above example of refactoring to satisfy code complexity requirements? The obvious alternative is to waive the requirement for specific files on a case-by-case basis, but I was not successful in pushing for that. The resulting refactor of our actual code produced the following:
- Functions that simply wrap built-in language constructs
- Functions that are used at most once and exist only to satisfy the code complexity requirement
I have specific questions:
- When is it worth writing a function that will be called only once, by a single other function, and nowhere else?
- When is it worth writing functions that wrap built-in language constructs (like the comparison operators <, >, ===, etc.) just to reduce a code complexity score? I have searched for examples of refactoring similar to the one above, but have not found any, let alone a justifiable one.
Thanks in advance for your thoughts.
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow