Big O notation is tremendously useful for reasoning about the performance and memory consumption of algorithms. Depending on the exact nature of your work, your life might not literally depend on it, but even when developing something as seemingly mundane as CRUD applications, being able to analyze the performance of a particular sorting or search algorithm comes in handy at times.
The Wikipedia article on Big O notation outlines it this way:
“Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity.[ … ]
In computer science, big O notation is used to classify algorithms by how they respond to changes in input size, such as how the processing time of an algorithm changes as the problem size becomes extremely large.”
As the professor of my data structures class at university once put it: “Big O notation is a way of describing something inexact in exact terms.” (which in my opinion says a lot about the inherent elegance of mathematics)
While I absolutely recommend learning about the relevant mathematical concepts in abstract terms, because this will both help you with abstract thinking and give you a deeper understanding of how software works (particularly in this time of rekindled interest in AI and machine learning), sometimes it helps to think about these concepts in the kind of language you’re used to from everyday work, which of course is a programming language:
Fellow IT freelancer Rob Bell published an article some time ago titled A beginner’s guide to Big O notation, in which he gives simple examples of common algorithm performance behaviours using a construct every programmer uses every day: for-loops.
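In that spirit, here’s a minimal sketch (in Python, with hypothetical function names of my own choosing, not taken from Bell’s article) of three common complexity classes expressed as plain loops:

```python
# Illustrative examples of common Big O classes using for-loops.
# Function names are made up for this sketch.

def first_item(items):
    """O(1): constant time -- the cost is independent of input size."""
    return items[0]

def contains(items, target):
    """O(n): linear time -- one pass over the input in the worst case."""
    for item in items:
        if item == target:
            return True
    return False

def has_duplicates(items):
    """O(n^2): quadratic time -- a nested loop compares every pair."""
    for i in range(len(items)):
        for j in range(len(items)):
            if i != j and items[i] == items[j]:
                return True
    return False
```

Doubling the input leaves `first_item` unchanged, roughly doubles the work in `contains`, and roughly quadruples it in `has_duplicates`, which is exactly the kind of growth behaviour Big O notation captures.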
Now, while it’s by no means complete or extensive, the explanation provided in the article should give you a very good idea of what Big O notation is about and how to apply it in a pragmatic manner.