It explains all parts of your regex and highlights all matches in your example text. If I use a regex in code, I usually add a comment linking to a regex101 playground.
I have written dozens upon dozens of Regexes without using reverse negative lookups, but I guess according to you I don't really know Regexes because I haven't used those specific features?
You don't need to know all about a subject to know a subject.
I'm not sure why you're bringing up efficiency, I'm not talking about that. If I don't understand a topic, I can't do things with the topic. I use Regexes, so I must at least somewhat understand Regexes.
I'm saying that if you look at my code (and I write a lot of negative lookups in DevOps for data validation), you can't read it. It's nothing personal.
Well, earlier you were saying that I don't know Regexes if I don't know reverse negative lookups. Funnily enough Regex101 would help me understand your code in this case, so you were wrong on that count too.
I am, and I can think of many cases where plain dumb string matching, when you know exactly what you're dealing with, beats regex in both performance and maintainability.
You're a clown that wouldn't know how to compare two strings without regex even if you got paid 6 figures to do it.
There's a lot of use cases where regex makes a lot of sense: complex log parsing, determining if a value entered is a valid phone number or email, syntax highlighting, data validation in ML preprocessing, etc. A lot of languages also come with certain features that allow regex to be more efficient than dumb string matching, such as the ability to pre-compile patterns and the flexibility of being able to choose between deterministic and non-deterministic finite automata, should you need efficiency for one use case and flexibility for another. It really depends on what you're designing and how it's going to be used, of course.
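For what it's worth, here's a minimal Python sketch of the pre-compiling point, assuming a log-parsing use case (the pattern and log format are made up):

```python
import re

# Compile once at module load; reuse the compiled pattern for every line.
LOG_LINE = re.compile(r"^(?P<level>INFO|WARN|ERROR)\s+(?P<msg>.*)$")

def parse(line: str):
    m = LOG_LINE.match(line)
    return m.groupdict() if m else None

print(parse("ERROR disk full"))  # {'level': 'ERROR', 'msg': 'disk full'}
```

Python caches compiled patterns internally anyway, but re.compile makes the intent explicit and skips the cache lookup in hot loops.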
My guess is that someone started with a small set of features to find a simple solution to the problem, but the complexity of the problem got waaaay out of hand.
Regexes are actually used in formal computer science (if that's the right term), e.g. "proof that this or that algorithm won't deadlock" or something like that.
They're actually really elegant and can cover a lot. But you'll have to learn them by using them.
For the purpose of algorithm verification, finite and/or pushdown automata, or probably sometimes even Turing machines, are used, because they are easier to work with. "Real" regular expressions are only nice for writing a grammar for regular languages which can be easily interpreted by the computer, I think. The thing is that regexes in the *nix and programming-language world are also used for searching, which is why there are additional special characters to indicate things like "it has to end with ..." and shortcuts for when you want a character or sequence to occur (small sketch after this list):
at least once,
once or never, or
a specified number of times
back to back.
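Here's what those shortcuts look like in Python's flavor; the example strings are arbitrary:

```python
import re

assert re.fullmatch(r"ab+", "abbb")        # b at least once
assert re.fullmatch(r"ab?", "a")           # b once or never
assert re.fullmatch(r"ab{2,3}", "abb")     # b 2 to 3 times, back to back
assert re.search(r"\.log$", "server.log")  # "it has to end with .log"
```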
In "standard" regex, you would only have
() for grouping,
* for 0 or any number of occurrences (so a* means blank or a or aa or ...)
+ for combining two characters/groups with "or" (union/alternation; in programming, a+ instead mostly means aa*, so this is a difference)
and sometimes some shortcut for (a+b+c+...+z) if you want to allow any lowercase character as the next one
So there are only 4 characters, which have the same expressive power as the extended syntax, except that you can't indicate that something has to occur at the end or beginning of a string/line (and even that could have been dropped if the tools we now have had offered different functions or options instead).
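A quick sketch of that equivalence in Python, which spells the textbook union "+" as "|"; the patterns are my own examples:

```python
import re

# Each extended shortcut rewritten using only grouping, * and alternation.
pairs = [
    (r"a+",     r"aa*"),     # at least once
    (r"a?",     r"(a|)"),    # once or never
    (r"a{2,3}", r"aa(a|)"),  # two or three times back to back
]
for extended, standard in pairs:
    for s in ["", "a", "aa", "aaa", "aaaa"]:
        assert bool(re.fullmatch(extended, s)) == bool(re.fullmatch(standard, s))
```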
You are probably thinking of temporal logic, which allows us to model whether algorithms and programs terminate, etc.! It can be represented using state machines tho!
It's been a while, so I'm quite rusty, especially on the terminology, but I think we modelled feasible sequences of finite and infinite state machines using regexes.
Regex is actually just a way to write (epsilon) nondeterministic finite automata (ε-NFA) using text! ε-NFAs come from automata theory and they are just a somewhat powerful way to describe state machines!
They can kind of be seen as a stepping stone to things like context-free grammars, which are what parsers use to define their languages, and Turing machines! Regex is a fundamental part of computer science, and it's of course incredibly useful in string validation due to its expressive power!
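If it helps, here's a minimal sketch of that idea: the regex a(b|c)* written directly as a state machine (deterministic, for simplicity; the state names are made up):

```python
# Transition table for the regex a(b|c)*: read 'a' first, then any mix of b/c.
TRANSITIONS = {
    ("start", "a"): "loop",
    ("loop", "b"): "loop",
    ("loop", "c"): "loop",
}
ACCEPTING = {"loop"}

def matches(s: str) -> bool:
    state = "start"
    for ch in s:
        state = TRANSITIONS.get((state, ch))
        if state is None:  # no transition for this character: reject
            return False
    return state in ACCEPTING

assert matches("abcb")
assert not matches("ba")
```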
If you study at uni and get the chance to take a course in automata theory I recommend it! Personal favorite subject :)
It's pattern-matching. Like searching *.txt to get all text files. It's just... more. There's symbols for matching the start of a string, the end of a string, a set of characters, repetition, etc. Very "etc." And the syntax blows. The choices of . for match-any-character and * for zero-or-more really fuck with common expectations.
It can also replace substrings that match. Like changing the file extension of all text files. Where it gets properly difficult is in "capture groups." Like looking for all file extensions, and sticking a tilde after the dot. You can put parentheses around part of the pattern being matched and then reference that in the replacement. Conceptually simple - pain in the ass to use properly - syntax both sucks and blows.
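A tiny sketch of that tilde example in Python (filenames invented):

```python
import re

# Capture the extension, then reference it as \1 in the replacement.
print(re.sub(r"\.(\w+)$", r".~\1", "notes.txt"))   # notes.~txt
print(re.sub(r"\.(\w+)$", r".~\1", "report.md"))   # report.~md
```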
Lookahead is what you do to match "ass" but not "assault." I refuse to elaborate further.
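For anyone who does want the elaboration, a one-line sketch:

```python
import re

# Negative lookahead: match "ass" only when NOT followed by "ault".
print(re.findall(r"ass(?!ault)", "assault on the ass"))  # ['ass']
```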