Formal systems like computers are incredibly useful because they represent and manipulate ideas. However, there’s a wide spectrum of what a claim like that can actually mean.
On one end, sometimes just storing and retrieving information is the goal of a formal system. This is like the relationship of a word processor to the text it stores: its job isn’t to glean any meaning from the words you type into it; it just stores them and handles basic things like visual presentation, line wrapping, and so on. Even the “deeper” functionality in a word processor (like spell-checking) is still fairly surface-level compared to how a human mind interacts with that same text.
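To make that concrete, here’s a minimal Python sketch of the word-processor end of the spectrum. The text is treated as opaque data: the program wraps it and checks tokens against a word list, but never interprets it. The `dictionary` set is a hypothetical stand-in for a real spell-check word list.

```python
import textwrap

# A word processor treats text as opaque data: store it, present it,
# wrap it -- but never interpret what it means.
document = "Formal systems like computers are incredibly useful."

# A surface-level operation: presentation only.
print(textwrap.fill(document, width=30))

# Even "deeper" features like spell-checking just compare tokens
# against a word list (a hypothetical stand-in here); nothing about
# the text's meaning is ever examined.
dictionary = {"formal", "systems", "like", "computers", "are", "incredibly", "useful"}
misspelled = [w for w in document.lower().rstrip(".").split() if w not in dictionary]
print(misspelled)  # [] -- nothing flagged, but also nothing understood
```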
Going a level deeper, sometimes the goal of a system is to represent more of the internal structure of concepts, in a way that allows you to manipulate them. This is more like the way a calculator works with numbers: they’re not just opaque strings of bits; they map onto the sophisticated domain of mathematics. The entities in a calculator can therefore be added together, divided, raised to powers, and so on.
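A short Python sketch shows the contrast: the same characters a word processor would store verbatim become structured mathematical objects that participate in arithmetic. Using `Fraction` here is just one convenient illustration of numbers-with-structure.

```python
from fractions import Fraction

# In a calculator, "3/4" isn't an opaque string: it parses into a
# number that carries mathematical structure with it.
x = Fraction("3/4")
y = Fraction("1/6")

print(x + y)   # 11/12 -- addition
print(x / y)   # 9/2   -- division
print(x ** 2)  # 9/16  -- exponentiation
```

The system doesn’t just hold the symbols; it knows (in a narrow, formal sense) what operations the underlying concepts support.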
At the far end of the spectrum, you might say there are programs that aim not just to store and manipulate ideas in conceptual categories that come from human minds, but to create their own categories: to “think” in a way we’d recognize as thinking. This is one of the loftier goals of AI systems, and it’s one we haven’t come close to reaching yet.