currently the robots.txt file is useless: crawlers parse each Disallow directive as a single path, so the whole line "/icons/ /fonts/ *.js *.css" is treated as one literal pattern. the only URLs that pattern would match (and therefore disallow for robots) are nonsense like `https://cobalt.tools/icons/ /fonts/ bla.js .css`, so in practice nothing real is blocked.
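a fixed version would split that into one Disallow directive per line. a sketch, assuming those four patterns are what was intended (the `*` and `$` wildcards are defined in RFC 9309 and honored by major crawlers):

```
User-agent: *
Disallow: /icons/
Disallow: /fonts/
Disallow: /*.js$
Disallow: /*.css$
```

note that `/icons/` and `/fonts/` match by prefix, so the trailing wildcards are unnecessary there; the `$` anchors the `.js`/`.css` patterns to the end of the path.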