If you have a function in the form $f(kx)$, the graph is horizontally scaled by a factor of $k$: the bigger the magnitude of $k$, the more compressed the graph gets, and conversely, the smaller the magnitude, the more stretched it gets.
So by definition, $k$ should be called the horizontal compression factor of the function, meaning if $k=\frac{1}{2}$, the graph is horizontally compressed by a factor of $\frac{1}{2}$, and since stretching is the inverse of compressing, you could also say the graph is horizontally stretched by a factor of $2$. Using the same logic, if $k=2$ then the graph is horizontally compressed by a factor of $2$, or horizontally stretched by a factor of $\frac{1}{2}$.
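To make the claim above concrete, here is a small numeric sketch (my own illustration, using $f(x)=x^2$ as an arbitrary example): every point $(a, f(a))$ on the graph of $f$ corresponds to the point $\left(\frac{a}{k}, f(a)\right)$ on the graph of $f(kx)$, which is why $|k|>1$ pulls the graph toward the $y$-axis.

```python
# Illustration: horizontal scaling of f(x) = x^2 by f(kx).
# (f and k here are my own example choices, not from the question.)

def f(x):
    return x ** 2

k = 2

def g(x):
    # g is the horizontally scaled function f(kx)
    return f(k * x)

# f reaches the value 9 at x = 3; g reaches the same value
# already at x = 3/k = 1.5, i.e. the graph is compressed.
print(f(3))      # value of f at x = 3
print(g(3 / k))  # same value, reached at x = 1.5 on g
```

Running this shows that the point $(3, 9)$ on $f$ becomes $(1.5, 9)$ on $g$, matching the "compressed by a factor of $2$ / stretched by a factor of $\frac{1}{2}$" reading described above.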
But this is not the case, and the accepted practice is to say the graph is compressed by a factor of $k$ if $|k|>1$ and stretched by a factor of $k$ if $0<|k|<1$.
This seems extremely unintuitive, and to me it doesn't use the word "factor" correctly. So my question is: why do we describe stretching and compressing transformations like this?