most academic papers are not cited and that’s ok
A classic result in the social analysis of science is that most papers are poorly cited. For example, de Solla Price's classic paper in Science (1965) found that the modal citation count in his sample was zero. Low mean and modal citation counts remain the standard finding in contemporary studies of scientific behavior. So, what gives?
Scientific research is a type of creative pursuit. By definition, journal articles are supposed to report what is new or novel. Once you buy that, the low citation rates in science make sense. First, creativity (or importance) is a scarce commodity. Anyone trained in a psychology graduate program can run an experiment, but few can run a novel experiment. Second, new results are themselves scarce. Fields quickly get covered and only obscure points remain. Third, even a creative scientist who has found a genuinely important problem might not have an audience. Perhaps people are focused on other issues, or the scientist is low status or publishing in a low-status journal.
In principle, then, we should expect that few articles will deserve more than token citation. But still, why can't journals just stick to the important stuff? The answer is imperfect knowledge. Once in a while we encounter obvious innovation, but usually we have a limited ability to predict what will turn out to matter. It is better to overpublish and let history be the judge. Considering that the cost of journal publishing is low (the subscriptions, not so much!), we should be at peace with a world of many uncited and lonely articles.