𝔖 Bobbio Scriptorium
✦   LIBER   ✦

Untangling Herdan's law and Heaps' law: Mathematical and informetric arguments

✍ Scribed by Leo Egghe


Publisher
John Wiley and Sons
Year
2007
Tongue
English
Weight
289 KB
Volume
58
Category
Article
ISSN
1532-2882


✦ Synopsis



Herdan's law in linguistics and Heaps' law in information retrieval are different formulations of the same phenomenon. Stated briefly, in linguistic terms, they assert that the size of a vocabulary is a concave increasing power law of the size of the text. This study investigates these laws from a purely mathematical and informetric point of view. A general informetric argument shows that the problem of proving these laws is, in fact, ill-posed. Using the more general terminology of sources and items, the author shows, by presenting exact formulas from Lotkaian informetrics, that the total number T of sources is not only a function of the total number A of items but also of several parameters (e.g., the parameters occurring in Lotka's law). Consequently, it is shown that a fixed T (or A) value can lead to different possible A (respectively, T) values. Limiting the T(A) variability to increasing samples (e.g., in a text, as done in linguistics), the author then shows, in a purely mathematical way, that for large sample sizes T ≈ A^θ, where θ is a constant with θ < 1 but close to 1; hence, roughly, Heaps' or Herdan's law can be proved without using any linguistic or informetric argument. The author also shows that for smaller samples, θ is not a constant but essentially decreases, as confirmed by practical examples. Finally, an exact informetric argument on random sampling in the items shows that, in most cases, T = T(A) is a concavely increasing function, in accordance with practical examples.
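The power law T ≈ A^θ described in the abstract can be illustrated with a small simulation. The sketch below (an illustration only, not the author's method; the source population, parameter values, and the `heaps_exponent` helper are all assumptions) draws items from a Zipf-like source distribution, standing in for a Lotkaian size-frequency distribution, records the number T of distinct sources as the number A of sampled items grows, and estimates θ as the slope of log T against log A:

```python
import math
import random

def heaps_exponent(num_sources=20_000, zipf_s=1.2,
                   total_items=200_000, seed=7):
    """Estimate the Heaps'/Herdan's exponent theta in T ~ A**theta
    by sampling items from a Zipf-like source population and
    regressing log T on log A."""
    random.seed(seed)
    # Zipf-like weights: source of rank r has weight 1 / r**s.
    # This is an illustrative stand-in for a Lotkaian distribution.
    weights = [1.0 / (r ** zipf_s) for r in range(1, num_sources + 1)]
    items = random.choices(range(num_sources), weights=weights,
                           k=total_items)

    seen, log_A, log_T = set(), [], []
    for i, src in enumerate(items, start=1):
        seen.add(src)                 # T = number of distinct sources
        if i % 2_000 == 0:            # sample the growth curve T(A)
            log_A.append(math.log(i))
            log_T.append(math.log(len(seen)))

    # Least-squares slope of log T against log A, i.e. theta.
    n = len(log_A)
    mx, my = sum(log_A) / n, sum(log_T) / n
    return (sum((x - mx) * (y - my) for x, y in zip(log_A, log_T))
            / sum((x - mx) ** 2 for x in log_A))
```

In runs of this kind the estimated slope comes out below 1, consistent with a concave increasing T(A); fitting over an early segment of the curve versus a late one also shows the locally estimated exponent drifting downward, in the spirit of the abstract's remark that θ is not constant for smaller samples.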


📜 SIMILAR VOLUMES