In this paper, we present a general approach to finite-memory detection. From a semi-tutorial perspective, a number of previous results are rederived, and new insights are gained, within a unified framework. A probabilistic derivation of the well-known Viterbi, forward–backward, and sum-product algorithms shows that a basic metric emerges naturally under very general causality and finite-memory conditions. This result implies that detection solutions based on one of these algorithms can be systematically extended to the others. For stochastic channels described by a suitable parametric model, a conditional Markov property is shown to imply this finite-memory condition. Although this conditional Markov property is seldom met exactly in practice, it is shown to represent a reasonable and useful approximation in all the cases considered. As examples, we consider linear predictive and noncoherent detection schemes. While good performance can often be achieved with a finite-memory detection strategy by increasing its complexity, computational efficiency and performance at limited complexity remain key issues in the design of detection algorithms.
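To make the role of an additively accumulated branch metric concrete, the following is a minimal sketch of the Viterbi algorithm for a toy two-state finite-memory model. The state space, transition probabilities, and observation model here are illustrative assumptions for exposition only, not the parametric channel model of the paper.

```python
import math

def viterbi(observations, states, start_p, trans_p, emit_p):
    """Max-product dynamic programming over a finite-state model.

    At each trellis step, every state keeps the best log-metric of any
    path ending in it; the branch metric accumulates additively.
    """
    # Initialize the trellis with the first observation.
    best = {s: math.log(start_p[s]) + math.log(emit_p[s][observations[0]])
            for s in states}
    back = []  # backpointers for the traceback
    for obs in observations[1:]:
        prev_best = best
        best, ptr = {}, {}
        for s in states:
            # Survivor selection: keep the best predecessor path.
            pred, metric = max(
                ((r, prev_best[r] + math.log(trans_p[r][s])) for r in states),
                key=lambda t: t[1])
            best[s] = metric + math.log(emit_p[s][obs])
            ptr[s] = pred
        back.append(ptr)
    # Trace back the maximum-metric state sequence.
    last = max(best, key=best.get)
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

# Toy two-state model (illustrative values, not from the paper).
STATES = ("A", "B")
START = {"A": 0.6, "B": 0.4}
TRANS = {"A": {"A": 0.7, "B": 0.3}, "B": {"A": 0.4, "B": 0.6}}
EMIT = {"A": {0: 0.9, 1: 0.1}, "B": {0: 0.2, 1: 0.8}}

print(viterbi([0, 0, 1, 1], STATES, START, TRANS, EMIT))
# → ['A', 'A', 'B', 'B']
```

Because only per-state survivor metrics are propagated, replacing the `max` selection with a log-sum accumulation yields the forward recursion of the forward–backward algorithm over the same trellis, which is the sense in which a solution for one algorithm extends to the others.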