Local independence

Local independence is the underlying assumption of latent variable models: the observed items are assumed to be independent of each other given an individual's score on the latent variable(s). In other words, the latent variable explains why the observed items are related to one another. This can be illustrated by the following example.

Example
Local independence can be illustrated with an example from Lazarsfeld and Henry (1968). Suppose that a sample of 1000 people is asked whether they read journals A and B. Their responses were:

                 Read A   Did not read A   Total
Read B             260         140           400
Did not read B     240         360           600
Total              500         500          1000

One can easily see that the two variables (reading A and reading B) are strongly related, and thus dependent on each other: readers of A read B more often (260/500 = 52%) than non-readers of A (140/500 = 28%). If reading A and B were independent, then P(A&B) = P(A) × P(B) would hold.

But 260/1000 = 0.26, whereas 400/1000 × 500/1000 = 0.20, so the equality fails.

Thus reading A and reading B are dependent on each other.
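
This marginal dependence can be verified numerically. The following is a minimal Python sketch using the counts from the example (500 readers of A, 400 readers of B, 260 readers of both, out of 1000):

```python
# Counts taken from the example: 1000 respondents,
# 500 read A, 400 read B, and 260 read both.
n = 1000
p_a = 500 / n    # P(reads A)
p_b = 400 / n    # P(reads B)
p_ab = 260 / n   # P(reads both A and B)

# Under independence, P(A & B) would equal P(A) * P(B).
print(p_ab)        # 0.26
print(p_a * p_b)   # 0.2 -> not equal, so reading A and B are dependent
```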

But when we also take the education level of these people into account, we get the following tables.

High education (500 people):

                 Read A   Did not read A   Total
Read B             240          60           300
Did not read B     160          40           200
Total              400         100           500

Low education (500 people):

                 Read A   Did not read A   Total
Read B              20          80           100
Did not read B      80         320           400
Total              100         400           500

And again, if reading A and B are independent, then P(A&B) = P(A) × P(B) should hold within each education level, and here it does:

For the high-educated, 240/500 = 400/500 × 300/500, and for the low-educated, 20/500 = 100/500 × 100/500.

Thus, when we look at the high- and low-educated people separately, there is no relationship between the two journals: reading A and reading B are independent within each educational level. The educational level 'explains' the association between reading A and reading B.
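
The same check can be run within each education level. This Python sketch uses the within-level counts implied by the example's fractions (400/500 read A and 300/500 read B among the high-educated; 100/500 read each journal among the low-educated):

```python
# Within-level counts implied by the example's fractions.
strata = {
    "high": {"n": 500, "a": 400, "b": 300, "both": 240},
    "low":  {"n": 500, "a": 100, "b": 100, "both": 20},
}

for level, c in strata.items():
    p_a = c["a"] / c["n"]
    p_b = c["b"] / c["n"]
    p_ab = c["both"] / c["n"]
    # Within each level, P(A & B) equals P(A) * P(B):
    # conditional (local) independence given education.
    print(level, p_ab, p_a * p_b)
```

Up to floating-point rounding, the two printed probabilities agree within each stratum, unlike in the marginal table.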

Latent variable
In latent variable models, the latent variable plays the same role as education in this example: the manifest variables are locally independent given the latent variable(s). This assumption is also necessary to identify the latent variables.