Math and science::INF ML AI

Belief networks: independence examples




1 a) Marginalizing over \( C \) makes \( A \) and \( B \) independent. In other words, \( A \) and \( B \) are (unconditionally) independent: \( p(A,B) = p(A)p(B) \). In the absence of any information about the effect \( C \), we retain the belief that the causes are independent.
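A one-line check, assuming (as the wording about 'causes' and 'effect' suggests) that example 1 is the collider graph \( A \rightarrow C \leftarrow B \), whose joint factorizes as \( p(A, B, C) = p(C \vert A, B)p(A)p(B) \):

\[ p(A, B) = \sum_{C} p(C \vert A, B)\,p(A)\,p(B) = p(A)\,p(B). \]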


1 b) Conditioning on \( C \) makes \( A \) and \( B \) (graphically) dependent: \( p(A, B \vert C) \ne p(A \vert C)p(B \vert C) \). Although the causes are a priori independent, knowing the effect \( C \) can tell us something about how the causes colluded to bring about the effect observed.
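A minimal numerical sketch of 1 a) and 1 b), assuming the collider graph \( A \rightarrow C \leftarrow B \) with binary variables; the probability tables below are invented purely for illustration, not taken from the book:

import itertools

p_A = {0: 0.7, 1: 0.3}  # p(A) -- illustrative numbers only
p_B = {0: 0.6, 1: 0.4}  # p(B)
# p(C=1 | A, B): the effect is likely when either cause is present.
p_C1 = {(0, 0): 0.1, (0, 1): 0.8, (1, 0): 0.8, (1, 1): 0.95}

def joint(a, b, c):
    """p(A=a, B=b, C=c) = p(a) p(b) p(c | a, b) for the collider graph."""
    pc = p_C1[(a, b)] if c == 1 else 1 - p_C1[(a, b)]
    return p_A[a] * p_B[b] * pc

# 1 a) Summing C out recovers p(A) p(B): A and B are marginally independent.
for a, b in itertools.product([0, 1], repeat=2):
    p_ab = sum(joint(a, b, c) for c in [0, 1])
    assert abs(p_ab - p_A[a] * p_B[b]) < 1e-12

# 1 b) Conditioning on C = 1 destroys the factorization ("explaining away").
p_c1 = sum(joint(a, b, 1) for a in [0, 1] for b in [0, 1])
p_ab_c1 = {(a, b): joint(a, b, 1) / p_c1 for a in [0, 1] for b in [0, 1]}
p_a_c1 = {a: p_ab_c1[(a, 0)] + p_ab_c1[(a, 1)] for a in [0, 1]}
p_b_c1 = {b: p_ab_c1[(0, b)] + p_ab_c1[(1, b)] for b in [0, 1]}
print(p_ab_c1[(1, 1)], p_a_c1[1] * p_b_c1[1])  # the two numbers differ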



2. Conditioning on \( D \), a descendant of a collider \( C \), makes \( A \) and \( B \) (graphically) dependent: \( p(A, B \vert D) \ne p(A \vert D)p(B \vert D) \).
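To see why, assume \( D \) is a child of \( C \) in the same collider graph. Then

\[ p(A, B \vert D) \propto \sum_{C} p(D \vert C)\,p(C \vert A, B)\,p(A)\,p(B), \]

and the sum over \( C \) couples \( A \) and \( B \), so in general the result does not split into a function of \( A \) times a function of \( B \). Intuitively, observing \( D \) provides soft evidence about \( C \), which re-introduces the explaining-away effect of 1 b).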



3 a) \( p(A, B, C) = p(A \vert C)p(B \vert C)p(C) \). Here there is a 'cause' \( C \) and 'effects' \( A \) and \( B \) that are conditionally independent given the cause.

3 b) Marginalizing over \( C \) makes \( A \) and \( B \) (graphically) dependent: \( p(A, B) \ne p(A)p(B) \). Although we don't know the 'cause', the 'effects' will nevertheless be dependent.
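Written out, the marginal is a mixture over the unknown cause:

\[ p(A, B) = \sum_{C} p(A \vert C)\,p(B \vert C)\,p(C), \]

which in general does not factorize into \( p(A)p(B) \). For example, with a binary \( C \), if both effects are likely when \( C = 1 \) and unlikely when \( C = 0 \), then observing one effect raises the probability of the other.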



3 c) Conditioning on \( C \) makes \( A \) and \( B \) independent: \( p(A, B \vert C) = p(A \vert C)p(B \vert C) \). If you know the 'cause' \( C \), you know everything about how each effect occurs, independent of the other effect. The same holds if the arrow between \( A \) and \( C \) is reversed; in that case, \( A \) would 'cause' \( C \) and then \( C \) would 'cause' \( B \). Conditioning on \( C \) blocks the ability of \( A \) to influence \( B \).
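A minimal numerical sketch of 3 b) and 3 c), assuming the common-cause graph \( A \leftarrow C \rightarrow B \) with binary variables; the tables are again invented for illustration only:

p_C = {0: 0.5, 1: 0.5}   # p(C) -- illustrative numbers only
p_A1 = {0: 0.2, 1: 0.9}  # p(A=1 | C)
p_B1 = {0: 0.3, 1: 0.8}  # p(B=1 | C)

def joint(a, b, c):
    """p(A=a, B=b, C=c) = p(a | c) p(b | c) p(c) for the common-cause graph."""
    pa = p_A1[c] if a == 1 else 1 - p_A1[c]
    pb = p_B1[c] if b == 1 else 1 - p_B1[c]
    return pa * pb * p_C[c]

# 3 b) Marginalizing over C couples the effects: p(A=1, B=1) = 0.39,
# while p(A=1) p(B=1) = 0.55 * 0.55 = 0.3025.
p_ab = sum(joint(1, 1, c) for c in [0, 1])
p_a = sum(joint(1, b, c) for b in [0, 1] for c in [0, 1])
p_b = sum(joint(a, 1, c) for a in [0, 1] for c in [0, 1])
print(p_ab, p_a * p_b)

# 3 c) Conditioning on C restores the factorization p(A,B|C) = p(A|C) p(B|C).
for c in [0, 1]:
    p_ab_c = joint(1, 1, c) / p_C[c]
    assert abs(p_ab_c - p_A1[c] * p_B1[c]) < 1e-12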


Finally, the following graphs all express the same conditional independence assumptions.
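(The graphs themselves are not reproduced here. Presumably they are the three graphs over \( A, C, B \) in which \( C \) is not a collider: the chains \( A \rightarrow C \rightarrow B \) and \( A \leftarrow C \leftarrow B \), and the fork \( A \leftarrow C \rightarrow B \). All three imply exactly the conditional independence \( A \perp B \vert C \).)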



Source

Bayesian Reasoning and Machine Learning
David Barber
pp. 42-43