CS131, Homework 8, Tracking - Optical Flow
Homework 8
This assignment covers the Lucas-Kanade tracking method. Please hand in motion.py and this notebook file to Gradescope.
This assignment implements the Lucas-Kanade optical flow tracking algorithm.
`# Setup`
0. Displaying Video
We have done some cool stuff with static images in past assignments. Now, let's turn our attention to videos! For this assignment, the videos are provided as a time series of images. We also provide utility functions to load the image frames and visualize them as a short video clip.
Note: You may need to install a video codec like FFmpeg. For Linux/Mac, you can install ffmpeg using `apt-get` or `brew`. For Windows, you can find the installation instructions here.
Here we work with video: the image sequence is assembled into a short video clip.
You need ffmpeg installed; the recommended way is via Anaconda: `conda install -c conda-forge ffmpeg`.
`from utils import animated_frames, load_frames`
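As a rough usage sketch (the exact signatures live in the assignment's `utils.py`, so the argument and return conventions here are assumptions):

```python
import matplotlib.pyplot as plt
from utils import animated_frames, load_frames

# Assumption: load_frames reads a directory of sequentially numbered image
# files and returns them as an array/list of grayscale frames.
frames = load_frames('images')

# Assumption: animated_frames wraps the frames into a matplotlib animation
# that the notebook can render as a short clip.
ani = animated_frames(frames)

# A plain sanity check that needs no codec: display the first frame.
plt.imshow(frames[0], cmap='gray')
plt.axis('off')
plt.show()
```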
1. Lucas-Kanade Method for Optical Flow
1.1 Deriving optical flow equation
Optical flow methods are used to estimate motion at each pixel location in two consecutive image frames. The optical flow equation at a spatio-temporal point $\mathbf{p}=(x, y, t)$ is given by:
$$ I_x({\mathbf{p}})v_{x} + I_y({\mathbf{p}})v_{y} + I_t({\mathbf{p}}) = 0, $$
where $I_x$, $I_y$ and $I_t$ are partial derivatives of the pixel intensity $I$, and $v_{x}={\Delta x}/{\Delta t}$ and $v_{y}={\Delta y}/{\Delta t}$ are the components of the flow vector.
Let us derive the equation in order to understand what it actually means. First, we make a reasonable assumption (a.k.a. brightness constancy) that the pixel intensity of a moving point stays the same between two consecutive frames with small time difference. Consider pixel intensity $I(x, y, t)$ of a point $(x, y)$ in the first frame $t$. Suppose that the point has moved to $(x+\Delta{x}, y+\Delta{y})$ after $\Delta{t}$. According to the brightness constancy constraint, we can relate intensities of the point in the two frames using the following equation:
$$ I(x,y,t)=I(x+\Delta{x},y+\Delta{y},t+\Delta{t}) $$

a. Derive the optical flow equation from the brightness constancy equation. Clearly state any assumption you make during the derivation.
b. Can the optical flow equation be solved given two consecutive frames without further assumption? Which values can be computed directly given two consecutive frames? Which values cannot be computed without additional information?
Your answer here: Write your answer in this markdown cell
Derivation of the optical flow equation

a.

Using the brightness constancy assumption, we have:
$$ I(x,y,t)=I(x+\Delta{x},y+\Delta{y},t+\Delta{t}) $$
Applying a first-order Taylor expansion to the right-hand side (assuming the displacement and the time step are small) and keeping only the first-order terms gives:
$$ I(x+\Delta{x},y+\Delta{y},t+\Delta{t}) \approx I(x,y,t) + I_x\Delta{x} + I_y\Delta{y} + I_t\Delta{t} $$
Since brightness is assumed constant, the intensity after the motion equals the intensity before it, so:
$$ I_x\Delta{x} + I_y\Delta{y} + I_t\Delta{t} = 0 $$
Dividing both sides by $\Delta{t}$ gives:
$$ I_x \frac{\Delta{x}}{\Delta{t}} + I_y\frac{\Delta{y}}{\Delta{t}} + I_t= 0 $$
Here $\frac{\Delta{x}}{\Delta{t}}$ is the pixel's velocity along the x axis and $\frac{\Delta{y}}{\Delta{t}}$ its velocity along the y axis; writing them as $v_x$ and $v_y$, we finally obtain:
$$ I_x({\mathbf{p}})v_{x} + I_y({\mathbf{p}})v_{y} + I_t({\mathbf{p}}) = 0 $$

b.

Q: Can the optical flow equation be solved from the above alone, without further assumptions?
A: No.
In the equation above, $I_x$, $I_y$ and $I_t$ can all be computed from two consecutive frames: $I_x$ and $I_y$ are the image gradients at point $\mathbf{p}$ in the x and y directions, and $I_t$ is the change of intensity over time.
That still leaves two unknowns, $v_x$ and $v_y$, in a single linear equation, so the equation by itself is under-determined and additional constraints must be introduced.
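To make the answer to (b) concrete, here is a small NumPy sketch (my own, assuming two consecutive grayscale frames as float arrays) of the quantities that can be computed directly from the data:

```python
import numpy as np

def image_derivatives(img1, img2):
    """Spatial gradients of the first frame and the temporal difference.

    img1, img2: consecutive grayscale frames as float arrays of equal shape.
    """
    # np.gradient returns derivatives along axis 0 (rows, i.e. y) and
    # axis 1 (columns, i.e. x).
    Iy, Ix = np.gradient(img1)
    # Temporal derivative approximated by the frame difference (Delta t = 1).
    It = img2 - img1
    return Ix, Iy, It
```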
1.2 Overview of Lucas-Kanade method
The Lucas–Kanade method assumes that the motion of the image contents between two frames is approximately constant within a neighborhood of the point $p$ under consideration (spatial coherence).
Consider a neighborhood of $p$, $N(p)=\{p_1,\dots,p_n\}$ (e.g. a 3x3 window around $p$). According to the optical flow equation and the spatial coherence assumption, the following should be satisfied:
For every $p_i \in N(p)$,
$$ I_{x}(p_i)v_x + I_{y}(p_i)v_y = -I_{t}(p_i) $$

These equations can be written in matrix form $Av=b$, where
$$ A = \begin{bmatrix} I_{x}(p_1) & I_{y}(p_1)\\ I_{x}(p_2) & I_{y}(p_2)\\ \vdots & \vdots\\ I_{x}(p_n) & I_{y}(p_n) \end{bmatrix} \quad v = \begin{bmatrix} v_{x}\\ v_{y} \end{bmatrix} \quad b = \begin{bmatrix} -I_{t}(p_1)\\ -I_{t}(p_2)\\ \vdots\\ -I_{t}(p_n) \end{bmatrix} $$

Note that this linear system may not have a solution for $v$, as it is usually over-determined. Instead, the Lucas-Kanade method estimates the flow vector by solving the least-squares problem $A^{T}Av=A^{T}b$.
- a. What is the condition for this equation to be solvable?
- b. Reason about why Harris corners might be good features to track using Lucas-Kanade method.
The idea of the Lucas-Kanade algorithm
From the derivation above we know that solving the optical flow equation requires additional constraints. The constraint used by the LK method is to assume that all pixels inside a window share the same motion.
With a 3x3 window this gives 9 equations, and the best $v_x$ and $v_y$ can then be estimated by least squares.
Your answer here: Write your answer in this markdown cell
Question a: what are the conditions for the equation to be solvable? (See the lecture 17 slides.)
- $A^TA$ must be invertible
- because of noise, $A^TA$ should not be too small
- the eigenvalues $\lambda_1$ and $\lambda_2$ of $A^TA$ should not be too small
- $A^TA$ should be well-conditioned
- the ratio $\frac{\lambda_1}{\lambda_2}$ should not be too large (where $\lambda_1$ is the larger eigenvalue)

Question b: why are Harris corners good features to track with the LK method? (See slide 25 of lecture 17.)
Because $M=A^TA$ is the second moment matrix, its eigenvectors and eigenvalues describe the direction and magnitude of local intensity change:
- the eigenvector associated with the larger eigenvalue points in the direction of fastest intensity change
- the other eigenvector is orthogonal to it

Harris corners are exactly the points where both eigenvalues of $M$ are large, so at those points $A^TA$ is invertible and well-conditioned, which is precisely the condition in (a).
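To connect (a) and (b) in code, here is a small sketch (my own helper, not part of the assignment) that checks whether $A^TA$ for a given window is invertible and well-conditioned; Harris corners are the points where this check passes comfortably:

```python
import numpy as np

def is_trackable(Ix_win, Iy_win, min_eig=1e-2, max_ratio=5.0):
    """Heuristic check that A^T A is invertible and well-conditioned.

    Ix_win, Iy_win: spatial gradients inside the window around a point.
    min_eig and max_ratio are illustrative thresholds, not assignment values.
    """
    A = np.stack([Ix_win.ravel(), Iy_win.ravel()], axis=1)
    M = A.T @ A                              # 2x2 second moment matrix
    lam2, lam1 = np.linalg.eigvalsh(M)       # ascending order: lam2 <= lam1
    # Both eigenvalues should be large enough and their ratio not too big --
    # exactly the corner criterion used by the Harris detector.
    return lam2 > min_eig and lam1 / (lam2 + 1e-12) < max_ratio
```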
1.3 Implementation of Lucas-Kanade method
In this section, we are going to implement the basic Lucas-Kanade method for feature tracking. In order to do so, we first need to find keypoints to track. The Harris corner detector is commonly used to initialize the keypoints tracked with the Lucas-Kanade method. For this assignment, we are going to use the skimage implementation of the Harris corner detector.
Implementing the basic LK optical flow
The first step is to find keypoints to track.
The Lucas-Kanade method usually initializes its keypoints with the Harris corner detector.
skimage provides an implementation of Harris corner detection.
For Harris corner detection in skimage, see also: Programming Computer Vision with Python (study notes, part 9).
`from skimage import filters`
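A sketch of how the keypoints might be initialized with skimage's Harris detector (`corner_harris` and `corner_peaks` are real `skimage.feature` functions; the thresholds and the `frames[0]` input are assumptions):

```python
from skimage.feature import corner_harris, corner_peaks

# Detect Harris corners in the first frame and keep well-separated peaks.
# frames[0] is assumed to be a grayscale float image loaded earlier.
keypoints = corner_peaks(corner_harris(frames[0]),
                         exclude_border=5,
                         threshold_rel=0.01)
# keypoints is an (N, 2) array of (row, col) corner coordinates.
```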
Implement the function `lucas_kanade` in `motion.py` and run the code cell below. You should see small arrows pointing in the directions in which the keypoints are moving.
Implementing LK optical flow tracking between two frames
`from motion import lucas_kanade`
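For reference, a minimal sketch of what `lucas_kanade` might look like (the signature and the (row, col) keypoint convention are assumptions on my part; the graded version belongs in `motion.py`, and this sketch ignores keypoints near the image border):

```python
import numpy as np

def lucas_kanade(img1, img2, keypoints, window_size=5):
    """Estimate one flow vector per keypoint between two grayscale frames.

    keypoints are assumed to be (row, col) coordinates at least
    window_size // 2 pixels from the border; the returned flow vectors
    use the same (row, col) order.
    """
    assert window_size % 2 == 1, "window_size must be odd"
    w = window_size // 2

    Iy, Ix = np.gradient(img1)       # spatial gradients of the first frame
    It = img2 - img1                 # temporal difference

    flow_vectors = []
    for y, x in np.round(keypoints).astype(int):
        # Gather the gradients inside the window around the keypoint.
        ix = Ix[y - w:y + w + 1, x - w:x + w + 1].flatten()
        iy = Iy[y - w:y + w + 1, x - w:x + w + 1].flatten()
        it = It[y - w:y + w + 1, x - w:x + w + 1].flatten()

        A = np.stack([iy, ix], axis=1)   # one row per pixel in the window
        b = -it
        # Least-squares solution of A v = b, i.e. A^T A v = A^T b.
        v, _, _, _ = np.linalg.lstsq(A, b, rcond=None)
        flow_vectors.append(v)

    return np.array(flow_vectors)
```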
We can estimate the position of the keypoints in the next frame by adding the flow vectors to the keypoints.
Given the keypoint positions in the current frame and the flow vectors, we can predict the keypoint positions in the next frame, as in the short snippet below.
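In code (reusing the signature assumed in the sketch above), this is just a vector addition:

```python
# Flow at each keypoint between frame 0 and frame 1, then shift the points.
flow_vectors = lucas_kanade(frames[0], frames[1], keypoints, window_size=5)
keypoints_next = keypoints + flow_vectors
```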
`# Plot tracked keypoints`
From the plot above, the feature point locations that optical flow predicts for the second frame are essentially correct.
1.4 Feature Tracking in multiple frames
Now we can use the Lucas-Kanade method to track keypoints across multiple frames. The idea is simple: compute flow vectors at the keypoints in the $i$-th frame, and add the flow vectors to the points to keep track of them in the $(i+1)$-th frame. We have provided the function `track_features` for you. First, run the code cell below. You will notice that some of the points just drift away and are not tracked very well.
Instead of keeping these 'bad' tracks, we would want to somehow declare some points 'lost' and just discard them. One simple way is to compare the patches around tracked points in two subsequent frames. If the patch around a point is NOT similar to the patch around the corresponding point in the next frame, then we declare the point to be lost. Here, we are going to use the mean squared error between two normalized patches as the criterion for lost tracks.
Implement `compute_error` in `motion.py`, and re-run the code cell below. You will see many of the points disappearing in later frames.
Feature tracking across multiple frames
The assignment already provides the function `track_features`, which tracks feature points across consecutive frames.
The idea is to start from the initial keypoint positions and, using the flow computed between each pair of adjacent frames by the LK method, predict the keypoint positions in every subsequent frame.
However, because some keypoints drift (limited numerical accuracy, and keypoints leaving the scene), the overall tracking quality degrades.
To remove the influence of these 'bad' points, the approach is to detect when a keypoint has been lost.
The principle: compare the image patches around a keypoint in two adjacent frames; if they are not similar, declare the point lost. This rests on the Lucas-Kanade assumption that pixels within a window move together.
Concretely: compute the mean squared error (MSE) between the two normalized patches, and discard the point if it exceeds a threshold; a sketch follows below.
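A minimal sketch of such an error measure (normalizing each patch to zero mean and unit standard deviation is one reasonable choice; the exact normalization expected by `motion.py` may differ):

```python
import numpy as np

def compute_error(patch1, patch2):
    """Mean squared error between two patches, each normalized to zero mean
    and unit standard deviation."""
    assert patch1.shape == patch2.shape, "patch dimensions must match"
    p1 = (patch1 - patch1.mean()) / (patch1.std() + 1e-8)
    p2 = (patch2 - patch2.mean()) / (patch2.std() + 1e-8)
    return np.mean((p1 - p2) ** 2)
```

`track_features` would then drop a keypoint whenever this error between the patches around its old and new positions exceeds a chosen threshold.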
`from utils import animated_scatter`
From the results above, the keypoints that survive are predicted accurately.
But it is also clear that when the motion between frames is large, many keypoints are discarded.
2. Pyramidal Lucas-Kanade Feature Tracker
In this section, we are going to implement a simpler version of the method described in “Pyramidal Implementation of the Lucas Kanade Feature Tracker”.
Pyramidal optical flow
This method addresses the tracking failures that occur when the motion between frames is large.
The blog post I referenced mainly explains how to improve optical flow with an image pyramid; I still have not fully worked out the underlying principle, so the original paper, "Pyramidal Implementation of the Lucas Kanade Feature Tracker", is the place to look.
2.1 Iterative Lucas-Kanade method
One limitation of the naive Lucas-Kanade method is that it cannot track large motions between frames. You might have noticed that the resulting flow vectors (blue arrows) in the previous section are so small that the tracked keypoints end up slightly off from where they should be. In order to address this problem, we can iteratively refine the estimated optical flow vectors. Below is a step-by-step description of the algorithm:
Let $p=\begin{bmatrix}p_x & p_y \end{bmatrix}^T$ be a point on frame $I$. The goal is to find flow vector $v=\begin{bmatrix}v_x & v_y \end{bmatrix}^T$ such that $p+v$ is the corresponding point of $p$ on the next frame $J$.
Initialize the flow vector:
$$
v=
\begin{bmatrix}
0\\
0
\end{bmatrix}
$$
Compute the spatial gradient matrix over a window of half-width $w$ around $p$:
$$
G=\sum_{x=p_x-w}^{p_x+w}\sum_{y=p_y-w}^{p_y+w}
\begin{bmatrix}
I_{x}^2(x,y) & I_{x}(x,y)I_{y}(x,y)\\
I_{x}(x,y)I_{y}(x,y) & I_{y}^2(x,y)
\end{bmatrix}
$$
For $k=1$ to $K$:
- Compute the temporal difference: $\delta I_k(x, y) = I(x,y)-J(x+g_x+v_x, y+g_y+v_y)$, where $g=\begin{bmatrix}g_x & g_y\end{bmatrix}^T$ is an initial flow estimate (e.g. propagated from a coarser pyramid level), taken to be zero if none is available.
- Compute the image mismatch vector:
$$
b_k=\sum_{x=p_x-w}^{p_x+w}\sum_{y=p_y-w}^{p_y+w}
\begin{bmatrix}
\delta I_k(x, y)I_x(x,y)\\
\delta I_k(x, y)I_y(x,y)
\end{bmatrix}
$$
- Compute the optical flow update: $v^k=G^{-1}b_k$
- Update the flow vector for the next iteration: $v := v + v^k$

Return $v$
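For reference, here is a minimal NumPy sketch of this iteration (the signature, the (row, col) coordinate convention, and the handling of the initial guess `g` are my assumptions; the graded implementation belongs in `motion.py`):

```python
import numpy as np

def iterative_lucas_kanade(img1, img2, keypoints, window_size=9,
                           num_iters=7, g=None):
    """Iteratively refine one flow vector per keypoint.

    Assumes keypoints (and returned flow vectors) are in (row, col) order
    and stay at least window_size // 2 pixels from the image border.
    g is an optional (N, 2) array of initial guesses (e.g. from a coarser
    pyramid level); it defaults to zero.
    """
    assert window_size % 2 == 1, "window_size must be odd"
    w = window_size // 2
    if g is None:
        g = np.zeros_like(keypoints, dtype=float)

    Iy, Ix = np.gradient(img1)        # spatial gradients of the first frame
    flow_vectors = []

    for (y, x), (gy, gx) in zip(keypoints, g):
        v = np.zeros(2)               # v = 0, stored as [vy, vx]
        y1, x1 = int(round(y)), int(round(x))

        # Spatial gradient matrix G over the window around p in img1.
        ix = Ix[y1 - w:y1 + w + 1, x1 - w:x1 + w + 1]
        iy = Iy[y1 - w:y1 + w + 1, x1 - w:x1 + w + 1]
        G = np.array([[np.sum(ix * ix), np.sum(ix * iy)],
                      [np.sum(ix * iy), np.sum(iy * iy)]])

        for _ in range(num_iters):
            # Temporal difference between the window in img1 and the window
            # in img2 shifted by the current estimate g + v.
            y2 = int(round(y + gy + v[0]))
            x2 = int(round(x + gx + v[1]))
            dI = (img1[y1 - w:y1 + w + 1, x1 - w:x1 + w + 1] -
                  img2[y2 - w:y2 + w + 1, x2 - w:x2 + w + 1])
            bk = np.array([np.sum(dI * ix), np.sum(dI * iy)])
            vk = np.linalg.inv(G) @ bk    # v^k = G^{-1} b_k, as (vx, vy)
            v = v + vk[::-1]              # accumulate in (vy, vx) order
        flow_vectors.append(v)

    return np.array(flow_vectors)
```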
Implement the `iterative_lucas_kanade` method in `motion.py` and run the code cell below. You should be able to see slightly longer arrows in the visualization.
`from motion import iterative_lucas_kanade`
We can see that the iterative method is able to capture the flow of larger motions.
`# Plot tracked keypoints`
`# Detect keypoints to track in the first frame`