Motion estimation has traditionally been approached either from a purely discrete point of view, using multi-view tensors, or from a purely continuous point of view, using optical-flow-based techniques. This talk covers the recent development of hybrid matching constraints for motion estimation. These hybrid matching constraints are based on both corresponding feature points and the motion of those feature points, thus combining the advantages of the discrete and continuous methods. The main use of these constraints is as a theoretical basis for filtering approaches to structure and motion recovery, enabling the update of a current motion estimate when a new image becomes available. One important feature is that the update formulas become linear in the motion parameters in the calibrated case, which is a major improvement over the standard discrete approach. Another advantage is that fewer points are needed in the update formula than in the traditional discrete case. We will present several hybrid matching constraints and derive their properties, as well as show how they can be used for structure and motion estimation. First, the hybrid bifocal and trifocal constraints will be treated, extending the traditional discrete epipolar and trifocal constraints. Then we will derive novel hybrid constraints for structure and motion recovery from a rigidly moving calibrated stereo head. Finally, we will derive novel hybrid matching constraints for the 2D case, enabling a linear update of the motion parameters from a calibrated 2D camera.
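As background for the constraints discussed in the talk, the following sketch illustrates the classical discrete epipolar constraint that the hybrid bifocal constraint extends: for calibrated cameras with relative rotation R and translation t, corresponding normalized image points satisfy x2ᵀ E x1 = 0 with the essential matrix E = [t]× R. This is a minimal illustration on synthetic data, not the hybrid constraints themselves; all names and the specific motion are assumptions for the example.

```python
import numpy as np

def skew(v):
    # 3x3 skew-symmetric matrix such that skew(v) @ u == np.cross(v, u)
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

rng = np.random.default_rng(0)

# Assumed relative motion between the two views (small rotation about z).
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, 0.2, 0.0])

# Essential matrix encoding the relative motion: E = [t]_x R.
E = skew(t) @ R

# Random 3D points in front of both cameras (world frame = camera-1 frame).
X = rng.uniform(-1.0, 1.0, (10, 3)) + np.array([0.0, 0.0, 5.0])

# Normalized (calibrated) image coordinates in each view.
x1 = X / X[:, 2:3]              # projection in camera 1
X2 = (R @ X.T).T + t            # same points in the camera-2 frame
x2 = X2 / X2[:, 2:3]            # projection in camera 2

# Discrete epipolar constraint: x2^T E x1 = 0 for every correspondence.
residuals = np.einsum('ni,ij,nj->n', x2, E, x1)
print(np.max(np.abs(residuals)))   # numerically zero
```

The hybrid constraints replace some of the point correspondences in such bilinear relations with image-motion (flow) measurements of tracked points, which is what makes the calibrated update formulas linear in the motion parameters.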