This paper explores the use of physical signs as anchors for digital annotations and other information. In our prototype system, the user carries a camera-equipped handheld device with pen-based input to capture street signs, restaurant signs, and shop signs. Image matching is supported by interactively established point correspondences. The captured image, along with context information, is transferred to a back-end server, which performs the matching and returns the results to the user. We compare four different algorithms on the sign matching task and find that SIFT performs best. We further observe that lighting conditions, especially glare, have a crucial impact on the recognition rate.
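
To make the matching step concrete, the following is a minimal sketch of a SIFT-based comparison between a captured sign image and a reference image. The paper does not publish its implementation, so this is only an illustration of the general technique: it assumes OpenCV's SIFT detector with brute-force matching and Lowe's ratio test, and all function and file names are hypothetical.

```python
# Illustrative sketch of SIFT-based sign matching (not the paper's code).
# Assumes OpenCV >= 4.4, where SIFT is part of the main module.
import cv2

def count_sift_matches(query_path: str, reference_path: str,
                       ratio: float = 0.75) -> int:
    """Return the number of ratio-test matches between two sign images."""
    query = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    reference = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    _, desc_q = sift.detectAndCompute(query, None)      # query descriptors
    _, desc_r = sift.detectAndCompute(reference, None)  # reference descriptors

    # Brute-force matching; Lowe's ratio test filters ambiguous matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    candidates = matcher.knnMatch(desc_q, desc_r, k=2)
    good = [m for m, n in candidates if m.distance < ratio * n.distance]
    return len(good)

# Hypothetical usage: pick the database sign with the most good matches.
# best = max(database_paths, key=lambda p: count_sift_matches("capture.jpg", p))
```

In a server-side setup of the kind described above, a score such as the number of surviving ratio-test matches could be computed against each candidate sign in the database, with the highest-scoring candidate returned to the handheld client.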