Most existing computational models of the visual perception of 3D shape from texture rely on assumed constraints on how texture is distributed over visible surfaces. To compute shape from blob textures, for example, it is typically assumed that the texture is isotropic and/or homogeneous. Other models, developed for contour textures, assume that contours are oriented along surface geodesics or directions of principal curvature. The present research was designed to investigate how violations of these assumptions influence human perception. The displays depicted roughly spherical objects with random patterns of ridges and valleys, rendered with two types of volumetric texture. Contour textures were created using a random pattern of parallel planar cuts through an object, which could be oriented in one of three possible directions. Blob textures were created by carving each object from a volume of small spheres; these spheres could also be stretched in a horizontal or vertical direction so that the distribution of surface markings was both anisotropic and inhomogeneous. Observers judged the pattern of ordinal depth on each object by marking local maxima and minima along designated scan lines, and they also judged the apparent magnitudes of relative depth between designated probe points on the surface. Judgments on both tasks were highly reliable, both within and between observers. When the different texture patterns were compared, the variations in judged depth were remarkably small; indeed, observers' judgments were almost perfectly correlated across every possible pair of texture conditions. These findings suggest that human perception of 3D shape from texture is far more robust than current computational models of the phenomenon would predict.