We implement and test a globally convergent sequential approximate optimization algorithm based on (convexified) diagonal quadratic approximations. The algorithm resides in the class of globally convergent optimization methods based on conservative convex separable approximations developed by Svanberg. At the start of each outer iteration, the initial curvatures of the diagonal quadratic approximations are estimated using historic objective and/or constraint function value information, or by building the diagonal quadratic approximation to the reciprocal approximation at the current iterate. During inner iterations, these curvatures are increased if no feasible descent step can be made. Although this conditional enforcement of conservatism on the subproblems is a relaxation of the strict conservatism enforced by Svanberg, global convergence is still inherited from Svanberg's conservative convex separable approximations framework. A numerical comparison is made with the globally convergent version of the method of moving asymptotes, and with the nonconservative variants of both our algorithm and the method of moving asymptotes.
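To make the structure concrete, the following is a minimal, illustrative Python sketch of one outer iteration for a simple bound-constrained problem: the curvatures of the diagonal quadratic approximation are initialized from the reciprocal approximation at the current iterate (c_i = -2 g_i / x_i, convexified here by taking a floored absolute value, which is only one possible convexification choice), and are enlarged in an inner loop whenever the trial point fails to produce a descent step. The objective, bounds, parameter names, and the closed-form subproblem solve are assumptions made for illustration; the actual algorithm treats general constrained subproblems and inherits its convergence theory from Svanberg's framework.

```python
import numpy as np

def dqa(f_k, g_k, c, x, x_k):
    """Diagonal quadratic approximation of f about x_k with curvatures c."""
    dx = x - x_k
    return f_k + g_k @ dx + 0.5 * np.sum(c * dx * dx)

def reciprocal_curvatures(g_k, x_k, eps=1e-6):
    """Curvatures that make the DQA match the reciprocal approximation at x_k,
    i.e. c_i = -2 g_i / x_i, convexified here by flooring |c_i| at eps
    (an illustrative convexification choice, not necessarily the paper's rule)."""
    return np.maximum(eps, np.abs(-2.0 * g_k / x_k))

def solve_bound_subproblem(g_k, c, x_k, lo, hi):
    """Closed-form minimizer of the separable DQA under simple bounds."""
    return np.clip(x_k - g_k / c, lo, hi)

def outer_iteration(f, grad, x_k, lo, hi, gamma=2.0, max_inner=30):
    """One outer iteration: estimate initial curvatures, then (inner loop)
    enlarge them until the subproblem solution gives descent in the true f."""
    f_k, g_k = f(x_k), grad(x_k)
    c = reciprocal_curvatures(g_k, x_k)
    for _ in range(max_inner):
        x_trial = solve_bound_subproblem(g_k, c, x_k, lo, hi)
        # Svanberg's strict rule would require conservatism, f(x_trial) <= dqa(...);
        # the relaxed rule sketched here only asks for a descent step.
        if f(x_trial) < f_k:
            return x_trial
        c = gamma * c   # no descent: increase curvatures (more conservative subproblem)
    return x_k

# Toy bound-constrained usage (the test problem itself is hypothetical):
f = lambda x: np.sum(x**3 + 1.0 / x)
grad = lambda x: 3.0 * x**2 - 1.0 / x**2
x = np.full(4, 2.0)
for _ in range(25):
    x = outer_iteration(f, grad, x, lo=0.1, hi=5.0)
print(x)   # approaches the stationary point x_i = 3**(-0.25) ≈ 0.760
```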