We consider both ℓ₀-penalized and ℓ₀-constrained quantile regression estimators. For the ℓ₀-penalized estimator, we derive an exponential inequality on the tail probability of the excess quantile prediction risk and apply it to obtain non-asymptotic upper bounds on the mean-squared errors of parameter and regression function estimation. We also derive analogous results for the ℓ₀-constrained estimator. The resulting rates of convergence are minimax-optimal and the same as those for ℓ₁-penalized estimators. Further, we characterize the expected Hamming loss for the ℓ₀-penalized estimator. We implement the proposed procedure via mixed-integer linear programming as well as a more scalable first-order approximation algorithm. We illustrate the finite-sample performance of our approach in Monte Carlo experiments and its usefulness in a real-data application concerning conformal prediction of infant birth weights (with n ≈ 10³ and p up to more than 10³). In sum, our ℓ₀-based method produces a much sparser estimator than the ℓ₁-penalized approach without compromising precision.
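As a concrete starting point, below is a minimal sketch of the standard big-M mixed-integer formulation of ℓ₀-penalized quantile regression, written in Python with PuLP and its bundled CBC solver. The solver choice, the function name l0_quantile_regression, the big-M bound M, and the penalty level lam are illustrative assumptions, not the paper's own implementation.

```python
# A minimal sketch (assumed formulation, not the authors' code): the check
# loss rho_tau(u) = u * (tau - 1{u < 0}) is linearized by splitting each
# residual into positive/negative parts, and binary indicators z_j count the
# support of beta through the big-M box constraints |beta_j| <= M * z_j.
import numpy as np
import pulp

def l0_quantile_regression(X, y, tau=0.5, lam=0.1, M=10.0):
    """Solve min (1/n) * sum_i rho_tau(y_i - x_i'beta) + lam * ||beta||_0
    as a big-M mixed-integer linear program (illustrative hyperparameters)."""
    n, p = X.shape
    prob = pulp.LpProblem("l0_quantile_regression", pulp.LpMinimize)

    # Coefficients bounded by the big-M box, and binary support indicators.
    beta = [pulp.LpVariable(f"beta_{j}", lowBound=-M, upBound=M) for j in range(p)]
    z = [pulp.LpVariable(f"z_{j}", cat="Binary") for j in range(p)]

    # Positive and negative parts of each residual y_i - x_i'beta.
    up = [pulp.LpVariable(f"up_{i}", lowBound=0) for i in range(n)]
    um = [pulp.LpVariable(f"um_{i}", lowBound=0) for i in range(n)]

    # Objective: average check loss plus the l0 penalty lam * sum_j z_j.
    prob += (1.0 / n) * pulp.lpSum(tau * up[i] + (1 - tau) * um[i]
                                   for i in range(n)) + lam * pulp.lpSum(z)

    # Residual decomposition: y_i - x_i'beta = up_i - um_i.
    for i in range(n):
        prob += float(y[i]) - pulp.lpSum(float(X[i, j]) * beta[j]
                                         for j in range(p)) == up[i] - um[i]

    # Big-M switching: beta_j = 0 whenever z_j = 0.
    for j in range(p):
        prob += beta[j] <= M * z[j]
        prob += beta[j] >= -M * z[j]

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return np.array([pulp.value(b) for b in beta])

if __name__ == "__main__":
    # Toy example: a 2-sparse signal in p = 10 dimensions.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 10))
    beta_true = np.zeros(10)
    beta_true[:2] = [1.0, -2.0]
    y = X @ beta_true + rng.normal(size=50)
    print(l0_quantile_regression(X, y, tau=0.5, lam=0.05))
```

In this formulation the term lam * sum(z) is exactly λ‖β‖₀ at any solution with β_j bounded by M, so validity of the big-M reformulation hinges on choosing M at least as large as the largest coefficient magnitude of an optimizer.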