In many random assignment problems, the central planner has a policy objective of their own, such as maximizing matching size or fulfilling minimum quotas. Many practically important policy objectives are not aligned with agents’ preferences and are known to be incompatible with strategy-proofness. This paper proves that such policy objectives can nevertheless be achieved by mechanisms that are Bayesian incentive compatible on a restricted domain of von Neumann-Morgenstern utilities. We prove that if a mechanism satisfies the three axioms of swap monotonicity, lower invariance, and interior upper invariance, then it is Bayesian incentive compatible on an inverse-bounded-indifference (IBI) domain. We apply this axiomatic characterization to analyze the incentive properties of a novel mechanism, the constrained serial dictatorship mechanism (CRSD), which is designed to generate an individually rational assignment that maximizes the central planner’s policy objective function. Because CRSD satisfies these three axioms, it is Bayesian incentive compatible on an IBI domain.