Nice question, and I have to say from the start that this is my personal view on the topic, so please take it as such.
This, I believe, falls somewhat outside the scope of reporting on quality as I framed it for the session, because my frame was reporting as an output or follow-up of the testing effort. That said, I see surveys as complementary to what I covered during the session: they are another way to get stakeholder involvement, but they are not a product of the software testing effort itself.
Also, as I mentioned, quality can mean so many things that it would be difficult to build relevant surveys for relevant audiences across the different perspectives of quality. On top of that, surveys are a tool that only captures the segment of the audience willing to reply and interact, leaving a silent minority (or majority) that does not express itself via survey replies.
One other aspect I believe needs attention is survey fatigue, which will set in at some point and increase the risk of getting input that is not very useful.
To sum up, I believe building and administering surveys is a different approach, and it does not exclude a healthy approach to reporting on quality. I would not do it myself: administering surveys is not an easy task, being a different skillset and craft in itself, not to mention effort intensive. I have not taken this approach and I do not have examples of it.