Pass data from ViewController to Representable SwiftUI
I am doing object detection and used UIViewControllerRepresentable to add my view controller. The problem is that I can't pass data from my ViewController to my SwiftUI view, even though I can print it.
Can someone help me? This is my code:
import SwiftUI
import AVKit
import UIKit
import Vision
let SVWidth = UIScreen.main.bounds.width
struct MaskDetectionView: View {
let hasMaskColor = Color.green
let noMaskColor = Color.red
let shadowColor = Color.gray
var body: some View {
VStack(alignment: .center) {
VStack(alignment: .center) {
Text("Please place your head inside the bounded box.")
.font(.system(size: 15, weight: .regular, design: .default))
Text("For better result, show your entire face.")
.font(.system(size: 15, weight: .regular, design: .default))
}.padding(.top, 10)
VStack(alignment: .center) {
SwiftUIViewController()
.frame(width: SVWidth - 30, height: SVWidth + 30, alignment: .center)
.background(Color.white)
.cornerRadius(25)
.shadow(color: hasMaskColor, radius: 7, x: 0, y: 0)
.padding(.top, 30)
Spacer()
/// VALUE HERE
}
}.padding()
}
}
struct MaskDetectionView_Previews: PreviewProvider {
static var previews: some View {
MaskDetectionView()
}
}
class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {
var result = String()
//ALL THE OBJECTS
override func viewDidLoad() {
super.viewDidLoad()
// 1 - start session
let capture_session = AVCaptureSession()
//capture_session.sessionPreset = .vga640x480
// 2 - set the device front & add input
guard let capture_device = AVCaptureDevice.default(AVCaptureDevice.DeviceType.builtInWideAngleCamera, for: .video, position: .front) else {return}
guard let input = try? AVCaptureDeviceInput(device: capture_device) else { return }
capture_session.addInput(input)
// 3 - the layer on screen that shows the picture
let previewLayer = AVCaptureVideoPreviewLayer(session: capture_session)
view.layer.addSublayer(previewLayer)
previewLayer.frame.size = CGSize(width: SVWidth, height: SVWidth + 40)
previewLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill
// 4 - run the session
capture_session.startRunning()
// 5 - the produced output aka image or video
let dataOutput = AVCaptureVideoDataOutput()
dataOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
capture_session.addOutput(dataOutput)
}
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection){
// our model
guard let model = try? VNCoreMLModel(for: SqueezeNet(configuration: MLModelConfiguration()).model) else { return }
// request for our model
let request = VNCoreMLRequest(model: model) { (finishedReq, err) in
if let error = err {
print("failed to detect faces:", error)
return
}
//result
guard let results = finishedReq.results as? [VNClassificationObservation] else {return}
guard let first_observation = results.first else {return}
self.result = first_observation.identifier
print(self.result)
}
guard let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {return}
try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request])
}
}
struct SwiftUIViewController: UIViewControllerRepresentable {
func makeUIViewController(context: Context) -> ViewController{
return ViewController()
}
func updateUIViewController(_ uiViewController: ViewController, context: Context) {
}
}
Solution 1:[1]
The idiomatic way is to circulate a Binding instance through the UI hierarchy - this includes both the SwiftUI and the UIKit code. The Binding will transparently update the data on all views connected to it, regardless of where the change originated.
The data flow is: ViewController -> delegate (Coordinator) -> Binding -> @State -> SwiftUI view.
OK, getting to the implementation details: first, you need a @State to store the data that comes from the UIKit side, so that it is available for updates from the view controller:
struct MaskDetectionView: View {
@State var clasificationIdentifier: String = ""
Next, you need to pass this to both the view controller and the SwiftUI view:
var body: some View {
...
SwiftUIViewController(identifier: $clasificationIdentifier)
...
// this is the "VALUE HERE" from your question
Text("Clasification identifier: \(clasificationIdentifier)")
Now that you are properly injecting the binding, you'll need to update the UIKit side of the code so the binding can be received.
Update your view representable to look something like this:
struct SwiftUIViewController: UIViewControllerRepresentable {
// this is the binding that is received from the SwiftUI side
let identifier: Binding<String>
// this will be the delegate of the view controller; its role is to allow
// the data transfer from UIKit to SwiftUI
class Coordinator: ViewControllerDelegate {
let identifierBinding: Binding<String>
init(identifierBinding: Binding<String>) {
self.identifierBinding = identifierBinding
}
func clasificationOccured(_ viewController: ViewController, identifier: String) {
// whenever the view controller notifies its delegate about receiving a new identifier,
// the line below will propagate the change up to SwiftUI
identifierBinding.wrappedValue = identifier
}
}
func makeUIViewController(context: Context) -> ViewController{
let vc = ViewController()
vc.delegate = context.coordinator
return vc
}
func updateUIViewController(_ uiViewController: ViewController, context: Context) {
// update the controller data, if needed
}
// this is very important, this coordinator will be used in `makeUIViewController`
func makeCoordinator() -> Coordinator {
Coordinator(identifierBinding: identifier)
}
}
The last piece of the puzzle is to write the code for the view controller delegate, and the code that makes use of that delegate:
protocol ViewControllerDelegate: AnyObject {
func clasificationOccured(_ viewController: ViewController, identifier: String)
}
class ViewController: UIViewController {
weak var delegate: ViewControllerDelegate?
...
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
...
print(self.result)
// let's tell the delegate we found a new classification
// the delegate, aka the Coordinator, will then update the Binding;
// the Binding will update the State, and this change will be
// propagated to the Text() element of the SwiftUI view
delegate?.clasificationOccured(self, identifier: self.result)
}
Solution 2:[2]
Swift has multiple ways of passing data back and forth between views and objects, such as delegation, Key-Value Observation, or, specifically for SwiftUI, property wrappers such as @State, @Binding, @ObservableObject and @ObservedObject. However, when it comes to showing your data in a SwiftUI view, you are going to need property wrappers.
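For completeness, here is roughly what the observable-object route mentioned above could look like; the names ClassificationModel and MaskDetectionObservingView are made up for this sketch and are not part of the original answer:
import SwiftUI
import Combine

final class ClassificationModel: ObservableObject {
    // every change to `result` triggers a SwiftUI re-render of observing views
    @Published var result: String = ""
}

struct MaskDetectionObservingView: View {
    // the model is created elsewhere, injected here and observed
    @ObservedObject var model: ClassificationModel

    var body: some View {
        // pass the same `model` instance into your representable / view controller,
        // which then sets `model.result` on the main queue when a classification arrives
        Text("Result: \(model.result)")
    }
}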
If you want to do it the SwiftUI way, you may want to take a look at the @State and @Binding property wrappers, as well as how to use coordinators with your UIViewControllerRepresentable struct. Add a @State property to the SwiftUI view and pass it as a binding to your UIViewControllerRepresentable.
// Declare a new property in struct MaskDetectionView and pass it to SwiftUIViewController as a binding
@State var result: String = ""
...
SwiftUIViewController(resultText: $result)

// Add your new binding as a property in the SwiftUIViewController struct
@Binding var resultText: String
That way you expose a piece of the SwiftUI view (the result string, which you can use within a Text view, for instance) to the UIViewControllerRepresentable. From there, you can either pass it down further to ViewController and/or have a look at the following article about coordinators: https://www.hackingwithswift.com/books/ios-swiftui/using-coordinators-to-manage-swiftui-view-controllers
In my opinion, encapsulating your camera work inside a separate ViewController class is unnecessary and can be done just as well with a coordinator. The following steps should assist you in getting your view controller up and running (a short sketch follows the list):
- Create your view controller code in makeUIViewController, including setting up the AVKit objects
- Make sure to put context.coordinator as the delegate instead of self
- Create a nested class Coordinator inside the SwiftUIViewController and declare that class as your AVCaptureVideoDataOutputSampleBufferDelegate
- Add a property to the coordinator to hold an instance of your view controller object, and implement the initializer and makeCoordinator so the coordinator stores a reference to your view controller
- If set up correctly so far, you can now implement the AVCaptureVideoDataOutputSampleBufferDelegate delegation functions in the coordinator class and update the view controller's binding property with the result when something is detected
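A rough sketch of that coordinator-only setup could look like the code below. This is an illustration rather than the answer's own code: the name CameraClassifierView is made up, the camera setup from the question's viewDidLoad is moved into makeUIViewController, and the coordinator plays the role of the AVCaptureVideoDataOutputSampleBufferDelegate, running the question's SqueezeNet model and writing the result into the binding:
import SwiftUI
import AVKit
import Vision

struct CameraClassifierView: UIViewControllerRepresentable {
    @Binding var classification: String

    func makeUIViewController(context: Context) -> UIViewController {
        let controller = UIViewController()

        // camera session, mirroring the setup from the question's viewDidLoad
        let session = AVCaptureSession()
        guard
            let device = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front),
            let input = try? AVCaptureDeviceInput(device: device)
        else { return controller }
        session.addInput(input)

        let previewLayer = AVCaptureVideoPreviewLayer(session: session)
        previewLayer.videoGravity = .resizeAspectFill
        previewLayer.frame.size = CGSize(width: SVWidth, height: SVWidth + 40)
        controller.view.layer.addSublayer(previewLayer)

        let output = AVCaptureVideoDataOutput()
        // the coordinator, not the view controller, is the sample buffer delegate
        output.setSampleBufferDelegate(context.coordinator, queue: DispatchQueue(label: "videoQueue"))
        session.addOutput(output)
        session.startRunning()

        return controller
    }

    func updateUIViewController(_ uiViewController: UIViewController, context: Context) {}

    func makeCoordinator() -> Coordinator {
        Coordinator(classification: $classification)
    }

    class Coordinator: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
        let classification: Binding<String>

        init(classification: Binding<String>) {
            self.classification = classification
        }

        func captureOutput(_ output: AVCaptureOutput,
                           didOutput sampleBuffer: CMSampleBuffer,
                           from connection: AVCaptureConnection) {
            guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer),
                  let model = try? VNCoreMLModel(for: SqueezeNet(configuration: MLModelConfiguration()).model)
            else { return }

            let request = VNCoreMLRequest(model: model) { request, _ in
                guard let first = (request.results as? [VNClassificationObservation])?.first else { return }
                // hop back to the main thread before touching SwiftUI state
                DispatchQueue.main.async {
                    self.classification.wrappedValue = first.identifier
                }
            }
            try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request])
        }
    }
}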
Solution 3:[3]
A protocol (an interface in other languages) makes life easy in such a use case, and it is also really simple to use:
1 - define a protocol in a suitable place
2 - implement it in the required view (class or struct)
3 - pass the implementing object's reference to the caller class or struct
Example below:
//Protocol
protocol MyDataReceiverDelegte {
func dataReceived(data: String) // any type of data you need; String is chosen here
}
struct MaskDetectionView: View, MyDataReceiverDelegte { // implementer struct
func dataReceived(data:String){
//write your code here to process received data
print(data)
}
var body: some View {
//your views comes here
VStack(alignment: .center) {
SwiftUIViewController(parent:self)
}
}
}
//custom view
struct SwiftUIViewController: UIViewControllerRepresentable {
let parent:MaskDetectionView
func makeUIViewController(context: Context) -> ViewController{
return ViewController(delegate:parent)
}
func updateUIViewController(_ uiViewController: ViewController, context: Context) {
}
}
// caller class
// I omit your code for simplicity
class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {
let delegate: MyDataReceiverDelegte
init(delegate d: MyDataReceiverDelegte) {
self.delegate = d
super.init(nibName: nil, bundle: nil)
}
required init?(coder: NSCoder) {
fatalError("init(coder:) has not been implemented")
}
override func viewDidLoad() {
super.viewDidLoad()
//your code comes here
}
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection){
// the rest of your code comes here
delegate.dataReceived("Data you want to pass to parent view")
}
}
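One caveat worth adding (not part of the original answer): captureOutput is delivered on the background "videoQueue" passed to setSampleBufferDelegate, so if dataReceived ends up driving UI state, dispatch to the main queue before calling it, for example:
DispatchQueue.main.async {
    self.delegate.dataReceived(data: result) // `result` here stands for whatever classification string you extracted
}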
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
Solution | Source
---|---
Solution 1 |
Solution 2 |
Solution 3 |