I use this code to add a picture texture in RealityKit, and it works fine: var material = SimpleMaterial(); material.baseColor = try! .texture(.load(named: "ima…
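For reference, `baseColor` was deprecated in iOS 15 in favor of `color`. A minimal sketch of the newer form without the force-unwrap; "imageName" is a placeholder for the truncated asset name above:

```swift
import RealityKit

// Sketch: assigning a texture to a SimpleMaterial on iOS 15+.
// "imageName" stands in for the truncated asset name in the question.
func makeTexturedMaterial() -> SimpleMaterial? {
    guard let texture = try? TextureResource.load(named: "imageName") else {
        return nil
    }
    var material = SimpleMaterial()
    material.color = .init(tint: .white, texture: .init(texture))
    return material
}
```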
I have seen some example demos of ARKit where material A blocks material B, creating a kind of occlusion effect, black hole, or mask. But all of them s…
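In RealityKit, that masking effect typically comes from OcclusionMaterial: it renders nothing itself but hides any virtual content behind it, letting the camera feed show through. A minimal sketch:

```swift
import RealityKit

// Sketch: an invisible occluder. Virtual objects behind this box are masked
// out, while the real-world camera feed still shows through it.
let occluder = ModelEntity(
    mesh: .generateBox(size: 0.3),
    materials: [OcclusionMaterial()]
)
```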
When I add a new node with ARKit (ARSKView), the object is positioned based on the device camera. So if your phone is facing down or tilted, the object will be i…
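The usual fix is to compose the camera transform with a forward translation, so the anchor lands a fixed distance in front of the camera whatever its tilt. A sketch, assuming a running session:

```swift
import ARKit

// Sketch: place an anchor 0.5 m in front of the current camera pose.
func addAnchorInFrontOfCamera(in sceneView: ARSKView) {
    guard let camera = sceneView.session.currentFrame?.camera else { return }
    var translation = matrix_identity_float4x4
    translation.columns.3.z = -0.5          // 0.5 m along the camera's -Z axis
    let transform = camera.transform * translation
    sceneView.session.add(anchor: ARAnchor(transform: transform))
}
```

If the object should instead stay level with the horizon, keep only the translation component of camera.transform before applying the offset.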
I made a 3D object in Blender and added some custom animation to it. I managed to load the object into the scene, but not the animation. This is what it…
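If the model reaches RealityKit as a USDZ, baked animations show up in availableAnimations. A minimal sketch, assuming a file named "model.usdz" whose animation actually survived export (Blender's USD export has historically been a common point of failure here):

```swift
import RealityKit

// Sketch: load the entity and loop its first baked animation, if any.
if let entity = try? Entity.load(named: "model"),
   let animation = entity.availableAnimations.first {
    entity.playAnimation(animation.repeat())
}
```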
I get the following warning at runtime when I tap to load the model in ARView: Warning (secondary thread): in AppendProperty at line 859 of sdf/path.cpp --…
ARKit 2.0 added a new class named AREnvironmentProbeAnchor. Reading its documentation, it seems that ARKit can automatically collect environment textures (cubema…
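Enabling it is a one-line configuration change; a sketch:

```swift
import ARKit

// Sketch: let ARKit collect environment probes and build cubemaps itself.
// (.manual instead lets you place AREnvironmentProbeAnchor instances yourself.)
func runWithEnvironmentTexturing(on session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()
    configuration.environmentTexturing = .automatic
    session.run(configuration)
}
```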
Please forgive me if this question is not that great. I've hit a bit of a roadblock with Apple's documentation of ARGeoAnchor. Currently ARGeoAnchor just shows a…
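For completeness, the documented flow is to check availability, run a geo-tracking session, and then add the anchor; a minimal sketch:

```swift
import ARKit
import CoreLocation

// Sketch: geo tracking only works in supported regions, so check first.
func addGeoAnchor(to session: ARSession, at coordinate: CLLocationCoordinate2D) {
    ARGeoTrackingConfiguration.checkAvailability { available, _ in
        guard available else { return }
        session.run(ARGeoTrackingConfiguration())
        // Altitude is estimated by ARKit when omitted.
        session.add(anchor: ARGeoAnchor(coordinate: coordinate))
    }
}
```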
I am in the USA (Houston, TX) and I am trying to add a ModelEntity in RealityKit at a specific geo location. But I am not able to see the entity anywhere. Am I doin…
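One common cause is adding the ARGeoAnchor to the session but never giving RealityKit an entity tied to it. A sketch of the wiring, assuming geo tracking is actually available at the device's location (worth verifying with ARGeoTrackingConfiguration.checkAvailability, since only certain cities are supported):

```swift
import ARKit
import CoreLocation
import RealityKit

// Sketch: wrap the session's geo anchor in an AnchorEntity so the
// ModelEntity actually renders at that coordinate.
func place(_ model: ModelEntity, in arView: ARView,
           at coordinate: CLLocationCoordinate2D) {
    let geoAnchor = ARGeoAnchor(coordinate: coordinate)
    arView.session.add(anchor: geoAnchor)

    let anchorEntity = AnchorEntity(anchor: geoAnchor)
    anchorEntity.addChild(model)
    arView.scene.addAnchor(anchorEntity)
}
```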
I cannot manage to release my RealityKit ARView() from memory. I am aware that there were similar issues with ARKit + SceneKit, with workarounds that don't so…
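The workaround usually suggested is an explicit teardown before dropping the last reference; a sketch:

```swift
import ARKit
import RealityKit

// Sketch: pause the session and break the common strong references
// before releasing the ARView.
func tearDown(_ arView: ARView) {
    arView.session.pause()
    arView.session.delegate = nil
    arView.scene.anchors.removeAll()
    arView.removeFromSuperview()
}
```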
So, I'm trying to create a sceneView programmatically: class ViewController: UIViewController, ARSCNViewDelegate { var sceneView: ARSCNView = ARSCNView()…
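A minimal sketch of the programmatic setup: give the view a frame (or constraints), add it to the hierarchy, and run the session once the view appears:

```swift
import UIKit
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        sceneView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        sceneView.delegate = self
        view.addSubview(sceneView)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        sceneView.session.run(ARWorldTrackingConfiguration())
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneView.session.pause()
    }
}
```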
I want to switch the ModelEntity from the fvBoatAnchor.fvBoatObject to the fvBridgeAnchor.fvBridgeObject with the click of a button, but I'm not sure how I do it.
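One straightforward approach is to swap anchors in the button's action; a sketch, with the anchors passed in since their exact types come from the generated Reality Composer code:

```swift
import RealityKit

// Sketch: remove one anchor from the scene and add the other in its place.
func swap(from old: HasAnchoring, to new: HasAnchoring, in arView: ARView) {
    arView.scene.removeAnchor(old)
    arView.scene.addAnchor(new)
}
```

The button handler would then call something like swap(from: fvBoatAnchor, to: fvBridgeAnchor, in: arView).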
I want to make a mix of virtual reality and augmented reality. The goal is that I have a stereo camera (one view for each eye). I tried to put two ARSCNViews in a view contro…
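A common pattern is one ARSCNView driving the session plus a plain SCNView sharing the same scene, with the second camera offset by roughly the interpupillary distance. A sketch of that idea:

```swift
import ARKit
import SceneKit

// Sketch: left eye = the ARSCNView itself; right eye = an SCNView whose
// camera is a child of the AR camera, offset ~63 mm to the right.
// Assumes the AR session is already running, so pointOfView is non-nil.
func setUpStereo(left: ARSCNView, right: SCNView) {
    right.scene = left.scene
    right.isPlaying = true                      // keep the second view rendering

    let rightEye = SCNNode()
    rightEye.camera = SCNCamera()
    rightEye.position = SCNVector3(0.063, 0, 0) // interpupillary offset
    left.pointOfView?.addChildNode(rightEye)
    right.pointOfView = rightEye
}
```

Note that the second view renders only the virtual scene; the camera-feed background would need to be drawn separately.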
Given an ARMeshAnchor 'meshAnchor', I can retrieve its geometry and faces: let geometry = meshAnchor.geometry; let faces = geometry.faces; var i = Int(0); while i…
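The index buffer is a raw MTLBuffer, so each face's vertex indices have to be read out manually. A sketch, assuming the default case of triangles with UInt32 indices:

```swift
import ARKit

// Sketch: read the vertex indices of one face from an ARGeometryElement.
func vertexIndices(of faces: ARGeometryElement, at faceIndex: Int) -> [UInt32] {
    let perFace = faces.indexCountPerPrimitive        // 3 for triangles
    let indexSize = faces.bytesPerIndex               // 4 for UInt32
    let base = faces.buffer.contents()
        .advanced(by: faceIndex * perFace * indexSize)
    return (0..<perFace).map {
        base.advanced(by: $0 * indexSize)
            .assumingMemoryBound(to: UInt32.self).pointee
    }
}
```

Looping i over 0..<faces.count then yields each triangle; the returned indices point into geometry.vertices.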
I have made a project with ARKit that uses Metal shaders to perform a mirroring effect, and this is assigned using sceneView.technique. I am trying to figure o…
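For reference, a technique is normally described in a plist and assigned once; a sketch, where "MirrorTechnique" is a placeholder for the question's own definition:

```swift
import SceneKit

// Sketch: load an SCNTechnique definition from the bundle and assign it.
func applyTechnique(to sceneView: SCNView) {
    guard let url = Bundle.main.url(forResource: "MirrorTechnique",
                                    withExtension: "plist"),
          let dict = NSDictionary(contentsOf: url) as? [String: Any],
          let technique = SCNTechnique(dictionary: dict) else { return }
    sceneView.technique = technique
}
```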
In my renderer delegate I create a raycast query from the center of the view to track an estimated plane and display a 3D pointer that follows the raycast result.
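A sketch of that per-frame pattern, called from the update callback:

```swift
import ARKit
import RealityKit

// Sketch: raycast from the view's center against estimated planes and
// move a pointer entity to the hit pose.
func updatePointer(_ pointer: Entity, in arView: ARView) {
    let center = CGPoint(x: arView.bounds.midX, y: arView.bounds.midY)
    guard let query = arView.makeRaycastQuery(from: center,
                                              allowing: .estimatedPlane,
                                              alignment: .any),
          let result = arView.session.raycast(query).first else { return }
    pointer.setTransformMatrix(result.worldTransform, relativeTo: nil)
}
```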
Out of the box, it's pretty clear ARKit doesn't allow tracking of more than 4 images at once. (You can "track" more markers than that, but only 4 will fun…
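The cap is surfaced directly in the configuration; a sketch, assuming a reference-image group named "AR Resources" in the asset catalog:

```swift
import ARKit

// Sketch: many markers can be registered, but only up to the configured
// maximum are actively tracked in any given frame.
let configuration = ARImageTrackingConfiguration()
configuration.trackingImages = ARReferenceImage.referenceImages(
    inGroupNamed: "AR Resources", bundle: nil) ?? []
configuration.maximumNumberOfTrackedImages = 4   // the limit the question refers to
```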
I am brand new to ARKit (and a novice in Swift), but I am trying to create a basic AR app. I am following this tutorial, in which a simple scene is created essent…
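For orientation, the kind of minimal scene such tutorials build looks like this; a sketch with a 10 cm cube placed 20 cm in front of the starting camera position:

```swift
import UIKit
import ARKit

class SimpleARViewController: UIViewController {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)

        let box = SCNNode(geometry: SCNBox(width: 0.1, height: 0.1,
                                           length: 0.1, chamferRadius: 0))
        box.position = SCNVector3(0, 0, -0.2)   // 20 cm in front of the origin
        sceneView.scene.rootNode.addChildNode(box)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        sceneView.session.run(ARWorldTrackingConfiguration())
    }
}
```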
I want to know if there is a way for scenes exported from Reality Composer to deal with multiple surfaces in the real world. So, for example,…
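Each Reality Composer scene carries its own anchoring (horizontal, vertical, image, and so on), so one option is to author one scene per surface type and load them side by side. A sketch, where "Experience", "loadFloorScene", and "loadWallScene" are placeholder names for the generated loaders:

```swift
import RealityKit

// Sketch: two scenes from one Reality Composer project, one anchored to a
// horizontal plane and one to a vertical plane, coexisting in the same ARView.
func loadScenes(into arView: ARView) throws {
    let floorScene = try Experience.loadFloorScene()  // horizontal-plane scene
    let wallScene = try Experience.loadWallScene()    // vertical-plane scene
    arView.scene.addAnchor(floorScene)
    arView.scene.addAnchor(wallScene)
}
```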
I want to build a demo app in ARKit, and I have some questions about what is currently possible with the beta (Apple has been calling this RealityKit, or ARKit 3…
I'm currently doing some experiments with RealityKit. I've been looking at some sample code, and I'm a bit confused about the differences between ARAnchor and AnchorEntity.
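A sketch contrasting the two: ARAnchor is ARKit's raw tracking handle, while AnchorEntity is RealityKit's scene-graph node, which can wrap an ARAnchor so content renders at its pose:

```swift
import ARKit
import RealityKit

func demonstrateAnchors(in arView: ARView) {
    // ARKit side: a bare transform that the session tracks and refines.
    let arAnchor = ARAnchor(transform: matrix_identity_float4x4)
    arView.session.add(anchor: arAnchor)

    // RealityKit side: an entity pinned to that ARAnchor; children render there.
    let anchorEntity = AnchorEntity(anchor: arAnchor)
    anchorEntity.addChild(ModelEntity(mesh: .generateSphere(radius: 0.05)))
    arView.scene.addAnchor(anchorEntity)
}
```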