RealityKit – Adding ModelEntity to an ARGeoAnchor

I am in USA (Houston, TX) and I am trying to add a ModelEntity in RealityKit to a specific geo location. But I am not able to see the entity anywhere. Am I doing something wrong?

// Geo anchor
let location = CLLocationCoordinate2D(latitude: 30.0374898290727,
                                      longitude: -95.58518171314036)

let geoAnchor = ARGeoAnchor(coordinate: location, altitude: 70)
arView.session.add(anchor: geoAnchor)

let geoAnchorEntity = AnchorEntity(anchor: geoAnchor)
arView.scene.anchors.append(geoAnchorEntity)

let box = ModelEntity(mesh: MeshResource.generateBox(size: 0.5),
                      materials: [SimpleMaterial(color: .green, isMetallic: true)])

geoAnchorEntity.addChild(box)


Solution 1:[1]

Geotracking works exclusively outdoors, usually on main public streets next to roadways. How does it work? Cupertino's mapping cars use large LiDAR scanners to capture the environment along the roads, and the scanned results (called localization imagery, also used for Apple Maps) are uploaded to a server. Keep in mind that an internet connection is compulsory when using a geotracking configuration.
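
If you want to verify up front that geotracking is even possible, a minimal sketch (not part of the original answer) could look like the following. It only relies on ARGeoTrackingConfiguration.isSupported and checkAvailability, which need a supported device, location permission, and a network connection:

import ARKit

func verifyGeoTrackingSupport() {
    // Geotracking requires a recent device with GPS/cellular capability.
    guard ARGeoTrackingConfiguration.isSupported else {
        print("This device doesn't support geotracking.")
        return
    }
    // Asks Apple's servers whether localization imagery covers the current location.
    ARGeoTrackingConfiguration.checkAvailability { available, error in
        if let error = error {
            print("Availability check failed: \(error.localizedDescription)")
        } else if available {
            print("Geotracking is available at the current location.")
        } else {
            print("No localization imagery for the current location.")
        }
    }
}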

Running a session with an ARGeoTrackingConfiguration tracks your location using GPS, map data, and the iPhone's compass. If your location matches coordinates that an Apple car has passed, what your device's sensors "see" is compared with the data stored on Apple's servers, and based on those matches a location anchor is created (or is not created).
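
None of this happens unless the session is actually running a geotracking configuration, so it is worth confirming that step and watching the localization state. A hedged sketch, assuming an arView outlet like the one in the question (the class name and the printed messages are illustrative only):

import UIKit
import ARKit
import RealityKit

class GeoViewController: UIViewController, ARSessionDelegate {

    @IBOutlet var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.session.delegate = self
        // ARGeoAnchors only localize when the session runs a geotracking configuration.
        arView.session.run(ARGeoTrackingConfiguration())
    }

    // ARKit reports every change of the geotracking state here.
    func session(_ session: ARSession, didChange geoTrackingStatus: ARGeoTrackingStatus) {
        switch geoTrackingStatus.state {
        case .localized:
            print("Localized – geo anchors are now matched against Apple's imagery.")
        case .localizing:
            print("Comparing camera images with downloaded localization imagery…")
        case .initializing, .notAvailable:
            print("Not localized yet: \(geoTrackingStatus.stateReason)")
        @unknown default:
            break
        }
    }
}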

Why is it so complicated? The answer is obvious: GPS alone doesn't provide high enough precision for positioning location anchors. GPS-enabled smartphones are typically accurate only to within about 5 meters.

To place location anchors with precision, geotracking requires a better understanding of the user’s geographic location than is possible with GPS alone. Based on the user's GPS coordinates, ARKit downloads imagery that depicts the physical environment in that area. Apple collects this localization imagery in advance by capturing photos of the view from the street and recording the geographic position at each photo. By comparing the device's current camera image with this imagery, the session matches the user’s precise geographic location with the scene's local coordinates.
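
ARKit also lets you ask whether localization imagery exists for a particular coordinate before you try to anchor content there. A small sketch, reusing the Houston coordinate from the question purely as an example:

import ARKit
import CoreLocation

let coordinate = CLLocationCoordinate2D(latitude: 30.0374898290727,
                                        longitude: -95.58518171314036)

// Asks whether Apple's localization imagery covers this specific coordinate.
ARGeoTrackingConfiguration.checkAvailability(at: coordinate) { available, error in
    if available {
        print("Apple has localization imagery for this coordinate.")
    } else {
        print("No geotracking coverage here: \(error?.localizedDescription ?? "unknown")")
    }
}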

For additional info read this post and this post.

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

[1] Solution 1 – Stack Overflow