First complete implementation of the hierarchical SAP.

Crozet Sébastien
2021-03-13 18:00:58 +01:00
parent a967ace7d4
commit 3a1502be74
18 changed files with 784 additions and 282 deletions


@@ -1,26 +1,87 @@
use super::{
BroadPhasePairEvent, BroadPhaseProxies, BroadPhaseProxy, BroadPhaseProxyData, ColliderPair,
SAPLayer, SAPRegion, NEXT_FREE_SENTINEL, SENTINEL_VALUE,
BroadPhasePairEvent, ColliderPair, SAPLayer, SAPProxies, SAPProxy, SAPProxyData, SAPRegionPool,
};
use crate::data::pubsub::Subscription;
use crate::dynamics::RigidBodySet;
use crate::geometry::broad_phase_multi_sap::SAPProxyIndex;
use crate::geometry::{ColliderSet, RemovedCollider};
use crate::math::Real;
use crate::utils::IndexMut2;
use parry::bounding_volume::BoundingVolume;
use parry::utils::hashmap::HashMap;
/// A broad-phase based on multiple Sweep-and-Prune instances running of disjoint region of the 3D world.
/// A broad-phase combining a Hierarchical Grid and Sweep-and-Prune.
///
/// The basic Sweep-and-Prune (SAP) algorithm has one significant flaw:
/// its handling of far-away objects. Objects that are very far apart will
/// still have some of their endpoints swapped within the SAP data-structure.
/// This results in poor scaling because lots of time is spent swapping
/// endpoints of AABBs that won't ever actually interact.
///
/// The first optimization to address this problem is the Multi-SAP
/// method. It basically combines an SAP with a grid. The grid subdivides
/// the space into equally-sized subspaces (grid cells). Each subspace, which we call
/// a "region", contains an SAP instance (i.e. three SAP axes responsible for
/// collecting endpoints and swapping them when they move to detect interaction pairs).
/// Each AABB is inserted in all the regions it intersects.
/// This prevents the far-away problem because two objects that are far away will
/// be located in different regions, so their endpoints will never meet.
///
/// However, the Multi-SAP approach has one notable problem: the region size must
/// be chosen wisely. It could be user-defined, but that makes it more difficult
/// to use (for the end-user). Or it can be given a fixed value. Using a fixed
/// value may result in large objects intersecting lots of regions, resulting in
/// poor performance and very high memory usage.
///
/// So a solution to the large-object problem of the Multi-SAP approach is to
/// replace the grid by a hierarchical grid. A hierarchical grid is composed of
/// several layers, and each layer has a different region size. For example, all
/// the regions on layer 0 will have the size 1x1x1, all the regions on layer
/// 1 will have the size 10x10x10, etc. That way, a given AABB will be inserted
/// into the layer whose regions are big enough to avoid the large-object problem.
/// For example, a 20x20x20 object will be inserted in the layer with regions
/// of size 10x10x10, resulting in only 8 regions being intersected by the AABB.
/// (If it was inserted in the layer with regions of size 1x1x1, it would have intersected
/// 8000 regions, which is a problem performance-wise.)
///
/// We call this new method the Hierarchical-SAP.
///
/// Now with the Hierarchical-SAP, we can update each layer independently from one another.
/// However, objects belonging to different layers will never be detected as intersecting that
/// way. So we need a way to do inter-layer interference detection. There are many ways of doing
/// this: performing inter-layer Multi-Box-Pruning passes is one example (but this is not what we do).
/// In our implementation, we do the following:
/// - The AABB bounds of each region of the layer `n` are inserted into the corresponding larger region
/// of the layer `n + 1`.
/// - When an AABB in a region of the layer `n + 1` intersects the AABB corresponding to one of the
/// regions at the smaller layer `n`, we add that AABB to that smaller region.
/// So in the end a given AABB will be inserted into all the regions it intersects at
/// the layer `n`. And it will also be inserted into all the regions it intersects at the smaller layers
/// (the layers `< n`), but only into the regions that already exist (so we don't have to discretize
/// our AABB into the layers `< n`). This involves a fair amount of bookkeeping unfortunately, but
/// it has the benefit of keeping the overall complexity of the algorithm O(1) in the typical temporally
/// coherent scenario.
///
/// From an implementation point-of-view, our hierarchical SAP is implemented with the following structures:
/// - There is one `SAPLayer` per layer of the hierarchical grid.
/// - Each `SAPLayer` contains multiple `SAPRegion` (each being a region of the grid represented by that layer).
/// - Each `SAPRegion` contains three `SAPAxis`, representing the "classical" SAP algorithm running on this region.
/// - Each `SAPAxis` maintains a sorted list of `SAPEndpoint`s representing the endpoints of the AABBs intersecting
/// the bounds of the `SAPRegion` containing this `SAPAxis`.
/// - A set of `SAPProxy` is maintained separately. It contains the AABBs of all the colliders managed by this
/// broad-phase, as well as the AABBs of all the regions that are part of this broad-phase.
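To make the grid discretization described above concrete, here is a minimal standalone sketch. The helper names are hypothetical (the commit's real `point_key`/`region_width` utilities live in `sap_utils` and may differ), and it assumes region widths grow by a factor of 10 per layer depth, as in the doc comment's example:

```rust
// Hypothetical sketch of the hierarchical-grid discretization (2D for brevity).
// Assumption: region width grows by a factor of 10 per layer depth.
fn region_width(depth: i32) -> f32 {
    10.0f32.powi(depth)
}

// Map a point to the integer key of the grid cell containing it.
fn point_key(p: [f32; 2], width: f32) -> [i32; 2] {
    [(p[0] / width).floor() as i32, (p[1] / width).floor() as i32]
}

// Collect the keys of all the regions intersected by an AABB on a given layer.
fn region_keys(mins: [f32; 2], maxs: [f32; 2], depth: i32) -> Vec<[i32; 2]> {
    let width = region_width(depth);
    let start = point_key(mins, width);
    let end = point_key(maxs, width);
    let mut keys = Vec::new();
    for i in start[0]..=end[0] {
        for j in start[1]..=end[1] {
            keys.push([i, j]);
        }
    }
    keys
}

fn main() {
    // A 20x20 AABB on the depth-1 layer (regions of size 10x10)
    // intersects only a handful of regions...
    assert_eq!(region_keys([0.5, 0.5], [20.5, 20.5], 1).len(), 9);
    // ...while on the depth-0 layer (regions of size 1x1) it intersects
    // hundreds of them: the large-object problem described above.
    assert_eq!(region_keys([0.5, 0.5], [20.5, 20.5], 0).len(), 441);
}
```

The `floor` (rather than truncation toward zero) matters for negative coordinates, so that a point just left of the origin lands in cell -1 rather than cell 0.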
#[cfg_attr(feature = "serde-serialize", derive(Serialize, Deserialize))]
#[derive(Clone)]
pub struct BroadPhase {
proxies: BroadPhaseProxies,
proxies: SAPProxies,
layers: Vec<SAPLayer>,
smallest_layer: u8,
largest_layer: u8,
removed_colliders: Option<Subscription<RemovedCollider>>,
deleted_any: bool,
#[cfg_attr(feature = "serde-serialize", serde(skip))]
region_pool: Vec<SAPRegion>, // To avoid repeated allocations.
region_pool: SAPRegionPool, // To avoid repeated allocations.
// One could think that serializing this workspace is useless.
// It turns out it is important to serialize at least its capacity,
// and to restore that capacity when deserializing the hashmap.
@@ -47,7 +108,7 @@ impl BroadPhase {
pub fn new() -> Self {
BroadPhase {
removed_colliders: None,
proxies: BroadPhaseProxies::new(),
proxies: SAPProxies::new(),
layers: Vec::new(),
smallest_layer: 0,
largest_layer: 0,
@@ -58,42 +119,110 @@ impl BroadPhase {
}
/// Maintain the broad-phase internal state by taking collider removal into account.
pub fn maintain(&mut self, colliders: &mut ColliderSet) {
// Ensure we already subscribed.
fn handle_removed_colliders(&mut self, colliders: &mut ColliderSet) {
// Ensure we have already subscribed to the collider-removed events.
if self.removed_colliders.is_none() {
self.removed_colliders = Some(colliders.removed_colliders.subscribe());
}
// Extract the cursor to avoid borrowing issues.
let cursor = self.removed_colliders.take().unwrap();
// Read all the collider-removed events, and remove the corresponding proxy.
for collider in colliders.removed_colliders.read(&cursor) {
self.remove_proxy(collider.proxy_index);
self.predelete_proxy(collider.proxy_index);
}
colliders.removed_colliders.ack(&cursor);
// NOTE: We don't acknowledge the cursor just yet because we need
// to traverse the set of removed colliders one more time after
// the broad-phase update.
// Re-insert the cursor we extracted to avoid borrowing issues.
self.removed_colliders = Some(cursor);
}
fn remove_proxy(&mut self, proxy_index: usize) {
if proxy_index == crate::INVALID_USIZE {
/// Removes a proxy from this broad-phase.
///
/// The removal of a proxy is a semi-lazy process. It will mark
/// the proxy as predeleted, and will set its AABB to +infinity.
/// After this method has been called with all the proxies to
/// remove, the `complete_removals` method MUST be called to
/// complete the removal of these proxies, by removing them
/// from all the relevant layers/regions/axes.
fn predelete_proxy(&mut self, proxy_index: SAPProxyIndex) {
if proxy_index == crate::INVALID_U32 {
// This collider has not been added to the broad-phase yet.
return;
}
let proxy = &mut self.proxies[proxy_index];
let layer = &mut self.layers[proxy.layer_id as usize];
layer.remove_proxy(proxy, proxy_index);
// Let the layer know that the proxy is being deleted.
layer.predelete_proxy(&mut self.proxies, proxy_index);
}
// Push the proxy to infinity, but not beyond the sentinels.
proxy.aabb.mins.coords.fill(SENTINEL_VALUE / 2.0);
proxy.aabb.maxs.coords.fill(SENTINEL_VALUE / 2.0);
self.proxies.remove(proxy_index);
/// Completes the removal of the deleted proxies.
///
/// If `self.predelete_proxy` was called, then this `complete_removals`
/// method must be called to complete the removals.
///
/// This method will actually remove from the proxy list all the proxies
/// marked as deletable by `self.predelete_proxy`, making their proxy
/// handles re-usable by new proxies.
fn complete_removals(&mut self, colliders: &mut ColliderSet) {
// If there is no layer, there is nothing to remove.
if self.layers.is_empty() {
return;
}
// This is a bottom-up pass:
// - Complete the removal on the layer `n`. This may cause some regions to be deleted.
// - Continue with the layer `n + 1`. This will delete from `n + 1` all the proxies
// of the regions originating from `n`.
// This bottom-up approach will propagate region removal from the smallest layer up
// to the largest layer.
let mut curr_layer_id = self.smallest_layer;
loop {
let curr_layer = &mut self.layers[curr_layer_id as usize];
if let Some(larger_layer_id) = curr_layer.larger_layer {
let (curr_layer, larger_layer) = self
.layers
.index_mut2(curr_layer_id as usize, larger_layer_id as usize);
curr_layer.complete_removals(
Some(larger_layer),
&mut self.proxies,
&mut self.region_pool,
);
// NOTE: we don't care about reporting pairs.
self.reporting.clear();
curr_layer_id = larger_layer_id;
} else {
curr_layer.complete_removals(None, &mut self.proxies, &mut self.region_pool);
// NOTE: we don't care about reporting pairs.
self.reporting.clear();
break;
}
}
/*
* Actually remove the colliders proxies.
*/
let cursor = self.removed_colliders.as_ref().unwrap();
for collider in colliders.removed_colliders.read(&cursor) {
self.proxies.remove(collider.proxy_index);
}
colliders.removed_colliders.ack(&cursor);
}
fn finalize_layer_insertion(&mut self, layer_id: u8) {
if let Some(larger_layer) = self.layers[layer_id as usize].larger_layer {
// Remove all the region endpoints from the larger layer.
// They will be automatically replaced by the new layer's region.
self.layers[larger_layer as usize].delete_all_region_endpoints();
// They will be automatically replaced by the new layer's regions.
self.layers[larger_layer as usize].delete_all_region_endpoints(&mut self.proxies);
}
if let Some(smaller_layer) = self.layers[layer_id as usize].smaller_layer {
@@ -103,17 +232,20 @@ impl BroadPhase {
// Add all the regions from the smaller layer to the new layer.
// This will propagate to the bigger layers automatically.
for smaller_region in smaller_layer.regions.iter() {
let smaller_region_key = ???;
new_layer.preupdate_proxy_in_region(smaller_region.proxy_id, smaller_region_key);
}
smaller_layer.propagate_existing_regions(
new_layer,
&mut self.proxies,
&mut self.region_pool,
);
}
}
fn ensure_layer_exists(&mut self, new_depth: i8) -> u8 {
// Special case: we don't have any layers yet.
if self.layers.is_empty() {
self.layers.push(SAPLayer::new(new_depth));
let layer_id = self.layers.len() as u8; // TODO: check overflow.
self.layers
.push(SAPLayer::new(new_depth, layer_id, None, None));
return 0;
}
@@ -125,21 +257,26 @@ impl BroadPhase {
break;
}
larger_layer_id = self.layers[layer as usize].larger_layer;
larger_layer_id = self.layers[curr_layer_id as usize].larger_layer;
}
match larger_layer_id {
None => {
assert_ne!(self.layers.len() as u8, u8::MAX, "Not yet implemented.");
let new_layer_id = self.layers.len() as u8;
self.layers[self.largest_layer as usize].larger_layer = Some(new_layer_id);
self.layers.push(SAPLayer::new(
new_depth,
new_layer_id,
Some(self.largest_layer),
None,
));
self.largest_layer = new_layer_id;
self.layers
.push(SAPLayer::new(new_depth, Some(self.largest_layer), None));
self.finalize_layer_insertion(new_layer_id);
new_layer_id
}
Some(larger_layer_id) => {
if self.layers[larger_layer_id].depth == new_depth {
if self.layers[larger_layer_id as usize].depth == new_depth {
// Found it! The layer already exists.
larger_layer_id
} else {
@@ -150,10 +287,13 @@ impl BroadPhase {
if let Some(smaller_layer_id) = smaller_layer_id {
self.layers[smaller_layer_id as usize].larger_layer = Some(new_layer_id);
} else {
self.smallest_layer = new_layer_id;
}
self.layers.push(SAPLayer::new(
new_depth,
new_layer_id,
smaller_layer_id,
Some(larger_layer_id),
));
@@ -165,17 +305,16 @@ impl BroadPhase {
}
}
pub(crate) fn update_aabbs(
pub fn update(
&mut self,
prediction_distance: Real,
bodies: &RigidBodySet,
colliders: &mut ColliderSet,
events: &mut Vec<BroadPhasePairEvent>,
) {
// First, if we have any pending removals we have
// to deal with them now because otherwise we will
// end up with ABA problems when reusing proxy
// ids.
self.complete_removals();
self.handle_removed_colliders(colliders);
let mut need_region_propagation = false;
for body_handle in bodies
.modified_inactive_set
@@ -195,12 +334,7 @@ impl BroadPhase {
let layer_id = self.ensure_layer_exists(layer_depth);
// Create the proxy.
let proxy = BroadPhaseProxy {
data: BroadPhaseProxyData::Collider(*handle),
aabb,
next_free: NEXT_FREE_SENTINEL,
layer_id,
};
let proxy = SAPProxy::collider(*handle, aabb, layer_id);
collider.proxy_index = self.proxies.insert(proxy);
layer_id
};
@@ -208,38 +342,94 @@ impl BroadPhase {
let layer = &mut self.layers[layer_id as usize];
// Preupdate the collider in the layer.
layer.preupdate_collider(collider, &aabb, &mut self.region_pool);
layer.preupdate_collider(collider, &aabb, &mut self.proxies, &mut self.region_pool);
need_region_propagation =
need_region_propagation || !layer.created_regions.is_empty();
}
}
// Bottom-up pass to propagate regions from smaller layers to larger layers.
if need_region_propagation {
self.propagate_created_regions();
}
// Top-down pass to propagate proxies from larger layers to smaller layers.
self.update_layers_and_find_pairs(events);
// Bottom-up pass to remove proxies, and propagate region removals from smaller layers, which
// may remove regions from larger layers that would become empty that way.
self.complete_removals(colliders);
}
pub(crate) fn complete_removals(&mut self) {
for layer in &mut self.layers {
layer.complete_removals(&self.proxies, &mut self.reporting, &mut self.region_pool);
// NOTE: we don't care about reporting pairs.
self.reporting.clear();
/// Propagate regions from the smaller layers up to the larger layers.
///
/// Whenever a region is created on a layer `n`, its AABB must be
/// added to all the larger layers so we can detect when an object
/// in a larger layer may start interacting with objects in a smaller
/// layer.
fn propagate_created_regions(&mut self) {
let mut curr_layer = Some(self.smallest_layer);
while let Some(curr_layer_id) = curr_layer {
let layer = &mut self.layers[curr_layer_id as usize];
let larger_layer = layer.larger_layer;
if !layer.created_regions.is_empty() {
if let Some(larger_layer) = larger_layer {
let (layer, larger_layer) = self
.layers
.index_mut2(curr_layer_id as usize, larger_layer as usize);
layer.propagate_created_regions(
larger_layer,
&mut self.proxies,
&mut self.region_pool,
);
} else {
// Always clear the set of created regions, even if
// there is no larger layer.
layer.created_regions.clear();
}
}
curr_layer = larger_layer;
}
}
pub(crate) fn find_pairs(&mut self, out_events: &mut Vec<BroadPhasePairEvent>) {
let mut layer_id = 0;
fn update_layers_and_find_pairs(&mut self, out_events: &mut Vec<BroadPhasePairEvent>) {
if self.layers.is_empty() {
return;
}
while let Some(layer) = self.layers.get_mut(layer_id as usize) {
layer.update_regions(&self.proxies, &mut self.reporting, &mut self.region_pool);
layer_id = layer.prev_layer; // this MUST be done before the `for` loop because we use the prev_layer index there.
// This is a top-down operation: we start by updating the largest
// layer, and continue down to the smallest layer.
//
// This must be top-down because:
// 1. If a non-region proxy from the layer `n` interacts with one of
// the regions from the layer `n - 1`, it must be added to that
// smaller layer before that smaller layer is updated.
// 2. If a region has been updated, then all its subregions at the
// layer `n - 1` must be marked as "needs-to-be-updated" too, in
// order to account for the fact that a big proxy moved.
// NOTE: this 2nd point could probably be improved: instead of updating
// all the subregions, we could perhaps just update the subregions
// that cross the boundaries of the AABBs of the big proxies that
// moved in the layer `n`.
let mut layer_id = Some(self.largest_layer);
while let Some(curr_layer_id) = layer_id {
self.layers[curr_layer_id as usize]
.update_regions(&mut self.proxies, &mut self.reporting);
layer_id = self.layers[curr_layer_id as usize].smaller_layer;
for ((proxy_id1, proxy_id2), colliding) in &self.reporting {
let proxy1 = &self.proxies[*proxy_id1 as usize];
let proxy2 = &self.proxies[*proxy_id2 as usize];
let (proxy1, proxy2) = self
.proxies
.elements
.index_mut2(*proxy_id1 as usize, *proxy_id2 as usize);
let handle1 = proxy1.handle;
let handle2 = proxy2.handle;
match (&handle1, &handle2) {
(
BroadPhaseProxyData::Collider(handle1),
BroadPhaseProxyData::Collider(handle2),
) => {
match (&mut proxy1.data, &mut proxy2.data) {
(SAPProxyData::Collider(handle1), SAPProxyData::Collider(handle2)) => {
if *colliding {
out_events.push(BroadPhasePairEvent::AddPair(ColliderPair::new(
*handle1, *handle2,
@@ -250,29 +440,21 @@ impl BroadPhase {
)));
}
}
(
BroadPhaseProxyData::Collider(_),
BroadPhaseProxyData::Subregion(region_key),
) => {
(SAPProxyData::Collider(_), SAPProxyData::Region(_)) => {
if *colliding {
// Add the collider to the subregion.
let sublayer = &mut self.layers[layer_id as usize];
sublayer.preupdate_proxy_in_region(*proxy_id1, region_key);
proxy2.data.as_region_mut().preupdate_proxy(*proxy_id1);
}
}
(
BroadPhaseProxyData::Subregion(region_key),
BroadPhaseProxyData::Collider(_),
) => {
(SAPProxyData::Region(_), SAPProxyData::Collider(_)) => {
if *colliding {
// Add the collider to the subregion.
let sublayer = &mut self.layers[layer_id as usize];
sublayer.preupdate_proxy_in_region(*proxy_id2, region_key);
proxy1.data.as_region_mut().preupdate_proxy(*proxy_id2);
}
}
(BroadPhaseProxyData::Subregion(_), BroadPhaseProxyData::Subregion(_)) => {
(SAPProxyData::Region(_), SAPProxyData::Region(_)) => {
// This will only happen between two adjacent subregions because
// they share some of the same bounds. So this case does not matter.
// they share some identical bounds. So this case does not matter.
}
}
}
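The `reporting` map consumed by the pair-handling `match` above follows a simple convention: keys are canonically-ordered proxy-id pairs, and the boolean flags whether the pair started (`true`) or stopped (`false`) colliding. A minimal standalone sketch of that convention (hypothetical; the real `sort2` lives in `sap_utils`):

```rust
use std::collections::HashMap;

// Canonically order a pair of proxy ids so that (a, b) and (b, a)
// map to the same reporting key.
fn sort2(a: u32, b: u32) -> (u32, u32) {
    if a < b {
        (a, b)
    } else {
        (b, a)
    }
}

fn main() {
    let mut reporting: HashMap<(u32, u32), bool> = HashMap::new();
    // Reporting (7, 3) and (3, 7) hits the same canonical key,
    // so the most recent event for a pair wins.
    reporting.insert(sort2(7, 3), true);
    reporting.insert(sort2(3, 7), false);
    assert_eq!(reporting.len(), 1);
    assert_eq!(reporting[&(3, 7)], false);
}
```

Deduplicating through a map like this is also why the axis sweep can afford to report some pairs twice, as noted in the `batch_insert` hunk below.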


@@ -1,81 +0,0 @@
use super::NEXT_FREE_SENTINEL;
use crate::geometry::ColliderHandle;
use crate::math::Point;
use parry::bounding_volume::AABB;
use std::ops::{Index, IndexMut};
#[cfg_attr(feature = "serde-serialize", derive(Serialize, Deserialize))]
#[derive(Copy, Clone, Debug)]
pub(crate) enum BroadPhaseProxyData {
Collider(ColliderHandle),
Subregion(Point<i32>),
}
impl BroadPhaseProxyData {
pub fn is_subregion(&self) -> bool {
match self {
BroadPhaseProxyData::Subregion(_) => true,
_ => false,
}
}
}
#[cfg_attr(feature = "serde-serialize", derive(Serialize, Deserialize))]
#[derive(Clone)]
pub(crate) struct BroadPhaseProxy {
pub data: BroadPhaseProxyData,
pub aabb: AABB,
pub next_free: u32,
pub layer_id: u8,
}
#[cfg_attr(feature = "serde-serialize", derive(Serialize, Deserialize))]
#[derive(Clone)]
pub(crate) struct BroadPhaseProxies {
pub elements: Vec<BroadPhaseProxy>,
pub first_free: u32,
}
impl BroadPhaseProxies {
pub fn new() -> Self {
Self {
elements: Vec::new(),
first_free: NEXT_FREE_SENTINEL,
}
}
pub fn insert(&mut self, proxy: BroadPhaseProxy) -> usize {
if self.first_free != NEXT_FREE_SENTINEL {
let proxy_id = self.first_free as usize;
self.first_free = self.elements[proxy_id].next_free;
self.elements[proxy_id] = proxy;
proxy_id
} else {
self.elements.push(proxy);
self.elements.len() - 1
}
}
pub fn remove(&mut self, proxy_id: usize) {
self.elements[proxy_id].next_free = self.first_free;
self.first_free = proxy_id as u32;
}
// FIXME: take holes into account?
pub fn get_mut(&mut self, i: usize) -> Option<&mut BroadPhaseProxy> {
self.elements.get_mut(i)
}
}
impl Index<usize> for BroadPhaseProxies {
type Output = BroadPhaseProxy;
fn index(&self, i: usize) -> &BroadPhaseProxy {
self.elements.index(i)
}
}
impl IndexMut<usize> for BroadPhaseProxies {
fn index_mut(&mut self, i: usize) -> &mut BroadPhaseProxy {
self.elements.index_mut(i)
}
}
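The `BroadPhaseProxies` container above (renamed `SAPProxies` by this commit) is a classic sentinel-terminated free-list allocator. A minimal generic sketch of the same pattern, for illustration only:

```rust
// Sketch of the sentinel-based free-list allocator used by the proxy set.
const NEXT_FREE_SENTINEL: u32 = u32::MAX;

struct FreeList<T> {
    elements: Vec<(T, u32)>, // (value, next_free link)
    first_free: u32,
}

impl<T> FreeList<T> {
    fn new() -> Self {
        Self {
            elements: Vec::new(),
            first_free: NEXT_FREE_SENTINEL,
        }
    }

    // Reuse a freed slot if one exists, otherwise grow the vector.
    fn insert(&mut self, value: T) -> usize {
        if self.first_free != NEXT_FREE_SENTINEL {
            let id = self.first_free as usize;
            self.first_free = self.elements[id].1;
            self.elements[id] = (value, NEXT_FREE_SENTINEL);
            id
        } else {
            self.elements.push((value, NEXT_FREE_SENTINEL));
            self.elements.len() - 1
        }
    }

    // Removal just threads the slot onto the free list; the slot's
    // contents stay in place until the id is reused.
    fn remove(&mut self, id: usize) {
        self.elements[id].1 = self.first_free;
        self.first_free = id as u32;
    }
}

fn main() {
    let mut list = FreeList::new();
    let a = list.insert("a");
    let b = list.insert("b");
    list.remove(a);
    // The freed slot is reused before the vector grows.
    assert_eq!(list.insert("c"), a);
    assert_eq!(b, 1);
}
```

Because ids are recycled this way, pending removals must be fully processed before new insertions happen, which is exactly the ABA concern spelled out at the top of `update`.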


@@ -1,18 +1,19 @@
pub use self::broad_phase::BroadPhase;
pub use self::broad_phase_pair_event::{BroadPhasePairEvent, ColliderPair};
pub use self::sap_proxy::SAPProxyIndex;
pub(self) use self::broad_phase_proxy::*;
pub(self) use self::sap_axis::*;
pub(self) use self::sap_endpoint::*;
pub(self) use self::sap_layer::*;
pub(self) use self::sap_proxy::*;
pub(self) use self::sap_region::*;
pub(self) use self::sap_utils::*;
mod broad_phase;
mod broad_phase_pair_event;
mod broad_phase_proxy;
mod sap_axis;
mod sap_endpoint;
mod sap_layer;
mod sap_proxy;
mod sap_region;
mod sap_utils;


@@ -1,4 +1,6 @@
use super::{BroadPhaseProxies, SAPEndpoint, NUM_SENTINELS};
use super::{SAPEndpoint, SAPProxies, NUM_SENTINELS};
use crate::geometry::broad_phase_multi_sap::DELETED_AABB_VALUE;
use crate::geometry::SAPProxyIndex;
use crate::math::Real;
use bit_vec::BitVec;
use parry::bounding_volume::BoundingVolume;
@@ -6,8 +8,8 @@ use parry::utils::hashmap::HashMap;
use std::cmp::Ordering;
#[cfg_attr(feature = "serde-serialize", derive(Serialize, Deserialize))]
#[derive(Clone)]
pub(crate) struct SAPAxis {
#[derive(Clone, Debug)]
pub struct SAPAxis {
pub min_bound: Real,
pub max_bound: Real,
pub endpoints: Vec<SAPEndpoint>,
@@ -30,8 +32,8 @@ impl SAPAxis {
pub fn batch_insert(
&mut self,
dim: usize,
new_proxies: &[usize],
proxies: &BroadPhaseProxies,
new_proxies: &[SAPProxyIndex],
proxies: &SAPProxies,
reporting: Option<&mut HashMap<(u32, u32), bool>>,
) {
if new_proxies.is_empty() {
@@ -86,7 +88,7 @@ impl SAPAxis {
let endpoints_wo_last_sentinel = &self.endpoints[..self.endpoints.len() - 1];
if let Some(reporting) = reporting {
for (endpoint, endpoint_id) in self.new_endpoints.drain(..).filter(|e| e.0.is_start()) {
let proxy1 = &proxies[endpoint.proxy() as usize];
let proxy1 = &proxies[endpoint.proxy()];
let min = endpoint.value;
let max = proxy1.aabb.maxs[dim];
@@ -95,7 +97,7 @@ impl SAPAxis {
continue;
}
let proxy2 = &proxies[endpoint2.proxy() as usize];
let proxy2 = &proxies[endpoint2.proxy()];
// NOTE: some pairs with equal aabb.mins[dim] may end up being reported twice.
if (endpoint2.is_start() && endpoint2.value < max)
@@ -147,16 +149,38 @@ impl SAPAxis {
.retain(|endpt| endpt.is_sentinel() || existing_proxies[endpt.proxy() as usize])
}
pub fn delete_deleted_proxies_and_endpoints_after_subregion_removal(
&mut self,
proxies: &SAPProxies,
existing_proxies: &mut BitVec,
) -> usize {
let mut num_deleted = 0;
self.endpoints.retain(|endpt| {
if !endpt.is_sentinel() && proxies[endpt.proxy()].aabb.mins.x == DELETED_AABB_VALUE {
if existing_proxies.get(endpt.proxy() as usize) == Some(true) {
existing_proxies.set(endpt.proxy() as usize, false);
num_deleted += 1;
}
false
} else {
true
}
});
num_deleted
}
pub fn update_endpoints(
&mut self,
dim: usize,
proxies: &BroadPhaseProxies,
proxies: &SAPProxies,
reporting: &mut HashMap<(u32, u32), bool>,
) {
let last_endpoint = self.endpoints.len() - NUM_SENTINELS;
for i in NUM_SENTINELS..last_endpoint {
let mut endpoint_i = self.endpoints[i];
let aabb_i = proxies[endpoint_i.proxy() as usize].aabb;
let aabb_i = proxies[endpoint_i.proxy()].aabb;
if endpoint_i.is_start() {
endpoint_i.value = aabb_i.mins[dim];
@@ -173,7 +197,7 @@ impl SAPAxis {
if endpoint_j.is_end() {
// Report start collision.
if aabb_i.intersects(&proxies[endpoint_j.proxy() as usize].aabb) {
if aabb_i.intersects(&proxies[endpoint_j.proxy()].aabb) {
let pair = super::sort2(endpoint_i.proxy(), endpoint_j.proxy());
reporting.insert(pair, true);
}
@@ -188,7 +212,7 @@ impl SAPAxis {
if endpoint_j.is_start() {
// Report end collision.
if !aabb_i.intersects(&proxies[endpoint_j.proxy() as usize].aabb) {
if !aabb_i.intersects(&proxies[endpoint_j.proxy()].aabb) {
let pair = super::sort2(endpoint_i.proxy(), endpoint_j.proxy());
reporting.insert(pair, false);
}
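The incremental endpoint updates above are easier to follow against the non-incremental version of the same idea: with endpoints sorted along one axis, overlapping intervals can be found in a single sweep. A hypothetical 1D sketch (not the actual `SAPAxis` code, which maintains the sorted order incrementally with local swaps instead of re-sorting):

```rust
// Hypothetical 1D sketch of the sweep principle behind `SAPAxis`.
// Each endpoint is (value, is_start, interval id).
fn sweep_pairs(intervals: &[(f32, f32)]) -> Vec<(usize, usize)> {
    let mut endpoints: Vec<(f32, bool, usize)> = Vec::new();
    for (id, &(min, max)) in intervals.iter().enumerate() {
        endpoints.push((min, true, id));
        endpoints.push((max, false, id));
    }
    endpoints.sort_by(|a, b| a.0.partial_cmp(&b.0).unwrap());

    let mut active: Vec<usize> = Vec::new();
    let mut pairs = Vec::new();
    for &(_, is_start, id) in &endpoints {
        if is_start {
            // Every interval currently "open" overlaps the new one.
            for &other in &active {
                pairs.push((other.min(id), other.max(id)));
            }
            active.push(id);
        } else {
            active.retain(|&other| other != id);
        }
    }
    pairs
}

fn main() {
    // [0,2] overlaps [1,3]; [5,6] is far away and never enters a pair:
    // its endpoints never need to be swapped past the others'.
    let pairs = sweep_pairs(&[(0.0, 2.0), (1.0, 3.0), (5.0, 6.0)]);
    assert_eq!(pairs, vec![(0, 1)]);
}
```

The incremental variant in `update_endpoints` exploits the fact that, between frames, each endpoint only moves a few positions in the sorted order, turning this full sweep into a handful of local swaps per moving proxy.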


@@ -3,7 +3,7 @@ use crate::math::Real;
#[derive(Copy, Clone, Debug, PartialEq)]
#[cfg_attr(feature = "serde-serialize", derive(Serialize, Deserialize))]
pub(crate) struct SAPEndpoint {
pub struct SAPEndpoint {
pub value: Real,
pub packed_flag_proxy: u32,
}


@@ -1,6 +1,6 @@
use super::{BroadPhaseProxies, SAPRegion};
use crate::geometry::broad_phase_multi_sap::BroadPhaseProxy;
use crate::geometry::{Collider, AABB};
use super::{SAPProxies, SAPProxy, SAPRegion, SAPRegionPool};
use crate::geometry::broad_phase_multi_sap::DELETED_AABB_VALUE;
use crate::geometry::{Collider, SAPProxyIndex, AABB};
use crate::math::{Point, Real};
use parry::utils::hashmap::{Entry, HashMap};
@@ -8,49 +8,150 @@ use parry::utils::hashmap::{Entry, HashMap};
#[derive(Clone)]
pub(crate) struct SAPLayer {
pub depth: i8,
pub layer_id: u8,
pub smaller_layer: Option<u8>,
pub larger_layer: Option<u8>,
region_width: Real,
regions: HashMap<Point<i32>, SAPRegion>,
deleted_any: bool,
pub regions: HashMap<Point<i32>, SAPProxyIndex>,
#[cfg_attr(feature = "serde-serialize", serde(skip))]
regions_to_remove: Vec<Point<i32>>, // Workspace
regions_to_potentially_remove: Vec<Point<i32>>, // Workspace
#[cfg_attr(feature = "serde-serialize", serde(skip))]
pub created_regions: Vec<Point<i32>>,
pub created_regions: Vec<SAPProxyIndex>,
}
impl SAPLayer {
pub fn new(depth: i8, smaller_layer: Option<u8>, bigger_layer: Option<u8>) -> Self {
pub fn new(
depth: i8,
layer_id: u8,
smaller_layer: Option<u8>,
larger_layer: Option<u8>,
) -> Self {
Self {
depth,
smaller_layer,
larger_layer,
layer_id,
region_width: super::region_width(depth),
regions: HashMap::default(),
deleted_any: false,
regions_to_remove: vec![],
regions_to_potentially_remove: vec![],
created_regions: vec![],
}
}
pub fn delete_all_region_endpoints(&mut self) {
for region in &mut self.regions {
region.0.delete_all_region_endpoints();
pub fn delete_all_region_endpoints(&mut self, proxies: &mut SAPProxies) {
for region_id in self.regions.values() {
if let Some(mut region) = proxies[*region_id].data.take_region() {
region.delete_all_region_endpoints(proxies);
proxies[*region_id].data.set_region(region);
}
}
}
pub fn preupdate_proxy_in_region(&mut self, proxy: u32, region: &Point<i32>) {
self.regions
.get_mut(&region)
.unwrap()
.preupdate_proxy(proxy as usize);
pub fn propagate_created_regions(
&mut self,
larger_layer: &mut Self,
proxies: &mut SAPProxies,
pool: &mut SAPRegionPool,
) {
for proxy_id in self.created_regions.drain(..) {
larger_layer.register_subregion(proxy_id, proxies, pool)
}
}
pub fn propagate_existing_regions(
&mut self,
larger_layer: &mut Self,
proxies: &mut SAPProxies,
pool: &mut SAPRegionPool,
) {
for proxy_id in self.regions.values() {
larger_layer.register_subregion(*proxy_id, proxies, pool)
}
}
// Preupdates the proxy of a subregion.
fn register_subregion(
&mut self,
proxy_id: SAPProxyIndex,
proxies: &mut SAPProxies,
pool: &mut SAPRegionPool,
) {
if let Some(proxy) = proxies.get(proxy_id) {
let region_key = super::point_key(proxy.aabb.center(), self.region_width);
let region_id = self.ensure_region_exists(region_key, proxies, pool);
let region = proxies[region_id].data.as_region_mut();
let id_in_parent_subregion = region.register_subregion(proxy_id);
proxies[proxy_id]
.data
.as_region_mut()
.id_in_parent_subregion = id_in_parent_subregion as u32;
}
}
fn unregister_subregion(
&mut self,
proxy_id: SAPProxyIndex,
proxy_region: &SAPRegion,
proxies: &mut SAPProxies,
) {
if let Some(proxy) = proxies.get(proxy_id) {
let id_in_parent_subregion = proxy_region.id_in_parent_subregion;
let region_key = super::point_key(proxy.aabb.center(), self.region_width);
if let Some(region_id) = self.regions.get(&region_key) {
let proxy = &mut proxies[*region_id];
let region = proxy.data.as_region_mut();
if !region.needs_update_after_subregion_removal {
self.regions_to_potentially_remove.push(region_key);
region.needs_update_after_subregion_removal = true;
}
region
.subregions
.swap_remove(id_in_parent_subregion as usize); // Remove the subregion index from the subregion list.
// Re-adjust the id_in_parent_subregion of the subregion that was swapped in place
// of the deleted one.
if let Some(subregion_to_update) = region
.subregions
.get(id_in_parent_subregion as usize)
.copied()
{
proxies[subregion_to_update]
.data
.as_region_mut()
.id_in_parent_subregion = id_in_parent_subregion;
}
}
}
}
pub fn ensure_region_exists(
&mut self,
region_key: Point<i32>,
proxies: &mut SAPProxies,
pool: &mut SAPRegionPool,
) -> SAPProxyIndex {
match self.regions.entry(region_key) {
Entry::Occupied(occupied) => *occupied.get(),
Entry::Vacant(vacant) => {
let region_bounds = super::region_aabb(region_key, self.region_width);
let region = SAPRegion::recycle_or_new(region_bounds, pool);
let region_proxy = SAPProxy::subregion(region, region_bounds, self.layer_id);
let region_proxy_id = proxies.insert(region_proxy);
self.created_regions.push(region_proxy_id as u32);
let _ = vacant.insert(region_proxy_id);
region_proxy_id
}
}
}
pub fn preupdate_collider(
&mut self,
collider: &Collider,
aabb: &AABB,
pool: &mut Vec<SAPRegion>,
proxies: &mut SAPProxies,
pool: &mut SAPRegionPool,
) {
let proxy_id = collider.proxy_index;
let start = super::point_key(aabb.mins, self.region_width);
@@ -69,26 +170,23 @@ impl SAPLayer {
let region_key = Point::new(i, j);
#[cfg(feature = "dim3")]
let region_key = Point::new(i, j, _k);
let region = match self.regions.entry(region_key) {
Entry::Occupied(occupied) => occupied.into_mut(),
Entry::Vacant(vacant) => {
self.created_regions.push(region_key);
let region_bounds = super::region_aabb(region_key, self.region_width);
vacant.insert(SAPRegion::recycle_or_new(region_bounds, pool))
}
};
let _ = region.preupdate_proxy(proxy_id);
let region_id = self.ensure_region_exists(region_key, proxies, pool);
let region = proxies[region_id].data.as_region_mut();
region.preupdate_proxy(proxy_id);
}
}
}
}
pub fn remove_proxy(&mut self, proxy: &BroadPhaseProxy, proxy_index: usize) {
pub fn predelete_proxy(&mut self, proxies: &mut SAPProxies, proxy_index: SAPProxyIndex) {
// Discretize the AABB to find the regions that need to be invalidated.
let start = super::point_key(proxy.aabb.mins, self.region_width);
let end = super::point_key(proxy.aabb.maxs, self.region_width);
let proxy_aabb = &mut proxies[proxy_index].aabb;
let start = super::point_key(proxy_aabb.mins, self.region_width);
let end = super::point_key(proxy_aabb.maxs, self.region_width);
// Set the AABB of the proxy to a very large value.
proxy_aabb.mins.coords.fill(DELETED_AABB_VALUE);
proxy_aabb.maxs.coords.fill(DELETED_AABB_VALUE);
#[cfg(feature = "dim2")]
let k_range = 0..1;
@@ -102,9 +200,9 @@ impl SAPLayer {
let key = Point::new(i, j);
#[cfg(feature = "dim3")]
let key = Point::new(i, j, _k);
if let Some(region) = self.regions.get_mut(&key) {
if let Some(region_id) = self.regions.get(&key) {
let region = proxies[*region_id].data.as_region_mut();
region.predelete_proxy(proxy_index);
self.deleted_any = true;
}
}
}
@@ -113,36 +211,80 @@ impl SAPLayer {
pub fn update_regions(
&mut self,
proxies: &BroadPhaseProxies,
proxies: &mut SAPProxies,
reporting: &mut HashMap<(u32, u32), bool>,
pool: &mut Vec<SAPRegion>,
) {
for (point, region) in &mut self.regions {
region.update(proxies, reporting);
// println!(
// "Num regions at depth {}: {}",
// self.depth,
// self.regions.len(),
// );
for (point, region_id) in &self.regions {
if let Some(mut region) = proxies[*region_id].data.take_region() {
// Update the region.
region.update(proxies, reporting);
if region.proxy_count == 0 {
self.regions_to_remove.push(*point);
// Mark all subregions as to-be-updated.
for subregion_id in &region.subregions {
proxies[*subregion_id].data.as_region_mut().mark_as_dirty();
}
if region.proxy_count == 0 {
self.regions_to_potentially_remove.push(*point);
}
proxies[*region_id].data.set_region(region);
}
}
// Remove all the empty regions and store them in the region pool
let regions = &mut self.regions;
pool.extend(
self.regions_to_remove
.drain(..)
.map(|p| regions.remove(&p).unwrap()),
);
}
/// Complete the removals of empty regions on this layer.
///
/// This method must be called in a bottom-up fashion, i.e.,
/// starting from the smallest layer up to the largest layer.
/// This ordering is needed to properly propagate the removal
/// of a region (which is a subregion registered in the larger
/// layer).
pub fn complete_removals(
&mut self,
proxies: &BroadPhaseProxies,
reporting: &mut HashMap<(u32, u32), bool>,
pool: &mut Vec<SAPRegion>,
mut larger_layer: Option<&mut Self>,
proxies: &mut SAPProxies,
pool: &mut SAPRegionPool,
) {
if self.deleted_any {
self.update_regions(proxies, reporting, pool);
self.deleted_any = false;
// Delete all the empty regions and store them in the region pool.
for region_to_remove in self.regions_to_potentially_remove.drain(..) {
if let Entry::Occupied(region_id) = self.regions.entry(region_to_remove) {
if let Some(proxy) = proxies.get_mut(*region_id.get()) {
// First, we need to remove the endpoints of the deleted subregions.
let mut region = proxy.data.take_region().unwrap();
region.update_after_subregion_removal(proxies);
// Check if we actually can delete this region.
if region.proxy_count == 0 {
let region_id = region_id.remove();
// We can delete this region. So we need to tell the larger
// layer that one of its subregions is being deleted.
// The next call to `complete_removals` on the larger layer
// will take care of updating its own regions by taking
// this deleted subregion into account.
if let Some(larger_layer) = &mut larger_layer {
larger_layer.unregister_subregion(region_id, &region, proxies);
}
// Move the proxy to infinity.
let proxy = &mut proxies[region_id];
proxy.aabb.mins.coords.fill(DELETED_AABB_VALUE);
proxy.aabb.maxs.coords.fill(DELETED_AABB_VALUE);
// Mark the proxy as deleted.
proxies.remove(region_id);
pool.push(region);
} else {
proxies[*region_id.get()].data.set_region(region);
}
}
}
}
}
}
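The bottom-up ordering required by `complete_removals` can be illustrated with a toy model (assumed names and shapes, not Rapier's actual API): each layer maps a region key to a count of live proxies, and removing an empty region decrements the count of its parent region in the next (larger) layer. Because layers are processed from smallest to largest, a removal cascades upward in a single pass:

```rust
use std::collections::HashMap;

// Toy model of the bottom-up removal cascade: `layers[i]` maps region key ->
// proxy count, and `parent_of[(i, key)]` names the parent region in layer
// `i + 1`. Returns the total number of regions removed.
fn complete_removals(
    layers: &mut Vec<HashMap<u32, u32>>,
    parent_of: &HashMap<(usize, u32), u32>,
) -> usize {
    let mut removed = 0;
    for i in 0..layers.len() {
        // Collect the empty regions of this layer first.
        let empty: Vec<u32> = layers[i]
            .iter()
            .filter(|(_, &count)| count == 0)
            .map(|(&k, _)| k)
            .collect();
        for key in empty {
            layers[i].remove(&key);
            removed += 1;
            // Propagate the removal to the larger layer, which is processed
            // later in this same loop: its region may become empty too.
            if i + 1 < layers.len() {
                if let Some(&parent) = parent_of.get(&(i, key)) {
                    if let Some(count) = layers[i + 1].get_mut(&parent) {
                        *count -= 1;
                    }
                }
            }
        }
    }
    removed
}
```

If the loop ran largest-to-smallest instead, a parent region emptied by a subregion removal would only be collected on the *next* frame, which is exactly the lag this ordering avoids.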


@@ -0,0 +1,133 @@
use super::NEXT_FREE_SENTINEL;
use crate::geometry::broad_phase_multi_sap::SAPRegion;
use crate::geometry::ColliderHandle;
use parry::bounding_volume::AABB;
use std::ops::{Index, IndexMut};
pub type SAPProxyIndex = u32;
#[cfg_attr(feature = "serde-serialize", derive(Serialize, Deserialize))]
#[derive(Clone)]
pub enum SAPProxyData {
Collider(ColliderHandle),
Region(Option<Box<SAPRegion>>),
}
impl SAPProxyData {
pub fn is_subregion(&self) -> bool {
match self {
SAPProxyData::Region(_) => true,
_ => false,
}
}
// pub fn as_region(&self) -> &SAPRegion {
// match self {
// SAPProxyData::Region(r) => r.as_ref().unwrap(),
// _ => panic!("Invalid proxy type."),
// }
// }
pub fn as_region_mut(&mut self) -> &mut SAPRegion {
match self {
SAPProxyData::Region(r) => r.as_mut().unwrap(),
_ => panic!("Invalid proxy type."),
}
}
pub fn take_region(&mut self) -> Option<Box<SAPRegion>> {
match self {
SAPProxyData::Region(r) => r.take(),
_ => None,
}
}
pub fn set_region(&mut self, region: Box<SAPRegion>) {
*self = SAPProxyData::Region(Some(region));
}
}
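The `take_region`/`set_region` pair exists to sidestep the borrow checker: a region must be mutated while the rest of the proxy array is still readable (e.g., during `update`), so the region is moved out of its slot, updated, and put back. A simplified sketch of that pattern (hypothetical stand-in types, with `Box<Vec<u32>>` playing the role of `Box<SAPRegion>`):

```rust
// Simplified take/mutate/put-back pattern: moving the region out of its slot
// ends the mutable borrow of that slot, so the whole array stays readable.
enum ProxyData {
    Collider(u32),
    Region(Option<Box<Vec<u32>>>), // stand-in for Box<SAPRegion>
}

impl ProxyData {
    fn take_region(&mut self) -> Option<Box<Vec<u32>>> {
        match self {
            ProxyData::Region(r) => r.take(),
            _ => None,
        }
    }

    fn set_region(&mut self, region: Box<Vec<u32>>) {
        *self = ProxyData::Region(Some(region));
    }
}

fn update_first_region(proxies: &mut Vec<ProxyData>) {
    // Detach the region from slot 0 so `proxies` can be read freely below.
    if let Some(mut region) = proxies[0].take_region() {
        // Mutate the detached region while reading the proxy array.
        region.push(proxies.len() as u32);
        // Put the region back into its slot.
        proxies[0].set_region(region);
    }
}
```

The `Option` inside the `Region` variant is what makes the move cheap: `take` swaps in `None` without reallocating.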
#[cfg_attr(feature = "serde-serialize", derive(Serialize, Deserialize))]
#[derive(Clone)]
pub struct SAPProxy {
pub data: SAPProxyData,
pub aabb: AABB,
pub next_free: SAPProxyIndex,
pub layer_id: u8,
}
impl SAPProxy {
pub fn collider(handle: ColliderHandle, aabb: AABB, layer_id: u8) -> Self {
Self {
data: SAPProxyData::Collider(handle),
aabb,
next_free: NEXT_FREE_SENTINEL,
layer_id,
}
}
pub fn subregion(subregion: Box<SAPRegion>, aabb: AABB, layer_id: u8) -> Self {
Self {
data: SAPProxyData::Region(Some(subregion)),
aabb,
next_free: NEXT_FREE_SENTINEL,
layer_id,
}
}
}
#[cfg_attr(feature = "serde-serialize", derive(Serialize, Deserialize))]
#[derive(Clone)]
pub struct SAPProxies {
pub elements: Vec<SAPProxy>,
pub first_free: SAPProxyIndex,
}
impl SAPProxies {
pub fn new() -> Self {
Self {
elements: Vec::new(),
first_free: NEXT_FREE_SENTINEL,
}
}
pub fn insert(&mut self, proxy: SAPProxy) -> SAPProxyIndex {
if self.first_free != NEXT_FREE_SENTINEL {
let proxy_id = self.first_free;
self.first_free = self.elements[proxy_id as usize].next_free;
self.elements[proxy_id as usize] = proxy;
proxy_id
} else {
self.elements.push(proxy);
self.elements.len() as u32 - 1
}
}
pub fn remove(&mut self, proxy_id: SAPProxyIndex) {
let proxy = &mut self.elements[proxy_id as usize];
proxy.next_free = self.first_free;
self.first_free = proxy_id as u32;
}
// NOTE: this does not take free-list holes into account.
pub fn get_mut(&mut self, i: SAPProxyIndex) -> Option<&mut SAPProxy> {
self.elements.get_mut(i as usize)
}
// NOTE: this does not take free-list holes into account.
pub fn get(&self, i: SAPProxyIndex) -> Option<&SAPProxy> {
self.elements.get(i as usize)
}
}
impl Index<SAPProxyIndex> for SAPProxies {
type Output = SAPProxy;
fn index(&self, i: SAPProxyIndex) -> &SAPProxy {
self.elements.index(i as usize)
}
}
impl IndexMut<SAPProxyIndex> for SAPProxies {
fn index_mut(&mut self, i: SAPProxyIndex) -> &mut SAPProxy {
self.elements.index_mut(i as usize)
}
}
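`SAPProxies` recycles slots through an intrusive free list: `remove` never shrinks the vector, it just chains the freed index through `next_free`, and the next `insert` pops that chain before growing. A self-contained sketch of the same mechanism (simplified payload, same sentinel convention):

```rust
// Stand-alone sketch of the intrusive free list used by `SAPProxies`:
// removed slots are chained through `next_free` and reused by `insert`.
const NEXT_FREE_SENTINEL: u32 = u32::MAX;

struct Slot {
    value: &'static str,
    next_free: u32,
}

struct Slots {
    elements: Vec<Slot>,
    first_free: u32,
}

impl Slots {
    fn new() -> Self {
        Self { elements: Vec::new(), first_free: NEXT_FREE_SENTINEL }
    }

    fn insert(&mut self, value: &'static str) -> u32 {
        if self.first_free != NEXT_FREE_SENTINEL {
            // Pop a hole off the free list and reuse its slot.
            let id = self.first_free;
            self.first_free = self.elements[id as usize].next_free;
            self.elements[id as usize] = Slot { value, next_free: NEXT_FREE_SENTINEL };
            id
        } else {
            self.elements.push(Slot { value, next_free: NEXT_FREE_SENTINEL });
            self.elements.len() as u32 - 1
        }
    }

    fn remove(&mut self, id: u32) {
        // Chain the freed slot into the free list. The payload stays in
        // place, which is why lookups can't trust the contents of holes.
        self.elements[id as usize].next_free = self.first_free;
        self.first_free = id;
    }
}
```

This keeps proxy indices stable for the regions that reference them, at the cost of `get`/`get_mut` happily returning stale data for a hole, hence the notes above.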


@@ -1,17 +1,23 @@
use super::{BroadPhaseProxies, SAPAxis};
use super::{SAPAxis, SAPProxies};
use crate::geometry::SAPProxyIndex;
use crate::math::DIM;
use bit_vec::BitVec;
use parry::bounding_volume::AABB;
use parry::utils::hashmap::HashMap;
pub type SAPRegionPool = Vec<Box<SAPRegion>>;
#[cfg_attr(feature = "serde-serialize", derive(Serialize, Deserialize))]
#[derive(Clone)]
pub(crate) struct SAPRegion {
pub struct SAPRegion {
pub axes: [SAPAxis; DIM],
pub existing_proxies: BitVec,
#[cfg_attr(feature = "serde-serialize", serde(skip))]
pub to_insert: Vec<usize>, // Workspace
pub to_insert: Vec<SAPProxyIndex>, // Workspace
pub subregions: Vec<SAPProxyIndex>,
pub id_in_parent_subregion: u32,
pub update_count: u8,
pub needs_update_after_subregion_removal: bool,
pub proxy_count: usize,
}
@@ -27,12 +33,15 @@ impl SAPRegion {
axes,
existing_proxies: BitVec::new(),
to_insert: Vec::new(),
subregions: Vec::new(),
id_in_parent_subregion: crate::INVALID_U32,
update_count: 0,
needs_update_after_subregion_removal: false,
proxy_count: 0,
}
}
pub fn recycle(bounds: AABB, mut old: Self) -> Self {
pub fn recycle(bounds: AABB, mut old: Box<Self>) -> Box<Self> {
// Correct the bounds
for (axis, &bound) in old.axes.iter_mut().zip(bounds.mins.iter()) {
axis.min_bound = bound;
@@ -52,19 +61,20 @@ impl SAPRegion {
old
}
pub fn recycle_or_new(bounds: AABB, pool: &mut Vec<Self>) -> Self {
pub fn recycle_or_new(bounds: AABB, pool: &mut Vec<Box<Self>>) -> Box<Self> {
if let Some(old) = pool.pop() {
Self::recycle(bounds, old)
} else {
Self::new(bounds)
Box::new(Self::new(bounds))
}
}
pub fn delete_all_region_endpoints(&mut self, proxies: &BroadPhaseProxies) {
pub fn delete_all_region_endpoints(&mut self, proxies: &SAPProxies) {
for axis in &mut self.axes {
let existing_proxies = &mut self.existing_proxies;
axis.endpoints.retain(|e| {
if let Some(proxy) = proxies.get(e.proxy() as usize) {
self.existing_proxies.set(e.proxy() as usize, false);
if let Some(proxy) = proxies.get(e.proxy()) {
existing_proxies.set(e.proxy() as usize, false);
!proxy.data.is_subregion()
} else {
true
@@ -73,22 +83,34 @@ impl SAPRegion {
}
}
pub fn predelete_proxy(&mut self, _proxy_id: usize) {
pub fn predelete_proxy(&mut self, _proxy_id: SAPProxyIndex) {
// We keep the proxy_id as an argument for uniformity with the "preupdate"
// method. However, we don't actually need it because the deletion will be
// handled transparently during the next update.
self.update_count = 1;
self.update_count = self.update_count.max(1);
}
pub fn preupdate_proxy(&mut self, proxy_id: usize) -> bool {
pub fn mark_as_dirty(&mut self) {
self.update_count = self.update_count.max(1);
}
pub fn register_subregion(&mut self, proxy_id: SAPProxyIndex) -> usize {
let subregion_index = self.subregions.len();
self.subregions.push(proxy_id);
self.preupdate_proxy(proxy_id);
subregion_index
}
pub fn preupdate_proxy(&mut self, proxy_id: SAPProxyIndex) -> bool {
let mask_len = self.existing_proxies.len();
if proxy_id >= mask_len {
self.existing_proxies.grow(proxy_id + 1 - mask_len, false);
if proxy_id as usize >= mask_len {
self.existing_proxies
.grow(proxy_id as usize + 1 - mask_len, false);
}
if !self.existing_proxies[proxy_id] {
if !self.existing_proxies[proxy_id as usize] {
self.to_insert.push(proxy_id);
self.existing_proxies.set(proxy_id, true);
self.existing_proxies.set(proxy_id as usize, true);
self.proxy_count += 1;
false
} else {
@@ -100,11 +122,20 @@ impl SAPRegion {
}
}
pub fn update(
&mut self,
proxies: &BroadPhaseProxies,
reporting: &mut HashMap<(u32, u32), bool>,
) {
pub fn update_after_subregion_removal(&mut self, proxies: &SAPProxies) {
if self.needs_update_after_subregion_removal {
for axis in &mut self.axes {
self.proxy_count -= axis
.delete_deleted_proxies_and_endpoints_after_subregion_removal(
proxies,
&mut self.existing_proxies,
);
}
self.needs_update_after_subregion_removal = false;
}
}
pub fn update(&mut self, proxies: &SAPProxies, reporting: &mut HashMap<(u32, u32), bool>) {
if self.update_count > 0 {
// Update endpoints.
let mut deleted = 0;


@@ -4,6 +4,7 @@ use parry::bounding_volume::AABB;
pub(crate) const NUM_SENTINELS: usize = 1;
pub(crate) const NEXT_FREE_SENTINEL: u32 = u32::MAX;
pub(crate) const SENTINEL_VALUE: Real = Real::MAX;
pub(crate) const DELETED_AABB_VALUE: Real = SENTINEL_VALUE / 2.0;
pub(crate) const REGION_WIDTH_BASE: Real = 20.0;
pub(crate) const REGION_WIDTH_POWER_BASIS: Real = 8.0;
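One plausible reading of `DELETED_AABB_VALUE = SENTINEL_VALUE / 2.0` (an assumption, not stated in the source): a deleted proxy's AABB must land far away from every live AABB so it stops generating pairs, yet its endpoints must still sort strictly before the sentinel endpoints at `Real::MAX`. A tiny check of that ordering:

```rust
// Sketch of the ordering constraint assumed for deleted proxies: any live
// endpoint < DELETED_AABB_VALUE < SENTINEL_VALUE, and the deleted value
// stays finite so arithmetic on it cannot overflow to infinity.
const SENTINEL_VALUE: f32 = f32::MAX;
const DELETED_AABB_VALUE: f32 = SENTINEL_VALUE / 2.0;

fn sorts_between(live_endpoint: f32) -> bool {
    live_endpoint < DELETED_AABB_VALUE && DELETED_AABB_VALUE < SENTINEL_VALUE
}
```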


@@ -1,5 +1,5 @@
use crate::dynamics::{CoefficientCombineRule, MassProperties, RigidBodyHandle};
use crate::geometry::{InteractionGroups, SharedShape, SolverFlags};
use crate::geometry::{InteractionGroups, SAPProxyIndex, SharedShape, SolverFlags};
use crate::math::{AngVector, Isometry, Point, Real, Rotation, Vector, DIM};
use crate::parry::transformation::vhacd::VHACDParameters;
use parry::bounding_volume::AABB;
@@ -69,7 +69,7 @@ pub struct Collider {
pub restitution: Real,
pub(crate) collision_groups: InteractionGroups,
pub(crate) solver_groups: InteractionGroups,
pub(crate) proxy_index: usize,
pub(crate) proxy_index: SAPProxyIndex,
/// User-defined data associated to this rigid-body.
pub user_data: u128,
}
@@ -77,7 +77,7 @@ pub struct Collider {
impl Collider {
pub(crate) fn reset_internal_references(&mut self) {
self.parent = RigidBodyHandle::invalid();
self.proxy_index = crate::INVALID_USIZE;
self.proxy_index = crate::INVALID_U32;
}
/// The rigid body this collider is attached to.
@@ -597,7 +597,7 @@ impl ColliderBuilder {
parent: RigidBodyHandle::invalid(),
position: Isometry::identity(),
predicted_position: Isometry::identity(),
proxy_index: crate::INVALID_USIZE,
proxy_index: crate::INVALID_U32,
collision_groups: self.collision_groups,
solver_groups: self.solver_groups,
user_data: self.user_data,


@@ -1,7 +1,7 @@
use crate::data::arena::Arena;
use crate::data::pubsub::PubSub;
use crate::dynamics::{RigidBodyHandle, RigidBodySet};
use crate::geometry::Collider;
use crate::geometry::{Collider, SAPProxyIndex};
use parry::partitioning::IndexedData;
use std::ops::{Index, IndexMut};
@@ -45,7 +45,7 @@ impl IndexedData for ColliderHandle {
#[cfg_attr(feature = "serde-serialize", derive(Serialize, Deserialize))]
pub(crate) struct RemovedCollider {
pub handle: ColliderHandle,
pub(crate) proxy_index: usize,
pub(crate) proxy_index: SAPProxyIndex,
}
#[cfg_attr(feature = "serde-serialize", derive(Serialize, Deserialize))]


@@ -84,7 +84,7 @@ impl IntersectionEvent {
}
}
pub(crate) use self::broad_phase_multi_sap::{BroadPhasePairEvent, ColliderPair};
pub(crate) use self::broad_phase_multi_sap::{BroadPhasePairEvent, ColliderPair, SAPProxyIndex};
pub(crate) use self::collider_set::RemovedCollider;
pub(crate) use self::narrow_phase::ContactManifoldIndex;
pub(crate) use parry::partitioning::SimdQuadTree;


@@ -47,10 +47,13 @@ impl CollisionPipeline {
bodies.maintain(colliders);
self.broadphase_collider_pairs.clear();
broad_phase.update_aabbs(prediction_distance, bodies, colliders);
self.broad_phase_events.clear();
broad_phase.find_pairs(&mut self.broad_phase_events);
broad_phase.update(
prediction_distance,
bodies,
colliders,
&mut self.broad_phase_events,
);
narrow_phase.register_pairs(colliders, bodies, &self.broad_phase_events, events);


@@ -73,7 +73,6 @@ impl PhysicsPipeline {
) {
self.counters.step_started();
bodies.maintain(colliders);
broad_phase.maintain(colliders);
narrow_phase.maintain(colliders, bodies);
// Update kinematic bodies velocities.
@@ -87,19 +86,15 @@ impl PhysicsPipeline {
self.counters.stages.collision_detection_time.start();
self.counters.cd.broad_phase_time.start();
self.broad_phase_events.clear();
self.broadphase_collider_pairs.clear();
// let t = instant::now();
broad_phase.update_aabbs(
broad_phase.update(
integration_parameters.prediction_distance,
bodies,
colliders,
&mut self.broad_phase_events,
);
// println!("Update AABBs time: {}", instant::now() - t);
// let t = instant::now();
self.broad_phase_events.clear();
broad_phase.find_pairs(&mut self.broad_phase_events);
// println!("Find pairs time: {}", instant::now() - t);
narrow_phase.register_pairs(colliders, bodies, &self.broad_phase_events, events);
self.counters.cd.broad_phase_time.pause();


@@ -3,6 +3,7 @@
use na::{Matrix3, Point2, Point3, Scalar, SimdRealField, Vector2, Vector3};
use num::Zero;
use simba::simd::SimdValue;
use std::ops::IndexMut;
use parry::utils::SdpMatrix3;
use {
@@ -676,3 +677,34 @@ pub(crate) fn select_other<T: PartialEq>(pair: (T, T), elt: T) -> T {
pair.0
}
}
/// Methods for simultaneously indexing a container with two distinct indices.
pub trait IndexMut2<I>: IndexMut<I> {
/// Gets mutable references to two distinct elements of the container.
///
/// Panics if `i == j`.
fn index_mut2(&mut self, i: usize, j: usize) -> (&mut Self::Output, &mut Self::Output);
/// Gets a mutable reference to one element, and immutable reference to a second one.
///
/// Panics if `i == j`.
#[inline]
fn index_mut_const(&mut self, i: usize, j: usize) -> (&mut Self::Output, &Self::Output) {
let (a, b) = self.index_mut2(i, j);
(a, &*b)
}
}
impl<T> IndexMut2<usize> for Vec<T> {
#[inline]
fn index_mut2(&mut self, i: usize, j: usize) -> (&mut T, &mut T) {
assert!(i != j, "Unable to index the same element twice.");
assert!(i < self.len() && j < self.len(), "Index out of bounds.");
unsafe {
let a = &mut *(self.get_unchecked_mut(i) as *mut _);
let b = &mut *(self.get_unchecked_mut(j) as *mut _);
(a, b)
}
}
}
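The unsafe pointer dance in `index_mut2` above can also be expressed safely with `split_at_mut`, which upholds the same contract (panics on `i == j` or out-of-bounds). A free-standing sketch for comparison, not the trait implementation itself:

```rust
// Safe equivalent of `index_mut2`'s contract: split the slice at the larger
// index so the two mutable borrows come from disjoint halves.
fn index_mut2<T>(v: &mut [T], i: usize, j: usize) -> (&mut T, &mut T) {
    assert!(i != j, "Unable to index the same element twice.");
    let (lo, hi) = (i.min(j), i.max(j));
    // `left` covers [0, hi) and `right` covers [hi, len): disjoint borrows.
    let (left, right) = v.split_at_mut(hi);
    let (first, second) = (&mut left[lo], &mut right[0]);
    // Return the references in the order the caller asked for.
    if i < j { (first, second) } else { (second, first) }
}
```

The unsafe version in the trait impl avoids the branch and the split bookkeeping; the safe version shows why the aliasing is sound: the two elements always live in disjoint halves of the slice.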