Abstract
We introduce PhysXNet, a learning-based approach to predict the dynamics of
deformable clothes given 3D skeleton motion sequences of humans wearing these
clothes. The proposed model is adaptable to a large variety of garments and
changing topologies, without need of being retrained. Such simulations are
typically carried out by physics engines that require manual human expertise
and are subject to computationally intensive computations. PhysXNet, by
contrast, is a fully differentiable deep network that at inference is able to
estimate the geometry of dense cloth meshes in a matter of milliseconds, and
thus, can be readily deployed as a layer of a larger deep learning
architecture. This efficiency is achieved thanks to the specific
parameterization of the clothes we consider, based on 3D UV maps encoding
spatial garment displacements. The problem is then formulated as a mapping
between the human kinematics space (represented also by 3D UV maps of the
undressed body mesh) into the clothes displacement UV maps, which we learn
using a conditional GAN with a discriminator that enforces feasible
deformations. We train simultaneously our model for three garment templates,
tops, bottoms and dresses for which we simulate deformations under 50 different
human actions. Nevertheless, the UV map representation we consider allows
encapsulating many different cloth topologies, and at test we can simulate
garments even if we did not specifically train for them. A thorough evaluation
demonstrates that PhysXNet delivers cloth deformations very close to those
computed with the physics engine, opening the door to its effective integration within deep learning pipelines.
Figure: internet garment models → PhysXNet → garment templates (dress, tops, bottoms).
1. Introduction
High-fidelity animation of clothed humans is the key for a wide range of applications in, e.g., AR/VR, 3D content production and virtual try-on.
One of the main challenges when generating these animations is to create realistic cloth deformations with plausible wrinkles, creases, pleats, and folds.
Such simulations are typically carried out by physics engines that model clothes via meshes with neighboring vertices connected using spring-mass systems.
Unfortunately, these simulators need to be fine-tuned by a human expert and are subject to computationally intensive processes to calculate collisions between vertices.
With the advent of deep learning there have been a number of learning-based approaches that attempt to emulate the physical engines using differentiable networks [18, 10, 37, 24].
In this paper, we present PhysXNet, a method to predict cloth dynamics of dressed people that is adaptable to different clothing types, styles and topologies without need of being retrained.
For this purpose we build upon a simple but powerful representation based on UV maps encoding cloth displacements.
These UV maps are carefully designed in order to simultaneously encapsulate many different cloth types (upper body, lower body and dresses) and cloth styles (e.g., from long-sleeve to sleeveless T-shirts).
Given this representation, we then formulate the problem as a mapping between the human body kinematic space and the cloth deformation space.
The input human kinematics are similarly represented as UV maps, in this case encoding body velocities and accelerations.
Therefore, the problem boils down to learning a mapping between two different UV maps, from the human to the clothing, which we do using a conditional GAN network.
In order to train our system we build a synthetic dataset with the Blender physical engine, consisting of 50 skeletal actions and a human wearing three different garment templates: tops, bottoms and dresses.
The results show that PhysXNet is able to predict very accurate cloth deformations for the clothes seen at training time, while also being adaptable to clothes with other topologies via a simple UV mapping.
2. Related Work
While estimating cloth deformation has traditionally been addressed by model-based approaches [23, 28, 22], recent deep learning techniques build upon data-driven methods.
These datasets, however, usually ignore cloth deformation physics, producing unrealistic renders.
This problem is generally addressed by obtaining the data from registered scans or including cloth simulation engines into the data generation process.
Scan-based approaches [41, 11, 35] have the advantage that they can capture every cloth detail without having to worry about cloth physical models; however, the main drawback is that they need dedicated hardware and software to process all the data.
On the other hand, synthetic-based approaches [36, 26, 2] can be easily annotated and modified, but have trouble producing realistic cloth deformations.
Recent cloth physical engines can achieve very natural cloth behaviors [15, 33, 34], even for complex meshes, which makes the synthetic simulation a good competitor for the scanned data.
We create high quality cloth deformations for three garment templates over 50 motion sequences.
Data driven cloth deformations.
Using the generated datasets, either from scans or synthetic data, a large part of the research concentrates on achieving highly detailed cloth deformations with tailor-designed networks [19, 40, 10, 3, 18, 42], GANs [32, 14] or, even more recently, with implicit functions [6].
These methods assume each cloth deformation frame is independent from the others and concentrate on obtaining reliable reconstructions in still images.
Other methods go one step further and try to infer the cloth deformation given a human pose and shape [13, 7, 25], obtaining very convincing results.
The above methods reason about cloth geometry to obtain plausible cloth deformations, but ignore the underlying physics of the cloth, which can help to achieve more natural deformations.
This is especially true when the cloth deformations are affected by the motion of the body.
Using the physics information obtained from a dataset, different networks [9, 38, 30] are able to simulate cloth wrinkles and deformations given a body pose and shape.
While these methods are designed to be optimal for a T-shirt, other cloth garments can also be estimated [16, 31].
All simulations are achieved using a dedicated network per cloth garment, which makes these methods not very flexible when the cloth mesh is different from the one used for training.
Moreover, a human model usually wears more than a single garment, which means that these methods need to use different networks for the different garments, making them harder to integrate in a larger pipeline.
3.1. Problem Formulation
Physics-based engines model clothes using spring-mass models.
In an oversimplification of how a simulation is performed, we can understand that the force (and hence the displacement) that each of these spring-mass elements receives is estimated as a function of the skeleton velocities and accelerations.
Building upon this intuitive idea we formulate the problem of predicting cloth dynamics as a regression from current and past body velocities and accelerations to cloth-to-body offset displacements.
Given this notation, we can formulate our problem as that of learning the mapping M : X → Y, where X = {I^B_{v,k−2:k}, I^B_{a,k−2:k}} are the velocities and accelerations of the body surface points in the frames k−2, k−1 and k, and Y = I^C_{o,k} are the garment offsets at the current frame k.
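As a minimal illustration of this mapping, the sketch below assembles one (X, Y) training pair from per-frame UV maps; the array shapes, channel layout and function names are assumptions for illustration, not the exact format used by PhysXNet.

```python
import numpy as np

def build_training_pair(body_vel_uv, body_acc_uv, garment_offset_uv, k):
    """Assemble one (X, Y) pair for the mapping M : X -> Y.

    body_vel_uv, body_acc_uv: (K, H, W, 3) per-frame body velocity / acceleration UV maps.
    garment_offset_uv: (K, H, W, 3) per-frame garment offset UV maps.
    k: current frame index (k >= 2).
    """
    vel = body_vel_uv[k - 2:k + 1]                 # frames k-2, k-1, k -> (3, H, W, 3)
    acc = body_acc_uv[k - 2:k + 1]                 # frames k-2, k-1, k -> (3, H, W, 3)
    h, w = vel.shape[1], vel.shape[2]
    x = np.concatenate([vel, acc], axis=0)         # (6, H, W, 3)
    x = x.transpose(1, 2, 0, 3).reshape(h, w, -1)  # (H, W, 18) network input
    y = garment_offset_uv[k]                       # (H, W, 3) network target
    return x, y
```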
3.2. Model
Fig. 2 shows a schematic of the PhysXNet pipeline.
Given a sequence of human body motions, the UV maps for body velocities and accelerations are computed in triplets and passed to the network in order to infer the UV maps of the cloth offsets for the current evaluated frame.
Then, the vertices of a given garment are projected onto the corresponding garment UV map to obtain, for each vertex, the offset with respect to its body surface point and hence the final position of the garment cloth.
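A minimal sketch of this lookup step is given below, assuming per-vertex UV coordinates and corresponding body surface points are available; a nearest-neighbour lookup is used for simplicity (bilinear sampling would also work), and all names are illustrative.

```python
import numpy as np

def reconstruct_garment(offset_uv, vertex_uv, body_surface_points):
    """Recover garment vertex positions from a predicted offset UV map.

    offset_uv: (H, W, 3) predicted garment offsets.
    vertex_uv: (V, 2) per-vertex UV coordinates in [0, 1].
    body_surface_points: (V, 3) body surface point associated with each vertex.
    """
    h, w, _ = offset_uv.shape
    u = np.clip(np.round(vertex_uv[:, 0] * (w - 1)).astype(int), 0, w - 1)
    v = np.clip(np.round(vertex_uv[:, 1] * (h - 1)).astype(int), 0, h - 1)
    offsets = offset_uv[v, u]                # (V, 3) offsets w.r.t. the body surface
    return body_surface_points + offsets     # final garment vertex positions
```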
The PhysXNet network is trained with two separate models: a generator model produces samples of the garment UV maps, and a discriminator model tries to determine whether these samples are real or fake.
Then, a minimax game [8] starts, with the generator trying to "fool" the discriminator, and the discriminator trying to "catch" the generator's wrong samples.
Thus, the discriminator is trained in a supervised manner, where the input data from the generator should return D(G(X)) = 0 and the input real data should return D(Y) = 1.
The adversarial loss is

L_adv = E_y[log(D(Y))] + E_x[log(1 − D(G(X)))]    (1)

The generator is trained to produce output as similar as possible to the ground truth data Y. The generator loss L_G uses a regularization term that ensures that the generated garment UV maps Î^C_{o,k} stay close to the ground truth garment UV maps I^C_{o,k}:

L_G = E_x[1 − log(D(G(X)))] + λ_{L1} · |Î^C_{o,k} − I^C_{o,k}|_1    (2)

where λ_{L1} is a parameter that controls the weight of the regularization term.
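A compact sketch of these two objectives in PyTorch is shown below, assuming the discriminator D outputs probabilities in [0, 1]; the exact conditioning and network interfaces of PhysXNet may differ.

```python
import torch
import torch.nn.functional as F

def discriminator_loss(D, G, x, y_real):
    """Eq. 1 style objective: real UV maps should score 1, generated maps 0."""
    d_real = D(y_real)
    d_fake = D(G(x).detach())                     # do not backprop into the generator
    loss_real = F.binary_cross_entropy(d_real, torch.ones_like(d_real))
    loss_fake = F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
    return loss_real + loss_fake

def generator_loss(D, G, x, y_real, lambda_l1=100.0):
    """Eq. 2 style objective: fool the discriminator plus an L1 regularizer."""
    y_fake = G(x)
    d_fake = D(y_fake)
    adv = F.binary_cross_entropy(d_fake, torch.ones_like(d_fake))
    l1 = F.l1_loss(y_fake, y_real)                # keep maps close to the ground truth
    return adv + lambda_l1 * l1
```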
Then the "body" encoder is connected to a "garment" decoder, one for each garment template, that returns the offset positions of the garment with respect to the body.
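The sketch below illustrates this shared-encoder, three-decoder layout; the layer counts, channel sizes and activations are placeholders rather than the published architecture.

```python
import torch.nn as nn

class PhysXNetGenerator(nn.Module):
    """Shared body encoder with one decoder head per garment template (sketch)."""
    def __init__(self, in_ch=18, out_ch=3, feat=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(feat, feat * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        )
        def head():  # decoder that upsamples back to the UV map resolution
            return nn.Sequential(
                nn.ConvTranspose2d(feat * 2, feat, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(feat, out_ch, 4, stride=2, padding=1), nn.Tanh(),
            )
        self.tops, self.bottoms, self.dress = head(), head(), head()

    def forward(self, x):
        z = self.encoder(x)                      # shared body features
        return self.tops(z), self.bottoms(z), self.dress(z)
```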
Given a sequence of human body motions without wearing any cloth, the velocities and accelerations of the body surface are calculated and registered in UV maps.
The PhysXNet network receives the current body UV maps together with the two previous body UV maps and generates three different garment estimates that encode the offset of each garment with respect to the body.
It includes a parametric body model for pose and shape, and a library of clothes ready to use in a single click, which allows us to accelerate the generation of a physics cloth dataset.
In the current physics simulator based on spring-mass model [1], the cloth behavior is influenced by different parameters that can be grouped in three main areas:
1) garment parameters g_p, 2) world parameters w_p, and 3) external force parameters f_p.
World parameters such as gravity and air friction are unchanged for all simulations.
External forces such as velocity and acceleration parameters are constrained by the action defined in the motion files, and the garment parameters such as bending, stiffness, compression and shear, are adjusted to match a cotton fabric style simulation for each one of the cloth templates.
The simulations are run with collisions and self-collisions activated.
4.2. Generate train UV maps
The synthetic dataset is generated from the 3D mesh models for body and clothes. The body mesh is a parametric model M^B_k(α, θ) ∈ R^{3×N} with N vertices and a set of parameters to control shape α and pose θ at frame sequence k. This body 3D model will wear each one of the three following cloth mesh templates: tops M^T_k ∈ R^{3×M_t}, bottoms M^{Bt}_k ∈ R^{3×M_b} and dresses M^{Dr}_k ∈ R^{3×M_d}, with M_t, M_b and M_d vertices respectively. For simplicity in the notation, we will refer to the cloth mesh template models as M^C_k when the models are at sequence frame k.
As there is no direct correspondence between the vertices of the body mesh and the vertices of the cloth templates, we define a transference matrix T_{BC}, which relates the body vertices with a point on the cloth surface (see Fig. 3), and T_{CB}, which relates cloth vertices with a point on the body surface (see Fig. 4).
Then, the vector (∆x, ∆y, ∆z) between the body and the garment is stored in the garment UV map.
Figure 4: Cloth mesh to garment template projection.
An arbitrary garment is worn on the human body model in T-pose.
Then, for every vertex, a ray is cast in the direction of its normal until it intersects the body model.
The point on the surface of the body model has a correspondence with the coordinates of the garment templates.
The cloth template mesh at frame k is then given by Eqs. 3 and 4:

M^C_k = T_{CB} M^B_k(α, θ) + O_k    (3)

O_k = f_{P,k}(f_p, g_p, w_p)    (4)

where f_{P,k}(f_p, g_p, w_p) is a function that defines the offset positions of each vertex given a set of parameters such as body forces f_p, world scene w_p and garment fabric g_p.
Body UV maps. Neural networks are more efficient when using 2D image representations, and for that reason we represent our 3D model surfaces by means of UV maps.
Each pixel (u, v) of the UV layout has a direct correspondence with a point on the mesh surface, stored in the transference matrix T_{UB}. Therefore, the body mesh surface is represented by the body UV map I^B_k(u, v).
From the body UV map positions we can easily obtain the UV maps for velocity I^B_{v,k}(u, v) and acceleration I^B_{a,k}(u, v):
I^B_k(u, v) = T_{UB}(u, v) M^B_k(α, θ)    (5)

I^B_{v,k}(u, v) = I^B_k − I^B_{k−1}    (6)

I^B_{a,k}(u, v) = I^B_{v,k}(u, v) − I^B_{v,k−1}(u, v)    (7)
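As a small worked example of Eqs. 6 and 7, the sketch below computes the velocity and acceleration UV maps by finite differences over a sequence of body position UV maps; the array layout is an assumption for illustration.

```python
import numpy as np

def body_velocity_acceleration_uv(body_pos_uv):
    """Finite-difference UV maps from per-frame body position UV maps.

    body_pos_uv: (K, H, W, 3) body position UV map I^B_k for every frame k.
    Returns velocity and acceleration maps of the same shape (zeros for the
    first frames, where the differences are undefined).
    """
    vel = np.zeros_like(body_pos_uv)
    acc = np.zeros_like(body_pos_uv)
    vel[1:] = body_pos_uv[1:] - body_pos_uv[:-1]   # I^B_{v,k} = I^B_k - I^B_{k-1}
    acc[2:] = vel[2:] - vel[1:-1]                  # I^B_{a,k} = I^B_{v,k} - I^B_{v,k-1}
    return vel, acc
```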
The original body UV map layout is modified to occupy as many pixels as possible inside the layout and therefore obtain a better sampling of the surface of the body.
Garment UV maps. The garment UV maps I^C_k(u, v) contain the offset vectors from the body surface to the cloth surface points for each pixel in the transference matrix T_{BC}(u, v) (Eq. 8), which stores the cloth point correspondence. This process is illustrated in Fig. 3.
I^C_k(u, v) = T_{BC}(u, v) M^C_k − M^B_k(α, θ)    (8)
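The following sketch evaluates Eq. 8 to build a ground-truth offset UV map, assuming the transference matrix is stored as per-pixel barycentric coordinates and triangle indices on the cloth mesh; these storage choices and names are assumptions for illustration.

```python
import numpy as np

def garment_offset_uv_map(body_pos_uv, cloth_bary, cloth_faces, cloth_vertices, valid_mask):
    """Ground-truth garment offset UV map (Eq. 8), sketched.

    body_pos_uv: (H, W, 3) body surface point per UV pixel (I^B_k).
    cloth_bary: (H, W, 3) barycentric coordinates of the corresponding cloth point (T_BC).
    cloth_faces: (H, W, 3) vertex indices of the cloth triangle hit by each pixel.
    cloth_vertices: (V, 3) cloth mesh vertices at frame k (M^C_k).
    valid_mask: (H, W) pixels that actually have a cloth correspondence.
    """
    tri = cloth_vertices[cloth_faces]                         # (H, W, 3, 3) triangle corners
    cloth_points = np.einsum('hwc,hwcd->hwd', cloth_bary, tri)
    offsets = cloth_points - body_pos_uv                      # offset from body to cloth
    return np.where(valid_mask[..., None], offsets, 0.0)
```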
The case of the dress garment I^{Dr}_k(u, v) is a bit different, since the lower part of the dress contains parts of the mesh that have no body correspondence, because the rays along the surface normals of the inner part of the legs never reach the center of the garment. Therefore, another body mesh M^{Bd}_k(α, θ), in which the legs are joined by an ellipsoid, is created.
4.3. Evaluate different garments
The main advantage of the PhysXNet network over other methods is that we can easily use garments from different sources without the need to retrain the network. These garments need to be encapsulated in one of the three cloth templates, but there is no condition on the number of vertices nor on the topology. Thus, given a garment model M^X_k ∈ R^{3×N_x}, where N_x is an arbitrary number of vertices, we need to find the transference matrix T_{XB} that relates each vertex of the model with the garment templates I^C.
This process, illustrated in Fig. 4, consists of casting a ray from each cloth vertex along its normal direction towards the body surface in T-pose, in order to find the body UV map coordinate (u, v).
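A sketch of this ray-casting correspondence using the trimesh library is given below; the ray direction (here the negative vertex normal, pointing towards the body) and the helper name are assumptions, and the returned triangle indices would still have to be mapped to body UV coordinates.

```python
import numpy as np
import trimesh

def garment_to_body_hits(garment_mesh, body_mesh_tpose):
    """For every garment vertex, find the first hit on the T-posed body mesh."""
    intersector = trimesh.ray.ray_triangle.RayMeshIntersector(body_mesh_tpose)
    origins = np.asarray(garment_mesh.vertices)
    # outward garment normals point away from the body, so cast along the negative normal
    directions = -np.asarray(garment_mesh.vertex_normals)
    locations, ray_ids, tri_ids = intersector.intersects_location(
        origins, directions, multiple_hits=False)
    hit_tri = -np.ones(len(origins), dtype=int)    # -1 marks vertices with no correspondence
    hit_point = np.zeros_like(origins)
    hit_tri[ray_ids] = tri_ids
    hit_point[ray_ids] = locations
    # the body UV coordinate (u, v) of each hit triangle is then read from the body UV layout
    return hit_point, hit_tri
```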
All data UV maps are normalized independently to the range [−1, 1].
The network discriminator is trained with soft labels, using random uniform sampling from 0.0 to 0.3 for estimated labels, and from 0.7 to 1.0 for ground truth labels.
Moreover, a random 5% of the training data on each epoch contains flipped labels.
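A minimal sketch of this label-smoothing and flipping scheme is shown below; the helper name and tensor layout are illustrative only.

```python
import torch

def discriminator_labels(batch_size, real, flip_prob=0.05, device='cpu'):
    """Soft labels: real in [0.7, 1.0], fake in [0.0, 0.3], with a small fraction flipped."""
    if real:
        labels = 0.7 + 0.3 * torch.rand(batch_size, device=device)
    else:
        labels = 0.3 * torch.rand(batch_size, device=device)
    flip = torch.rand(batch_size, device=device) < flip_prob
    labels[flip] = 1.0 - labels[flip]              # flipped labels act as extra noise
    return labels
```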
Image UV map sizes are W = H = 256, λ_{L1} = 100 and the learning rate is 2e−04.
The architecture is trained for up to 150 epochs over 2 days on a single NVIDIA GeForce GTX 1080 GPU, and the mean inference time per frame is 0.0313 s (load data, run, save files).
We next evaluate our proposed PhysXNet by performing several quantitative and qualitative experiments.
In the quantitative experiments, we compare our proposed method with the Linear Blend Skinning (LBS) method as a baseline.
The LBS method calculates the displacement of each vertex according to a weighted linear combination of the assigned skeleton segments.
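For reference, a minimal LBS baseline of the kind described above can be sketched as follows; the argument names and transform convention are assumptions for illustration.

```python
import numpy as np

def linear_blend_skinning(rest_vertices, weights, bone_transforms):
    """Displace each vertex by a weighted combination of per-bone rigid transforms.

    rest_vertices: (V, 3) vertices in the rest pose.
    weights: (V, B) skinning weights, each row summing to 1.
    bone_transforms: (B, 4, 4) rigid transform of every bone for the current frame.
    """
    ones = np.ones((len(rest_vertices), 1))
    v_hom = np.concatenate([rest_vertices, ones], axis=1)                 # (V, 4)
    per_bone = np.einsum('bij,vj->vbi', bone_transforms, v_hom)[..., :3]  # (V, B, 3)
    return np.einsum('vb,vbi->vi', weights, per_bone)                     # blended positions
```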
Results are given by comparing the estimated UV garment maps with the ground truth UV maps for each vertex of the garment template and also for each pixel in the UV garment map.
In the qualitative results, we compare our proposed method with LBS and TailorNet [24].
We also show the results of PhysXNet with different body shapes and with garment meshes other than the ones used for training.
5.1. Quantitative results
We provide two different measures for the quantitative results.
First, we calculate the mean squared error (MSE), over each valid pixel, between the PhysXNet estimated UV map templates Î^C_{o,k} and the ground truth UV maps I^C_{o,k} obtained from the synthetic dataset.
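A short sketch of this masked per-pixel MSE is given below; the mask and array names are illustrative.

```python
import numpy as np

def masked_mse(pred_uv, gt_uv, valid_mask):
    """Mean squared error over the valid pixels of a garment UV map.

    pred_uv, gt_uv: (H, W, 3) predicted / ground-truth offset maps.
    valid_mask: (H, W) boolean mask of pixels that map to the garment surface.
    """
    diff = (pred_uv - gt_uv)[valid_mask]           # (P, 3) differences at valid pixels only
    return float(np.mean(np.sum(diff ** 2, axis=-1)))
```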
The evaluated actions in Fig. 6 are, in the following order: jump, walking, moon walk, Chinese dance, punch, balancing, ballet, stretch arms, salsa dance, jogging, side step, and strong gesture.
The first bar with color cyan is for the tops template, the second bar with gray color is for the bottoms template and the third bar with purple color is for the dress template.
Some of these actions have very soft movements, like moon walk, balancing and walking, which result in small velocities and accelerations, while in other motions like strong gesture, jump and punch the pose changes drastically within very few frames, which produces large velocities and accelerations.
The reason why the dress template errors are larger is the hallucination that the network needs to perform around the legs of the body, as there are parts of the dress that have no direct correspondence with the input body UV maps.
The proposed PhysXNet is also able to deal with different body shapes, as the output of the network is the offset of each garment with respect to the body, and also with cloth garments that contain different numbers of vertices and different topologies.
This is possible because the output UV map templates encode the surface of the garment; when using a different mesh topology, it is only necessary to project the vertices of the mesh onto the UV map, without needing to retrain the network.
Results for the 3D cloth models (tshirt, shorts, shirt2, skirt) are shown in Fig. 9.
Comparisons. We compare our PhysXNet network with the LBS method and TailorNet [24] in the case of the Tops and Bottoms templates, and only with the LBS method in the case of the Dress template, as TailorNet does not have a dress model.
The human models used in our case, MakeHuman [20], and in TailorNet, SMPL [17], are different, which means that the represented actions are not exactly the same due to the different internal bone structures and bone lengths.
In Fig 8 we can observe the differences between the three methods.
While in the LBS and TailorNet methods the bottom part of the shirt does not move while performing the punch action, in our proposed method the shirt follows the movement produced by the body.
The main reason for this behavior is that our method takes into account the current and past body motion and is able to apply it to the cloth, while the other two methods are static and only use the current body pose.
A similar behavior can be observed in Fig. 7 with the dress template.
Figure 8: Qualitative results.
Comparison for the LBS, PhysXNet and TailorNet methods for the action Punch.
Most differences can be found in the bottom part of the shirt.
The proposed PhysXNet is able to model the movement of the bottom part of the shirt during the action, while the other two methods keep it in the same position for all the frames. The drawback is that our model is sampled very coarsely, which makes it difficult to capture small wrinkles.
A second difference concerns the network weights: while for a single garment the network weights are similar in size, in TailorNet a user who wants to use more models needs to download more weights.
In our proposed method, the same weights are used for the three templates which can be applied to a large variety of garments.
The last difference concerns the execution time; as expected, larger models come with larger execution times.
Hence, in PhysXNet a single pass of the network yields the outputs of the three garment templates, whereas in TailorNet it is necessary to perform inference for each one of the desired garments.
Note that the proposed PhysXNet method can deal with multiple cloth templates at the same time; hence, the network weight size and inference time are much lower than for the TailorNet method.
6. Conclusions
We presented a network, PhysXNet, that generates cloth physical dynamics for three totally different garment templates at the same time.
The network is able to generalize to unseen body actions, different body shapes and different cloth 3D models, making the model suitable to integrate it into a larger pipeline.
Our network can simulate the cloth physics behavior for any 3D cloth mesh randomly downloaded from the internet that fits to any of the three garment templates without being retrained.
The proposed method is compared quantitatively with the synthetic dataset ground truth and qualitatively with a baseline, LBS, and with a state-of-the-art method, TailorNet.
Acknowledgment
This work is supported in part by the Spanish government under project MoHuCo PID2020-120049RB-I00, the ERA-Net Chistera project IPALM PCI2019-103386 and the María de Maeztu Seal of Excellence MDM-2016-0656.
References
[1] David Baraff and Andrew Witkin. Large steps in cloth simulation. In SIGGRAPH '98: Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques, 1998.
[6] Smplicit: Topology-aware generative model for clothed people. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021.
[7] Zhenglin Geng, Daniel Johnson, and Ronald Fedkiw. Coercing machine learning to output physically accurate results. Journal of Computational Physics, 406, 2020.
[8] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014.
[9] Peng Guan, Loretta Reiss, David A. Hirshberg, Alexander Weiss, and Michael J. Black. Drape: Dressing any person. ACM Trans. Graph., 31(4), 2012.
[10] Erhan Gundogdu, Victor Constantin, Amrollah Seifoddini, Minh Dang, Mathieu Salzmann, and Pascal Fua.