Columns: input (string, length 14–315) · answer (string, length 9–2.16k) · gold_ctxs (list, length 1–15) · ctxs (list, length 11–186)
What are the side effects of group convolution?
The side effects of group convolution are: (1) when multiple group convolutions are stacked, the flow of information between channel groups is blocked; and (2) the individual convolution filters in each group are degraded because each filter sees a reduced number of input channels [16].
[ 16 ]
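To make the trade-off behind the answer above concrete, here is a brief worked sketch. It is our own illustration, not taken from [16]; the symbols (h, w spatial size, c input channels, n output channels, g groups) are assumptions chosen to match the per-unit FLOP counts quoted in the contexts below.

```latex
% Cost of a 1x1 (pointwise) convolution over an h x w feature map
% with c input channels, n output channels, and g channel groups:
\text{dense: } \; hwcn \qquad\qquad \text{grouped: } \; \frac{hwcn}{g}
% The g-fold saving comes from reducing each filter's fan-in from c to c/g,
% which is the "degraded individual filters" effect; without a shuffle,
% stacking such layers also keeps every output group tied to one input group.
```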
[ { "id": "1707.01083_all_0", "text": " Building deeper and larger convolutional neural networks (CNNs) is a primary trend for solving major visual recognition tasks (21, 9, 33, 5, 28, 24). The most accurate CNNs usually have hundreds of layers and thousands of channels (9, 34, 32, 40), thus requiring computation at billions of FLOPs. This report examines the opposite extreme: pursuing the best accuracy in very limited computational budgets at tens or hundreds of MFLOPs, focusing on common mobile platforms such as drones, robots, and smartphones. Note that many existing works (16, 22, 43, 42, 38, 27) focus on pruning, compressing, or low-bit representing a “basic” network architecture. Here we aim to explore a highly efficient basic architecture specially designed for our desired computing ranges. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_1", "text": " We notice that state-of-the-art basic architectures such as Xception  and ResNeXt  become less efficient in extremely small networks because of the costly dense 1×1111\\times 1 convolutions. We propose using pointwise group convolutions to reduce computation complexity of 1×1111\\times 1 convolutions. To overcome the side effects brought by group convolutions, we come up with a novel channel shuffle operation to help the information flowing across feature channels. Based on the two techniques, we build a highly efficient architecture called ShuffleNet. Compared with popular structures like  (30, 9, 40), for a given computation complexity budget, our ShuffleNet allows more feature map channels, which helps to encode more information and is especially critical to the performance of very small networks. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_2", "text": " We evaluate our models on the challenging ImageNet classification (4, 29) and MS COCO object detection  tasks. A series of controlled experiments shows the effectiveness of our design principles and the better performance over other structures. Compared with the state-of-the-art architecture MobileNet , ShuffleNet achieves superior performance by a significant margin, e.g. absolute 7.8% lower ImageNet top-1 error at level of 40 MFLOPs. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_3", "text": " We also examine the speedup on real hardware, i.e. an off-the-shelf ARM-based computing core. The ShuffleNet model achieves ∼similar-to\\sim13×\\times actual speedup (theoretical speedup is 18×\\times) over AlexNet  while maintaining comparable accuracy. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_4", "text": " The last few years have seen the success of deep neural networks in computer vision tasks (21, 36, 28), in which model designs play an important role. The increasing needs of running high quality deep neural networks on embedded devices encourage the study on efficient model designs . For example, GoogLeNet  increases the depth of networks with much lower complexity compared to simply stacking convolution layers. SqueezeNet  reduces parameters and computation significantly while maintaining accuracy. ResNet (9, 10) utilizes the efficient bottleneck structure to achieve impressive performance. 
SENet  introduces an architectural unit that boosts performance at slight computation cost. Concurrent with us, a very recent work  employs reinforcement learning and model search to explore efficient model designs. The proposed mobile NASNet model achieves comparable performance with our counterpart ShuffleNet model (26.0% @ 564 MFLOPs vs. 26.3% @ 524 MFLOPs for ImageNet classification error). But  do not report results on extremely tiny models (e.g. complexity less than 150 MFLOPs), nor evaluate the actual inference time on mobile devices. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_5", "text": " The concept of group convolution, which was first introduced in AlexNet  for distributing the model over two GPUs, has been well demonstrated its effectiveness in ResNeXt . Depthwise separable convolution proposed in Xception  generalizes the ideas of separable convolutions in Inception series (34, 32). Recently, MobileNet  utilizes the depthwise separable convolutions and gains state-of-the-art results among lightweight models. Our work generalizes group convolution and depthwise separable convolution in a novel form. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_6", "text": " To the best of our knowledge, the idea of channel shuffle operation is rarely mentioned in previous work on efficient model design, although CNN library cuda-convnet  supports “random sparse convolution” layer, which is equivalent to random channel shuffle followed by a group convolutional layer. Such “random shuffle” operation has different purpose and been seldom exploited later. Very recently, another concurrent work   also adopt this idea for a two-stage convolution. However,   did not specially investigate the effectiveness of channel shuffle itself and its usage in tiny model design. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_7", "text": " This direction aims to accelerate inference while preserving accuracy of a pre-trained model. Pruning network connections (6, 7) or channels  reduces redundant connections in a pre-trained model while maintaining performance. Quantization (31, 27, 39, 45, 44) and factorization (22, 16, 18, 37) are proposed in literature to reduce redundancy in calculations to speed up inference. Without modifying the parameters, optimized convolution algorithms implemented by FFT (25, 35) and other methods  decrease time consumption in practice. Distilling  transfers knowledge from large models into small ones, which makes training small models easier. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_8", "text": " Modern convolutional neural networks (30, 33, 34, 32, 9, 10) usually consist of repeated building blocks with the same structure. Among them, state-of-the-art networks such as Xception  and ResNeXt  introduce efficient depthwise separable convolutions or group convolutions into the building blocks to strike an excellent trade-off between representation capability and computational cost. However, we notice that both designs do not fully take the 1×1111\\times 1 convolutions (also called pointwise convolutions in  ) into account, which require considerable complexity. For example, in ResNeXt  only 3×3333\\times 3 layers are equipped with group convolutions. 
As a result, for each residual unit in ResNeXt the pointwise convolutions occupy 93.4% of the multiplication-adds (cardinality = 32 as suggested in ). In tiny networks, expensive pointwise convolutions result in a limited number of channels to meet the complexity constraint, which might significantly damage the accuracy. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_9", "text": " To address the issue, a straightforward solution is to apply channel sparse connections, for example group convolutions, also on 1×1 layers. By ensuring that each convolution operates only on the corresponding input channel group, group convolution significantly reduces computation cost. However, if multiple group convolutions stack together, there is one side effect: outputs from a certain channel are only derived from a small fraction of input channels. Fig 1 (a) illustrates a situation of two stacked group convolution layers. It is clear that outputs from a certain group only relate to the inputs within the group. This property blocks information flow between channel groups and weakens representation. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_10", "text": " If we allow group convolution to obtain input data from different groups (as shown in Fig 1 (b)), the input and output channels will be fully related. Specifically, for the feature map generated from the previous group layer, we can first divide the channels in each group into several subgroups, then feed each group in the next layer with different subgroups. This can be efficiently and elegantly implemented by a channel shuffle operation (Fig 1 (c)): suppose a convolutional layer with g groups whose output has g×n channels; we first reshape the output channel dimension into (g, n), then transpose and flatten it back as the input of the next layer. Note that the operation still takes effect even if the two convolutions have different numbers of groups. Moreover, channel shuffle is also differentiable, which means it can be embedded into network structures for end-to-end training. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_11", "text": " The channel shuffle operation makes it possible to build more powerful structures with multiple group convolutional layers. In the next subsection we will introduce an efficient network unit with channel shuffle and group convolution. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_12", "text": " Taking advantage of the channel shuffle operation, we propose a novel ShuffleNet unit specially designed for small networks. We start from the design principle of the bottleneck unit in Fig 2 (a). It is a residual block. In its residual branch, for the 3×3 layer, we apply a computationally economical 3×3 depthwise convolution on the bottleneck feature map. Then, we replace the first 1×1 layer with pointwise group convolution followed by a channel shuffle operation, to form a ShuffleNet unit, as shown in Fig 2 (b). The purpose of the second pointwise group convolution is to recover the channel dimension to match the shortcut path. For simplicity, we do not apply an extra channel shuffle operation after the second pointwise layer as it results in comparable scores. 
The usage of batch normalization (BN) and nonlinearity is similar to (9, 40), except that we do not use ReLU after depthwise convolution, as suggested by . As for the case where ShuffleNet is applied with stride, we simply make two modifications (see Fig 2 (c)): (i) add a 3×3 average pooling on the shortcut path; (ii) replace the element-wise addition with channel concatenation, which makes it easy to enlarge the channel dimension with little extra computation cost. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_13", "text": " Thanks to pointwise group convolution with channel shuffle, all components in the ShuffleNet unit can be computed efficiently. Compared with ResNet (bottleneck design) and ResNeXt , our structure has less complexity under the same settings. For example, given the input size c×h×w and the bottleneck channels m, the ResNet unit requires $hw(2cm + 9m^{2})$ FLOPs and ResNeXt has $hw(2cm + 9m^{2}/g)$ FLOPs, while our ShuffleNet unit requires only $hw(2cm/g + 9m)$ FLOPs, where g means the number of groups for convolutions. In other words, given a computational budget, ShuffleNet can use wider feature maps. We find this is critical for small networks, as tiny networks usually have an insufficient number of channels to process the information. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_14", "text": " In addition, in ShuffleNet the depthwise convolution only operates on bottleneck feature maps. Even though depthwise convolution usually has very low theoretical complexity, we find it difficult to implement efficiently on low-power mobile devices, which may result from a worse computation/memory access ratio compared with other dense operations. Such a drawback is also mentioned in , which has a runtime library based on TensorFlow . In ShuffleNet units, we intentionally use depthwise convolution only on the bottleneck in order to prevent overhead as much as possible. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_15", "text": " Built on ShuffleNet units, we present the overall ShuffleNet architecture in Table 1. The proposed network is mainly composed of a stack of ShuffleNet units grouped into three stages. The first building block in each stage is applied with stride = 2. Other hyper-parameters within a stage stay the same, and for the next stage the output channels are doubled. Similar to , we set the number of bottleneck channels to 1/4 of the output channels for each ShuffleNet unit. Our intent is to provide a reference design as simple as possible, although we find that further hyper-parameter tuning might generate better results. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_16", "text": " In ShuffleNet units, the group number g controls the connection sparsity of pointwise convolutions. Table 1 explores different group numbers, and we adapt the output channels to keep the overall computation cost roughly unchanged (~140 MFLOPs). 
Obviously, larger group numbers result in more output channels (thus more convolutional filters) for a given complexity constraint, which helps to encode more information, though it might also lead to degradation of an individual convolutional filter due to the limited number of corresponding input channels. In Sec 4.1.1 we will study the impact of this number subject to different computational constraints. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_17", "text": " To customize the network to a desired complexity, we can simply apply a scale factor s on the number of channels. For example, we denote the networks in Table 1 as "ShuffleNet 1×"; then "ShuffleNet s×" means scaling the number of filters in ShuffleNet 1× by s times, so the overall complexity will be roughly s^2 times that of ShuffleNet 1×. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_18", "text": " We mainly evaluate our models on the ImageNet 2012 classification dataset (29, 4). We follow most of the training settings and hyper-parameters used in , with two exceptions: (i) we set the weight decay to 4e-5 instead of 1e-4 and use a linear-decay learning rate policy (decreased from 0.5 to 0); (ii) we use slightly less aggressive scale augmentation for data preprocessing. Similar modifications are also referenced in because such small networks usually suffer from underfitting rather than overfitting. It takes 1 or 2 days to train a model for 3×10^5 iterations on 4 GPUs, with the batch size set to 1024. To benchmark, we compare single-crop top-1 performance on the ImageNet validation set, i.e. cropping a 224×224 center view from a 256× input image and evaluating classification accuracy. We use exactly the same settings for all models to ensure fair comparisons. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_19", "text": " The core idea of ShuffleNet lies in the pointwise group convolution and the channel shuffle operation. In this subsection we evaluate them respectively. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_20", "text": " To evaluate the importance of pointwise group convolutions, we compare ShuffleNet models of the same complexity whose numbers of groups range from 1 to 8. If the group number equals 1, no pointwise group convolution is involved and the ShuffleNet unit becomes an "Xception-like" structure. For better understanding, we also scale the width of the networks to 3 different complexities and compare their classification performance respectively. Results are shown in Table 2. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_21", "text": " From the results, we see that models with group convolutions (g > 1) consistently perform better than the counterparts without pointwise group convolutions (g = 1). Smaller models tend to benefit more from groups. For example, for ShuffleNet 1× the best entry (g = 8) is 1.2% better than the counterpart, while for ShuffleNet 0.5× and 0.25× the gaps become 3.5% and 4.4% respectively. 
Note that group convolution allows more feature map channels for a given complexity constraint, so we hypothesize that the performance gain comes from wider feature maps which help to encode more information. In addition, a smaller network involves thinner feature maps, meaning it benefits more from enlarged feature maps. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_22", "text": " Table 2 also shows that for some models (e.g. ShuffleNet 0.5×\\times) when group numbers become relatively large (e.g. g=8𝑔8g=8), the classification score saturates or even drops. With an increase in group number (thus wider feature maps), input channels for each convolutional filter become fewer, which may harm representation capability. Interestingly, we also notice that for smaller models such as ShuffleNet 0.25×\\times larger group numbers tend to better results consistently, which suggests wider feature maps bring more benefits for smaller models. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_23", "text": " The purpose of shuffle operation is to enable cross-group information flow for multiple group convolution layers. Table 3 compares the performance of ShuffleNet structures (group number is set to 3 or 8 for instance) with/without channel shuffle. The evaluations are performed under three different scales of complexity. It is clear that channel shuffle consistently boosts classification scores for different settings. Especially, when group number is relatively large (e.g. g=8𝑔8g=8), models with channel shuffle outperform the counterparts by a significant margin, which shows the importance of cross-group information interchange. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_24", "text": " Recent leading convolutional units in VGG , ResNet , GoogleNet , ResNeXt  and Xception  have pursued state-of-the-art results with large models (e.g. ≥1absent1\\geq 1GFLOPs), but do not fully explore low-complexity conditions. In this section we survey a variety of building blocks and make comparisons with ShuffleNet under the same complexity constraint. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_25", "text": " For fair comparison, we use the overall network architecture as shown in Table 1. We replace the ShuffleNet units in Stage 2-4 with other structures, then adapt the number of channels to ensure the complexity remains unchanged. The structures we explored include: ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_26", "text": " • VGG-like. Following the design principle of VGG net , we use a two-layer 3×\\times3 convolutions as the basic building block. Different from  , we add a Batch Normalization layer  after each of the convolutions to make end-to-end training easier. • ResNet. We adopt the ”bottleneck” design in our experiment, which has been demonstrated more efficient in   . Same as  , the bottleneck ratio111In the bottleneck-like units (like ResNet, ResNeXt or ShuffleNet) bottleneck ratio implies the ratio of bottleneck channels to output channels. For example, bottleneck ratio = 1:4:141:4 means the output feature map is 4 times the width of the bottleneck feature map. is also 1:4:141:4. • Xception-like. 
The original structure proposed in   involves fancy designs or hyper-parameters for different stages, which we find difficult for fair comparison on small models. Instead, we remove the pointwise group convolutions and channel shuffle operation from ShuffleNet (also equivalent to ShuffleNet with g=1𝑔1g=1). The derived structure shares the same idea of “depthwise separable convolution” as in  , which is called an Xception-like structure here. • ResNeXt. We use the settings of cardinality =16absent16=16 and bottleneck ratio =1:2:absent12=1:2 as suggested in  . We also explore other settings, e.g. bottleneck ratio =1:4:absent14=1:4, and get similar results. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_27", "text": " We use exactly the same settings to train these models. Results are shown in Table 4. Our ShuffleNet models outperform most others by a significant margin under different complexities. Interestingly, we find an empirical relationship between feature map channels and classification accuracy. For example, under the complexity of 38 MFLOPs, output channels of Stage 4 (see Table 1) for VGG-like, ResNet, ResNeXt, Xception-like, ShuffleNet models are 50, 192, 192, 288, 576 respectively, which is consistent with the increase of accuracy. Since the efficient design of ShuffleNet, we can use more channels for a given computation budget, thus usually resulting in better performance. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_28", "text": " Note that the above comparisons do not include GoogleNet or Inception series (33, 34, 32). We find it nontrivial to generate such Inception structures to small networks because the original design of Inception module involves too many hyper-parameters. As a reference, the first GoogleNet version  has 31.3% top-1 error at the cost of 1.5 GFLOPs (See Table 6). More sophisticated Inception versions (34, 32) are more accurate, however, involve significantly increased complexity. Recently, Kim et al. propose a lightweight network structure named PVANET  which adopts Inception units. Our reimplemented PVANET (with 224×\\times224 input size) has 29.7% classification error with a computation complexity of 557 MFLOPs, while our ShuffleNet 2x model (g=3𝑔3g=3) gets 26.3% with 524 MFLOPs (see Table 6). ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_29", "text": " Recently Howard et al. have proposed MobileNets  which mainly focus on efficient network architecture for mobile devices. MobileNet takes the idea of depthwise separable convolution from   and achieves state-of-the-art results on small models. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_30", "text": " Table 5 compares classification scores under a variety of complexity levels. It is clear that our ShuffleNet models are superior to MobileNet for all the complexities. Though our ShuffleNet network is specially designed for small models (<150absent150<150 MFLOPs), we find it is still better than MobileNet for higher computation cost, e.g. 3.1% more accurate than MobileNet 1×\\times at the cost of 500 MFLOPs. For smaller networks (∼similar-to\\sim40 MFLOPs) ShuffleNet surpasses MobileNet by 7.8%. Note that our ShuffleNet architecture contains 50 layers while MobileNet only has 28 layers. 
For better understanding, we also try ShuffleNet on a 26-layer architecture by removing half of the blocks in Stage 2-4 (see ”ShuffleNet 0.5×\\times shallow (g=3𝑔3g=3)” in Table 5). Results show that the shallower model is still significantly better than the corresponding MobileNet, which implies that the effectiveness of ShuffleNet mainly results from its efficient structure, not the depth. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_31", "text": " Table 6 compares our ShuffleNet with a few popular models. Results show that with similar accuracy ShuffleNet is much more efficient than others. For example, ShuffleNet 0.5×\\times is theoretically 18×\\times faster than AlexNet  with comparable classification score. We will evaluate the actual running time in Sec 4.5. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_32", "text": " It is also worth noting that the simple architecture design makes it easy to equip ShuffeNets with the latest advances such as (13, 26). For example, in the authors propose Squeeze-and-Excitation (SE) blocks which achieve state-of-the-art results on large ImageNet models. We find SE modules also take effect in combination with the backbone ShuffleNets, for instance, boosting the top-1 error of ShuffleNet 2×\\times to 24.7% (shown in Table 5). Interestingly, though negligible increase of theoretical complexity, we find ShuffleNets with SE modules are usually 25∼40%similar-to25percent4025\\sim 40\\% slower than the “raw” ShuffleNets on mobile devices, which implies that actual speedup evaluation is critical on low-cost architecture design. In Sec 4.5 we will make further discussion. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_33", "text": " To evaluate the generalization ability for transfer learning, we test our ShuffleNet model on the task of MS COCO object detection . We adopt Faster-RCNN  as the detection framework and use the publicly released Caffe code (28, 17) for training with default settings. Similar to  , the models are trained on the COCO train+val dataset excluding 5000 minival images and we conduct testing on the minival set. Table 7 shows the comparison of results trained and evaluated on two input resolutions. Comparing ShuffleNet 2×\\times with MobileNet whose complexity are comparable (524 vs. 569 MFLOPs), our ShuffleNet 2×\\times surpasses MobileNet by a significant margin on both resolutions; our ShuffleNet 1×\\times also achieves comparable results with MobileNet on 600×\\times resolution, but has ∼similar-to\\sim4×\\times complexity reduction. We conjecture that this significant gain is partly due to ShuffleNet’s simple design of architecture without bells and whistles. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_34", "text": " Finally, we evaluate the actual inference speed of ShuffleNet models on a mobile device with an ARM platform. Though ShuffleNets with larger group numbers (e.g. g=4𝑔4g=4 or g=8𝑔8g=8) usually have better performance, we find it less efficient in our current implementation. Empirically g=3𝑔3g=3 usually has a proper trade-off between accuracy and actual inference time. As shown in Table 8, three input resolutions are exploited for the test. 
Due to memory access and other overheads, we find every 4× theoretical complexity reduction usually results in ~2.6× actual speedup in our implementation. Nevertheless, compared with AlexNet our ShuffleNet 0.5× model still achieves ~13× actual speedup under comparable classification accuracy (the theoretical speedup is 18×), which is much faster than previous AlexNet-level models or speedup approaches such as (14, 16, 22, 42, 43, 38). ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" } ]
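The channel shuffle operation described in context 1707.01083_all_10 above (reshape the g×n output channels to (g, n), transpose, then flatten) can be sketched in a few lines. The following is a minimal NumPy illustration of that idea, not the authors' implementation; the function name and the small demo are ours.

```python
import numpy as np

def channel_shuffle(x, groups):
    """Shuffle channels across groups, as described in the ShuffleNet context above.

    x: feature map of shape (N, C, H, W), with C divisible by `groups`.
    Reshape the channel dimension to (groups, C // groups), swap those two
    axes, and flatten back, so the next group convolution sees channels
    drawn from every previous group.
    """
    n, c, h, w = x.shape
    assert c % groups == 0
    x = x.reshape(n, groups, c // groups, h, w)
    x = x.transpose(0, 2, 1, 3, 4)   # swap the group and per-group channel axes
    return x.reshape(n, c, h, w)

# Minimal check: with 2 groups of 3 channels, order 0..5 becomes 0,3,1,4,2,5.
demo = np.arange(6).reshape(1, 6, 1, 1) * np.ones((1, 6, 2, 2))
print(channel_shuffle(demo, groups=2)[0, :, 0, 0])  # -> [0. 3. 1. 4. 2. 5.]
```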
How does normalizing the features and weights in the softmax loss function improve the performance of deep face recognition systems?
Normalizing the weights only can help angular/cosine-margin-based loss to make the learned features more discriminative, whereas normalizing only the learned features can help overcome the bias to the sample distribution of the softmax [25]. Since L2-norms of learned features with softmax loss were observed to be reflective of the quality of the face, making all the features have the same L2-norm may help to give similar attention to all different qualities of samples [26].
[ 25, 26 ]
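As a compact restatement of the normalization the answer refers to (following the formulation quoted from the survey in the contexts below; $W_j$ is the $j$-th class weight, $x$ the learned feature, $\theta_j$ the angle between them, and $\alpha$ the scaling parameter), normalizing both weights and features turns the softmax logits into scaled cosines, so every sample contributes a feature of the same L2-norm regardless of image quality:

```latex
\hat{W}_j = \frac{W_j}{\lVert W_j \rVert_2}, \qquad
\hat{x} = \alpha \, \frac{x}{\lVert x \rVert_2}, \qquad
\mathcal{L} = -\log \frac{e^{\alpha \cos\theta_{y_i}}}{\sum_{j} e^{\alpha \cos\theta_j}}
```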
[ { "id": "1804.06655_all_0", "text": " Face recognition (FR) has been the prominent biometric technique for identity authentication and has been widely used in many areas, such as military, finance, public security and daily life. FR has been a long-standing research topic in the CVPR community. In the early 1990s, the study of FR became popular following the introduction of the historical Eigenface approach . The milestones of feature-based FR over the past years are presented in Fig. 1, in which the times of four major technical streams are highlighted. The holistic approaches derive the low-dimensional representation through certain distribution assumptions, such as linear subspace , manifold , and sparse representation . This idea dominated the FR community in the 1990s and 2000s. However, a well-known problem is that these theoretically plausible holistic methods fail to address the uncontrolled facial changes that deviate from their prior assumptions. In the early 2000s, this problem gave rise to local-feature-based FR. Gabor and LBP , as well as their multilevel and high-dimensional extensions , achieved robust performance through some invariant properties of local filtering. Unfortunately, handcrafted features suffered from a lack of distinctiveness and compactness. In the early 2010s, learning-based local descriptors were introduced to the FR community , in which local filters are learned for better distinctiveness and the encoding codebook is learned for better compactness. However, these shallow representations still have an inevitable limitation on robustness against the complex nonlinear facial appearance variations. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_1", "text": " In general, traditional methods attempted to recognize human face by one or two layer representations, such as filtering responses, histogram of the feature codes, or distribution of the dictionary atoms. The research community studied intensively to separately improve the preprocessing, local descriptors, and feature transformation, but these approaches improved FR accuracy slowly. What’s worse, most methods aimed to address one aspect of unconstrained facial changes only, such as lighting, pose, expression, or disguise. There was no any integrated technique to address these unconstrained challenges integrally. As a result, with continuous efforts of more than a decade, “shallow” methods only improved the accuracy of the LFW benchmark to about 95% , which indicates that “shallow” methods are insufficient to extract stable identity feature invariant to real-world changes. Due to the insufficiency of this technical, facial recognition systems were often reported with unstable performance or failures with countless false alarms in real-world applications. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_2", "text": " But all that changed in 2012 when AlexNet won the ImageNet competition by a large margin using a technique called deep learning . Deep learning methods, such as convolutional neural networks, use a cascade of multiple layers of processing units for feature extraction and transformation. They learn multiple levels of representations that correspond to different levels of abstraction. The levels form a hierarchy of concepts, showing strong invariance to the face pose, lighting, and expression changes, as shown in Fig. 2. 
It can be seen from the figure that the first layer of the deep neural network is somewhat similar to the Gabor feature found by human scientists with years of experience. The second layer learns more complex texture features. The features of the third layer are more complex, and some simple structures have begun to appear, such as high-bridged nose and big eyes. In the fourth, the network output is enough to explain a certain facial attribute, which can make a special response to some clear abstract concepts such as smile, roar, and even blue eye. In conclusion, in deep convolutional neural networks (CNN), the lower layers automatically learn the features similar to Gabor and SIFT designed for years or even decades (such as initial layers in Fig. 2), and the higher layers further learn higher level abstraction. Finally, the combination of these higher level abstraction represents facial identity with unprecedented stability. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_3", "text": " In 2014, DeepFace achieved the SOTA accuracy on the famous LFW benchmark , approaching human performance on the unconstrained condition for the first time (DeepFace: 97.35% vs. Human: 97.53%), by training a 9-layer model on 4 million facial images. Inspired by this work, research focus has shifted to deep-learning-based approaches, and the accuracy was dramatically boosted to above 99.80% in just three years. Deep learning technique has reshaped the research landscape of FR in almost all aspects such as algorithm designs, training/test datasets, application scenarios and even the evaluation protocols. Therefore, it is of great significance to review the breakthrough and rapid development process in recent years. There have been several surveys on FR (24, 25, 26, 27, 28) and its subdomains, and they mostly summarized and compared a diverse set of techniques related to a specific FR scene, such as illumination-invariant FR , 3D FR , pose-invariant FR . Unfortunately, due to their earlier publication dates, none of them covered the deep learning methodology that is most successful nowadays. This survey focuses only on recognition problem, and one can refer to Ranjan et al. for a brief review of a full deep FR pipeline with detection and alignment, or refer to Jin et al. for a survey of face alignment. Specifically, the major contributions of this survey are as follows: ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_4", "text": " • A systematic review on the evolution of the network architectures and loss functions for deep FR is provided. Various loss functions are categorized into Euclidean-distance-based loss, angular/cosine-margin-based loss and softmax loss and its variations. Both the mainstream network architectures, such as Deepface , DeepID series (34, 35, 21, 36), VGGFace , FaceNet , and VGGFace2 , and other architectures designed for FR are covered. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_5", "text": " • We categorize the new face processing methods based on deep learning, such as those used to handle recognition difficulty on pose changes, into two classes: “one-to-many augmentation” and “many-to-one normalization”, and discuss how emerging generative adversarial network (GAN) facilitates deep FR. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_6", "text": " • We present a comparison and analysis on public available databases that are of vital importance for both model training and testing. 
Major FR benchmarks, such as LFW , IJB-A/B/C (41, 42, 43), Megaface , and MS-Celeb-1M , are reviewed and compared, in term of the four aspects: training methodology, evaluation tasks and metrics, and recognition scenes, which provides an useful reference for training and testing deep FR. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_7", "text": " • Besides the general purpose tasks defined by the major databases, we summarize a dozen scenario-specific databases and solutions that are still challenging for deep learning, such as anti-attack, cross-pose FR, and cross-age FR. By reviewing specially designed methods for these unsolved problems, we attempt to reveal the important issues for future research on deep FR, such as adversarial samples, algorithm/data biases, and model interpretability. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_8", "text": " The remainder of this survey is structured as follows. In Section II, we introduce some background concepts and terminologies, and then we briefly introduce each component of FR. In Section III, different network architectures and loss functions are presented. Then, we summarize the face processing algorithms and the datasets. In Section V, we briefly introduce several methods of deep FR used for different scenes. Finally, the conclusion of this paper and discussion of future works are presented in Section VI. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_9", "text": " As mentioned in , there are three modules needed for FR system, as shown in Fig. 3. First, a face detector is used to localize faces in images or videos. Second, with the facial landmark detector, the faces are aligned to normalized canonical coordinates. Third, the FR module is implemented with these aligned face images. We only focus on the FR module throughout the remainder of this paper. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_10", "text": " Before a face image is fed to an FR module, face anti-spoofing, which recognizes whether the face is live or spoofed, is applied to avoid different types of attacks. Then, recognition can be performed. As shown in Fig. 3(c), an FR module consists of face processing, deep feature extraction and face matching, and it can be described as follows: ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_11", "text": " M​(F​(Pi​(Ii)),F​(Pj​(Ij)))𝑀𝐹subscript𝑃𝑖subscript𝐼𝑖𝐹subscript𝑃𝑗subscript𝐼𝑗M(F(P_{i}(I_{i})),F(P_{j}(I_{j}))) (1) where Iisubscript𝐼𝑖I_{i} and Ijsubscript𝐼𝑗I_{j} are two face images, respectively. P𝑃P stands for face processing to handle intra-personal variations before training and testing, such as poses, illuminations, expressions and occlusions. F𝐹F denotes feature extraction, which encodes the identity information. The feature extractor is learned by loss functions when training, and is utilized to extract features of faces when testing. M𝑀M means a face matching algorithm used to compute similarity scores of features to determine the specific identity of faces. Different from object classification, the testing identities are usually disjoint from the training data in FR, which makes the learned classifier cannot be used to recognize testing faces. Therefore, face matching algorithm is an essential part in FR. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_12", "text": " Although deep-learning-based approaches have been widely used, Mehdipour et al. 
proved that various conditions, such as poses, illuminations, expressions and occlusions, still affect the performance of deep FR. Accordingly, face processing is introduced to address this problem. The face processing methods are categorized as “one-to-many augmentation” and “many-to-one normalization”, as shown in Table I. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_13", "text": " • “One-to-many augmentation”. These methods generate many patches or images of the pose variability from a single image to enable deep networks to learn pose-invariant representations. • “Many-to-one normalization”. These methods recover the canonical view of face images from one or many images of a nonfrontal view; then, FR can be performed as if it were under controlled conditions. Note that we mainly focus on deep face processing method designed for pose variations in this paper, since pose is widely regarded as a major challenge in automatic FR applications and other variations can be solved by the similar methods. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_14", "text": " Network Architecture. The architectures can be categorized as backbone and assembled networks, as shown in Table II. Inspired by the extraordinary success on the ImageNet challenge, the typical CNN architectures, e.g. AlexNet, VGGNet, GoogleNet, ResNet and SENet (22, 75, 76, 77, 78), are introduced and widely used as the baseline models in FR (directly or slightly modified). In addition to the mainstream, some assembled networks, e.g. multi-task networks and multi-input networks, are utilized in FR. Hu et al. shows that accumulating the results of assembled networks provides an increase in performance compared with an individual network. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_15", "text": " Loss Function. The softmax loss is commonly used as the supervision signal in object recognition, and it encourages the separability of features. However, the softmax loss is not sufficiently effective for FR because intra-variations could be larger than inter-differences and more discriminative features are required when recognizing different people. Many works focus on creating novel loss functions to make features not only more separable but also discriminative, as shown in Table III. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_16", "text": " FR can be categorized as face verification and face identification. In either scenario, a set of known subjects is initially enrolled in the system (the gallery), and during testing, a new subject (the probe) is presented. After the deep networks are trained on massive data with the supervision of an appropriate loss function, each of the test images is passed through the networks to obtain a deep feature representation. Using cosine distance or L2 distance, face verification computes one-to-one similarity between the gallery and probe to determine whether the two images are of the same subject, whereas face identification computes one-to-many similarity to determine the specific identity of a probe face. In addition to these, other methods are introduced to postprocess the deep features such that the face matching is performed efficiently and accurately, such as metric learning, sparse-representation-based classifier (SRC), and so forth. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_17", "text": " To sum up, we present FR modules and their commonly-used methods in Fig. 
4 to help readers to get a view of the whole FR. In deep FR, various training and testing face databases are constructed, and different architectures and losses of deep FR always follow those of deep object classification and are modified according to unique characteristics of FR. Moreover, in order to address unconstrained facial changes, face processing methods are further designed to handle poses, expressions and occlusions variations. Benefiting from these strategies, deep FR system significantly improves the SOTA and surpasses human performance. When the applications of FR becomes more and more mature in general scenario, recently, different solutions are driven for more difficult specific scenarios, such as cross-pose FR, cross-age FR, video FR. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_18", "text": " For most applications, it is difficult to include the candidate faces during the training stage, which makes FR become a “zero-shot” learning task. Fortunately, since all human faces share a similar shape and texture, the representation learned from a small proportion of faces can generalize well to the rest. Based on this theory, a straightforward way to improve generalized performance is to include as many IDs as possible in the training set. For example, Internet giants such as Facebook and Google have reported their deep FR system trained by 106−107superscript106superscript10710^{6}-10^{7} IDs (38, 20). ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_19", "text": " Unfortunately, these personal datasets, as well as prerequisite GPU clusters for distributed model training, are not accessible for academic community. Currently, public available training databases for academic research consist of only 103−105superscript103superscript10510^{3}-10^{5} IDs. Instead, academic community makes effort to design effective loss functions and adopts efficient architectures to make deep features more discriminative using the relatively small training data sets. For instance, the accuracy of most popular LFW benchmark has been boosted from 97% to above 99.8% in the pasting four years, as enumerated in Table IV. In this section, we survey the research efforts on different loss functions and network architectures that have significantly improved deep FR methods. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_20", "text": " Inheriting from the object classification network such as AlexNet, the initial Deepface and DeepID adopted cross-entropy based softmax loss for feature learning. After that, people realized that the softmax loss is not sufficient by itself to learn discriminative features, and more researchers began to explore novel loss functions for enhanced generalization ability. This becomes the hottest research topic in deep FR research, as illustrated in Fig. 5. Before 2017, Euclidean-distance-based loss played an important role; In 2017, angular/cosine-margin-based loss as well as feature and weight normalization became popular. It should be noted that, although some loss functions share the similar basic idea, the new one is usually designed to facilitate the training procedure by easier parameter or sample selection. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_21", "text": " Euclidean-distance-based loss is a metric learning method (118, 119) that embeds images into Euclidean space in which intra-variance is reduced and inter-variance is enlarged. 
The contrastive loss and the triplet loss are the commonly used loss functions. The contrastive loss (35, 21, 36, 61, 120) requires face image pairs, and then pulls together positive pairs and pushes apart negative pairs: $\mathcal{L} = y_{ij}\max\left(0, \left\|f(x_i)-f(x_j)\right\|_2 - \epsilon^{+}\right) + (1-y_{ij})\max\left(0, \epsilon^{-} - \left\|f(x_i)-f(x_j)\right\|_2\right)$ (2), where $y_{ij}=1$ means $x_i$ and $x_j$ are matching samples and $y_{ij}=0$ means non-matching samples, $f(\cdot)$ is the feature embedding, and $\epsilon^{+}$ and $\epsilon^{-}$ control the margins of the matching and non-matching pairs respectively. DeepID2 combined the face identification (softmax) and verification (contrastive loss) supervisory signals to learn a discriminative representation, and joint Bayesian (JB) was applied to obtain a robust embedding space. Extending from DeepID2 , DeepID2+ increased the dimension of hidden representations and added supervision to early convolutional layers. DeepID3 further introduced VGGNet and GoogleNet to their work. However, the main problem with the contrastive loss is that the margin parameters are often difficult to choose. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_22", "text": " Contrary to the contrastive loss, which considers the absolute distances of the matching pairs and non-matching pairs, the triplet loss considers the relative difference of the distances between them. Along with FaceNet proposed by Google, the triplet loss (38, 37, 81, 80, 58, 60) was introduced into FR. It requires face triplets, and then it minimizes the distance between an anchor and a positive sample of the same identity and maximizes the distance between the anchor and a negative sample of a different identity. FaceNet enforced $\left\|f(x_i^a)-f(x_i^p)\right\|_2^2 + \alpha < \left\|f(x_i^a)-f(x_i^n)\right\|_2^2$ using hard triplet face samples, where $x_i^a$, $x_i^p$ and $x_i^n$ are the anchor, positive and negative samples, respectively, $\alpha$ is a margin and $f(\cdot)$ represents a nonlinear transformation embedding an image into a feature space. Inspired by FaceNet , TPE and TSE learned a linear projection $W$ to construct the triplet loss. Other methods optimize deep models using both triplet loss and softmax loss (59, 58, 60, 121): they first train networks with softmax and then fine-tune them with triplet loss. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_23", "text": " However, the contrastive loss and triplet loss occasionally encounter training instability due to the selection of effective training samples, so some papers began to explore simpler alternatives. Center loss and its variants (82, 116, 102) are good choices for reducing intra-variance. 
The center loss learned a center for each class and penalized the distances between the deep features and their corresponding class centers. This loss can be defined as follows: $\mathcal{L}_C = \frac{1}{2}\sum_{i=1}^{m}\left\|x_i - c_{y_i}\right\|_2^2$ (3), where $x_i$ denotes the $i$-th deep feature, belonging to the $y_i$-th class, and $c_{y_i}$ denotes the $y_i$-th class center of the deep features. To handle long-tailed data, a range loss , which is a variant of the center loss, is used to minimize the harmonic mean of the k greatest ranges within one class and maximize the shortest inter-class distance within one batch. Wu et al. proposed a center-invariant loss that penalizes the difference between each center of classes. Deng et al. selected the farthest intra-class samples and the nearest inter-class samples to compute a margin loss. However, the center loss and its variants suffer from massive GPU memory consumption on the classification layer, and prefer balanced and sufficient training data for each identity. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_24", "text": " In 2017, people had a deeper understanding of the loss function in deep FR and thought that samples should be separated more strictly to avoid misclassifying the difficult samples. Angular/cosine-margin-based loss (104, 84, 105, 106, 108) is proposed to make learned features potentially separable with a larger angular/cosine distance. The decision boundary in softmax loss is $(W_1 - W_2)x + b_1 - b_2 = 0$, where $x$ is the feature vector and $W_i$ and $b_i$ are the weights and bias in the softmax loss, respectively. Liu et al. reformulated the original softmax loss into a large-margin softmax (L-Softmax) loss. They constrain $b_1 = b_2 = 0$, so the decision boundaries for class 1 and class 2 become $\|x\|\left(\|W_1\|\cos(m\theta_1) - \|W_2\|\cos(\theta_2)\right) = 0$ and $\|x\|\left(\|W_1\|\cos(\theta_1) - \|W_2\|\cos(m\theta_2)\right) = 0$, respectively, where $m$ is a positive integer introducing an angular margin and $\theta_i$ is the angle between $W_i$ and $x$. Due to the non-monotonicity of the cosine function, a piece-wise function is applied in L-Softmax to guarantee monotonicity. 
The loss function is defined as follows: $\mathcal{L}_i = -\log\left(\frac{e^{\|W_{y_i}\|\|x_i\|\varphi(\theta_{y_i})}}{e^{\|W_{y_i}\|\|x_i\|\varphi(\theta_{y_i})} + \sum_{j\neq y_i} e^{\|W_j\|\|x_i\|\cos(\theta_j)}}\right)$ (4), where $\varphi(\theta) = (-1)^k\cos(m\theta) - 2k,\ \theta\in\left(\frac{k\pi}{m}, \frac{(k+1)\pi}{m}\right)$ (5). Considering that L-Softmax is difficult to converge, it is always combined with the softmax loss to facilitate and ensure convergence; the loss is therefore changed into $f_{y_i} = \frac{\lambda\|W_{y_i}\|\|x_i\|\cos(\theta_{y_i}) + \|W_{y_i}\|\|x_i\|\varphi(\theta_{y_i})}{1+\lambda}$, where $\lambda$ is a dynamic hyper-parameter. Based on L-Softmax, the A-Softmax loss further normalized the weight $W$ by its L2 norm ($\|W\| = 1$) such that the normalized vector lies on a hypersphere, and then the discriminative face features can be learned on a hypersphere manifold with an angular margin (Fig. 6). Liu et al. introduced a deep hyperspherical convolution network (SphereNet) that adopts hyperspherical convolution as its basic convolution operator and is supervised by an angular-margin-based loss. To overcome the optimization difficulty of L-Softmax and A-Softmax, which incorporate the angular margin in a multiplicative manner, ArcFace , CosFace and the AMS loss respectively introduced an additive angular/cosine margin, $\cos(\theta + m)$ and $\cos\theta - m$. They are extremely easy to implement without the tricky hyper-parameter $\lambda$, and are clearer and able to converge without the softmax supervision. The decision boundaries under the binary classification case are given in Table V. Based on the large margin, FairLoss and AdaptiveFace further proposed to adjust the margins for different classes adaptively to address the problem of unbalanced data. Compared to Euclidean-distance-based loss, angular/cosine-margin-based loss explicitly adds discriminative constraints on a hypersphere manifold, which intrinsically matches the prior that human faces lie on a manifold. However, Wang et al. showed that angular/cosine-margin-based loss can achieve better results on a clean dataset, but is vulnerable to noise and becomes worse than center loss and softmax in the high-noise region, as shown in Fig. 7. 
", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_25", "text": " In 2017, in addition to reformulating the softmax loss into an angular/cosine-margin-based loss as mentioned above, some works tried to normalize the features and weights in the loss function to improve model performance, which can be written as follows: $\hat{W} = \frac{W}{\|W\|},\ \hat{x} = \alpha\frac{x}{\|x\|}$ (6), where $\alpha$ is a scaling parameter, $x$ is the learned feature vector, and $W$ is the weight of the last fully connected layer. Scaling $x$ to a fixed radius $\alpha$ is important, as Wang et al. proved that normalizing both features and weights to 1 will make the softmax loss become trapped at a very high value on the training set. After that, the loss function, e.g. softmax, can be computed using the normalized features and weights. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_26", "text": " Some papers (84, 108) first normalized the weights only and then added an angular/cosine margin into the loss function to make the learned features discriminative. In contrast, some works, such as (109, 111), adopted feature normalization only to overcome the bias to the sample distribution of the softmax. Based on the observation that the L2-norm of features learned using the softmax loss is informative of the quality of the face, L2-softmax enforced all the features to have the same L2-norm by feature normalization, such that similar attention is given to good-quality frontal faces and blurry faces with extreme pose. Rather than scaling $x$ to the parameter $\alpha$, Hasnat et al. normalized features with $\hat{x} = \frac{x-\mu}{\sqrt{\sigma^{2}}}$, where $\mu$ and $\sigma^{2}$ are the mean and variance. Ring loss encouraged the norm of samples to be a value $R$ (a learned parameter) rather than enforcing it explicitly through a hard normalization operation. Moreover, normalizing both features and weights (110, 112, 115, 105, 106) has become a common strategy. Wang et al. explained the necessity of this normalization operation from both analytic and geometric perspectives. After normalizing features and weights, the CoCo loss optimized the cosine distance among data features, and Hasnat et al. used the von Mises-Fisher (vMF) mixture model as the theoretical basis to develop a novel vMF mixture loss and its corresponding vMF deep features. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_27", "text": " Mainstream architectures. The commonly used network architectures of deep FR have always followed those of deep object classification and evolved from AlexNet to SENet rapidly. We present the most influential architectures of deep object classification and deep face recognition in chronological order (the time we present is when the paper was published) in Fig. 8. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_28", "text": " In 2012, AlexNet was reported to achieve the SOTA recognition accuracy in the ImageNet large-scale visual recognition competition (ILSVRC) 2012, exceeding the previous best results by a large margin. AlexNet consists of five convolutional layers and three fully connected layers, and it also integrates various techniques, such as the rectified linear unit (ReLU), dropout, data augmentation, and so forth. ReLU was widely regarded as the most essential component for making deep learning possible. 
Then, in 2014, VGGNet proposed a standard network architecture that used very small 3×3 convolutional filters throughout and doubled the number of feature maps after the 2×2 pooling. It increased the depth of the network to 16-19 weight layers, which further enhanced the flexibility to learn progressive nonlinear mappings by deep architectures. In 2015, the 22-layer GoogleNet introduced an “inception module” with the concatenation of hybrid feature maps, as well as two additional intermediate softmax supervised signals. It performs several convolutions with different receptive fields (1×1, 3×3 and 5×5) in parallel, and concatenates all feature maps to merge the multi-resolution information. In 2016, ResNet proposed to make layers learn a residual mapping with reference to the layer inputs, \mathcal{F}(x):=\mathcal{H}(x)-x, rather than directly learning a desired underlying mapping \mathcal{H}(x), to ease the training of very deep networks (up to 152 layers). The original mapping is recast into \mathcal{F}(x)+x and can be realized by “shortcut connections”. As the champion of ILSVRC 2017, SENet introduced a “Squeeze-and-Excitation” (SE) block that adaptively recalibrates channel-wise feature responses by explicitly modelling interdependencies between channels. These blocks can be integrated with modern architectures, such as ResNet, and improve their representational power. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_29", "text": " With the evolved architectures and advanced training techniques, such as batch normalization (BN), networks have become deeper and training has become more controllable. Following these architectures in object classification, the networks in deep FR have also developed step by step, and the performance of deep FR is continually improving. We present these mainstream architectures of deep FR in Fig. 9. In 2014, DeepFace was the first to use a nine-layer CNN with several locally connected layers. With 3D alignment for face processing, it reaches an accuracy of 97.35% on LFW. In 2015, FaceNet used a large private dataset to train a GoogleNet. It adopted a triplet loss function based on triplets of roughly aligned matching/nonmatching face patches generated by a novel online triplet mining method and achieved an accuracy of 99.63%. In the same year, VGGface designed a procedure to collect a large-scale dataset from the Internet. It trained the VGGNet on this dataset and then fine-tuned the networks via a triplet loss function similar to FaceNet, obtaining an accuracy of 98.95%. In 2017, SphereFace used a 64-layer ResNet architecture and proposed the angular softmax (A-Softmax) loss to learn discriminative face features with an angular margin, boosting the accuracy on LFW to 99.42%. At the end of 2017, a new large-scale face dataset, namely VGGface2, was introduced, which contains large variations in pose, age, illumination, ethnicity and profession. Cao et al. first trained a SENet on the MS-celeb-1M dataset and then fine-tuned the model on VGGface2, achieving the SOTA performance on IJB-A and IJB-B. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_30", "text": " Light-weight networks. Using deeper neural networks with hundreds of layers and millions of parameters to achieve higher accuracy comes at a cost. 
Powerful GPUs with larger memory size are needed, which makes the applications on many mobiles and embedded devices impractical. To address this problem, light-weight networks are proposed. Light CNN (85, 86) proposed a max-feature-map (MFM) activation function that introduces the concept of maxout in the fully connected layer to CNN. The MFM obtains a compact representation and reduces the computational cost. Sun et al. proposed to sparsify deep networks iteratively from the previously learned denser models based on a weight selection criterion. MobiFace adopted fast downsampling and bottleneck residual block with the expansion layers and achieved high performance with 99.7% on LFW database. Although some other light-weight CNNs, such as SqueezeNet, MobileNet, ShuffleNet and Xception (126, 127, 128, 129), are still not widely used in FR, they deserve more attention. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_31", "text": " Adaptive-architecture networks. Considering that designing architectures manually by human experts are time-consuming and error-prone processes, there is growing interest in adaptive-architecture networks which can find well-performing architectures, e.g. the type of operation every layer executes (pooling, convolution, etc) and hyper-parameters associated with the operation (number of filters, kernel size and strides for a convolutional layer, etc), according to the specific requirements of training and testing data. Currently, neural architecture search (NAS) is one of the promising methodologies, which has outperformed manually designed architectures on some tasks such as image classification or semantic segmentation . Zhu et al. integrated NAS technology into face recognition. They used reinforcement learning algorithm (policy gradient) to guide the controller network to train the optimal child architecture. Besides NAS, there are some other explorations to learn optimal architectures adaptively. For example, conditional convolutional neural network (c-CNN) dynamically activated sets of kernels according to modalities of samples; Han et al. proposed a novel contrastive convolution consisted of a trunk CNN and a kernel generator, which is beneficial owing to its dynamistic generation of contrastive kernels based on the pair of faces being compared. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_32", "text": " Joint alignment-recognition networks. Recently, an end-to-end system (91, 92, 93, 94) was proposed to jointly train FR with several modules (face detection, alignment, and so forth) together. Compared to the existing methods in which each module is generally optimized separately according to different objectives, this end-to-end system optimizes each module according to the recognition objective, leading to more adequate and robust inputs for the recognition model. For example, inspired by spatial transformer , Hayat et al. proposed a CNN-based data-driven approach that learns to simultaneously register and represent faces (Fig. 10), while Wu et al. designed a novel recursive spatial transformer (ReST) module for CNN allowing face alignment and recognition to be jointly optimized. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_33", "text": " Multi-input networks. In “one-to-many augmentation”, multiple images with variety are generated from one image in order to augment training data. 
Taking these multiple images as input, multiple networks are also assembled together to extract and combine features of different types of inputs, which can outperform an individual network. In (58, 59, 60, 99, 34, 21, 35), assembled networks are built after different face patches are cropped, and then different types of patches are fed into different sub-networks for representation extraction. By combining the results of the sub-networks, the performance can be improved. Other papers (96, 95, 98) used assembled networks to recognize images with different poses. For example, Masi et al. adjusted the pose to frontal (0°), half-profile (40°) and full-profile (75°) views and then addressed pose variation with assembled pose networks. A multi-view deep network (MvDN) consists of view-specific subnetworks and common subnetworks; the former removes view-specific variations, and the latter obtains common representations. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_34", "text": " Multi-task networks. FR is intertwined with various factors, such as pose, illumination, and age. To solve this problem, multitask learning is introduced to transfer knowledge from other relevant tasks and to disentangle nuisance factors. In multi-task networks, identity classification is the main task and the side tasks are pose, illumination, and expression estimations, among others. The lower layers are shared among all the tasks, and the higher layers are disentangled into different sub-networks to generate the task-specific outputs. In , the task-specific sub-networks are branched out to learn face detection, face alignment, pose estimation, gender recognition, smile detection, age estimation and FR. Yin et al. proposed to automatically assign dynamic loss weights to each side task. Peng et al. used feature reconstruction metric learning to disentangle a CNN into sub-networks for jointly learning the identity and non-identity features, as shown in Fig. 11. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_35", "text": " During testing, the cosine distance and L2 distance are generally employed to measure the similarity between the deep features x_1 and x_2; then, threshold comparison and the nearest neighbor (NN) classifier are used to make decisions for verification and identification. In addition to these common methods, there are some other explorations. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_36", "text": " Metric learning, which aims to find a new metric to make two classes more separable, can also be used for face matching based on extracted deep features. The JB model is a well-known metric learning method (35, 21, 36, 34, 120), and Hu et al. proved that it can improve the performance greatly. In the JB model, a face feature x is modeled as x=\mu+\varepsilon, where \mu and \varepsilon are identity and intra-personal variations, respectively. 
The similarity score r(x_1,x_2) can be represented as follows: r(x_1,x_2)=\log\frac{P(x_1,x_2|H_I)}{P(x_1,x_2|H_E)} (7) where P(x_1,x_2|H_I) is the probability that the two faces belong to the same identity and P(x_1,x_2|H_E) is the probability that the two faces belong to different identities. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_37", "text": " After the cosine distance was computed, Cheng et al. proposed a heuristic voting strategy at the similarity score level to combine the results of multiple CNN models and won first place in Challenge 2 of MS-celeb-1M 2017. Yang et al. extracted local adaptive convolution features from the local regions of the face image and used the extended SRC for FR with a single sample per person. Guo et al. combined deep features and the SVM classifier to perform recognition. Wang et al. first used product quantization (PQ) to directly retrieve the top-k most similar faces and re-ranked these faces by combining similarities from deep features and the COTS matcher. In addition, softmax can also be used in face matching when the identities of the training set and test set overlap. For example, in Challenge 2 of MS-celeb-1M, Ding et al. trained a 21,000-class softmax classifier to directly recognize faces of one-shot classes and normal classes after augmenting features with a conditional GAN; Guo et al. trained the softmax classifier combined with an underrepresented-classes promotion (UP) loss term to enhance the performance on one-shot classes. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_38", "text": " When the distributions of training data and testing data are the same, the face matching methods mentioned above are effective. However, there is always a distribution change or domain shift between the two data domains that can degrade the performance on test data. Transfer learning (144, 145) has recently been introduced into deep FR to address the problem of domain shift. It learns transferable features using a labeled source domain (training data) and an unlabeled target domain (testing data) such that domain discrepancy is reduced and models trained on the source domain also perform well on the target domain. Sometimes, this technology is applied to face matching. For example, Crosswhite et al. and Xiong et al. adopted template adaptation to the set of media in a template by combining CNN features with template-specific linear SVMs. But most of the time, it is not enough to do transfer learning only at the face matching stage; transfer learning should be embedded in deep models to learn more transferable representations. Kan et al. proposed a bi-shifting autoencoder network (BAE) for domain adaptation across view angle, ethnicity, and imaging sensor, while Luo et al. utilized the multi-kernel maximum mean discrepancy (MMD) to reduce domain discrepancies. Sohn et al. used adversarial learning to transfer knowledge from still-image FR to video FR. Moreover, fine-tuning the CNN parameters from a prelearned model using a target training dataset is a particular type of transfer learning, and is commonly employed by numerous methods (151, 152, 103). 
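As a concrete illustration of the matching step described above (cosine distance plus threshold comparison for verification, and a nearest-neighbor search over the gallery for identification), the sketch below compares L2-normalized deep features. The feature dimension, the threshold value of 0.35, and the gallery layout are illustrative assumptions rather than settings taken from any particular cited method.

import numpy as np

def cosine_similarity(a, b):
    # Both inputs are 1-D deep feature vectors; after normalization the
    # dot product equals the cosine of the angle between them.
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return float(a @ b)

def verify(feat1, feat2, threshold=0.35):
    # Face verification: accept the pair as the same identity when the
    # cosine similarity exceeds a tuned threshold (0.35 is a placeholder).
    return cosine_similarity(feat1, feat2) >= threshold

def identify(probe, gallery_feats, gallery_ids):
    # Closed-set identification: nearest-neighbor search over the gallery
    # using cosine similarity; returns the best-matching identity.
    sims = [cosine_similarity(probe, g) for g in gallery_feats]
    return gallery_ids[int(np.argmax(sims))]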
", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_39", "text": " We present the development of face processing methods in chronological order in Fig. 12. As we can see from the figure, most papers attempted to perform face processing by autoencoder model in 2014 and 2015; while 3D model played an important role in 2016. GAN has drawn substantial attention from the deep learning and computer vision community since it was first proposed by Goodfellow et al. It can be used in different fields and was also introduced into face processing in 2017. GAN can be used to perform “one-to-many augmentation” and “many-to-one normalization”, and it broke the limit that face synthesis should be done under supervised way. Although GAN has not been widely used in face processing for training and recognition, it has great latent capacity for preprocessing, for example, Dual-Agent GANs (DA-GAN) won the 1st places on verification and identification tracks in the NIST IJB-A 2017 FR competitions. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_40", "text": " Collecting a large database is extremely expensive and time consuming. The methods of “one-to-many augmentation” can mitigate the challenges of data collection, and they can be used to augment not only training data but also the gallery of test data. we categorized them into four classes: data augmentation, 3D model, autoencoder model and GAN model. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_41", "text": " Data augmentation. Common data augmentation methods consist of photometric transformations (75, 22) and geometric transformations, such as oversampling (multiple patches obtained by cropping at different scales) , mirroring , and rotating the images. Recently, data augmentation has been widely used in deep FR algorithms (58, 59, 60, 35, 21, 36, 61, 62). for example, Sun et al. cropped 400 face patches varying in positions, scales, and color channels and mirrored the images. Liu et al. generated seven overlapped image patches centered at different landmarks on the face region and trained them with seven CNNs with the same structure. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_42", "text": " 3D model. 3D face reconstruction is also a way to enrich the diversity of training data. They utilize 3D structure information to model the transformation between poses. 3D models first use 3D face data to obtain morphable displacement fields and then apply them to obtain 2D face data in different pose angles. There is a large number of papers about this domain, but we only focus on the 3D face reconstruction using deep methods or used for deep FR. In , Masi et al. generated face images with new intra-class facial appearance variations, including pose, shape and expression, and then trained a 19-layer VGGNet with both real and augmented data. Masi et al. used generic 3D faces and rendered fixed views to reduce much of the computational effort. Richardson et al. employed an iterative 3D CNN by using a secondary input channel to represent the previous network’s output as an image for reconstructing a 3D face as shown in Fig. 13. Dou et al. used a multi-task CNN to divide 3D face reconstruction into neutral 3D reconstruction and expressive 3D reconstruction. Tran et al. directly regressed 3D morphable face model (3DMM) parameters from an input photo by a very deep CNN architecture. An et al. 
synthesized face images with various poses and expressions using the 3DMM method, then reduced the gap between synthesized data and real data with the help of MMD. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_43", "text": " Autoencoder model. Rather than reconstructing 3D models from a 2D image and projecting it back into 2D images of different poses, autoencoder models can generate 2D target images directly. Taken a face image and a pose code encoding a target pose as input, an encoder first learns pose-invariant face representation, and then a decoder generates a face image with the same identity viewed at the target pose by using the pose-invariant representation and the pose code. For example, given the target pose codes, multi-view perceptron (MVP) trained some deterministic hidden neurons to learn pose-invariant face representations, and simultaneously trained some random hidden neurons to capture pose features, then a decoder generated the target images by combining pose-invariant representations with pose features. As shown in Fig. 14, Yim et al. and Qian et al. introduced an auxiliary CNN to generate better images viewed at the target poses. First, an autoencoder generated the desired pose image, then the auxiliary CNN reconstructed the original input image back from the generated target image, which guarantees that the generated image is identity-preserving. In , two groups of units are embedded between encoder and decoder. The identity units remain unchanged and the rotation of images is achieved by taking actions to pose units at each time step. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_44", "text": " GAN model. In GAN models, a generator aims to fool a discriminator through generating images that resemble the real images, while the discriminator aims to discriminate the generated samples from the real ones. By this minimax game between generator and discriminator, GAN can successfully generate photo-realistic images with different poses. After using a 3D model to generate profile face images, DA-GAN refined the images by a GAN, which combines prior knowledge of the data distribution and knowledge of faces (pose and identity perception loss). CVAE-GAN combined a variational auto-encoder with a GAN for augmenting data, and took advantages of both statistic and pairwise feature matching to make the training process converge faster and more stably. In addition to synthesizing diverse faces from noise, some papers also explore to disentangle the identity and variation, and synthesize new faces by exchanging identity and variation from different people. In CG-GAN , a generator directly resolves each representation of input image into a variation code and an identity code and regroups these codes for cross-generating, simultaneously, a discriminator ensures the reality of generated images. Bao et al. extracted identity representation of one input image and attribute representation of any other input face image, then synthesized new faces by recombining these representations. This work shows superior performance in generating realistic and identity preserving face images, even for identities outside the training dataset. Unlike previous methods that treat classifier as a spectator, FaceID-GAN proposed a three-player GAN where the classifier cooperates together with the discriminator to compete with the generator from two different aspects, i.e. facial identity and image quality respectively. 
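The "minimax game" between generator and discriminator mentioned above can be written compactly. The LaTeX snippet below states the standard GAN objective together with a generic identity-preserving term for face synthesis conditioned on an input face x_{id} and an attribute/pose code c; the weighting \lambda_{id}, the fixed feature extractor \phi, and the conditioning variables are illustrative placeholders, not the exact formulations of the cited methods.

\min_{G}\max_{D}\ \mathbb{E}_{x\sim p_{\mathrm{data}}}\big[\log D(x)\big]
+\mathbb{E}_{x_{id},\,c}\big[\log\big(1-D(G(x_{id},c))\big)\big]
+\lambda_{id}\,\mathbb{E}_{x_{id},\,c}\big[\big\|\phi(G(x_{id},c))-\phi(x_{id})\big\|_{2}^{2}\big]

Setting \lambda_{id}=0 recovers the plain GAN objective; the cited face-synthesis methods differ mainly in how the identity and attribute codes are obtained and recombined.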
", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_45", "text": " In contrast to “one-to-many augmentation”, the methods of “many-to-one normalization” produce frontal faces and reduce appearance variability of test data to make faces align and compare easily. It can be categorized as autoencoder model, CNN model and GAN model. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_46", "text": " Autoencoder model. Autoencoder can also be applied to “many-to-one normalization”. Different from the autoencoder model in “one-to-many augmentation” which generates the desired pose images with the help of pose codes, autoencoder model here learns pose-invariant face representation by an encoder and directly normalizes faces by a decoder without pose codes. Zhu et al. (66, 67) selected canonical-view images according to the face images’ symmetry and sharpness and then adopted an autoencoder to recover the frontal view images by minimizing the reconstruction loss error. The proposed stacked progressive autoencoders (SPAE) progressively map the nonfrontal face to the frontal face through a stack of several autoencoders. Each shallow autoencoders of SPAE is designed to convert the input face images at large poses to a virtual view at a smaller pose, so the pose variations are narrowed down gradually layer by layer along the pose manifold. Zhang et al. built a sparse many-to-one encoder to enhance the discriminant of the pose free feature by using multiple random faces as the target values for multiple encoders. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_47", "text": " CNN model. CNN models usually directly learn the 2D mappings between non-frontal face images and frontal images, and utilize these mapping to normalize images in pixel space. The pixels in normalized images are either directly the pixels or the combinations of the pixels in non-frontal images. In LDF-Net , the displacement field network learns the shifting relationship of two pixels, and the translation layer transforms the input non-frontal face image into a frontal one with this displacement field. In GridFace shown in Fig. 15, first, the rectification network normalizes the images by warping pixels from the original image to the canonical one according to the computed homography matrix, then the normalized output is regularized by an implicit canonical view face prior, finally, with the normalized faces as input, the recognition network learns discriminative face representation via metric learning. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_48", "text": " GAN model. Huang et al. proposed a two-pathway generative adversarial network (TP-GAN) that contains four landmark-located patch networks and a global encoder-decoder network. Through combining adversarial loss, symmetry loss and identity-preserving loss, TP-GAN generates a frontal view and simultaneously preserves global structures and local details as shown in Fig. 16. In a disentangled representation learning generative adversarial network (DR-GAN) , the generator serves as a face rotator, in which an encoder produces an identity representation, and a decoder synthesizes a face at the specified pose using this representation and a pose code. And the discriminator is trained to not only distinguish real vs. synthetic images, but also predict the identity and pose of a face. Yin et al. incorporated 3DMM into the GAN structure to provide shape and appearance priors to guide the generator to frontalization. 
", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_49", "text": " In the past three decades, many face databases have been constructed with a clear tendency from small-scale to large-scale, from single-source to diverse-sources, and from lab-controlled to real-world unconstrained condition, as shown in Fig. 17. As the performance of some simple databases become saturated, e.g. LFW , more and more complex databases were continually developed to facilitate the FR research. It can be said without exaggeration that the development process of the face databases largely leads the direction of FR research. In this section, we review the development of major training and testing academic databases for the deep FR. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_50", "text": " The prerequisite of effective deep FR is a sufficiently large training dataset. Zhou et al. suggested that large amounts of data with deep learning improve the performance of FR. The results of Megaface Challenge also revealed that premier deep FR methods were typically trained on data larger than 0.5M images and 20K people. The early works of deep FR were usually trained on private training datasets. Facebook’s Deepface model was trained on 4M images of 4K people; Google’s FaceNet was trained on 200M images of 3M people; DeepID serial models (34, 35, 21, 36) were trained on 0.2M images of 10K people. Although they reported ground-breaking performance at this stage, researchers cannot accurately reproduce or compare their models without public training datasets. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_51", "text": " To address this issue, CASIA-Webface provided the first widely-used public training dataset for the deep model training purpose, which consists of 0.5M images of 10K celebrities collected from the web. Given its moderate size and easy usage, it has become a great resource for fair comparisons for academic deep models. However, its relatively small data and ID size may not be sufficient to reflect the power of many advanced deep learning methods. Currently, there have been more databases providing public available large-scale training dataset (Table VI), especially three databases with over 1M images, namely MS-Celeb-1M , VGGface2 , and Megaface (44, 164), and we summary some interesting findings about these training sets, as shown in Fig. 18. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_52", "text": " Depth v.s. breadth. These large training sets are expanded from depth or breadth. VGGface2 provides a large-scale training dataset of depth, which have limited number of subjects but many images for each subjects. The depth of dataset enforces the trained model to address a wide range intra-class variations, such as lighting, age, and pose. In contrast, MS-Celeb-1M and Mageface (Challenge 2) offers large-scale training datasets of breadth, which contains many subject but limited images for each subjects. The breadth of dataset ensures the trained model to cover the sufficiently variable appearance of various people. Cao et al. conducted a systematic studies on model training using VGGface2 and MS-Celeb-1M, and found an optimal model by first training on MS-Celeb-1M (breadth) and then fine-tuning on VGGface2 (depth). ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_53", "text": " Long tail distribution. The utilization of long tail distribution is different among datasets. 
For example, in Challenge 2 of MS-Celeb-1M, the novel set specially uses the tailed data to study low-shot learning; central part of the long tail distribution is used by the Challenge 1 of MS-Celeb-1M and images’ number is approximately limited to 100 for each celebrity; VGGface and VGGface2 only use the head part to construct deep databases; Megaface utilizes the whole distribution to contain as many images as possible, the minimal number of images is 3 per person and the maximum is 2469. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_54", "text": " Data engineering. Several popular benchmarks, such as LFW unrestricted protocol, Megaface Challenge 1, MS-Celeb-1M Challenge 1&2, explicitly encourage researchers to collect and clean a large-scale data set for enhancing the capability of deep neural network. Although data engineering is a valuable problem to computer vision researchers, this protocol is more incline to the industry participants. As evidence, the leaderboards of these experiments are mostly occupied by the companies holding invincible hardwares and data scales. This phenomenon may not be beneficial for developments of new models in academic community. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_55", "text": " Data noise. Owing to data source and collecting strategies, existing large-scale datasets invariably contain label noises. Wang et al. profiled the noise distribution in existing datasets in Fig. 19 and showed that the noise percentage increases dramatically along the scale of data. Moreover, they found that noise is more lethal on a 10,000-class problem of FR than on a 10-class problem of object classification and that label flip noise severely deteriorates the performance of a model, especially the model using A-softmax . Therefore, building a sufficiently large and clean dataset for academic research is very meaningful. Deng et al. found there are serious label noise in MS-Celeb-1M , and they cleaned the noise of MS-Celeb-1M, and made the refined dataset public available. Microsoft and Deepglint jointly released the largest public data set with cleaned labels, which includes 4M images cleaned from MS-Celeb-1M dataset and 2.8M aligned images of 100K Asian celebrities. Moreover, Zhan et al. shifted the focus from cleaning the datasets to leveraging more unlabeled data. Through automatically assigning pseudo labels to unlabeled data with the help of relational graphs, they obtained competitive or even better results over the fully-supervised counterpart. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_56", "text": " Data bias. Large-scale training datasets, such as CASIA-WebFace , VGGFace2 and MS-Celeb-1M , are typically constructed by scraping websites like Google Images, and consist of celebrities on formal occasions: smiling, make-up, young, and beautiful. They are largely different from databases captured in the daily life (e.g. Megaface). The biases can be attributed to many exogenous factors in data collection, such as cameras, lightings, preferences over certain types of backgrounds, or annotator tendencies. Dataset biases adversely affect cross-dataset generalization; that is, the performance of the model trained on one dataset drops significantly when applied to another one. One persuasive evidence is presented by P.J. Phillips’ study which conducted a cross benchmark assessment of VGGFace model for face recognition. 
The VGGFace model achieves 98.95% on LFW and 97.30% on YTF , but only obtains 26%, 52% and 85% on Ugly, Bad and Good partition of GBU database . ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_57", "text": " Demographic bias (e.g., race/ethnicity, gender, age) in datasets is a universal but urgent issue to be solved in data bias field. In existing training and testing datasets, the male, White, and middle-aged cohorts always appear more frequently, as shown in Table VII, which inevitably causes deep learning models to replicate and even amplify these biases resulting in significantly different accuracies when deep models are applied to different demographic groups. Some researches (145, 171, 172) showed that the female, Black, and younger cohorts are usually more difficult to recognize in FR systems trained with commonly-used datasets. For example, Wang et al. proposed a Racial Faces in-the-Wild (RFW) database and proved that existing commercial APIs and the SOTA algorithms indeed work unequally for different races and the maximum difference in error rate between the best and worst groups is 12%, as shown in Table VIII. Hupont et al. showed that SphereFace has a TAR of 0.87 for White males which drops to 0.28 for Asian females, at a FAR of 1​e−41𝑒41e-4. Such bias can result in mistreatment of certain demographic groups, by either exposing them to a higher risk of fraud, or by making access to services more difficult. Therefore, addressing data bias and enhancing fairness of FR systems in real life are urgent and necessary tasks. Collecting balanced data to train a fair model or designing some debiasing algorithms are effective way. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_58", "text": " In terms of training protocol, FR can be categorized into subject-dependent and subject-independent settings, as illustrated in Fig. 20. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_59", "text": " Subject-dependent protocol. For subject-dependent protocol, all testing identities are predefined in training set, it is natural to classify testing face images to the given identities. Therefore, subject-dependent FR can be well addressed as a classification problem, where features are expected to be separable. The protocol is mostly adopted by the early-stage (before 2010) FR studies on FERET , AR , and is suitable only for some small-scale applications. The Challenge 2 of MS-Celeb-1M is the only large-scale database using subject-dependent training protocol. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_60", "text": " Subject-independent protocol. For subject-independent protocol, the testing identities are usually disjoint from the training set, which makes FR more challenging yet close to practice. Because it is impossible to classify faces to known identities in training set, generalized representation is essential. Due to the fact that human faces exhibit similar intra-subject variations, deep models can display transcendental generalization ability when training with a sufficiently large set of generic subjects, where the key is to learn discriminative large-margin deep features. This generalization ability makes subject-independent FR possible. Almost all major face-recognition benchmarks, such as LFW , PaSC , IJB-A/B/C (41, 42, 43) and Megaface (44, 164), require the tested models to be trained under subject-independent protocol. 
", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_61", "text": " In order to evaluate whether our deep models can solve the different problems of FR in real life, many testing datasets are designed to evaluate the models in different tasks, i.e. face verification, close-set face identification and open-set face identification. In either task, a set of known subjects is initially enrolled in the system (the gallery), and during testing, a new subject (the probe) is presented. Face verification computes one-to-one similarity between the gallery and probe to determine whether the two images are of the same subject, whereas face identification computes one-to-many similarity to determine the specific identity of a probe face. When the probe appears in the gallery identities, this is referred to as closed-set identification; when the probes include those who are not in the gallery, this is open-set identification. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_62", "text": " Face verification is relevant to access control systems, re-identification, and application independent evaluations of FR algorithms. It is classically measured using the receiver operating characteristic (ROC) and estimated mean accuracy (Acc). At a given threshold (the independent variable), ROC analysis measures the true accept rate (TAR), which is the fraction of genuine comparisons that correctly exceed the threshold, and the false accept rate (FAR), which is the fraction of impostor comparisons that incorrectly exceed the threshold. And Acc is a simplified metric introduced by LFW , which represents the percentage of correct classifications. With the development of deep FR, more accurate recognitions are required. Customers concern more about the TAR when FAR is kept in a very low rate in most security certification scenario. PaSC reports TAR at a FAR of 10−2superscript10210^{-2}; IJB-A evaluates TAR at a FAR of 10−3superscript10310^{-3}; Megaface (44, 164) focuses on TAR@10−6superscript10610^{-6}FAR; especially, in MS-celeb-1M challenge 3 , TAR@10−9superscript10910^{-9}FAR is reported. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_63", "text": " Close-set face identification is relevant to user driven searches (e.g., forensic identification), rank-N and cumulative match characteristic (CMC) is commonly used metrics in this scenario. Rank-N is based on what percentage of probe searches return the probe’s gallery mate within the top k𝑘k rank-ordered results. The CMC curve reports the percentage of probes identified within a given rank (the independent variable). IJB-A/B/C (41, 42, 43) concern on the rank-1 and rank-5 recognition rate. The MegaFace challenge (44, 164) systematically evaluates rank-1 recognition rate function of increasing number of gallery distractors (going from 10 to 1 Million), the results of the SOTA evaluated on MegaFace challenge are listed in Table IX. Rather than rank-N and CMC, MS-Celeb-1M further applies a precision-coverage curve to measure identification performance under a variable threshold t𝑡t. The probe is rejected when its confidence score is lower than t𝑡t. The algorithms are compared in term of what fraction of passed probes, i.e. coverage, with a high recognition precision, e.g. 95% or 99%, the results of the SOTA evaluated on MS-Celeb-1M challenge are listed in Table X. 
", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_64", "text": " Open-set face identification is relevant to high throughput face search systems (e.g., de-duplication, watch list identification), where the recognition system should reject unknown/unseen subjects (probes who do not present in gallery) at test time. At present, there are very few databases covering the task of open-set FR. IJB-A/B/C (41, 42, 43) benchmarks introduce a decision error tradeoff (DET) curve to characterize the the false negative identification rate (FNIR) as function of the false positive identification rate (FPIR). FPIR measures what fraction of comparisons between probe templates and non-mate gallery templates result in a match score exceeding T𝑇T. At the same time, FNIR measures what fraction of probe searches will fail to match a mated gallery template above a score of T𝑇T. The algorithms are compared in term of the FNIR at a low FPIR, e.g. 1% or 10%, the results of the SOTA evaluated on IJB-A dataset as listed in Table XI. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_65", "text": " Public available training databases are mostly collected from the photos of celebrities due to privacy issue, it is far from images captured in the daily life with diverse scenes. In order to study different specific scenarios, more difficult and realistic datasets are constructed accordingly, as shown in Table XII. According to their characteristics, we divide these scenes into four categories: cross-factor FR, heterogenous FR, multiple (or single) media FR and FR in industry (Fig. 21). ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_66", "text": " • Cross-factor FR. Due to the complex nonlinear facial appearance, some variations will be caused by people themselves, such as cross-pose, cross-age, make-up, and disguise. For example, CALFW , MORPH , CACD and FG-NET are commonly used datasets with different age range; CFP only focuses on frontal and profile face, CPLFW is extended from LFW and contains different poses. Disguised faces in the wild (DFW) evaluates face recognition across disguise. • Heterogenous FR. It refers to the problem of matching faces across different visual domains. The domain gap is mainly caused by sensory devices and cameras settings, e.g. visual light vs. near-infrared and photo vs. sketch. For example, CUFSF and CUFS are commonly used photo-sketch datasets and CUFSF dataset is harder due to lighting variation and shape exaggeration. • Multiple (or single) media FR. Ideally, in FR, many images of each subject are provided in training datasets and image-to-image recognitions are performed when testing. But the situation will be different in reality. Sometimes, the number of images per person in training set could be very small, such as MS-Celeb-1M challenge 2 . This challenge is often called low- shot or few-shot FR. Moreover, each subject face in test set may be enrolled with a set of images and videos and set-to-set recognition should be performed, such as IJB-A and PaSC . • FR in industry. Although deep FR has achieved beyond human performance on some standard benchmarks, but some other factors should be given more attention rather than accuracy when deep FR is adopted in industry, e.g. anti-attack (CASIA-FASD ) and 3D FR (Bosphorus , BU-3DFE and FRGCv2 ). 
Compared to publicly available 2D face databases, 3D scans are hard to acquire, and the number of scans and subjects in public 3D face databases is still limited, which hinders the development of 3D deep FR. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_67", "text": " Despite the high accuracy in the LFW and Megaface (44, 164) benchmarks, the performance of FR models still hardly meets the requirements in real-world application. A conjecture in industry is made that results of generic deep models can be improved simply by collecting big datasets of the target scene. However, this holds only to a certain degree. More and more concerns on privacy may make the collection and human-annotation of face data become illegal in the future. Therefore, significant efforts have been paid to design excellent algorithms to address the specific problems with limited data in these realistic scenes. In this section, we present several special algorithms of FR. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_68", "text": " As shows that many existing algorithms suffer a decrease of over 10% from frontal-frontal to frontal-profile verification, cross-pose FR is still an extremely challenging scene. In addition to the aforementioned methods, including “one-to-many augmentation”, “many-to-one normalization” and assembled networks (Section 4 and 3.2.2), there are some other algorithms designed for cross-pose FR. Considering the extra burden of above methods, Cao et al. attempted to perform frontalization in the deep feature space rather than the image space. A deep residual equivariant mapping (DREAM) block dynamically added residuals to an input representation to transform a profile face to a frontal image. Chen et al. proposed to combine feature extraction with multi-view subspace learning to simultaneously make features be more pose-robust and discriminative. Pose Invariant Model (PIM) jointly performed face frontalization and learned pose invariant representations end-to-end to allow them to mutually boost each other, and further introduced unsupervised cross-domain adversarial training and a learning to learn strategy to provide high-fidelity frontal reference face images. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_69", "text": " Cross-age FR is extremely challenging due to the changes in facial appearance by the aging process over time. One direct approach is to synthesize the desired image with target age such that the recognition can be performed in the same age group. A generative probabilistic model was used by to model the facial aging process at each short-term stage. The identity-preserved conditional generative adversarial networks (IPCGANs) framework utilized a conditional-GAN to generate a face in which an identity-preserved module preserved the identity information and an age classifier forced the generated face with the target age. Antipov et al. proposed to age faces by GAN, but the synthetic faces cannot be directly used for face verification due to its imperfect preservation of identities. Then, they used a local manifold adaptation (LMA) approach to solve the problem of . In , high-level age-specific features conveyed by the synthesized face are estimated by a pyramidal adversarial discriminator at multiple scales to generate more lifelike facial details. An alternative to address the cross-age problem is to decompose aging and identity components separately and extract age-invariant representations. Wen et al. 
developed a latent identity analysis (LIA) layer to separate these two components, as shown in Fig. 22. In , age-invariant features were obtained by subtracting age-specific factors from the representations with the help of the age estimation task. In , face features are decomposed in the spherical coordinate system, in which the identity-related components are represented with angular coordinates and the age-related information is encoded with radial coordinate. Additionally, there are other methods designed for cross-age FR. For example, Bianco ett al. and El et al. fine-tuned the CNN to transfer knowledge across age. Wang et al. proposed a siamese deep network to perform multi-task learning of FR and age estimation. Li et al. integrated feature extraction and metric learning via a deep CNN. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_70", "text": " Makeup is widely used by the public today, but it also brings challenges for FR due to significant facial appearance changes. The research on matching makeup and nonmakeup face images is receiving increasing attention. Li et al. generated nonmakeup images from makeup ones by a bi-level adversarial network (BLAN) and then used the synthesized nonmakeup images for verification as shown in Fig. 23. Sun et al. pretrained a triplet network on videos and fine-tuned it on a small makeup datasets. Specially, facial disguise (214, 228, 229) is a challenging research topic in makeup face recognition. By using disguise accessories such as wigs, beard, hats, mustache, and heavy makeup, disguise introduces two variations: (i) when a person wants to obfuscate his/her own identity, and (ii) another individual impersonates someone else’s identity. Obfuscation increases intra-class variations whereas impersonation reduces the inter-class dissimilarity, thereby affecting face recognition/verification task. To address this issue, a variety of methods are proposed. Zhang et al. first trained two DCNNs for generic face recognition and then used Principal Components Analysis (PCA) to find the transformation matrix for disguised face recognition adaptation. Kohli et al. finetuned models using disguised faces. Smirnov et al. proposed a hard example mining method benefitted from class-wise (Doppelganger Mining ) and example-wise mining to learn useful deep embeddings for disguised face recognition. Suri et al. learned the representations of images in terms of colors, shapes, and textures (COST) using an unsupervised dictionary learning method, and utilized the combination of COST features and CNN features to perform recognition. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_71", "text": " Due to the excellent performance of the near-infrared spectrum (NIS) images under low-light scenarios, NIS images are widely applied in surveillance systems. Because most enrolled databases consist of visible light (VIS) spectrum images, how to recognize a NIR face from a gallery of VIS images has been a hot topic. Saxena et al. and Liu et al. transferred the VIS deep networks to the NIR domain by fine-tuning. Lezama et al. used a VIS CNN to recognize NIR faces by transforming NIR images to VIS faces through cross-spectral hallucination and restoring a low-rank structure for features through low-rank embedding. Reale et al. trained a VISNet (for visible images) and a NIRNet (for near-infrared images), and coupled their output features by creating a siamese network. He et al. 
(238, 239) divided the high layer of the network into a NIR layer, a VIS layer and a NIR-VIS shared layer, then, a modality-invariant feature can be learned by the NIR-VIS shared layer. Song et al. embedded cross-spectral face hallucination and discriminative feature learning into an end-to-end adversarial network. In , the low-rank relevance and cross-modal ranking were used to alleviate the semantic gap. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_72", "text": " Although deep networks are robust to low resolution to a great extent, there are still a few studies focused on promoting the performance of low-resolution FR. For example, Zangeneh et al. proposed a CNN with a two-branch architecture (a super-resolution network and a feature extraction network) to map the high- and low-resolution face images into a common space where the intra-person distance is smaller than the inter-person distance. Shen et al. exploited the face semantic information and local structural constraints to better restore the shape and detail of face images. In addition, they optimized the network with perceptual and adversarial losses to produce photo-realistic results. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_73", "text": " The photo-sketch FR may help law enforcement to quickly identify suspects. The commonly used methods can be categorized as two classes. One is to utilize transfer learning to directly match photos to sketches. Deep networks are first trained using a large face database of photos and are then fine-tuned using small sketch database (243, 244). The other is to use the image-to-image translation, where the photo can be transformed to a sketch or the sketch to a photo; then, FR can be performed in one domain. Zhang et al. developed a fully convolutional network with generative loss and a discriminative regularizer to transform photos to sketches. Zhang et al. utilized a branched fully convolutional neural network (BFCN) to generate a structure-preserved sketch and a texture-preserved sketch, and then they fused them together via a probabilistic method. Recently, GANs have achieved impressive results in image generation. Yi et al. , Kim et al. and Zhu et al. used two generators, GAsubscript𝐺𝐴G_{A} and GBsubscript𝐺𝐵G_{B}, to generate sketches from photos and photos from sketches, respectively (Fig. 24). Based on , Wang et al. proposed a multi-adversarial network to avoid artifacts by leveraging the implicit presence of feature maps of different resolutions in the generator subnetwork. Similar to photo-sketch FR, photo-caricature FR is one kind of heterogenous FR scenes which is challenging and important to understanding of face perception. Huo et al. built a large dataset of caricatures and photos, and provided several evaluation protocols and their baseline performances for comparison. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_74", "text": " For many practical applications, such as surveillance and security, the FR system should recognize persons with a very limited number of training samples or even with only one sample. The methods of low-shot learning can be categorized as 1) synthesizing training data and 2) learning more powerful features. Hong et al. generated images in various poses using a 3D face model and adopted deep domain adaptation to handle other variations, such as blur, occlusion, and expression (Fig. 25). Choe et al. 
used data augmentation methods and a GAN for pose transition and attribute boosting to increase the size of the training dataset. Wu et al. proposed a framework with hybrid classifiers using a CNN and a nearest neighbor (NN) model. Guo et al. made the norms of the weight vectors of the one-shot classes and the normal classes aligned to address the data imbalance problem. Cheng et al. proposed an enforced softmax that contains optimal dropout, selective attenuation, L2 normalization and model-level optimization. Yin et al. augmented feature space of low-shot classes by transferring the principal components from regular to low-shot classes to encourage the variance of low-shot classes to mimic that of regular classes. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_75", "text": " Different from traditional image-to-image recognition, set-to-set recognition takes a set (heterogeneous contents containing both images and videos) as the smallest unit of representation. This kind of setting does reflect the real-world biometric scenarios, thereby attracting a lot of attention. After learning face representations of media in each set, two strategies are generally adopted to perform set-to-set matching. One is to use these representations to perform pair-wise similarity comparison of two sets and aggregate the results into a single and final score by max score pooling , average score pooling and its variations (253, 254). The other strategy is feature pooling (96, 103, 81) which first aggregates face representations into a single representation for each set and then performs a comparison between two sets. In addition to the commonly used strategies, there are also some novel methods proposed for set/template-based FR. For example, Hayat et al. proposed a deep heterogeneous feature fusion network to exploit the features’ complementary information generated by different CNNs. Liu et al. introduced the actor-critic reinforcement learning for set-based FR. They casted the inner-set dependency modeling to a Markov decision process in the latent space, and trained a dependency-aware attention control agent to make attention control for each image in each step. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_76", "text": " There are two key issues in video FR: one is to integrate the information across different frames together to build a representation of the video face, and the other is to handle video frames with severe blur, pose variations, and occlusions. For frame aggregation, Yang et al. proposed a neural aggregation network (NAN) in which the aggregation module, consisting of two attention blocks driven by a memory, produces a 128-dimensional vector representation (Fig. 26). Rao et al. aggregated raw video frames directly by combining the idea of metric learning and adversarial learning. For dealing with bad frames, Rao et al. discarded the bad frames by treating this operation as a Markov decision process and trained the attention model through a deep reinforcement learning framework. Ding et al. artificially blurred clear images for training to learn blur-robust face representations. Parchami et al. used a CNN to reconstruct a lower-quality video into a high-quality face. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_77", "text": " 3D FR has inherent advantages over 2D methods, but 3D deep FR is not well developed due to the lack of large annotated 3D data. 
To enlarge 3D training datasets, most works use the methods of “one-to-many augmentation” to synthesize 3D faces. However, effective methods for extracting deep features of 3D faces remain to be explored. Kim et al. fine-tuned a 2D CNN with a small number of 3D scans for 3D FR. Zulqarnain et al. used a three-channel (corresponding to depth, azimuth and elevation angles of the normal vector) image as input and minimized the average prediction log-loss. Zhang et al. first selected 30 feature points from the Candide-3 face model to characterize faces, then conducted unsupervised pretraining on face depth data, and finally performed supervised fine-tuning. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_78", "text": " Partial FR, in which only arbitrary-size face patches are presented, has become an emerging problem with the increasing requirements of identification from CCTV cameras and embedded vision systems in mobile devices, robots and smart home facilities. He et al. divided the aligned face image into several multi-scale patches, and the dissimilarity between two partial face images is calculated as the weighted L2 distance between corresponding patches. Dynamic feature matching (DFM) utilized a sliding window of the same size as the probe feature maps to decompose the gallery feature maps into several gallery sub-feature maps, and the similarity-guided constraint imposed on sparse representation classification (SRC) provides alignment-free matching. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_79", "text": " With the emergence of mobile phones, tablets and augmented reality, FR has been applied in mobile devices. Due to computational limitations, the recognition tasks in these devices need to be carried out in a light but timely fashion. MobiFace requires little memory and low-cost operators by adopting fast downsampling and bottleneck residual blocks, and achieves 99.7% on the LFW database and 91.3% on the Megaface database. Tadmor et al. proposed a multibatch method that first generates signatures for a minibatch of k face images and then constructs an unbiased estimate of the full gradient by relying on all k^2-k pairs from the minibatch. As mentioned in Section 3.2.1, light-weight deep networks (126, 127, 128, 129) perform excellently in the fundamental tasks of image classification and deserve further attention in FR tasks. Moreover, some well-known compressed networks, such as Pruning (264, 265, 266), BinaryNets (267, 268, 269, 270), and Mimic Networks (271, 272), also have the potential to be introduced into FR. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_80", "text": " With the success of FR techniques, various types of attacks, such as face spoofing and adversarial perturbations, are becoming serious threats. Face spoofing involves presenting a fake face to the biometric sensor using a printed photograph, a worn mask, or even an image displayed on another electronic device. In order to defend against this type of attack, several methods have been proposed (211, 273, 274, 275, 276, 277, 278, 279). Atoum et al. proposed a novel two-stream CNN in which the local features discriminate the spoof patches that are independent of the spatial face areas, and holistic depth maps ensure that the input live sample has a face-like depth. Yang et al. trained a CNN using both a single frame and multiple frames with five scales as input, and the live/spoof label as the output. Taking the sequence of video frames as input, Xu et al. 
applied LSTM units on top of CNN to obtain end-to-end features to recognize spoofing faces which leveraged the local and dense property from convolution operation and learned the temporal structure using LSTM units. Li et al. and Patel et al. fine-tuned their networks from a pretrained model by training sets of real and fake images. Jourabloo et al. proposed to inversely decompose a spoof face into the live face and the spoof noise pattern. Adversarial perturbation is the other type of attack which can be defined as the addition of a minimal vector r𝑟r such that with addition of this vector into the input image x𝑥x, i.e. (x+r)𝑥𝑟(x+r), the deep learning models misclassifies the input while people will not. Recently, more and more work has begun to focus on solving this perturbation of FR. Goswami et al. proposed to detect adversarial samples by characterizing abnormal filter response behavior in the hidden layers and increase the network’s robustness by removing the most problematic filters. Goel et al. provided an open source implementation of adversarial detection and mitigation algorithms. Despite of progresses of anti-attack algorithms, attack methods are updated as well and remind us the need to further increase security and robustness in FR systems, for example, Mai et al. proposed a neighborly de-convolutional neural network (NbNet) to reconstruct a fake face using the stolen deep templates. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_81", "text": " As described in Section 5.1, existing datasets are highly biased in terms of the distribution of demographic cohorts, which may dramatically impact the fairness of deep models. To address this issue, there are some works that seek to introduce fairness into face recognition and mitigate demographic bias, e,g. unbalanced-training , attribute removal (284, 285, 286) and domain adaptation (173, 287, 147). 1) Unbalanced-training methods mitigate the bias via model regularization, taking into consideration of the fairness goal in the overall model objective function. For example, RL-RBN formulated the process of finding the optimal margins for non-Caucasians as a Markov decision process and employed deep Q-learning to learn policies based on large margin loss. 2) Attribute removal methods confound or remove demographic information of faces to learn attribute-invariant representations. For example, Alvi et al. applied a confusion loss to make a classifier fail to distinguish attributes of examples so that multiple spurious variations are removed from the feature representation. SensitiveNets proposed to introduce sensitive information into triplet loss. They minimized the sensitive information, while maintaining distances between positive and negative embeddings. 3) Domain adaptation methods propose to investigate data bias problem from a domain adaptation point of view and attempt to design domain-invariant feature representations to mitigate bias across domains. IMAN simultaneously aligned global distribution to decrease race gap at domain-level, and learned the discriminative target representations at cluster level. Kan directly converted the Caucasian data to non-Caucasian domain in the image space with the help of sparse reconstruction coefficients learnt in the common subspace. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_82", "text": " In this paper, we provide a comprehensive survey of deep FR from both data and algorithm aspects. 
For algorithms, mainstream and special network architectures are presented. Meanwhile, we categorize loss functions into Euclidean-distance-based loss, angular/cosine-margin-based loss and variable softmax loss. For data, we summarize some commonly used datasets. Moreover, the methods of face processing are introduced and categorized as “one-to-many augmentation” and “many-to-one normalization”. Finally, the special scenes of deep FR, including video FR, 3D FR and cross-age FR, are briefly introduced. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_83", "text": " Taking advantage of big annotated data and revolutionary deep learning techniques, deep FR has dramatically improved the SOTA performance and fostered successful real-world applications. With the practical and commercial use of this technology, many ideal assumptions of academic research were broken, and more and more real-world issues are emerging. To the best our knowledge, major technical challenges include the following aspects. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_84", "text": " • Security issues. Presentation attack , adversarial attack (280, 281, 290), template attack and digital manipulation attack (292, 293) are developing to threaten the security of deep face recognition systems. 1) Presentation attack with 3D silicone mask, which exhibits skin-like appearance and facial motion, challenges current anti-sproofing methods . 2) Although adversarial perturbation detection and mitigation methods are recently proposed , the root cause of adversarial vulnerability is unclear and thus new types of adversarial attacks are still upgraded continuously (295, 296). 3) The stolen deep feature template can be used to recover its facial appearance, and how to generate cancelable template without loss of accuracy is another important issue. 4) Digital manipulation attack, made feasible by GANs, can generate entirely or partially modified photorealistic faces by expression swap, identity swap, attribute manipulation and entire face synthesis, which remains a main challenge for the security of deep FR. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_85", "text": " • Privacy-preserving face recognition. With the leakage of biological data, privacy concerns are raising nowadays. Facial images can predict not only demographic information such as gender, age, or race, but even the genetic information . Recently, the pioneer works such as Semi-Adversarial Networks (298, 299, 285) have explored to generate a recognizable biometric templates that can hidden some of the private information presented in the facial images. Further research on the principles of visual cryptography, signal mixing and image perturbation to protect users’ privacy on stored face templates are essential for addressing public concern on privacy. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_86", "text": " • Understanding deep face recognition. Deep face recognition systems are now believed to surpass human performance in most scenarios . There are also some interesting attempts to apply deep models to assist human operators for face verification . Despite this progress, many fundamental questions are still open, such as what is the “identity capacity” of a deep representation ? Why deep neural networks, rather than humans, are easily fooled by adversarial samples? 
While bigger and bigger training dataset by itself cannot solve this problem, deeper understanding on these questions may help us to build robust applications in real world. Recently, a new benchmark called TALFW has been proposed to explore this issue . ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_87", "text": " • Remaining challenges defined by non-saturated benchmark datasets. Three current major datasets, namely, MegaFace (44, 164) , MS-Celeb-1M and IJB-A/B/C (41, 42, 43), are corresponding to large-scale FR with a very large number of candidates, low/one-shot FR and large pose-variance FR which will be the focus of research in the future. Although the SOTA algorithms can be over 99.9 percent accurate on LFW and Megaface (44, 164) databases, fundamental challenges such as matching faces cross ages , poses , sensors, or styles still remain. For both datasets and algorithms, it is necessary to measure and address the racial/gender/age biases of deep FR in future research. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_88", "text": " • Ubiquitous face recognition across applications and scenes. Deep face recognition has been successfully applied on many user-cooperated applications, but the ubiquitous recognition applications in everywhere are still an ambitious goal. In practice, it is difficult to collect and label sufficient samples for innumerable scenes in real world. One promising solution is to first learn a general model and then transfer it to an application-specific scene. While deep domain adaptation has recently been applied to reduce the algorithm bias on different scenes , different races , general solution to transfer face recognition is largely open. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_89", "text": " • Pursuit of extreme accuracy and efficiency. Many killer-applications, such as watch-list surveillance or financial identity verification, require high matching accuracy at very low alarm rate, e.g. 10−9superscript10910^{-9}. It is still a big challenge even with deep learning on massive training data. Meanwhile, deploying deep face recognition on mobile devices pursues the minimum size of feature representation and compressed deep network. It is of great significance for both industry and academic to explore this extreme face-recognition performance beyond human imagination. It is also exciting to constantly push the performance limits of the algorithm after it has already surpassed human. ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_90", "text": " • Fusion issues. Face recognition by itself is far from sufficient to solve all biometric and forensic tasks, such as distinguishing identical twins and matching faces before and after surgery . A reliable solution is to consolidate multiple sources of biometric evidence . These sources of information may correspond to different biometric traits (e.g., face + hand ), sensors (e.g., 2D + 3D face cameras), feature extraction and matching techniques, or instances (e.g., a face sequence of various poses). It is beneficial for face biometric and forensic applications to perform information fusion at the data level, feature level, score level, rank level, and decision level . ", "title": "Deep Face Recognition" }, { "id": "1804.06655_all_91", "text": " This work was partially supported by National Key R&D Program of China (2019YFB1406504) and BUPT Excellent Ph.D. Students Foundation CX2020207. ", "title": "Deep Face Recognition" } ]
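The passages above define an adversarial perturbation as a minimal vector r added to an input image x so that a deep model misclassifies x + r while a human observer does not. As a purely illustrative sketch (the surveyed works do not prescribe a specific attack), the fast gradient sign method is one standard way to construct such an r; the model, input, and epsilon below are hypothetical placeholders.

```python
# Illustrative construction of an adversarial perturbation r for a classifier:
# a small step in the direction that increases the loss, so that x + r is
# misclassified while remaining visually close to x. FGSM is used here only as
# a standard example; it is not a method prescribed by the surveyed papers.
import torch
import torch.nn.functional as F

def fgsm_perturbation(model, x, label, epsilon=0.01):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)   # loss of the correct label
    loss.backward()
    r = epsilon * x.grad.sign()               # small, near-imperceptible perturbation
    return (x + r).detach()                   # adversarial input x + r
```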
If, for a certain model, it was theorized that the penultimate layer is the most important layer for generating embeddings, how could discriminative fine-tuning be used to validate or refute that theory?
In this work, discriminative fine-tuning was used to fine-tune each layer with a different learning rate [17]. Specifically, the learning rate was decreased geometrically from the last layer to the lower layers, using η^{l-1} = η^l / 2.6 [45]. The authors found that this improved performance across the evaluated datasets [19]. To probe the theory that the penultimate layer matters most, one could assign that layer a comparatively larger learning rate (or fine-tune it while the other layers stay frozen) and check whether downstream accuracy improves more than when other layers are emphasized.
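As a minimal sketch of how per-layer learning rates could be set up in practice (the three-layer nn.Sequential model, layer names, and optimizer choice below are illustrative assumptions, not the authors' implementation), each layer is placed in its own parameter group and its learning rate is decayed by the factor of 2.6 described in the contexts that follow:

```python
# Sketch of discriminative fine-tuning: every layer gets its own learning rate,
# decayed by 2.6 per layer going from the last layer towards the input.
# The 3-layer nn.Sequential model here is a stand-in for a pretrained network.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(400, 1150),   # lowest layer
    nn.Linear(1150, 1150),  # middle layer
    nn.Linear(1150, 400),   # last layer (closest to the classifier head)
)

base_lr = 0.01              # learning rate chosen for the last layer
param_groups = []
for depth, layer in enumerate(reversed(list(model))):
    param_groups.append({"params": layer.parameters(),
                         "lr": base_lr / (2.6 ** depth)})   # eta^{l-1} = eta^l / 2.6

optimizer = torch.optim.SGD(param_groups, lr=base_lr)       # per-group lr overrides the default
```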
[ 17, 45, 19 ]
[ { "id": "1801.06146_all_0", "text": " Inductive transfer learning has had a large impact on computer vision (CV). Applied CV models (including object detection, classification, and segmentation) are rarely trained from scratch, but instead are fine-tuned from models that have been pretrained on ImageNet, MS-COCO, and other datasets Sharif Razavian et al. (2014); Long et al. (2015a); He et al. (2016); Huang et al. (2017). ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_1", "text": " Text classification is a category of Natural Language Processing (NLP) tasks with real-world applications such as spam, fraud, and bot detection Jindal and Liu (2007); Ngai et al. (2011); Chu et al. (2012), emergency response Caragea et al. (2011), and commercial document classification, such as for legal discovery Roitblat et al. (2010). ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_2", "text": " While Deep Learning models have achieved state-of-the-art on many NLP tasks, these models are trained from scratch, requiring large datasets, and days to converge. Research in NLP focused mostly on transductive transfer Blitzer et al. (2007). For inductive transfer, fine-tuning pretrained word embeddings Mikolov et al. (2013), a simple transfer technique that only targets a model’s first layer, has had a large impact in practice and is used in most state-of-the-art models. Recent approaches that concatenate embeddings derived from other tasks with the input at different layers Peters et al. (2017); McCann et al. (2017); Peters et al. (2018) still train the main task model from scratch and treat pretrained embeddings as fixed parameters, limiting their usefulness. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_3", "text": " In light of the benefits of pretraining Erhan et al. (2010), we should be able to do better than randomly initializing the remaining parameters of our models. However, inductive transfer via fine-tuning has been unsuccessful for NLP Mou et al. (2016). Dai and Le (2015) first proposed fine-tuning a language model (LM) but require millions of in-domain documents to achieve good performance, which severely limits its applicability. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_4", "text": " We show that not the idea of LM fine-tuning but our lack of knowledge of how to train them effectively has been hindering wider adoption. LMs overfit to small datasets and suffered catastrophic forgetting when fine-tuned with a classifier. Compared to CV, NLP models are typically more shallow and thus require different fine-tuning methods. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_5", "text": " We propose a new method, Universal Language Model Fine-tuning (ULMFiT) that addresses these issues and enables robust inductive transfer learning for any NLP task, akin to fine-tuning ImageNet models: The same 3-layer LSTM architecture—with the same hyperparameters and no additions other than tuned dropout hyperparameters—outperforms highly engineered models and transfer learning approaches on six widely studied text classification tasks. On IMDb, with 100100100 labeled examples, ULMFiT matches the performance of training from scratch with 10×10\\times and—given 505050k unlabeled examples—with 100×100\\times more data. 
", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_6", "text": " Our contributions are the following: 1) We propose Universal Language Model Fine-tuning (ULMFiT), a method that can be used to achieve CV-like transfer learning for any task for NLP. 2) We propose discriminative fine-tuning, slanted triangular learning rates, and gradual unfreezing, novel techniques to retain previous knowledge and avoid catastrophic forgetting during fine-tuning. 3) We significantly outperform the state-of-the-art on six representative text classification datasets, with an error reduction of 18-24% on the majority of datasets. 4) We show that our method enables extremely sample-efficient transfer learning and perform an extensive ablation analysis. 5) We make the pretrained models and our code available to enable wider adoption. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_7", "text": " Features in deep neural networks in CV have been observed to transition from general to task-specific from the first to the last layer Yosinski et al. (2014). For this reason, most work in CV focuses on transferring the first layers of the model Long et al. (2015b). Sharif Razavian et al. (2014) achieve state-of-the-art results using features of an ImageNet model as input to a simple classifier. In recent years, this approach has been superseded by fine-tuning either the last Donahue et al. (2014) or several of the last layers of a pretrained model and leaving the remaining layers frozen Long et al. (2015a). ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_8", "text": " In NLP, only recently have methods been proposed that go beyond transferring word embeddings. The prevailing approach is to pretrain embeddings that capture additional context via other tasks. Embeddings at different levels are then used as features, concatenated either with the word embeddings or with the inputs at intermediate layers. This method is known as hypercolumns Hariharan et al. (2015) in CV333A hypercolumn at a pixel in CV is the vector of activations of all CNN units above that pixel. In analogy, a hypercolumn for a word or sentence in NLP is the concatenation of embeddings at different layers in a pretrained model. and is used by Peters et al. (2017), Peters et al. (2018), Wieting and Gimpel (2017), Conneau et al. (2017), and McCann et al. (2017) who use language modeling, paraphrasing, entailment, and Machine Translation (MT) respectively for pretraining. Specifically, Peters et al. (2018) require engineered custom architectures, while we show state-of-the-art performance with the same basic architecture across a range of tasks. In CV, hypercolumns have been nearly entirely superseded by end-to-end fine-tuning Long et al. (2015a). ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_9", "text": " A related direction is multi-task learning (MTL) Caruana (1993). This is the approach taken by Rei (2017) and Liu et al. (2018) who add a language modeling objective to the model that is trained jointly with the main task model. MTL requires the tasks to be trained from scratch every time, which makes it inefficient and often requires careful weighting of the task-specific objective functions Chen et al. (2017). 
", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_10", "text": " Fine-tuning has been used successfully to transfer between similar tasks, e.g. in QA Min et al. (2017), for distantly supervised sentiment analysis Severyn and Moschitti (2015), or MT domains Sennrich et al. (2015) but has been shown to fail between unrelated ones Mou et al. (2016). Dai and Le (2015) also fine-tune a language model, but overfit with 101010k labeled examples and require millions of in-domain documents for good performance. In contrast, ULMFiT leverages general-domain pretraining and novel fine-tuning techniques to prevent overfitting even with only 100100100 labeled examples and achieves state-of-the-art results also on small datasets. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_11", "text": " We are interested in the most general inductive transfer learning setting for NLP Pan and Yang (2010): Given a static source task 𝒯Ssubscript𝒯𝑆\\mathcal{T}_{S} and any target task 𝒯Tsubscript𝒯𝑇\\mathcal{T}_{T} with 𝒯S≠𝒯Tsubscript𝒯𝑆subscript𝒯𝑇\\mathcal{T}_{S}\\neq\\mathcal{T}_{T}, we would like to improve performance on 𝒯Tsubscript𝒯𝑇\\mathcal{T}_{T}. Language modeling can be seen as the ideal source task and a counterpart of ImageNet for NLP: It captures many facets of language relevant for downstream tasks, such as long-term dependencies Linzen et al. (2016), hierarchical relations Gulordava et al. (2018), and sentiment Radford et al. (2017). In contrast to tasks like MT McCann et al. (2017) and entailment Conneau et al. (2017), it provides data in near-unlimited quantities for most domains and languages. Additionally, a pretrained LM can be easily adapted to the idiosyncrasies of a target task, which we show significantly improves performance (see Section 5). Moreover, language modeling already is a key component of existing tasks such as MT and dialogue modeling. Formally, language modeling induces a hypothesis space ℋℋ\\mathcal{H} that should be useful for many other NLP tasks Vapnik and Kotz (1982); Baxter (2000). ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_12", "text": " We propose Universal Language Model Fine-tuning (ULMFiT), which pretrains a language model (LM) on a large general-domain corpus and fine-tunes it on the target task using novel techniques. The method is universal in the sense that it meets these practical criteria: 1) It works across tasks varying in document size, number, and label type; 2) it uses a single architecture and training process; 3) it requires no custom feature engineering or preprocessing; and 4) it does not require additional in-domain documents or labels. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_13", "text": " In our experiments, we use the state-of-the-art language model AWD-LSTM Merity et al. (2017a), a regular LSTM (with no attention, short-cut connections, or other sophisticated additions) with various tuned dropout hyperparameters. Analogous to CV, we expect that downstream performance can be improved by using higher-performance language models in the future. 
", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_14", "text": " ULMFiT consists of the following steps, which we show in Figure 1: a) General-domain LM pretraining (§3.1); b) target task LM fine-tuning (§3.2); and c) target task classifier fine-tuning (§3.3). We discuss these in the following sections. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_15", "text": " An ImageNet-like corpus for language should be large and capture general properties of language. We pretrain the language model on Wikitext-103 Merity et al. (2017b) consisting of 28,595 preprocessed Wikipedia articles and 103 million words. Pretraining is most beneficial for tasks with small datasets and enables generalization even with 100100100 labeled examples. We leave the exploration of more diverse pretraining corpora to future work, but expect that they would boost performance. While this stage is the most expensive, it only needs to be performed once and improves performance and convergence of downstream models. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_16", "text": " No matter how diverse the general-domain data used for pretraining is, the data of the target task will likely come from a different distribution. We thus fine-tune the LM on data of the target task. Given a pretrained general-domain LM, this stage converges faster as it only needs to adapt to the idiosyncrasies of the target data, and it allows us to train a robust LM even for small datasets. We propose discriminative fine-tuning and slanted triangular learning rates for fine-tuning the LM, which we introduce in the following. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_17", "text": " As different layers capture different types of information Yosinski et al. (2014), they should be fine-tuned to different extents. To this end, we propose a novel fine-tuning method, discriminative fine-tuning444 An unrelated method of the same name exists for deep Boltzmann machines Salakhutdinov and Hinton (2009).. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_18", "text": " Instead of using the same learning rate for all layers of the model, discriminative fine-tuning allows us to tune each layer with different learning rates. For context, the regular stochastic gradient descent (SGD) update of a model’s parameters θ𝜃\\theta at time step t𝑡t looks like the following Ruder (2016): θt=θt−1−η⋅∇θJ​(θ)subscript𝜃𝑡subscript𝜃𝑡1⋅𝜂subscript∇𝜃𝐽𝜃\\theta_{t}=\\theta_{t-1}-\\eta\\cdot\\nabla_{\\theta}J(\\theta) (1) where η𝜂\\eta is the learning rate and ∇θJ​(θ)subscript∇𝜃𝐽𝜃\\nabla_{\\theta}J(\\theta) is the gradient with regard to the model’s objective function. For discriminative fine-tuning, we split the parameters θ𝜃\\theta into {θ1,…,θL}superscript𝜃1…superscript𝜃𝐿\\{\\theta^{1},\\ldots,\\theta^{L}\\} where θlsuperscript𝜃𝑙\\theta^{l} contains the parameters of the model at the l𝑙l-th layer and L𝐿L is the number of layers of the model. Similarly, we obtain {η1,…,ηL}superscript𝜂1…superscript𝜂𝐿\\{\\eta^{1},\\ldots,\\eta^{L}\\} where ηlsuperscript𝜂𝑙\\eta^{l} is the learning rate of the l𝑙l-th layer. 
", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_19", "text": " The SGD update with discriminative fine-tuning is then the following: θtl=θt−1l−ηl⋅∇θlJ​(θ)superscriptsubscript𝜃𝑡𝑙superscriptsubscript𝜃𝑡1𝑙⋅superscript𝜂𝑙subscript∇superscript𝜃𝑙𝐽𝜃\\theta_{t}^{l}=\\theta_{t-1}^{l}-\\eta^{l}\\cdot\\nabla_{\\theta^{l}}J(\\theta) (2) We empirically found it to work well to first choose the learning rate ηLsuperscript𝜂𝐿\\eta^{L} of the last layer by fine-tuning only the last layer and using ηl−1=ηl/2.6superscript𝜂𝑙1superscript𝜂𝑙2.6\\eta^{l-1}=\\eta^{l}/2.6 as the learning rate for lower layers. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_20", "text": " For adapting its parameters to task-specific features, we would like the model to quickly converge to a suitable region of the parameter space in the beginning of training and then refine its parameters. Using the same learning rate (LR) or an annealed learning rate throughout training is not the best way to achieve this behaviour. Instead, we propose slanted triangular learning rates (STLR), which first linearly increases the learning rate and then linearly decays it according to the following update schedule, which can be seen in Figure 2: c​u​t=⌊T⋅c​u​t​_​f​r​a​c⌋p={t/c​u​t,if​t<c​u​t1−t−c​u​tc​u​t⋅(1/c​u​t​_​f​r​a​c−1),otherwiseηt=ηm​a​x⋅1+p⋅(r​a​t​i​o−1)r​a​t​i​o𝑐𝑢𝑡⋅𝑇𝑐𝑢𝑡_𝑓𝑟𝑎𝑐𝑝cases𝑡𝑐𝑢𝑡if𝑡𝑐𝑢𝑡1𝑡𝑐𝑢𝑡⋅𝑐𝑢𝑡1𝑐𝑢𝑡_𝑓𝑟𝑎𝑐1otherwisesubscript𝜂𝑡⋅subscript𝜂𝑚𝑎𝑥1⋅𝑝𝑟𝑎𝑡𝑖𝑜1𝑟𝑎𝑡𝑖𝑜\\begin{split}cut&=\\lfloor T\\cdot cut\\_frac\\rfloor\\\\ p&=\\begin{cases}t/cut,&\\text{if}\\ t<cut\\\\ 1-\\frac{t-cut}{cut\\cdot(1/cut\\_frac-1)},&\\text{otherwise}\\end{cases}\\\\ \\eta_{t}&=\\eta_{max}\\cdot\\frac{1+p\\cdot(ratio-1)}{ratio}\\end{split} (3) where T𝑇T is the number of training iterations555In other words, the number of epochs times the number of updates per epoch., c​u​t​_​f​r​a​c𝑐𝑢𝑡_𝑓𝑟𝑎𝑐cut\\_frac is the fraction of iterations we increase the LR, c​u​t𝑐𝑢𝑡cut is the iteration when we switch from increasing to decreasing the LR, p𝑝p is the fraction of the number of iterations we have increased or will decrease the LR respectively, r​a​t​i​o𝑟𝑎𝑡𝑖𝑜ratio specifies how much smaller the lowest LR is from the maximum LR ηm​a​xsubscript𝜂𝑚𝑎𝑥\\eta_{max}, and ηtsubscript𝜂𝑡\\eta_{t} is the learning rate at iteration t𝑡t. We generally use c​u​t​_​f​r​a​c=0.1𝑐𝑢𝑡_𝑓𝑟𝑎𝑐0.1cut\\_frac=0.1, r​a​t​i​o=32𝑟𝑎𝑡𝑖𝑜32ratio=32 and ηm​a​x=0.01subscript𝜂𝑚𝑎𝑥0.01\\eta_{max}=0.01. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_21", "text": " STLR modifies triangular learning rates Smith (2017) with a short increase and a long decay period, which we found key for good performance.666We also credit personal communication with the author. In Section 5, we compare against aggressive cosine annealing, a similar schedule that has recently been used to achieve state-of-the-art performance in CV Loshchilov and Hutter (2017).777While Loshchilov and Hutter (2017) use multiple annealing cycles, we generally found one cycle to work best. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_22", "text": " Finally, for fine-tuning the classifier, we augment the pretrained language model with two additional linear blocks. 
Following standard practice for CV classifiers, each block uses batch normalization Ioffe and Szegedy (2015) and dropout, with ReLU activations for the intermediate layer and a softmax activation that outputs a probability distribution over target classes at the last layer. Note that the parameters in these task-specific classifier layers are the only ones that are learned from scratch. The first linear layer takes as the input the pooled last hidden layer states. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_23", "text": " The signal in text classification tasks is often contained in a few words, which may occur anywhere in the document. As input documents can consist of hundreds of words, information may get lost if we only consider the last hidden state of the model. For this reason, we concatenate the hidden state at the last time step 𝐡Tsubscript𝐡𝑇\\mathbf{h}_{T} of the document with both the max-pooled and the mean-pooled representation of the hidden states over as many time steps as fit in GPU memory 𝐇={𝐡1,…,𝐡T}𝐇subscript𝐡1…subscript𝐡𝑇\\mathbf{H}=\\{\\mathbf{h}_{1},\\ldots,\\mathbf{h}_{T}\\}: 𝐡c=(𝐡T,𝚖𝚊𝚡𝚙𝚘𝚘𝚕​(𝐇),𝚖𝚎𝚊𝚗𝚙𝚘𝚘𝚕​(𝐇))subscript𝐡𝑐subscript𝐡𝑇𝚖𝚊𝚡𝚙𝚘𝚘𝚕𝐇𝚖𝚎𝚊𝚗𝚙𝚘𝚘𝚕𝐇\\mathbf{h}_{c}=(\\mathbf{h}_{T},\\mathtt{maxpool}(\\mathbf{H}),\\mathtt{meanpool}(\\mathbf{H})) (4) where ()() is concatenation. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_24", "text": " Fine-tuning the target classifier is the most critical part of the transfer learning method. Overly aggressive fine-tuning will cause catastrophic forgetting, eliminating the benefit of the information captured through language modeling; too cautious fine-tuning will lead to slow convergence (and resultant overfitting). Besides discriminative fine-tuning and triangular learning rates, we propose gradual unfreezing for fine-tuning the classifier. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_25", "text": " Rather than fine-tuning all layers at once, which risks catastrophic forgetting, we propose to gradually unfreeze the model starting from the last layer as this contains the least general knowledge Yosinski et al. (2014): We first unfreeze the last layer and fine-tune all unfrozen layers for one epoch. We then unfreeze the next lower frozen layer and repeat, until we fine-tune all layers until convergence at the last iteration. This is similar to ‘chain-thaw’ Felbo et al. (2017), except that we add a layer at a time to the set of ‘thawed’ layers, rather than only training a single layer at a time. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_26", "text": " While discriminative fine-tuning, slanted triangular learning rates, and gradual unfreezing all are beneficial on their own, we show in Section 5 that they complement each other and enable our method to perform well across diverse datasets. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_27", "text": " Language models are trained with backpropagation through time (BPTT) to enable gradient propagation for large input sequences. In order to make fine-tuning a classifier for large documents feasible, we propose BPTT for Text Classification (BPT3C): We divide the document into fixed-length batches of size b𝑏b. 
At the beginning of each batch, the model is initialized with the final state of the previous batch; we keep track of the hidden states for mean and max-pooling; gradients are back-propagated to the batches whose hidden states contributed to the final prediction. In practice, we use variable length backpropagation sequences Merity et al. (2017a). ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_28", "text": " Similar to existing work Peters et al. (2017, 2018), we are not limited to fine-tuning a unidirectional language model. For all our experiments, we pretrain both a forward and a backward LM. We fine-tune a classifier for each LM independently using BPT3C and average the classifier predictions. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_29", "text": " While our approach is equally applicable to sequence labeling tasks, we focus on text classification tasks in this work due to their important real-world applications. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_30", "text": " We evaluate our method on six widely-studied datasets, with varying numbers of documents and varying document length, used by state-of-the-art text classification and transfer learning approaches Johnson and Zhang (2017); McCann et al. (2017) as instances of three common text classification tasks: sentiment analysis, question classification, and topic classification. We show the statistics for each dataset and task in Table 1. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_31", "text": " For sentiment analysis, we evaluate our approach on the binary movie review IMDb dataset Maas et al. (2011) and on the binary and five-class version of the Yelp review dataset compiled by Zhang et al. (2015). ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_32", "text": " We use the six-class version of the small TREC dataset Voorhees and Tice (1999) dataset of open-domain, fact-based questions divided into broad semantic categories. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_33", "text": " For topic classification, we evaluate on the large-scale AG news and DBpedia ontology datasets created by Zhang et al. (2015). ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_34", "text": " We use the same pre-processing as in earlier work Johnson and Zhang (2017); McCann et al. (2017). In addition, to allow the language model to capture aspects that might be relevant for classification, we add special tokens for upper-case words, elongation, and repetition. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_35", "text": " We are interested in a model that performs robustly across a diverse set of tasks. To this end, if not mentioned otherwise, we use the same set of hyperparameters across tasks, which we tune on the IMDb validation set. We use the AWD-LSTM language model Merity et al. (2017a) with an embedding size of 400400400, 333 layers, 115011501150 hidden activations per layer, and a BPTT batch size of 707070. 
We apply dropout of 0.40.40.4 to layers, 0.30.30.3 to RNN layers, 0.40.40.4 to input embedding layers, 0.050.050.05 to embedding layers, and weight dropout of 0.50.50.5 to the RNN hidden-to-hidden matrix. The classifier has a hidden layer of size 505050. We use Adam with β1=0.7subscript𝛽10.7\\beta_{1}=0.7 instead of the default β1=0.9subscript𝛽10.9\\beta_{1}=0.9 and β2=0.99subscript𝛽20.99\\beta_{2}=0.99, similar to Dozat and Manning (2017). We use a batch size of 646464, a base learning rate of 0.0040.0040.004 and 0.010.010.01 for fine-tuning the LM and the classifier respectively, and tune the number of epochs on the validation set of each task888On small datasets such as TREC-6, we fine-tune the LM only for 151515 epochs without overfitting, while we can fine-tune longer on larger datasets. We found 505050 epochs to be a good default for fine-tuning the classifier.. We otherwise use the same practices used in Merity et al. (2017a). ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_36", "text": " For each task, we compare against the current state-of-the-art. For the IMDb and TREC-6 datasets, we compare against CoVe McCann et al. (2017), a state-of-the-art transfer learning method for NLP. For the AG, Yelp, and DBpedia datasets, we compare against the state-of-the-art text categorization method by Johnson and Zhang (2017). ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_37", "text": " For consistency, we report all results as error rates (lower is better). We show the test error rates on the IMDb and TREC-6 datasets used by McCann et al. (2017) in Table 2. Our method outperforms both CoVe, a state-of-the-art transfer learning method based on hypercolumns, as well as the state-of-the-art on both datasets. On IMDb, we reduce the error dramatically by 43.9% and 22% with regard to CoVe and the state-of-the-art respectively. This is promising as the existing state-of-the-art requires complex architectures Peters et al. (2018), multiple forms of attention McCann et al. (2017) and sophisticated embedding schemes Johnson and Zhang (2016), while our method employs a regular LSTM with dropout. We note that the language model fine-tuning approach of Dai and Le (2015) only achieves an error of 7.64 vs. 4.6 for our method on IMDb, demonstrating the benefit of transferring knowledge from a large ImageNet-like corpus using our fine-tuning techniques. IMDb in particular is reflective of real-world datasets: Its documents are generally a few paragraphs long—similar to emails (e.g for legal discovery) and online comments (e.g for community management); and sentiment analysis is similar to many commercial applications, e.g. product response tracking and support email routing. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_38", "text": " On TREC-6, our improvement—similar as the improvements of state-of-the-art approaches—is not statistically significant, due to the small size of the 500-examples test set. Nevertheless, the competitive performance on TREC-6 demonstrates that our model performs well across different dataset sizes and can deal with examples that range from single sentences—in the case of TREC-6—to several paragraphs for IMDb. Note that despite pretraining on more than two orders of magnitude less data than the 7 million sentence pairs used by McCann et al. (2017), we consistently outperform their approach on both datasets. 
", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_39", "text": " We show the test error rates on the larger AG, DBpedia, Yelp-bi, and Yelp-full datasets in Table 3. Our method again outperforms the state-of-the-art significantly. On AG, we observe a similarly dramatic error reduction by 23.7% compared to the state-of-the-art. On DBpedia, Yelp-bi, and Yelp-full, we reduce the error by 4.8%, 18.2%, 2.0% respectively. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_40", "text": " In order to assess the impact of each contribution, we perform a series of analyses and ablations. We run experiments on three corpora, IMDb, TREC-6, and AG that are representative of different tasks, genres, and sizes. For all experiments, we split off 10%percent1010\\% of the training set and report error rates on this validation set with unidirectional LMs. We fine-tune the classifier for 505050 epochs and train all methods but ULMFiT with early stopping. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_41", "text": " One of the main benefits of transfer learning is being able to train a model for a task with a small number of labels. We evaluate ULMFiT on different numbers of labeled examples in two settings: only labeled examples are used for LM fine-tuning (‘supervised’); and all task data is available and can be used to fine-tune the LM (‘semi-supervised’). We compare ULMFiT to training from scratch—which is necessary for hypercolumn-based approaches. We split off balanced fractions of the training data, keep the validation set fixed, and use the same hyperparameters as before. We show the results in Figure 3. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_42", "text": " On IMDb and AG, supervised ULMFiT with only 100100100 labeled examples matches the performance of training from scratch with 10×10\\times and 20×20\\times more data respectively, clearly demonstrating the benefit of general-domain LM pretraining. If we allow ULMFiT to also utilize unlabeled examples (505050k for IMDb, 100100100k for AG), at 100100100 labeled examples, we match the performance of training from scratch with 50×50\\times and 100×100\\times more data on AG and IMDb respectively. On TREC-6, ULMFiT significantly improves upon training from scratch; as examples are shorter and fewer, supervised and semi-supervised ULMFiT achieve similar results. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_43", "text": " We compare using no pretraining with pretraining on WikiText-103 Merity et al. (2017b) in Table 4. Pretraining is most useful for small and medium-sized datasets, which are most common in commercial applications. However, even for large datasets, pretraining improves performance. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_44", "text": " In order to gauge the importance of choosing an appropriate LM, we compare a vanilla LM with the same hyperparameters without any dropout999To avoid overfitting, we only train the vanilla LM classifier for 555 epochs and keep dropout of 0.40.40.4 in the classifier. with the AWD-LSTM LM with tuned dropout parameters in Table 5. Using our fine-tuning techniques, even a regular LM reaches surprisingly good performance on the larger datasets. 
On the smaller TREC-6, a vanilla LM without dropout runs the risk of overfitting, which decreases performance. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_45", "text": " We compare no fine-tuning against fine-tuning the full model Erhan et al. (2010) (‘Full’), the most commonly used fine-tuning method, with and without discriminative fine-tuning (‘Discr’) and slanted triangular learning rates (‘Stlr’) in Table 6. Fine-tuning the LM is most beneficial for larger datasets. ‘Discr’ and ‘Stlr’ improve performance across all three datasets and are necessary on the smaller TREC-6, where regular fine-tuning is not beneficial. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_46", "text": " We compare training from scratch, fine-tuning the full model (‘Full’), only fine-tuning the last layer (‘Last’) Donahue et al. (2014), ‘Chain-thaw’ Felbo et al. (2017), and gradual unfreezing (‘Freez’). We furthermore assess the importance of discriminative fine-tuning (‘Discr’) and slanted triangular learning rates (‘Stlr’). We compare the latter to an alternative, aggressive cosine annealing schedule (‘Cos’) Loshchilov and Hutter (2017). We use a learning rate ηL=0.01superscript𝜂𝐿0.01\\eta^{L}=0.01 for ‘Discr’, learning rates of 0.0010.0010.001 and 0.00010.00010.0001 for the last and all other layers respectively for ‘Chain-thaw’ as in Felbo et al. (2017), and a learning rate of 0.0010.0010.001 otherwise. We show the results in Table 7. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_47", "text": " Fine-tuning the classifier significantly improves over training from scratch, particularly on the small TREC-6. ‘Last’, the standard fine-tuning method in CV, severely underfits and is never able to lower the training error to 00. ‘Chain-thaw’ achieves competitive performance on the smaller datasets, but is outperformed significantly on the large AG. ‘Freez’ provides similar performance as ‘Full’. ‘Discr’ consistently boosts the performance of ‘Full’ and ‘Freez’, except for the large AG. Cosine annealing is competitive with slanted triangular learning rates on large data, but under-performs on smaller datasets. Finally, full ULMFiT classifier fine-tuning (bottom row) achieves the best performance on IMDB and TREC-6 and competitive performance on AG. Importantly, ULMFiT is the only method that shows excellent performance across the board—and is therefore the only universal method. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_48", "text": " While our results demonstrate that how we fine-tune the classifier makes a significant difference, fine-tuning for inductive transfer is currently under-explored in NLP as it mostly has been thought to be unhelpful Mou et al. (2016). To better understand the fine-tuning behavior of our model, we compare the validation error of the classifier fine-tuned with ULMFiT and ‘Full’ during training in Figure 4. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_49", "text": " On all datasets, fine-tuning the full model leads to the lowest error comparatively early in training, e.g. already after the first epoch on IMDb. The error then increases as the model starts to overfit and knowledge captured through pretraining is lost. 
In contrast, ULMFiT is more stable and suffers from no such catastrophic forgetting; performance remains similar or improves until late epochs, which shows the positive effect of the learning rate schedule. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_50", "text": " At the cost of training a second model, ensembling the predictions of a forward and backwards LM-classifier brings a performance boost of around 0.50.50.5–0.70.70.7. On IMDb we lower the test error from 5.305.305.30 of a single model to 4.584.584.58 for the bidirectional model. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_51", "text": " While we have shown that ULMFiT can achieve state-of-the-art performance on widely used text classification tasks, we believe that language model fine-tuning will be particularly useful in the following settings compared to existing transfer learning approaches Conneau et al. (2017); McCann et al. (2017); Peters et al. (2018): a) NLP for non-English languages, where training data for supervised pretraining tasks is scarce; b) new NLP tasks where no state-of-the-art architecture exists; and c) tasks with limited amounts of labeled data (and some amounts of unlabeled data). ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_52", "text": " Given that transfer learning and particularly fine-tuning for NLP is under-explored, many future directions are possible. One possible direction is to improve language model pretraining and fine-tuning and make them more scalable: for ImageNet, predicting far fewer classes only incurs a small performance drop Huh et al. (2016), while recent work shows that an alignment between source and target task label sets is important Mahajan et al. (2018)—focusing on predicting a subset of words such as the most frequent ones might retain most of the performance while speeding up training. Language modeling can also be augmented with additional tasks in a multi-task learning fashion Caruana (1993) or enriched with additional supervision, e.g. syntax-sensitive dependencies Linzen et al. (2016) to create a model that is more general or better suited for certain downstream tasks, ideally in a weakly-supervised manner to retain its universal properties. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_53", "text": " Another direction is to apply the method to novel tasks and models. While an extension to sequence labeling is straightforward, other tasks with more complex interactions such as entailment or question answering may require novel ways to pretrain and fine-tune. Finally, while we have provided a series of analyses and ablations, more studies are required to better understand what knowledge a pretrained language model captures, how this changes during fine-tuning, and what information different tasks require. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_54", "text": " We have proposed ULMFiT, an effective and extremely sample-efficient transfer learning method that can be applied to any NLP task. We have also proposed several novel fine-tuning techniques that in conjunction prevent catastrophic forgetting and enable robust learning across a diverse range of tasks. 
Our method significantly outperformed existing transfer learning techniques and the state-of-the-art on six representative text classification tasks. We hope that our results will catalyze new developments in transfer learning for NLP. ", "title": "Universal Language Model Fine-tuning for Text Classification" } ]
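The ULMFiT passages above also specify the slanted triangular learning-rate schedule (Eq. 3) with defaults cut_frac = 0.1, ratio = 32 and eta_max = 0.01. A small, self-contained sketch of that schedule follows; the iteration count T used in the example is arbitrary.

```python
# Slanted triangular learning rates (STLR): a short linear warm-up over the first
# cut_frac fraction of iterations, followed by a long linear decay.
def stlr(t, T, cut_frac=0.1, ratio=32, eta_max=0.01):
    """Learning rate at iteration t of T total iterations (defaults from the text)."""
    cut = int(T * cut_frac)
    if t < cut:
        p = t / cut
    else:
        p = 1 - (t - cut) / (cut * (1 / cut_frac - 1))
    return eta_max * (1 + p * (ratio - 1)) / ratio

schedule = [stlr(t, T=1000) for t in range(1000)]
assert abs(max(schedule) - 0.01) < 1e-9    # peak LR is reached 10% into training
```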
Why should LSTM-based auto-encoder models learn good features?
LSTM-based auto-encoder models are prevented from learning a trivial identity mapping (the fixed number of hidden units and the recursive application of the same LSTM dynamics at every decoding step control this behaviour), so in order to reconstruct the input sequence from the encoder's final state they are forced to learn good features [14].
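As a minimal illustration of the encoder-decoder idea described in the contexts below (the layer sizes, the single-layer nn.LSTM modules, and the zero-input unconditioned decoder are assumptions made for this sketch, not the paper's exact implementation):

```python
# Sketch of an LSTM autoencoder for video: the encoder LSTM compresses the frame
# sequence into its final state, and the decoder LSTM must reconstruct the
# reversed input sequence from that fixed-size representation alone.
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    def __init__(self, frame_dim=1024, hidden_dim=256):
        super().__init__()
        self.encoder = nn.LSTM(frame_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(frame_dim, hidden_dim, batch_first=True)
        self.readout = nn.Linear(hidden_dim, frame_dim)

    def forward(self, frames):                 # frames: (batch, time, frame_dim)
        _, state = self.encoder(frames)        # final (h, c) = video representation
        zeros = torch.zeros_like(frames)       # unconditioned decoder: no frame inputs
        out, _ = self.decoder(zeros, state)
        return self.readout(out)

model = LSTMAutoencoder()
frames = torch.randn(8, 16, 1024)              # 8 clips of 16 frames (patches or percepts)
target = torch.flip(frames, dims=[1])          # target is the input in reverse order
loss = nn.functional.mse_loss(model(frames), target)
```

Swapping the reconstruction target for the frames that follow the input would turn the same sketch into the future-predictor variant discussed in the contexts.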
[ 14 ]
[ { "id": "1502.04681_all_0", "text": " Understanding temporal sequences is important for solving many problems in the AI-set. Recently, recurrent neural networks using the Long Short Term Memory (LSTM) architecture (Hochreiter & Schmidhuber, 1997) have been used successfully to perform various supervised sequence learning tasks, such as speech recognition (Graves & Jaitly, 2014), machine translation (Sutskever et al., 2014; Cho et al., 2014), and caption generation for images (Vinyals et al., 2014). They have also been applied on videos for recognizing actions and generating natural language descriptions (Donahue et al., 2014). A general sequence to sequence learning framework was described by Sutskever et al. (2014) in which a recurrent network is used to encode a sequence into a fixed length representation, and then another recurrent network is used to decode a sequence out of that representation. In this work, we apply and extend this framework to learn representations of sequences of images. We choose to work in the unsupervised setting where we only have access to a dataset of unlabelled videos. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_1", "text": " Videos are an abundant and rich source of visual information and can be seen as a window into the physics of the world we live in, showing us examples of what constitutes objects, how objects move against backgrounds, what happens when cameras move and how things get occluded. Being able to learn a representation that disentangles these factors would help in making intelligent machines that can understand and act in their environment. Additionally, learning good video representations is essential for a number of useful tasks, such as recognizing actions and gestures. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_2", "text": " Supervised learning has been extremely successful in learning good visual representations that not only produce good results at the task they are trained for, but also transfer well to other tasks and datasets. Therefore, it is natural to extend the same approach to learning video representations. This has led to research in 3D convolutional nets (Ji et al., 2013; Tran et al., 2014), different temporal fusion strategies (Karpathy et al., 2014) and exploring different ways of presenting visual information to convolutional nets (Simonyan & Zisserman, 2014a). However, videos are much higher dimensional entities compared to single images. Therefore, it becomes increasingly difficult to do credit assignment and learn long range structure, unless we collect much more labelled data or do a lot of feature engineering (for example computing the right kinds of flow features) to keep the dimensionality low. The costly work of collecting more labelled data and the tedious work of doing more clever engineering can go a long way in solving particular problems, but this is ultimately unsatisfying as a machine learning solution. This highlights the need for using unsupervised learning to find and represent structure in videos. Moreover, videos have a lot of structure in them (spatial and temporal regularities) which makes them particularly well suited as a domain for building unsupervised learning models. 
", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_3", "text": " When designing any unsupervised learning model, it is crucial to have the right inductive biases and choose the right objective function so that the learning signal points the model towards learning useful features. In this paper, we use the LSTM Encoder-Decoder framework to learn video representations. The key inductive bias here is that the same operation must be applied at each time step to propagate information to the next step. This enforces the fact that the physics of the world remains the same, irrespective of input. The same physics acting on any state, at any time, must produce the next state. Our model works as follows. The Encoder LSTM runs through a sequence of frames to come up with a representation. This representation is then decoded through another LSTM to produce a target sequence. We consider different choices of the target sequence. One choice is to predict the same sequence as the input. The motivation is similar to that of autoencoders – we wish to capture all that is needed to reproduce the input but at the same time go through the inductive biases imposed by the model. Another option is to predict the future frames. Here the motivation is to learn a representation that extracts all that is needed to extrapolate the motion and appearance beyond what has been observed. These two natural choices can also be combined. In this case, there are two decoder LSTMs – one that decodes the representation into the input sequence and another that decodes the same representation to predict the future. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_4", "text": " The inputs to the model can, in principle, be any representation of individual video frames. However, for the purposes of this work, we limit our attention to two kinds of inputs. The first is image patches. For this we use natural image patches as well as a dataset of moving MNIST digits. The second is high-level “percepts” extracted by applying a convolutional net trained on ImageNet. These percepts are the states of last (and/or second-to-last) layers of rectified linear hidden states from a convolutional neural net model. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_5", "text": " In order to evaluate the learned representations we qualitatively analyze the reconstructions and predictions made by the model. For a more quantitative evaluation, we use these LSTMs as initializations for the supervised task of action recognition. If the unsupervised learning model comes up with useful representations then the classifier should be able to perform better, especially when there are only a few labelled examples. We find that this is indeed the case. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_6", "text": " The first approaches to learning representations of videos in an unsupervised way were based on ICA (van Hateren & Ruderman, 1998; Hurri & Hyvärinen, 2003). Le et al. (2011) approached this problem using multiple layers of Independent Subspace Analysis modules. Generative models for understanding transformations between pairs of consecutive images are also well studied (Memisevic, 2013; Memisevic & Hinton, 2010; Susskind et al., 2011). This work was extended recently by Michalski et al. (2014) to model longer sequences. 
", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_7", "text": " Recently, Ranzato et al. (2014) proposed a generative model for videos. The model uses a recurrent neural network to predict the next frame or interpolate between frames. In this work, the authors highlight the importance of choosing the right loss function. It is argued that squared loss in input space is not the right objective because it does not respond well to small distortions in input space. The proposed solution is to quantize image patches into a large dictionary and train the model to predict the identity of the target patch. This does solve some of the problems of squared loss but it introduces an arbitrary dictionary size into the picture and altogether removes the idea of patches being similar or dissimilar to one other. Designing an appropriate loss function that respects our notion of visual similarity is a very hard problem (in a sense, almost as hard as the modeling problem we want to solve in the first place). Therefore, in this paper, we use the simple squared loss objective function as a starting point and focus on designing an encoder-decoder RNN architecture that can be used with any loss function. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_8", "text": " In this section, we describe several variants of our LSTM Encoder-Decoder model. The basic unit of our network is the LSTM cell block. Our implementation of LSTMs follows closely the one discussed by Graves (2013). ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_9", "text": " In this section we briefly describe the LSTM unit which is the basic building block of our model. The unit is shown in Fig. 1 (reproduced from Graves (2013)). ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_10", "text": " Each LSTM unit has a cell which has a state ctsubscript𝑐𝑡c_{t} at time t𝑡t. This cell can be thought of as a memory unit. Access to this memory unit for reading or modifying it is controlled through sigmoidal gates – input gate itsubscript𝑖𝑡i_{t}, forget gate ftsubscript𝑓𝑡f_{t} and output gate otsubscript𝑜𝑡o_{t}. The LSTM unit operates as follows. At each time step it receives inputs from two external sources at each of the four terminals (the three gates and the input). The first source is the current frame 𝐱tsubscript𝐱𝑡{{\\bf x}_{t}}. The second source is the previous hidden states of all LSTM units in the same layer 𝐡t−1subscript𝐡𝑡1{\\bf h}_{t-1}. Additionally, each gate has an internal source, the cell state ct−1subscript𝑐𝑡1c_{t-1} of its cell block. The links between a cell and its own gates are called peephole connections. The inputs coming from different sources get added up, along with a bias. The gates are activated by passing their total input through the logistic function. The total input at the input terminal is passed through the tanh non-linearity. The resulting activation is multiplied by the activation of the input gate. This is then added to the cell state after multiplying the cell state by the forget gate’s activation ftsubscript𝑓𝑡f_{t}. The final output from the LSTM unit htsubscriptℎ𝑡h_{t} is computed by multiplying the output gate’s activation otsubscript𝑜𝑡o_{t} with the updated cell state passed through a tanh non-linearity. 
These updates are summarized for a layer of LSTM units as follows: {\\bf i}_{t} = \\sigma(W_{xi}{\\bf x}_{t} + W_{hi}{\\bf h}_{t-1} + W_{ci}{\\bf c}_{t-1} + {\\bf b}_{i}), {\\bf f}_{t} = \\sigma(W_{xf}{\\bf x}_{t} + W_{hf}{\\bf h}_{t-1} + W_{cf}{\\bf c}_{t-1} + {\\bf b}_{f}), {\\bf c}_{t} = {\\bf f}_{t}{\\bf c}_{t-1} + {\\bf i}_{t}\\tanh(W_{xc}{\\bf x}_{t} + W_{hc}{\\bf h}_{t-1} + {\\bf b}_{c}), {\\bf o}_{t} = \\sigma(W_{xo}{\\bf x}_{t} + W_{ho}{\\bf h}_{t-1} + W_{co}{\\bf c}_{t} + {\\bf b}_{o}), {\\bf h}_{t} = {\\bf o}_{t}\\tanh({\\bf c}_{t}). ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_11", "text": " Note that all W_{c\\bullet} matrices are diagonal, whereas the rest are dense. The key advantage of using an LSTM unit over a traditional neuron in an RNN is that the cell state in an LSTM unit sums activities over time. Since derivatives distribute over sums, the error derivatives don’t vanish quickly as they get sent back into time. This makes it easy to do credit assignment over long sequences and discover long-range features. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_12", "text": " In this section, we describe a model that uses Recurrent Neural Nets (RNNs) made of LSTM units to do unsupervised learning. The model consists of two RNNs – the encoder LSTM and the decoder LSTM as shown in Fig. 2. The input to the model is a sequence of vectors (image patches or features). The encoder LSTM reads in this sequence. After the last input has been read, the decoder LSTM takes over and outputs a prediction for the target sequence. The target sequence is the same as the input sequence, but in reverse order. Reversing the target sequence makes the optimization easier because the model can get off the ground by looking at low range correlations. This is also inspired by how lists are represented in LISP. The encoder can be seen as creating a list by applying the cons function on the previously constructed list and the new input. The decoder essentially unrolls this list, with the hidden to output weights extracting the element at the top of the list (car function) and the hidden to hidden weights extracting the rest of the list (cdr function). Therefore, the first element out is the last element in. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_13", "text": " The decoder can be of two kinds – conditional or unconditioned. A conditional decoder receives the last generated output frame as input, i.e., the dotted input in Fig. 2 is present. An unconditioned decoder does not receive that input.
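To make the gate updates above concrete, here is a minimal NumPy sketch of a single peephole-LSTM step following the equations just listed; the function name lstm_step, the dictionary layout of the weights, and the toy dimensions are illustrative assumptions, not part of the original implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step with peephole connections (Graves-style updates).

    W is an assumed dict of weight matrices: W['xi'], W['hi'], W['ci'], ...
    The peephole matrices W['c*'] are diagonal, represented here as vectors.
    """
    i_t = sigmoid(W['xi'] @ x_t + W['hi'] @ h_prev + W['ci'] * c_prev + b['i'])
    f_t = sigmoid(W['xf'] @ x_t + W['hf'] @ h_prev + W['cf'] * c_prev + b['f'])
    c_t = f_t * c_prev + i_t * np.tanh(W['xc'] @ x_t + W['hc'] @ h_prev + b['c'])
    o_t = sigmoid(W['xo'] @ x_t + W['ho'] @ h_prev + W['co'] * c_t + b['o'])
    h_t = o_t * np.tanh(c_t)
    return h_t, c_t

# Tiny usage example with random weights (dimensions are arbitrary).
rng = np.random.default_rng(0)
D, H = 8, 4  # input and hidden sizes
W = {k: rng.normal(scale=0.1, size=(H, D)) for k in ('xi', 'xf', 'xc', 'xo')}
W.update({k: rng.normal(scale=0.1, size=(H, H)) for k in ('hi', 'hf', 'hc', 'ho')})
W.update({k: np.zeros(H) for k in ('ci', 'cf', 'co')})  # diagonal peepholes, initialized to zero
b = {k: np.zeros(H) for k in ('i', 'f', 'c', 'o')}
h, c = np.zeros(H), np.zeros(H)
for t in range(5):
    h, c = lstm_step(rng.normal(size=D), h, c, W, b)
```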
This is discussed in more detail in Sec. 2.4. Fig. 2 shows a single layer LSTM Autoencoder. The architecture can be extended to multiple layers by stacking LSTMs on top of each other. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_14", "text": " Why should this learn good features? The state of the encoder LSTM after the last input has been read is the representation of the input video. The decoder LSTM is being asked to reconstruct back the input sequence from this representation. In order to do so, the representation must retain information about the appearance of the objects and the background as well as the motion contained in the video. However, an important question for any autoencoder-style model is what prevents it from learning an identity mapping and effectively copying the input to the output. In that case all the information about the input would still be present but the representation will be no better than the input. There are two factors that control this behaviour. First, the fact that there are only a fixed number of hidden units makes it unlikely that the model can learn trivial mappings for arbitrary length input sequences. Second, the same LSTM operation is used to decode the representation recursively. This means that the same dynamics must be applied on the representation at any stage of decoding. This further prevents the model from learning an identity mapping. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_15", "text": " Another natural unsupervised learning task for sequences is predicting the future. This is the approach used in language models for modeling sequences of words. The design of the Future Predictor Model is the same as that of the Autoencoder Model, except that the decoder LSTM in this case predicts frames of the video that come after the input sequence (Fig. 3). Ranzato et al. (2014) use a similar model but predict only the next frame at each time step. This model, on the other hand, predicts a long sequence into the future. Here again we can consider two variants of the decoder – conditional and unconditioned. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_16", "text": " Why should this learn good features? In order to predict the next few frames correctly, the model needs information about which objects and background are present and how they are moving so that the motion can be extrapolated. The hidden state coming out from the encoder will try to capture this information. Therefore, this state can be seen as a representation of the input sequence. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_17", "text": " For each of these two models, we can consider two possibilities - one in which the decoder LSTM is conditioned on the last generated frame and the other in which it is not. In the experimental section, we explore these choices quantitatively. Here we briefly discuss arguments for and against a conditional decoder. A strong argument in favour of using a conditional decoder is that it allows the decoder to model multiple modes in the target sequence distribution. Without that, we would end up averaging the multiple modes in the low-level input space. However, this is an issue only if we expect multiple modes in the target sequence distribution.
For the LSTM Autoencoder, there is only one correct target and hence a unimodal target distribution. But for the LSTM Future Predictor there is a possibility of multiple targets given an input because even if we assume a deterministic universe, everything needed to predict the future will not necessarily be observed in the input. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_18", "text": " There is also an argument against using a conditional decoder from the optimization point-of-view. There are strong short-range correlations in video data, for example, most of the content of a frame is the same as the previous one. If the decoder was given access to the last few frames while generating a particular frame at training time, it would find it easy to pick up on these correlations. There would only be a very small gradient that tries to fix up the extremely subtle errors that require long term knowledge about the input sequence. In an unconditioned decoder, this input is removed and the model is forced to look for information deep inside the encoder. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_19", "text": " The two tasks – reconstructing the input and predicting the future – can be combined to create a composite model as shown in Fig. 4. Here the encoder LSTM is asked to come up with a state from which we can both predict the next few frames as well as reconstruct the input. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_20", "text": " This composite model tries to overcome the shortcomings that each model suffers on its own. A high-capacity autoencoder would suffer from the tendency to learn trivial representations that just memorize the inputs. However, this memorization is not useful at all for predicting the future. Therefore, the composite model cannot just memorize information. On the other hand, the future predictor suffers from the tendency to store information only about the last few frames since those are most important for predicting the future, i.e., in order to predict v_t, the frames {v_{t-1}, ..., v_{t-k}} are much more important than v_0, for some small value of k. Therefore the representation at the end of the encoder will have forgotten about a large part of the input. But if we ask the model to also predict all of the input sequence, then it cannot just pay attention to the last few frames. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_21", "text": " We design experiments to accomplish the following objectives: • Get a qualitative understanding of what the LSTM learns to do. • Measure the benefit of initializing networks for supervised learning tasks with the weights found by unsupervised learning, especially with very few training examples. • Compare the different proposed models - Autoencoder, Future Predictor and Composite models and their conditional variants. • Compare with state-of-the-art action recognition benchmarks. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_22", "text": " We use the UCF-101 and HMDB-51 datasets for supervised tasks. The UCF-101 dataset (Soomro et al., 2012) contains 13,320 videos with an average length of 6.2 seconds belonging to 101 different action categories.
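As a rough sketch of the Composite Model's forward pass described above (one encoder whose final state feeds two decoders, shown here in their unconditioned form), the following reuses the hypothetical lstm_step helper from the earlier sketch; the parameter layout and the linear read-out are assumptions for illustration, not the authors' exact implementation.

```python
import numpy as np

def run_composite(frames, future_len, enc, dec_rec, dec_fut, W_out_rec, W_out_fut):
    """Sketch of the Composite Model's forward pass using the lstm_step helper above.

    `enc`, `dec_rec`, `dec_fut` are assumed (W, b) parameter pairs for the encoder,
    the input-reconstruction decoder and the future-prediction decoder; W_out_* are
    assumed (D, H) matrices mapping hidden states to output frames. Decoders here are
    unconditioned: they get a zero input at every step instead of the previous output.
    """
    D = frames[0].shape[0]
    H = enc[1]['i'].shape[0]
    h, c = np.zeros(H), np.zeros(H)
    for x in frames:                          # encoder reads the whole input sequence
        h, c = lstm_step(x, h, c, *enc)
    rep = (h, c)                              # the learned video representation

    def decode(dec, W_out, steps, state):
        h, c = state
        outputs = []
        for _ in range(steps):
            h, c = lstm_step(np.zeros(D), h, c, *dec)
            outputs.append(W_out @ h)         # linear read-out of the predicted frame
        return outputs

    recon = decode(dec_rec, W_out_rec, len(frames), rep)   # compared to the input in reverse order
    future = decode(dec_fut, W_out_fut, future_len, rep)   # compared to the next frames
    return recon, future
```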
The dataset has 3 standard train/test splits with the training set containing around 9,500 videos in each split (the rest are test). The HMDB-51 dataset (Kuehne et al., 2011) contains 5100 videos belonging to 51 different action categories. Mean length of the videos is 3.2 seconds. This also has 3 train/test splits with 3570 videos in the training set and rest in test. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_23", "text": " To train the unsupervised models, we used a subset of the Sports-1M dataset (Karpathy et al., 2014), that contains 1 million YouTube clips. Even though this dataset is labelled for actions, we did not do any supervised experiments on it because of logistical constraints with working with such a huge dataset. We instead collected 300 hours of video by randomly sampling 10 second clips from the dataset. It is possible to collect better samples if instead of choosing randomly, we extracted videos where a lot of motion is happening and where there are no shot boundaries. However, we did not do so in the spirit of unsupervised learning, and because we did not want to introduce any unnatural bias in the samples. We also used the supervised datasets (UCF-101 and HMDB-51) for unsupervised training. However, we found that using them did not give any significant advantage over just using the YouTube videos. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_24", "text": " We extracted percepts using the convolutional neural net model of Simonyan & Zisserman (2014b). The videos have a resolution of 240 ×\\times 320 and were sampled at almost 30 frames per second. We took the central 224 ×\\times 224 patch from each frame and ran it through the convnet. This gave us the RGB percepts. Additionally, for UCF-101, we computed flow percepts by extracting flows using the Brox method and training the temporal stream convolutional network as described by Simonyan & Zisserman (2014a). We found that the fc6 features worked better than fc7 for single frame classification using both RGB and flow percepts. Therefore, we used the 4096-dimensional fc6 layer as the input representation of our data. Besides these percepts, we also trained the proposed models on 32 ×\\times 32 patches of pixels. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_25", "text": " All models were trained using backprop on a single NVIDIA Titan GPU. A two layer 2048 unit Composite model that predicts 13 frames and reconstructs 16 frames took 18-20 hours to converge on 300 hours of percepts. We initialized weights by sampling from a uniform distribution whose scale was set to 1/sqrt(fan-in). Biases at all the gates were initialized to zero. Peep-hole connections were initialized to zero. The supervised classifiers trained on 16 frames took 5-15 minutes to converge. The code can be found at https://github.com/emansim/unsupervised-videos. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_26", "text": " The aim of this set of experiments to visualize the properties of the proposed models. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_27", "text": " Experiments on MNIST We first trained our models on a dataset of moving MNIST digits. In this dataset, each video was 20 frames long and consisted of two digits moving inside a 64 ×\\times 64 patch. 
The digits were chosen randomly from the training set and placed initially at random locations inside the patch. Each digit was assigned a velocity whose direction was chosen uniformly randomly on a unit circle and whose magnitude was also chosen uniformly at random over a fixed range. The digits bounced-off the edges of the 64 ×\\times 64 frame and overlapped if they were at the same location. The reason for working with this dataset is that it is infinite in size and can be generated quickly on the fly. This makes it possible to explore the model without expensive disk accesses or overfitting issues. It also has interesting behaviours due to occlusions and the dynamics of bouncing off the walls. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_28", "text": " We first trained a single layer Composite Model. Each LSTM had 2048 units. The encoder took 10 frames as input. The decoder tried to reconstruct these 10 frames and the future predictor attempted to predict the next 10 frames. We used logistic output units with a cross entropy loss function. Fig. 5 shows two examples of running this model. The true sequences are shown in the first two rows. The next two rows show the reconstruction and future prediction from the one layer Composite Model. It is interesting to note that the model figures out how to separate superimposed digits and can model them even as they pass through each other. This shows some evidence of disentangling the two independent factors of variation in this sequence. The model can also correctly predict the motion after bouncing off the walls. In order to see if adding depth helps, we trained a two layer Composite Model, with each layer having 2048 units. We can see that adding depth helps the model make better predictions. Next, we changed the future predictor by making it conditional. We can see that this model makes sharper predictions. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_29", "text": " Experiments on Natural Image Patches Next, we tried to see if our models can also work with natural image patches. For this, we trained the models on sequences of 32 ×\\times 32 natural image patches extracted from the UCF-101 dataset. In this case, we used linear output units and the squared error loss function. The input was 16 frames and the model was asked to reconstruct the 16 frames and predict the future 13 frames. Fig. 6 shows the results obtained from a two layer Composite model with 2048 units. We found that the reconstructions and the predictions are both very blurry. We then trained a bigger model with 4096 units. The outputs from this model are also shown in Fig. 6. We can see that the reconstructions get much sharper. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_30", "text": " Generalization over time scales In the next experiment, we test if the model can work at time scales that are different than what it was trained on. We take a one hidden layer unconditioned Composite Model trained on moving MNIST digits. The model has 2048 LSTM units and looks at a 64 ×\\times 64 input. It was trained on input sequences of 10 frames to reconstruct those 10 frames as well as predict 10 frames into the future. In order to test if the future predictor is able to generalize beyond 10 frames, we let the model run for 100 steps into the future. Fig. 
7(a) shows the pattern of activity in the LSTM units of the future predictor pathway for a randomly chosen test input. It shows the activity at each of the three sigmoidal gates (input, forget, output), the input (after the tanh non-linearity, before being multiplied by the input gate), the cell state and the final output (after being multiplied by the output gate). Even though the units are ordered randomly along the vertical axis, we can see that the dynamics has a periodic quality to it. The model is able to generate persistent motion for long periods of time. In terms of reconstruction, the model only outputs blobs after the first 15 frames, but the motion is relatively well preserved. More results, including long range future predictions over hundreds of time steps, can be seen at http://www.cs.toronto.edu/~nitish/unsupervised_video. To show that setting up a periodic behaviour is not trivial, Fig. 7(b) shows the activity from a randomly initialized future predictor. Here, the LSTM state quickly converges and the outputs blur completely. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_31", "text": " Out-of-domain Inputs Next, we test this model's ability to deal with out-of-domain inputs. For this, we test the model on sequences of one and three moving digits. The model was trained on sequences of two moving digits, so it has never seen inputs with just one digit or three digits. Fig. 8 shows the reconstruction and future prediction results. For one moving digit, we can see that the model can do a good job but it really tries to hallucinate a second digit overlapping with the first one. The second digit shows up towards the end of the future reconstruction. For three digits, the model merges digits into blobs. However, it does well at getting the overall motion right. This highlights a key drawback of modeling entire frames of input in a single pass. In order to model videos with a variable number of objects, we perhaps need models that not only have an attention mechanism in place, but can also learn to execute themselves a variable number of times and do variable amounts of computation. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_32", "text": " Visualizing Features Next, we visualize the features learned by this model. Fig. 9 shows the weights that connect each input frame to the encoder LSTM. There are four sets of weights. One set of weights connects the frame to the input units. There are three other sets, one corresponding to each of the three gates (input, forget and output). Each weight has a size of 64 × 64. A lot of features look like thin strips. Others look like higher frequency strips. It is conceivable that the high frequency features help in encoding the direction and velocity of motion. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_33", "text": " Fig. 10 shows the output features from the two LSTM decoders of a Composite Model. These correspond to the weights connecting the LSTM output units to the output layer. They appear to be somewhat qualitatively different from the input features shown in Fig. 9. There are many more output features that are local blobs, whereas those are rare in the input features. In the output features, the ones that do look like strips are much shorter than those in the input features. One way to interpret this is the following.
The model needs to know about motion (which direction and how fast things are moving) from the input. This requires precise information about location (thin strips) and velocity (high frequency strips). But when it is generating the output, the model wants to hedge its bets so that it does not suffer a huge loss for predicting things sharply at the wrong place. This could explain why the output features have somewhat bigger blobs. The relative shortness of the strips in the output features can be explained by the fact that in the inputs, it does not hurt to have a longer feature than what is needed to detect a location because information is coarse-coded through multiple features. But in the output, the model may not want to put down a feature that is bigger than any digit because other units will have to conspire to correct for it. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_34", "text": " The aim of this set of experiments is to see if the features learned by unsupervised learning can help improve performance on supervised tasks. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_35", "text": " We trained a two layer Composite Model with 2048 hidden units with no conditioning on either decoders. The model was trained on percepts extracted from 300 hours of YouTube data. The model was trained to autoencode 16 frames and predict the next 13 frames. We initialize an LSTM classifier with the weights learned by the encoder LSTM from this model. The classifier is shown in Fig. 11. The output from each LSTM in the second layer goes into a softmax classifier that makes a prediction about the action being performed at each time step. Since only one action is being performed in each video in the datasets we consider, the target is the same at each time step. At test time, the predictions made at each time step are averaged. To get a prediction for the entire video, we average the predictions from all 16 frame blocks in the video with a stride of 8 frames. Using a smaller stride did not improve results. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_36", "text": " The baseline for comparing these models is an identical LSTM classifier but with randomly initialized weights. All classifiers used dropout regularization, where we dropped activations as they were communicated across layers but not through time within the same LSTM as proposed in Zaremba et al. (2014). We emphasize that this is a very strong baseline and does significantly better than just using single frames. Using dropout was crucial in order to train good baseline models especially with very few training examples. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_37", "text": " Fig. 12 compares three models - single frame classifier (logistic regression), baseline LSTM classifier and the LSTM classifier initialized with weights from the Composite Model as the number of labelled videos per class is varied. Note that having one labelled video means having many labelled 16 frame blocks. We can see that for the case of very few training examples, unsupervised learning gives a substantial improvement. For example, for UCF-101, the performance improves from 29.6% to 34.3% when training on only one labelled video. As the size of the labelled dataset grows, the improvement becomes smaller. 
Even for the full UCF-101 dataset we still get a considerable improvement from 74.5% to 75.8%. On HMDB-51, the improvement is from 42.8% to 44.0% for the full dataset (70 videos per class) and 14.4% to 19.1% for one video per class. Although, the improvement in classification by using unsupervised learning was not as big as we expected, we still managed to yield an additional improvement over a strong baseline. We discuss some avenues for improvements later. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_38", "text": " We further ran similar experiments on the optical flow percepts extracted from the UCF-101 dataset. A temporal stream convolutional net, similar to the one proposed by Simonyan & Zisserman (2014b), was trained on single frame optical flows as well as on stacks of 10 optical flows. This gave an accuracy of 72.2% and 77.5% respectively. Here again, our models took 16 frames as input, reconstructed them and predicted 13 frames into the future. LSTMs with 128 hidden units improved the accuracy by 2.1% to 74.3% for the single frame case. Bigger LSTMs did not improve results. By pretraining the LSTM, we were able to further improve the classification to 74.9% (±0.1plus-or-minus0.1\\pm 0.1). For stacks of 10 frames we improved very slightly to 77.7%. These results are summarized in Table 1. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_39", "text": " The aim of this set of experiments is to compare the different variants of the model proposed in this paper. Since it is always possible to get lower reconstruction error by copying the inputs, we cannot use input reconstruction error as a measure of how good a model is doing. However, we can use the error in predicting the future as a reasonable measure of how good the model is doing. Besides, we can use the performance on supervised tasks as a proxy for how good the unsupervised model is doing. In this section, we present results from these two analyses. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_40", "text": " Future prediction results are summarized in Table 2. For MNIST we compute the cross entropy of the predictions with respect to the ground truth, both of which are 64 ×\\times 64 patches. For natural image patches, we compute the squared loss. We see that the Composite Model always does a better job of predicting the future compared to the Future Predictor. This indicates that having the autoencoder along with the future predictor to force the model to remember more about the inputs actually helps predict the future better. Next, we can compare each model with its conditional variant. Here, we find that the conditional models perform better, as was also noted in Fig. 5. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_41", "text": " Next, we compare the models using performance on a supervised task. Table 3 shows the performance on action recognition achieved by finetuning different unsupervised learning models. Besides running the experiments on the full UCF-101 and HMDB-51 datasets, we also ran the experiments on small subsets of these to better highlight the case where we have very few training examples. We find that all unsupervised models improve over the baseline LSTM which is itself well-regularized by using dropout. The Autoencoder model seems to perform consistently better than the Future Predictor. 
The Composite model which combines the two does better than either one alone. Conditioning on the generated inputs does not seem to give a clear advantage over not doing so. The Composite Model with a conditional future predictor works the best, although its performance is almost the same as that of the Composite Model. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_42", "text": " Finally, we compare our models to the state-of-the-art action recognition results. The performance is summarized in Table 4. The table is divided into three sets. The first set compares models that use only RGB data (single or multiple frames). The second set compares models that use explicitly computed flow features only. Models in the third set use both. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_43", "text": " On RGB data, our model performs on par with the best deep models. It performs 3% better than the LRCN model that also used LSTMs on top of convnet features (however, the improvement is only partially from unsupervised learning, since we used a better convnet model). Our model performs better than C3D features that use a 3D convolutional net. However, when the C3D features are concatenated with fc6 percepts, they do slightly better than our model. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_44", "text": " The improvement for flow features over using a randomly initialized LSTM network is quite small. We believe this is at least partly due to the fact that the flow percepts already capture a lot of the motion information that the LSTM would otherwise discover. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_45", "text": " When we combine predictions from the RGB and flow models, we obtain 84.3% accuracy on UCF-101. We believe further improvements can be made by running the model over different patch locations and mirroring the patches. Also, our model can be applied deeper inside the convnet instead of just at the top-level. That can potentially lead to further improvements. In this paper, we focus on showing that unsupervised training helps consistently across both datasets and across different sized training sets. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_46", "text": " We proposed models based on LSTMs that can learn good video representations. We compared them and analyzed their properties through visualizations. Moreover, we managed to get an improvement on supervised tasks. The best performing model was the Composite Model that combined an autoencoder and a future predictor. Conditioning on generated outputs did not have a significant impact on the performance for supervised tasks, however it made the future predictions look slightly better. The model was able to persistently generate motion well beyond the time scales it was trained for. However, it lost the precise object features rapidly after the training time scale. The features at the input and output layers were found to have some interesting properties. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_47", "text": " To further get improvements for supervised tasks, we believe that the model can be extended by applying it convolutionally across patches of the video and stacking multiple layers of such models.
Applying this model in the lower layers of a convolutional net could help extract motion information that would otherwise be lost across max-pooling layers. In our future work, we plan to build models based on these autoencoders from the bottom up instead of applying them only to percepts. ", "title": "Unsupervised Learning of Video Representations using LSTMs" } ]
Can the NetVLAD pooling layer be inserted into any other CNN, or does it only support certain architectures?
Yes, it is a generic building block and can be inserted into any other CNN architecture [48].
[ 48 ]
[ { "id": "1511.07247_all_0", "text": " Visual place recognition has received a significant amount of attention in the past years both in computer vision (66, 35, 10, 64, 65, 24, 9, 81, 4, 80, 63) and robotics communities (15, 16, 46, 44, 75) motivated by, e.g., applications in autonomous driving , augmented reality  or geo-localizing archival imagery . ", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_1", "text": " The place recognition problem, however, still remains extremely challenging. How can we recognize the same street-corner in the entire city or on the scale of the entire country despite the fact it can be captured in different illuminations or change its appearance over time? The fundamental scientific question is what is the appropriate representation of a place that is rich enough to distinguish similarly looking places yet compact to represent entire cities or countries. ", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_2", "text": " The place recognition problem has been traditionally cast as an instance retrieval task, where the query image location is estimated using the locations of the most visually similar images obtained by querying a large geotagged database (66, 10, 35, 81, 80, 4). Each database image is represented using local invariant features  such as SIFT  that are aggregated into a single vector representation for the entire image such as bag-of-visual-words (74, 53), VLAD (3, 29) or Fisher vector (52, 31). The resulting representation is then usually compressed and efficiently indexed (74, 28). The image database can be further augmented by 3D structure that enables recovery of accurate camera pose (40, 63, 64). ", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_3", "text": " In the last few years convolutional neural networks (CNNs) (38, 39) have emerged as powerful image representations for various category-level recognition tasks such as object classification (37, 49, 73, 77), scene recognition  or object detection . The basic principles of CNNs are known from 80’s (38, 39) and the recent successes are a combination of advances in GPU-based computation power together with large labelled image datasets . While it has been shown that the trained representations are, to some extent, transferable between recognition tasks (19, 21, 49, 69, 89), a direct application of CNN representations trained for object classification  as black-box descriptor extractors has so far yielded limited improvements in performance on instance-level recognition tasks (6, 7, 22, 60, 62). In this work we investigate whether this gap in performance can be bridged by CNN representations developed and trained directly for place recognition. This requires addressing the following three main challenges. First, what is a good CNN architecture for place recognition? Second, how to gather sufficient amount of annotated data for the training? Third, how can we train the developed architecture in an end-to-end manner tailored for the place recognition task? To address these challenges we bring the following three innovations. 
", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_4", "text": " First, building on the lessons learnt from the current well performing hand-engineered object retrieval and place recognition pipelines (2, 3, 25, 80) we develop a convolutional neural network architecture for place recognition that aggregates mid-level (conv5) convolutional features extracted from the entire image into a compact single vector representation amenable to efficient indexing. To achieve this, we design a new trainable generalized VLAD layer, NetVLAD, inspired by the Vector of Locally Aggregated Descriptors (VLAD) representation  that has shown excellent performance in image retrieval and place recognition. The layer is readily pluggable into any CNN architecture and amenable to training via backpropagation. The resulting aggregated representation is then compressed using Principal Component Analysis (PCA) to obtain the final compact descriptor of the image. ", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_5", "text": " Second, to train the architecture for place recognition, we gather a large dataset of multiple panoramic images depicting the same place from different viewpoints over time from the Google Street View Time Machine. Such data is available for vast areas of the world, but provides only weak form of supervision: we know the two panoramas are captured at approximately similar positions based on their (noisy) GPS but we don’t know which parts of the panoramas depict the same parts of the scene. ", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_6", "text": " Third, we develop a learning procedure for place recognition that learns parameters of the architecture in an end-to-end manner tailored for the place recognition task from the weakly labelled Time Machine imagery. The resulting representation is robust to changes in viewpoint and lighting conditions, while simultaneously learns to focus on the relevant parts of the image such as the building façades and the skyline, while ignoring confusing elements such as cars and people that may occur at many different places. ", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_7", "text": " We show that the proposed architecture significantly outperforms non-learnt image representations and off-the-shelf CNN descriptors on two challenging place recognition benchmarks, and improves over current state-of-the-art compact image representations on standard image retrieval benchmarks. ", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_8", "text": " While there have been many improvements in designing better image retrieval (2, 3, 12, 11, 17, 26, 27, 29, 25, 32, 48, 51, 52, 53, 54, 71, 78, 79, 82) and place recognition (4, 10, 15, 16, 24, 9, 35, 46, 44, 64, 65, 63, 75, 81, 80) systems, not many works have performed learning for these tasks. All relevant learning-based approaches fall into one or both of the following two categories: (i) learning for an auxiliary task (e.g. some form of distinctiveness of local features (4, 15, 30, 35, 58, 59, 90)), and (ii) learning on top of shallow hand-engineered descriptors that cannot be fine-tuned for the target task (2, 24, 9, 35, 57). 
Both of these are in spirit opposite to the core idea behind deep learning that has provided a major boost in performance in various recognition tasks: end-to-end learning. We will indeed show in section 5.2 that training representations directly for the end-task, place recognition, is crucial for obtaining good performance. ", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_9", "text": " Numerous works concentrate on learning better local descriptors or metrics to compare them (88, 55, 45, 48, 71, 56, 70, 50), but even though some of them show results on image retrieval, the descriptors are learnt on the task of matching local image patches, and not directly with image retrieval in mind. Some of them also make use of hand-engineered features to bootstrap the learning, i.e. to provide noisy training data (55, 45, 48, 71, 50). ", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_10", "text": " Several works have investigated using CNN-based features for image retrieval. These include treating activations from certain layers directly as descriptors by concatenating them (8, 60), or by pooling (6, 22, 7). However, none of these works actually train the CNNs for the task at hand, but use CNNs as black-box descriptor extractors. One exception is the work of Babenko et al. in which the network is fine-tuned on an auxiliary task of classifying 700 landmarks. However, again the network is not trained directly on the target retrieval task. ", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_11", "text": " Finally, recently and performed end-to-end learning for different but related tasks of ground-to-aerial matching and camera pose estimation . ", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_12", "text": " Building on the success of current place recognition systems (e.g. (66, 35, 10, 64, 65, 81, 4, 80, 63)), we cast place recognition as image retrieval. The query image with unknown location is used to visually search a large geotagged image database, and the locations of top ranked images are used as suggestions for the location of the query. This is generally done by designing a function f which acts as the “image representation extractor”, such that given an image I_i it produces a fixed size vector f(I_i). The function is used to extract the representations for the entire database {I_i}, which can be done offline, and to extract the query image representation f(q), done online. At test time, the visual search is performed by finding the nearest database image to the query, either exactly or through fast approximate nearest neighbour search, by sorting images based on the Euclidean distance d(q, I_i) between f(q) and f(I_i). ", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_13", "text": " While previous works have mainly used hand-engineered image representations (e.g. f(I) corresponds to extracting SIFT descriptors , followed by pooling into a bag-of-words vector or a VLAD vector ), here we propose to learn the representation f(I) in an end-to-end manner, directly optimized for the task of place recognition.
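A minimal sketch of the retrieval step described above, assuming the representations f(I_i) for the database have already been computed offline; the function name rank_database is hypothetical.

```python
import numpy as np

def rank_database(f_query, f_database):
    """Rank database images by Euclidean distance d(q, I_i) = ||f(q) - f(I_i)||.

    f_query: (D,) query representation computed online; f_database: (M, D) matrix of
    database representations computed offline. Returns indices sorted from most to
    least similar.
    """
    d = np.linalg.norm(f_database - f_query[None, :], axis=1)
    return np.argsort(d)

# The locations of the top-ranked database images then serve as hypotheses for the
# location of the query; in practice approximate nearest-neighbour search would
# replace this exhaustive distance computation.
```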
The representation is parametrized with a set of parameters θ and we emphasize this fact by referring to it as f_θ(I). It follows that the Euclidean distance d_θ(I_i, I_j) = ||f_θ(I_i) - f_θ(I_j)|| also depends on the same parameters. An alternative setup would be to learn the distance function itself, but here we choose to fix the distance function to be the Euclidean distance, and to pose our problem as the search for the explicit feature map f_θ which works well under the Euclidean distance. ", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_14", "text": " In section 3 we describe the proposed representation f_θ based on a new deep convolutional neural network architecture inspired by the compact aggregated image descriptors for instance retrieval. In section 4 we describe a method to learn the parameters θ of the network in an end-to-end manner using weakly supervised training data from the Google Street View Time Machine. ", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_15", "text": " This section describes the proposed CNN architecture f_θ, guided by the best practices from the image retrieval community. Most image retrieval pipelines are based on (i) extracting local descriptors, which are then (ii) pooled in an orderless manner. The motivation behind this choice is that the procedure provides significant robustness to translation and partial occlusion. Robustness to lighting and viewpoint changes is provided by the descriptors themselves, and scale invariance is ensured through extracting descriptors at multiple scales. ", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_16", "text": " In order to learn the representation end-to-end, we design a CNN architecture that mimics this standard retrieval pipeline in a unified and principled manner with differentiable modules. For step (i), we crop the CNN at the last convolutional layer and view it as a dense descriptor extractor. This has been observed to work well for instance retrieval (6, 7, 62) and texture recognition . Namely, the output of the last convolutional layer is a H × W × D map which can be considered as a set of D-dimensional descriptors extracted at H × W spatial locations. For step (ii) we design a new pooling layer inspired by the Vector of Locally Aggregated Descriptors (VLAD)  that pools extracted descriptors into a fixed image representation and its parameters are learnable via back-propagation. We call this new pooling layer “NetVLAD” layer and describe it in the next section. ", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_17", "text": " Vector of Locally Aggregated Descriptors (VLAD)  is a popular descriptor pooling method for both instance level retrieval  and image classification . It captures information about the statistics of local descriptors aggregated over the image. Whereas bag-of-visual-words (14, 74) aggregation keeps counts of visual words, VLAD stores the sum of residuals (difference vector between the descriptor and its corresponding cluster centre) for each visual word.
", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_18", "text": " Formally, given N𝑁N D-dimensional local image descriptors {𝐱i}subscript𝐱𝑖\\{\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf x$}}{\\mbox{\\boldmath$\\textstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf x$}}_{i}\\} as input, and K𝐾K cluster centres (“visual words”) {𝐜k}subscript𝐜𝑘\\{\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf c$}}{\\mbox{\\boldmath$\\textstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf c$}}_{k}\\} as VLAD parameters, the output VLAD image representation V𝑉V is K×D𝐾𝐷K\\times D-dimensional. For convenience we will write V𝑉V as a K×D𝐾𝐷K\\times D matrix, but this matrix is converted into a vector and, after normalization, used as the image representation. The (j,k)𝑗𝑘(j,k) element of V𝑉V is computed as follows: V​(j,k)=∑i=1Nak​(𝐱i)​(xi​(j)−ck​(j)),𝑉𝑗𝑘superscriptsubscript𝑖1𝑁subscript𝑎𝑘subscript𝐱𝑖subscript𝑥𝑖𝑗subscript𝑐𝑘𝑗V(j,k)=\\sum_{i=1}^{N}a_{k}(\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf x$}}{\\mbox{\\boldmath$\\textstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf x$}}_{i})\\left(x_{i}(j)-c_{k}(j)\\right), (1) where xi​(j)subscript𝑥𝑖𝑗x_{i}(j) and ck​(j)subscript𝑐𝑘𝑗c_{k}(j) are the j𝑗j-th dimensions of the i𝑖i-th descriptor and k𝑘k-th cluster centre, respectively. ak​(𝐱i)subscript𝑎𝑘subscript𝐱𝑖a_{k}(\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf x$}}{\\mbox{\\boldmath$\\textstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf x$}}_{i}) denotes the membership of the descriptor 𝐱isubscript𝐱𝑖\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf x$}}{\\mbox{\\boldmath$\\textstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf x$}}_{i} to k𝑘k-th visual word, i.e. it is 111 if cluster 𝐜ksubscript𝐜𝑘\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf c$}}{\\mbox{\\boldmath$\\textstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf c$}}_{k} is the closest cluster to descriptor 𝐱isubscript𝐱𝑖\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf x$}}{\\mbox{\\boldmath$\\textstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf x$}}_{i} and 00 otherwise. Intuitively, each D-dimensional column k𝑘k of V𝑉V records the sum of residuals (𝐱i−𝐜k)subscript𝐱𝑖subscript𝐜𝑘(\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf x$}}{\\mbox{\\boldmath$\\textstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf x$}}_{i}-\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf c$}}{\\mbox{\\boldmath$\\textstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf c$}}_{k}) of descriptors which are assigned to cluster 𝐜ksubscript𝐜𝑘\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf c$}}{\\mbox{\\boldmath$\\textstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf c$}}_{k}. The matrix V𝑉V is then L2-normalized column-wise (intra-normalization ), converted into a vector, and finally L2-normalized in its entirety . 
", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_19", "text": " In order to profit from years of wisdom produced in image retrieval, we propose to mimic VLAD in a CNN framework and design a trainable generalized VLAD layer, NetVLAD. The result is a powerful image representation trainable end-to-end on the target task (in our case place recognition). To construct a layer amenable to training via backpropagation, it is required that the layer’s operation is differentiable with respect to all its parameters and the input. Hence, the key challenge is to make the VLAD pooling differentiable, which we describe next. ", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_20", "text": " The source of discontinuities in VLAD is the hard assignment ak​(𝐱i)subscript𝑎𝑘subscript𝐱𝑖a_{k}(\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf x$}}{\\mbox{\\boldmath$\\textstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf x$}}_{i}) of descriptors 𝐱isubscript𝐱𝑖\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf x$}}{\\mbox{\\boldmath$\\textstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf x$}}_{i} to clusters centres 𝐜ksubscript𝐜𝑘\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf c$}}{\\mbox{\\boldmath$\\textstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf c$}}_{k}. To make this operation differentiable, we replace it with soft assignment of descriptors to multiple clusters a¯k​(𝐱i)=e−α​∥𝐱i−𝐜k∥2∑k′e−α​∥𝐱i−𝐜k′∥2,subscript¯𝑎𝑘subscript𝐱𝑖superscript𝑒𝛼superscriptdelimited-∥∥subscript𝐱𝑖subscript𝐜𝑘2subscriptsuperscript𝑘′superscript𝑒𝛼superscriptdelimited-∥∥subscript𝐱𝑖subscript𝐜superscript𝑘′2\\bar{a}_{k}(\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf x$}}{\\mbox{\\boldmath$\\textstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf x$}}_{i})=\\frac{e^{-\\alpha\\lVert\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf x$}}{\\mbox{\\boldmath$\\textstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf x$}}_{i}-\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf c$}}{\\mbox{\\boldmath$\\textstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf c$}}_{k}\\rVert^{2}}}{\\sum_{k^{\\prime}}{e^{-\\alpha\\lVert\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf x$}}{\\mbox{\\boldmath$\\textstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf x$}}_{i}-\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf c$}}{\\mbox{\\boldmath$\\textstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf c$}}_{k^{\\prime}}\\rVert^{2}}}}, (2) which assigns the weight of descriptor 𝐱isubscript𝐱𝑖\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf x$}}{\\mbox{\\boldmath$\\textstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf x$}}_{i} to cluster 𝐜ksubscript𝐜𝑘\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf c$}}{\\mbox{\\boldmath$\\textstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf c$}}_{k} proportional to their proximity, but relative to proximities to other cluster centres. 
ā_k(x_i) ranges between 0 and 1, with the highest weight assigned to the closest cluster centre. α is a parameter (positive constant) that controls the decay of the response with the magnitude of the distance. Note that for α → +∞ this setup replicates the original VLAD exactly, as ā_k(x_i) for the closest cluster would be 1 and 0 otherwise. ", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_21", "text": " By expanding the squares in (2), it is easy to see that the term e^{-\\alpha\\lVert\\mathbf{x}_{i}\\rVert^{2}} cancels between the numerator and the denominator, resulting in a soft-assignment of the following form \\bar{a}_{k}(\\mathbf{x}_{i})=\\frac{e^{\\mathbf{w}_{k}^{T}\\mathbf{x}_{i}+b_{k}}}{\\sum_{k^{\\prime}}e^{\\mathbf{w}_{k^{\\prime}}^{T}\\mathbf{x}_{i}+b_{k^{\\prime}}}}, (3) where vector \\mathbf{w}_{k}=2\\alpha\\mathbf{c}_{k} and scalar b_{k}=-\\alpha\\lVert\\mathbf{c}_{k}\\rVert^{2}.
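The algebraic step above is easy to verify numerically: a small sketch, with arbitrary toy values for α, the descriptor and the centres, checking that the distance-based soft-assignment (2) and its linear reformulation (3) coincide.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K, alpha = 5, 3, 10.0
x = rng.normal(size=D)            # one local descriptor
c = rng.normal(size=(K, D))       # cluster centres

# Soft-assignment as in eq. (2): softmax over negative scaled squared distances.
d2 = ((x - c) ** 2).sum(axis=1)
a_dist = np.exp(-alpha * d2) / np.exp(-alpha * d2).sum()

# The same quantity as in eq. (3): softmax over a linear function of x,
# with w_k = 2 * alpha * c_k and b_k = -alpha * ||c_k||^2.
w = 2 * alpha * c
b = -alpha * (c ** 2).sum(axis=1)
logits = w @ x + b
a_lin = np.exp(logits) / np.exp(logits).sum()

assert np.allclose(a_dist, a_lin)  # the ||x||^2 term cancels, so both forms agree
```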
The final form of the NetVLAD layer is obtained by plugging the soft-assignment (3) into the VLAD descriptor (1), resulting in V(j,k)=\\sum_{i=1}^{N}\\frac{e^{\\mathbf{w}_{k}^{T}\\mathbf{x}_{i}+b_{k}}}{\\sum_{k^{\\prime}}e^{\\mathbf{w}_{k^{\\prime}}^{T}\\mathbf{x}_{i}+b_{k^{\\prime}}}}\\left(x_{i}(j)-c_{k}(j)\\right), (4) where {w_k}, {b_k} and {c_k} are sets of trainable parameters for each cluster k. Similarly to the original VLAD descriptor, the NetVLAD layer aggregates the first order statistics of residuals (x_i - c_k) in different parts of the descriptor space weighted by the soft-assignment ā_k(x_i) of descriptor x_i to cluster k.
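A minimal NumPy sketch of the aggregation in eq. (4), i.e. the NetVLAD forward pass without the surrounding CNN or any learning; the function name, array shapes and the α = 1 style initialization of {w_k}, {b_k} from {c_k} in the usage example are illustrative assumptions.

```python
import numpy as np

def netvlad_forward(X, W, b, C):
    """NetVLAD aggregation of eq. (4).

    X: (N, D) local descriptors (e.g. conv5 activations at N = H*W locations).
    W: (K, D) filters and b: (K,) biases for the soft-assignment; C: (K, D) cluster centres.
    All three parameter sets would be learnt by backpropagation in the real layer.
    Returns the K*D-dimensional, normalized descriptor; V is stored as (K, D), i.e. V[k, j] = V(j, k).
    """
    logits = X @ W.T + b                             # (N, K)
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability for the soft-max
    A = np.exp(logits)
    A /= A.sum(axis=1, keepdims=True)                # soft-assignment a_bar_k(x_i), (N, K)
    # V(j, k) = sum_i a_bar_k(x_i) * (x_i(j) - c_k(j))
    V = (A[:, :, None] * (X[:, None, :] - C[None, :, :])).sum(axis=0)   # (K, D)
    V /= np.linalg.norm(V, axis=1, keepdims=True) + 1e-12               # intra-normalization
    v = V.reshape(-1)
    return v / (np.linalg.norm(v) + 1e-12)           # final L2 normalization

# Example: 36 descriptors of dimension 512 pooled into a K*D = 64*512 vector.
rng = np.random.default_rng(0)
N, D, K = 36, 512, 64
X = rng.normal(size=(N, D))
C = rng.normal(size=(K, D))
v = netvlad_forward(X, 2.0 * C, -(C ** 2).sum(axis=1), C)   # w_k = 2*alpha*c_k, b_k = -alpha*||c_k||^2 with alpha = 1
print(v.shape)   # (32768,)
```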
Note however, that the NetVLAD layer has three independent sets of parameters $\{\mathbf{w}_k\}$, $\{b_k\}$ and $\{\mathbf{c}_k\}$, compared to just $\{\mathbf{c}_k\}$ of the original VLAD. This enables greater flexibility than the original VLAD, as explained in figure 3. Decoupling $\{\mathbf{w}_k,b_k\}$ from $\{\mathbf{c}_k\}$ has been proposed in as a means to adapt the VLAD to a new dataset. All parameters of NetVLAD are learnt for the specific task in an end-to-end manner. ", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_22", "text": " As illustrated in figure 2 the NetVLAD layer can be visualized as a meta-layer that is further decomposed into basic CNN layers connected up in a directed acyclic graph. First, note that the first term in eq. (4) is a soft-max function $\sigma_k(\mathbf{z})=\frac{\exp(z_k)}{\sum_{k'}\exp(z_{k'})}$. 
Therefore, the soft-assignment of the input array of descriptors $\mathbf{x}_i$ into $K$ clusters can be seen as a two-step process: (i) a convolution with a set of $K$ filters $\{\mathbf{w}_k\}$ that have spatial support 1×1 and biases $\{b_k\}$, producing the output $s_k(\mathbf{x}_i)=\mathbf{w}_k^T\mathbf{x}_i+b_k$; (ii) the convolution output is then passed through the soft-max function $\sigma_k$ to obtain the final soft-assignment $\bar{a}_k(\mathbf{x}_i)$ that weights the different terms in the aggregation layer that implements eq. (4). The output after normalization is a $(K\times D)\times 1$ descriptor. ", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" },
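To make the forward pass just described (soft-assignment of eq. (3) followed by the weighted residual aggregation of eq. (4)) concrete, below is a minimal NumPy sketch. The array shapes, the softmax stability trick, the intra- plus global L2-normalisation, and the toy sizes are illustrative assumptions, not the paper's implementation (which realises the layer as a 1×1 convolution, a soft-max, and an aggregation layer trained end-to-end inside a CNN).

```python
import numpy as np

def netvlad_aggregate(X, W, b, C):
    """Aggregate local descriptors into a NetVLAD-style vector (eqs. 3-4).

    X: (N, D) local descriptors x_i (e.g. conv5 activations at N spatial positions).
    W: (K, D) soft-assignment weights w_k, b: (K,) biases b_k,
    C: (K, D) cluster centres c_k -- all trainable in the real layer.
    Returns a (K*D,) L2-normalised vector.
    """
    scores = X @ W.T + b                          # s_k(x_i) = w_k^T x_i + b_k, shape (N, K)
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    A = np.exp(scores)
    A /= A.sum(axis=1, keepdims=True)             # soft-assignment a_k(x_i), eq. (3)

    # V(j, k) = sum_i a_k(x_i) * (x_i(j) - c_k(j)), eq. (4)
    residuals = X[:, None, :] - C[None, :, :]     # (N, K, D)
    V = (A[:, :, None] * residuals).sum(axis=0)   # (K, D)

    V /= np.linalg.norm(V, axis=1, keepdims=True) + 1e-12  # per-cluster (intra) normalisation
    v = V.reshape(-1)
    return v / (np.linalg.norm(v) + 1e-12)        # global L2 normalisation

# toy usage: 100 local 512-D descriptors, K = 64 clusters
rng = np.random.default_rng(0)
N, D, K = 100, 512, 64
v = netvlad_aggregate(rng.normal(size=(N, D)), rng.normal(size=(K, D)),
                      rng.normal(size=K), rng.normal(size=(K, D)))
print(v.shape)  # (32768,)
```

With $K=64$ and 512-D conv5 descriptors this gives the 32k-D representation mentioned later for the VGG-16 base network.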
{ "id": "1511.07247_all_23", "text": " Other works have proposed to pool CNN activations using VLAD or Fisher Vectors (FV) (22, 13), but do not learn the VLAD/FV parameters nor the input descriptors. The most related method to ours is the one of Sydorov et al. , which proposes to learn FV parameters jointly with an SVM for the end classification objective. However, in their work it is not possible to learn the input descriptors as they are hand-engineered (SIFT), while our VLAD layer is easily pluggable into any CNN architecture as it is amenable to backpropagation. “Fisher Networks”  stack Fisher Vector layers on top of each other, but the system is not trained end-to-end, only hand-crafted features are used, and the layers are trained greedily in a bottom-up fashion. Finally, our architecture is also related to bilinear networks , recently developed for a different task of fine-grained category-level recognition. ", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_24", "text": " We also experiment with Max-pooling of the D-dimensional features across the $H\times W$ spatial locations, thus producing a D-dimensional output vector, which is then L2-normalized. Both of these operations can be implemented using standard layers in public CNN packages. This setup mirrors the method of (6, 62), but a crucial difference is that we will learn the representation (section 4) while (60, 6, 62) only use pretrained networks. Results will show (section 5.2) that simply using CNNs off-the-shelf results in poor performance, and that training for the end-task is crucial. Additionally, VLAD will prove itself to be superior to the Max-pooling baseline. ", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_25", "text": " In the previous section we have designed a new CNN architecture as an image representation for place recognition. Here we describe how to learn its parameters in an end-to-end manner for the place recognition task. The two main challenges are: (i) how to gather enough annotated training data and (ii) what is the appropriate loss for the place recognition task. To address these issues, we will first show that it is possible to obtain large amounts of weakly labelled imagery depicting the same places over time from the Google Street View Time Machine. Second, we will design a new weakly supervised triplet ranking loss that can deal with the incomplete and noisy position annotations of the Street View Time Machine imagery. The details are below. ", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_26", "text": " We propose to exploit a new source of data – Google Street View Time Machine – which provides multiple street-level panoramic images taken at different times at close-by spatial locations on the map. As will be seen in section 5.2, this novel data source is precious for learning an image representation for place recognition. As shown in figure 4, the same locations are depicted at different times and seasons, providing the learning algorithm with crucial information it can use to discover which features are useful or distracting, and which changes the image representation should be invariant to, in order to achieve good place recognition performance. ", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_27", "text": " The downside of the Time Machine imagery is that it provides only incomplete and noisy supervision. Each Time Machine panorama comes with a GPS tag giving only its approximate location on the map, which can be used to identify close-by panoramas but does not provide correspondences between parts of the depicted scenes. In detail, as the test queries are perspective images from camera phones, each panorama is represented by a set of perspective images sampled evenly in different orientations and two elevation angles (35, 10, 24, 81). Each perspective image is labelled with the GPS position of the source panorama. As a result, two geographically close perspective images do not necessarily depict the same objects since they could be facing different directions or occlusions could take place (e.g. the two images are around a corner from each other), etc. Therefore, for a given training query $q$, the GPS information can only be used as a source of (i) potential positives $\{p^q_i\}$, i.e. images that are geographically close to the query, and (ii) definite negatives $\{n^q_j\}$, i.e. images that are geographically far from the query. (Note that even faraway images can depict the same object. For example, the Eiffel Tower can be visible from two faraway locations in Paris. But, for the purpose of localization we consider in this paper such image pairs as negative examples because they are not taken from the same place.) ", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" },
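The weak GPS-only supervision described above can be summarised in a short sketch: images within a small radius of a query are kept as potential positives and images beyond a larger radius as definite negatives. The radii and the planar-coordinate simplification below are illustrative choices, not values taken from the paper.

```python
import numpy as np

def make_training_tuples(query_xy, db_xy, pos_radius=10.0, neg_radius=25.0):
    """Weak GPS-only supervision: for each query, collect potential positives
    (geographically close images) and definite negatives (far images).

    query_xy: (Q, 2) planar positions of training queries (metres).
    db_xy:    (M, 2) planar positions of database images (metres).
    The 10 m / 25 m radii are illustrative defaults, not values from the paper.
    """
    tuples = []
    for q, q_xy in enumerate(query_xy):
        dist = np.linalg.norm(db_xy - q_xy, axis=1)
        potential_positives = np.where(dist <= pos_radius)[0]  # may contain non-matching views
        definite_negatives = np.where(dist > neg_radius)[0]    # far away -> treated as negatives
        if len(potential_positives) > 0:
            tuples.append((q, potential_positives, definite_negatives))
    return tuples
```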
", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_28", "text": " We wish to learn a representation fθsubscript𝑓𝜃f_{\\theta} that will optimize place recognition performance. That is, for a given test query image q𝑞q, the goal is to rank a database image Ii⁣∗subscript𝐼𝑖I_{i*} from a close-by location higher than all other far away images Iisubscript𝐼𝑖I_{i} in the database. In other words, we wish the Euclidean distance dθ​(q,I)subscript𝑑𝜃𝑞𝐼d_{\\theta}(q,I) between the query q𝑞q and a close-by image Ii⁣∗subscript𝐼𝑖I_{i*} to be smaller than the distance to far away images in the database Iisubscript𝐼𝑖I_{i}, i.e. dθ​(q,Ii⁣∗)<dθ​(q,Ii)subscript𝑑𝜃𝑞subscript𝐼𝑖subscript𝑑𝜃𝑞subscript𝐼𝑖d_{\\theta}(q,I_{i*})<d_{\\theta}(q,I_{i}), for all images Iisubscript𝐼𝑖I_{i} further than a certain distance from the query on the map. Next we show how this requirement can be translated into a ranking loss between training triplets {q,Ii⁣∗,Ii}𝑞subscript𝐼𝑖subscript𝐼𝑖\\{q,I_{i*},I_{i}\\}. ", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_29", "text": " From the Google Street View Time Machine data, we obtain a training dataset of tuples (q,{piq},{njq})𝑞subscriptsuperscript𝑝𝑞𝑖subscriptsuperscript𝑛𝑞𝑗(q,\\{p^{q}_{i}\\},\\{n^{q}_{j}\\}), where for each training query image q𝑞q we have a set of potential positives {piq}subscriptsuperscript𝑝𝑞𝑖\\{p^{q}_{i}\\} and the set of definite negatives {njq}subscriptsuperscript𝑛𝑞𝑗\\{n^{q}_{j}\\}. The set of potential positives contains at least one positive image that should match the query, but we do not know which one. To address this ambiguity, we propose to identify the best matching potential positive image pi⁣∗qsubscriptsuperscript𝑝𝑞𝑖p^{q}_{i*} pi⁣∗q=arg⁡minpiq⁡dθ​(q,piq)subscriptsuperscript𝑝𝑞𝑖subscriptsubscriptsuperscript𝑝𝑞𝑖subscript𝑑𝜃𝑞subscriptsuperscript𝑝𝑞𝑖p^{q}_{i*}=\\operatorname*{\\arg\\!\\min}_{p^{q}_{i}}d_{\\theta}(q,p^{q}_{i}) (5) for each training tuple (q,{piq},{njq})𝑞subscriptsuperscript𝑝𝑞𝑖subscriptsuperscript𝑛𝑞𝑗(q,\\{p^{q}_{i}\\},\\{n^{q}_{j}\\}). The goal then becomes to learn an image representation fθsubscript𝑓𝜃f_{\\theta} so that distance dθ​(q,pi⁣∗q)subscript𝑑𝜃𝑞subscriptsuperscript𝑝𝑞𝑖d_{\\theta}(q,p^{q}_{i*}) between the training query q𝑞q and the best matching potential positive pi⁣∗qsubscriptsuperscript𝑝𝑞𝑖p^{q}_{i*} is smaller than the distance dθ​(q,njq)subscript𝑑𝜃𝑞subscriptsuperscript𝑛𝑞𝑗d_{\\theta}(q,n^{q}_{j}) between the query q𝑞q and all negative images qjsubscript𝑞𝑗q_{j}: dθ​(q,pi⁣∗q)<dθ​(q,njq),∀j.subscript𝑑𝜃𝑞subscriptsuperscript𝑝𝑞𝑖subscript𝑑𝜃𝑞subscriptsuperscript𝑛𝑞𝑗for-all𝑗d_{\\theta}(q,p^{q}_{i*})<d_{\\theta}(q,n^{q}_{j}),~{}~{}~{}\\forall j. (6) Based on this intuition we define a weakly supervised ranking loss Lθsubscript𝐿𝜃L_{\\theta} for a training tuple (q,{piq},{njq})𝑞subscriptsuperscript𝑝𝑞𝑖subscriptsuperscript𝑛𝑞𝑗(q,\\{p^{q}_{i}\\},\\{n^{q}_{j}\\}) as Lθ=∑jl​(mini⁡dθ2​(q,piq)+m−dθ2​(q,njq)),subscript𝐿𝜃subscript𝑗𝑙subscript𝑖subscriptsuperscript𝑑2𝜃𝑞subscriptsuperscript𝑝𝑞𝑖𝑚subscriptsuperscript𝑑2𝜃𝑞subscriptsuperscript𝑛𝑞𝑗L_{\\theta}=\\sum_{j}l\\left(\\min_{i}d^{2}_{\\theta}(q,p^{q}_{i})+m-d^{2}_{\\theta}(q,n^{q}_{j})\\right), (7) where l𝑙l is the hinge loss l​(x)=max⁡(x,0)𝑙𝑥𝑥0l(x)=\\max(x,0), and m𝑚m is a constant parameter giving the margin. Note that equation (7) is a sum of individual losses for negative images njqsubscriptsuperscript𝑛𝑞𝑗n^{q}_{j}. 
{ "id": "1511.07247_all_30", "text": " We train the parameters $\theta$ of the representation $f_\theta$ using Stochastic Gradient Descent (SGD) on a large set of training tuples from Time Machine data. Details of the training procedure are given in appendix A. ", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_31", "text": " In this section we describe the used datasets and evaluation methodology (section 5.1), and give quantitative (section 5.2) and qualitative (section 5.3) results to validate our approach. Finally, we also test the method on the standard image retrieval benchmarks (section 5.4). ", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_32", "text": " We report results on two publicly available datasets. ", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_33", "text": " contains 250k database images downloaded from Google Street View and 24k test queries generated from Street View but taken at different times, years apart. We divide this dataset into three roughly equal parts for training, validation and testing, each containing around 83k database images and 8k queries, where the division was done geographically to ensure the sets contain independent images. To facilitate faster training, for some experiments, a smaller subset (Pitts30k) is used, containing 10k database images in each of the train/val(idation)/test sets, which are also geographically disjoint. ", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_34", "text": " contains 76k database images and 315 query images taken using mobile phone cameras. This is an extremely challenging dataset where the queries were taken at daytime, sunset and night, while the database images were only taken at daytime as they originate from Google Street View as described above. To form the train/val sets we collected additional Google Street View panoramas of Tokyo using the Time Machine feature, and name this set TokyoTM; Tokyo 24/7 (=test) and TokyoTM train/val are all geographically disjoint. Further details on the splits are given in appendix B. ", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_35", "text": " We follow the standard place recognition evaluation procedure (4, 24, 65, 81, 80). The query image is deemed correctly localized if at least one of the top $N$ retrieved database images is within $d=25$ meters from the ground truth position of the query. The percentage of correctly recognized queries (Recall) is then plotted for different values of $N$. For Tokyo 24/7 we follow  and perform spatial non-maximal suppression on ranked database images before evaluation. 
", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_36", "text": " We use two base architectures which are extended with Max pooling (fm​a​xsubscript𝑓𝑚𝑎𝑥f_{max}) and our NetVLAD (fV​L​A​Dsubscript𝑓𝑉𝐿𝐴𝐷f_{VLAD}) layers: AlexNet and VGG-16 ; both are cropped at the last convolutional layer (conv5), before ReLU. For NetVLAD we use K=64𝐾64K=64 resulting in 16k and 32k-D image representations for the two base architectures, respectively. The initialization procedure, parameters used for training, procedure for sampling training tuples and other implementation details are given in appendix A. All training and evaluation code, as well as our trained networks, are online at . ", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_37", "text": " To assess benefits of our approach we compare our representations trained for place recognition against “off-the-shelf” networks pretrained on other tasks. Namely, given a base network cropped at conv5, the baselines either use Max pooling (fm​a​xsubscript𝑓𝑚𝑎𝑥f_{max}), or aggregate the descriptors into VLAD (fV​L​A​Dsubscript𝑓𝑉𝐿𝐴𝐷f_{VLAD}), but perform no further task-specific training. The three base networks are: AlexNet , VGG-16 , both are pretrained for ImageNet classification , and Places205 , reusing the same architecture as AlexNet but pretrained for scene classification . Pretrained networks have been recently used as off-the-shelf dense descriptor extractors for instance retrieval (6, 7, 22, 60, 62) and the untrained fm​a​xsubscript𝑓𝑚𝑎𝑥f_{max} network corresponds to the method of (6, 62). ", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_38", "text": " Furthermore we compare our CNN representations trained for place recognition against the state-of-the-art local feature based compact descriptor, which consists of VLAD pooling with intra-normalization on top of densely extracted RootSIFTs (43, 2). The descriptor is optionally reduced to 4096 dimensions using PCA (learnt on the training set) combined with whitening and L2-normalization ; this setup together with view synthesis yields the state-of-the-art results on the challenging Tokyo 24/7 dataset (c.f. ). ", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_39", "text": " In the following we discuss figure 5, which compares place recognition performance of our method to the baselines outlined above on the Pittsburgh and Tokyo 24/7 benchmarks. ", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_40", "text": " We follow the standard state-of-the-art procedure to perform dimensionality reduction of VLAD, as described earlier, i.e. the reduction into 4096-D is performed using PCA with whitening followed by L2-normalization (25, 80). Figure 5 shows that the lower dimensional fV​L​A​Dsubscript𝑓𝑉𝐿𝐴𝐷f_{VLAD} (-∗∗\\ast-) performs similarly to the full size vector (-o-). ", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_41", "text": " Representations trained on the end-task of place recognition consistently outperform by a large margin off-the-shelf CNNs on both benchmarks. 
For example, on the Pitts250k-test our trained AlexNet with (trained) NetVLAD aggregation layer achieves recall@1 of 81.0% compared to only 55.0% obtained by off-the-shelf AlexNet with standard VLAD aggregation, i.e. a relative improvement in recall of 47%. Similar improvements can be observed on all three datasets. This confirms two important premises of this work: (i) our approach can learn rich yet compact image representations for place recognition, and (ii) the popular idea of using pretrained networks “off-the-shelf” (60, 6, 22, 7, 62) is sub-optimal as the networks trained for object or scene classification are not necessarily suitable for the end-task of place recognition. We believe this could be attributed to the fact that “off-the-shelf” conv5 activations are not trained to be comparable using Euclidean distance. ", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_42", "text": " Figure 5 also shows that our trained $f_{VLAD}$ representation with whitening based on VGG-16 (magenta -$\ast$-) convincingly outperforms RootSIFT+VLAD+whitening, as well as the method of Torii et al. , and therefore sets the state-of-the-art for compact descriptors on all benchmarks. Note that these are strong baselines that outperform most off-the-shelf CNN descriptors on the place recognition task. ", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_43", "text": " By comparing $f_{VLAD}$ (-o-) methods with their corresponding $f_{max}$ (-x-) counterparts it is clear that VLAD pooling is much better than Max pooling for both off-the-shelf and trained representations. NetVLAD performance decreases gracefully with dimensionality: 128-D NetVLAD performs similarly to 512-D Max (42.9% vs 38.4% recall@1 on Tokyo 24/7), resulting in a four times more compact representation for the same performance. Furthermore, NetVLAD+whitening outperforms Max pooling convincingly when reduced to the same dimensionality (60%). See appendix C for more details. ", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_44", "text": " In Table 1 we study the benefits of training different layers for the end-task of place recognition. The largest improvements are thanks to training the NetVLAD layer, but training other layers results in further improvements, with some overfitting occurring below conv2. ", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_45", "text": " Here we examine whether the network can be trained without the Time Machine (TM) data. In detail, we have modified the training query set for Pitts30k-train to be sampled from the same set as the training database images, i.e. the tuples of query and database images used in training were captured at the same time. Recall@1 with $f_{max}$ on Pitts30k-val for the off-the-shelf AlexNet is 33.5%, and training without TM improves this to 38.7%. However, training with TM obtains 68.5%, showing that Time Machine data is crucial for good place recognition accuracy, as without it the network does not generalize well. Trained without TM, the network learns, for example, that recognizing cars is important for place recognition, as the same parked cars appear in all images of a place captured at the same time. 
", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_46", "text": " To visualize what is being learnt by our place recognition architectures, we adapt the method of Zeiler and Fergus for examining occlusion sensitivity of classification networks. It can be seen in figure 6 that off-the-shelf AlexNet (pretrained on ImageNet) focuses very much on categories it has been trained to recognize (e.g. cars) and certain shapes, such as circular blobs useful for distinguishing 12 different ball types in the ImageNet categories. The Place205 network is fairly unresponsive to all occlusions as it does not aim to recognize specific places but scene-level categories, so even if an important part of the image is occluded, such as a characteristic part of a building façade, it still provides a similar output feature which corresponds to an uninformative “a building façade” image descriptor. In contrast to these two, our network trained for specific place recognition automatically learns to ignore confusing features, such as cars and people, which are not discriminative for specific locations, and instead focuses on describing building façades and skylines. More qualitative examples are provided in appendix C. ", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_47", "text": " We use our best performing network (VGG-16, fV​L​A​Dsubscript𝑓𝑉𝐿𝐴𝐷f_{VLAD} with whitening down to 256-D) trained completely on Pittsburgh, to extract image representations for standard object and image retrieval benchmarks. Our representation sets the state-of-the-art for compact image representations (256-D) by a large margin on all three datasets, obtaining an mAP of 63.5%, 73.5% and 79.9% on Oxford 5k , Paris 6k , Holidays , respectively; for example, this is a +20% relative improvement on Oxford 5k. Appendix C contains more detailed results. ", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_48", "text": " We have designed a new convolutional neural network architecture that is trained for place recognition in an end-to-end manner from weakly supervised Street View Time Machine data. Our trained representation significantly outperforms off-the-shelf CNN models and significantly improves over the state-of-the-art on the challenging 24/7 Tokyo dataset, as well as on the Oxford and Paris image retrieval benchmarks. The two main components of our architecture – (i) the NetVLAD pooling layer and (ii) weakly supervised ranking loss – are generic CNN building blocks applicable beyond the place recognition task. The NetVLAD layer offers a powerful pooling mechanism with learnable parameters that can be easily plugged into any other CNN architecture. The weakly supervised ranking loss opens up the possibility of end-to-end learning for other ranking tasks where large amounts of weakly labelled data are available, for example, images described with natural language . ", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" }, { "id": "1511.07247_all_49", "text": " This work was partly supported by RVO13000 - Conceptual development of research organization, the ERC grant LEAP (no. 336845), ANR project Semapolis (ANR-13-CORD-0003), JSPS KAKENHI Grant Number 15H05313, the Inria CityLab IPL, and the Intelligence Advanced Research Projects Activity (IARPA) via Air Force Research Laboratory, contract FA8650-12-C-7212. The U.S. 
Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, AFRL, or the U.S. Government. ", "title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition" } ]
Why does the author experiment with the quantized linear supernet design even though Radosavovic et al. already provided a similar result?
The previous study showed that the quantized linear depth-width design tends to give higher accuracy among architectures of similar computational complexity, while the author shows that the design also improves the efficiency of the searched network in terms of accuracy and latency on the target NPU [34].
[ 34 ]
[ { "id": "2009.02009_all_0", "text": " As there are growing needs of deep learning applications based on convolutional neural network(CNN) in embedded systems, improving the accuracy of CNNs under a given set of constraints on latency and energy consumption has brought keen interest to researchers as a challenging problem in various areas. A popular hardware solution is to develop a hardware accelerator, called neural processing unit (NPU), that achieves higher performance per watt than CPUs or GPUs. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_1", "text": " For a given hardware platform, several software techniques have been proposed to accelerate CNNs by approximate computing since deep learning applications can tolerate a certain range of computation inaccuracy. Some examples in this software approach are filter pruning (Li et al., 2016), quantization (Park et al., 2017), low-rank approximation (Kim et al., 2015). Accelerating CNNs is helpful to improve the accuracy by running a more compute-intensive CNN with higher accuracy within a given time budget. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_2", "text": " On the other hand, various algorithmic solutions have been proposed to improve the CNN architecture by introducing new operations, optimizing the hyper-parameters, or searching for better network architecture. New operations such as depth-wise convolution(DWConv) (Chollet, 2017) and mobile inverted bottleneck (MBConv) (Sandler et al., 2018) have been developed to replace the regular full convolution. Recently, automated neural architecture search (NAS) emerges as the default technique to find a CNN architecture with higher accuracy than manually-designed architectures, particularly image classification. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_3", "text": " A NAS technique explores a predefined search space and estimates the performance for each candidate architecture to find an optimal one with the highest accuracy under a given latency constraint. Thus there are three factors that affect the performance of NAS, as shown in Figure 1: search space, search strategy, and performance estimation. The search space of a NAS technique is usually restricted by a supernet that defines the topology of the largest network to explore. Since the performance of a network depends on the hardware platform, the NAS technique needs to be customized to a given hardware platform. While numerous NAS techniques have been proposed with various search strategies recently, their assumed hardware platforms are mostly GPUs. In this paper, we present a customized NAS technique for an NPU, which produces a CNN architecture with a better accuracy-latency tradeoff than existing models. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_4", "text": " One of the most closely related work is the recently proposed NAS technique tailored for Google’s Edge-TPU (Gupta and Akin, 2020). While MBConv is widely used for GPU-aware NAS techniques, they prefer to use a single full convolution by fusing expansion layer and DWConv layer in some parts of the network, observing that the Edge-TPU runs the fused full convolution faster even though the required number of MAC (multiply-accumulate) operations is much larger. 
It confirms that the number of MAC operations is not a proper measure of latency, and platform-specific performance estimation is required. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_5", "text": " Since an NPU is much faster than a GPU, it enables us to explore the wider search space for NAS under a given latency constraint. Since there are many factors to define the search space, such as the number of layers, channels, kernel sizes, and so on, the search space grows exponentially as the allowed computation complexity grows. Hence, reducing the search space, as well as the search time, is very challenging for NPU-aware NAS techniques. While the aforementioned work for Google’s Edge TPU trains each architecture candidate from scratch to estimate the performance, it is not computationally efficient. In contrast, we adopt a fast differentiable hardware-aware One-Shot NAS, called Single-Path NAS (Stamoulis et al., 2019), in order to reduce the search time. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_6", "text": " Figure 2 shows an overview of the proposed NAS methodology that consists of three steps. In the first step, we change the supernet structure of the Single-Path NAS, which has a hierarchical structure based on MobileNetV2 (Sandler et al., 2018): A supernet structure consists of a series of stages that contain a series of blocks containing an MBConv micro-architecture inside. Since the network accuracy depends on the supernet structure, we make two extensions on the supernet structure to widen the search space. First, we allow stages to have a different number of blocks, called depth of the stage, considering the effect of stage depth on the accuracy and the latency. Second, we add parallel layers with different kernel sizes in each block, adopting the idea of mixed depthwise convolution (Tan and Le, 2019b) (MixConv). ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_7", "text": " With the extended supernet structure, we apply the Single-Path NAS, which is also extended to support the extended supernet structure. In this step, we assume a shorter latency constraint than the required to reduce the search space and the search time. The last step is to scale up the baseline CNN adopting the compound scaling technique proposed in  (Tan and Le, 2019a) until the latency constraint is met. The proposed NAS methodology is named as S3NAS since it consists of 3 steps: Supernet design, SinglePath NAS, and Scaling and post-processing. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_8", "text": " For accurate latency estimation, an analytical latency estimator is devised, based on a cycle-level NPU simulator that runs an entire CNN considering the memory access overhead accurately. Since the NPU assumed in this paper can execute depth-wise separable convolution (DWConv), squeeze-and-excitation (SE), and h-swish activation function efficiently, the proposed supernet prefers DWConv to regular convolution. Observing that the accuracy is improved by around 1% if SE and h-swish activation function are used, we add a post-processing phase after a CNN network is found by NAS to add SE layers and to replace ReLU to h-swish activation function. 
", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_9", "text": " Experiments show that the proposed NAS technique could improve the accuracy-latency tradeoff over existing SoTA CNN models. Our best model achieves 82.72% top-1 accuracy on ImageNet with 11.66ms latency without any special data augmentation. Note that the latency is estimated by cycle-accurate simulation. For a fair comparison with the related work, the latency of each compared network is also estimated with the same simulator. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_10", "text": " After an automated NAS technique based on reinforcement learning successfully found a better CNN architecture than manually-designed architectures (Zoph and Le, 2016), extensive research has been conducted to develop various NAS techniques based on reinforcement learning (Zoph et al., 2018; Tan et al., 2019). However, these NAS techniques are computationally intensive because they train each candidate architectures from scratch to estimate the goodness of it. Thus, one-shot neural architecture search approach (Pham et al., 2018) was introduced to reduce the search cost. In this approach, an over-parameterized super-model network is defined, and architecture search is performed by parameter optimization to reduce the complexity of the network. Gradient-based differentiable search has gained increasing popularity, and various NAS techniques have been proposed with different super-models and hyper-parameters (Pham et al., 2018; Guo et al., 2019; Chu et al., 2019; Liu et al., 2018; Cai et al., 2018). ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_11", "text": " Among diverse techniques to decrease the search cost, Single-Path NAS (Stamoulis et al., 2019) was recently proposed to find a good architecture faster than the existing differentiable NAS techniques. This technique is extended to broaden the search space by including the squeeze-and-excitation (SE) block in the search space (Stamoulis et al., 2020). Our work is grounded on the original Single-Path NAS technique. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_12", "text": " Finding a hardware-friendly neural architecture has been facilitated as NAS algorithm improved. MNASNet (Tan et al., 2019) added a latency term in the objective function to discover better architectures with a given latency constraint on their target hardware platform. EfficientNet (Tan and Le, 2019a), whose search method is similar to MNASNet, introduced a novel scaling method, called compound scaling, to find more accurate networks as the latency constraint or FLOPS increases. Instead of finding a network directly for a given long latency constraint, they scale up the depth and the width of a small network with shorter latency and the input image size in a balanced way. They could achieve a set of networks with state-of-the-art performance over a range of latency constraints. They removed SE blocks and swish activation function from their search space for hardware platforms that do not support them efficiently to name the resultant network as EfficientNet-lite. 
", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_13", "text": " While EfficientNet searches a set of networks over a range of latency constraints by scaling up, Once-For-All (Cai et al., 2019) network takes an opposite approach, scaling down. They first train a super-graph architecture by a novel method called progressive shrinking and search a sub-graph network that achieves good accuracy for a given latency constraint without re-training but cheap fine-tuning. They claim that a scaled-down network from the super-graph gives better accuracy than a network that is trained from scratch. They could find more accurate networks than EfficientNet for small latency constraints. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_14", "text": " To explore more efficient neural architectures on specific hardware, some NAS methods have proposed to define the design space of architecture exploration, tailored for the hardware platform. Gupta et al. (Gupta and Akin, 2020) devised a building block named fused inverted bottleneck convolution block and showed that this block is often more efficient than MBConv on their target NPU, Edge-TPU. They adopted compound scaling method to find high-performing architectures on Edge-TPU. Our work is closely related to this method. We devise a building block that consists of parallel DWConv layers with different kernel sizes, based on a preliminary experiment to find that it is better than the other alternative building blocks in terms of performance per latency (Tan and Le, 2019b). And we increase the search space by allowing stages to have a different number of blocks in the baseline supernet. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_15", "text": " A neural network typically consists of multiple stages, a sequence of blocks with the same number of output channels (width). There are studies on how to assign the number of blocks (depth) to each stage. Meng et al. (Meng et al., 2020) observed that the way of assigning depth to each stage affects the accuracy. Moreover, they argued that the good depth assignment of each stage could be inherited from the shallow ones as the total depth is increased, and proposed a layer-growing NAS method that could significantly reduce the search space. Furthermore, Radosavovic et al. (Radosavovic et al., 2020) discovered that among neural architectures with similar computational complexity, the ones whose stage width and depth have a quantized linear relationship tend to have higher accuracy. Based on similar observations, we apply this design principle to change the structure of the conventional One-Shot NAS supernet. In addition, we argue that placing more blocks in a stage with a larger width is beneficial. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_16", "text": " While the original DWConv block uses a single kernel size for depthwise convolution, mixing multiple kernel sizes for depthwise convolution was recently proposed, named as MixConv (Tan and Le, 2019b). Mixing multiple kernel sizes can be understood as having parallel branches inside a block. It is shown that MixConv is more efficient than ordinary DWConv (Tan and Le, 2019b). 
There exist some recent NAS methods (Mei et al., 2019; Chu et al., 2020) that also broaden their search space using DWConv with multiple kernel sizes to find better neural architectures. We adopt this approach in the supernet and formulate a differentiable latency model of this operation, enabling a latency-aware differentiable One-Shot NAS with MixConv. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_17", "text": " In this section, we will briefly review the Single-Path NAS technique and our target NPU. Before going further, we define some terminologies used in this paper, as shown in Figure 3. A neural architecture consists of stages at the top level. A stage consists of a sequence of blocks whose output feature maps have the same dimension. In the proposed supernet, a block is defined as MBConv that typically starts with 1×1 conv (expansion layer) and ends with 1×1 conv. Adopting the MixConv approach, the depthwise convolution layer consists of parallel superkernels whose kernel size will be determined during the NAS process. The width of a block denotes the number of channels in the final output feature map of the block, and the width of a stage is the width of the final block in the stage. We will call the total number of blocks, starting from the very first block in the network up to the last block in a specific stage S, the cumulative depth up to stage S. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_18", "text": " Differentiable NAS methods usually define architecture parameters to choose which convolution layer to use in the block, training each convolution layer independently. Single-Path NAS (Stamoulis et al., 2019) reduces the search cost by decreasing the number of trainable parameters, sharing the kernel weights between convolution layers. The key idea is designing an over-parameterized depthwise convolution kernel named superkernel, and letting each depthwise convolution kernel of candidate MBConvs directly inherit the weights of this superkernel. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_19", "text": " Let $\mathbf{w}_{k,e}$ denote the depthwise convolution kernel of the candidate MBConv with kernel size $k$ and expansion ratio $e$ (MBConv$k$,$e$). First, they introduce a large $\mathbf{w}_{5,6}$, which is the DWConv kernel of MBConv5,6. Then, the inner core of $\mathbf{w}_{5,6}$ can be considered as $\mathbf{w}_{3,6}$, a DWConv kernel of MBConv3,6. A superkernel containing these two kernel size options can be expressed as (Figure 4): (1) $\mathbf{w}_{*,6}=\mathbf{w}_{3,6}+\mathbb{1}(\mathrm{use\ kernel\ size\ 5})\cdot\mathbf{w}_{5\backslash 3,6}$, where $\mathbf{w}_{5\backslash 3,e}$ means the outer part, $\mathbf{w}_{5,e}-\mathbf{w}_{3,e}$. Next, they formulate conditions to determine the kernel size. They define a certain threshold value $t$ and compare the norm of the kernel weights with the threshold. If the norm of a subset weight is larger than the threshold, it remains in the supernet. 
To this end, Eq. (1) is changed as follows: (2) $\mathbf{w}_{*,6}(t_{k=5})=\mathbf{w}_{3,6}+\mathbb{1}(\lVert\mathbf{w}_{5\backslash 3,6}\rVert^{2}>t_{k=5})\cdot\mathbf{w}_{5\backslash 3,6}$ ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_20", "text": " The threshold value is also trainable to be automatically chosen during training. To enable back-propagation, they relax $\mathbb{1}(x>t)$ to $\sigma(x-t)$ when computing gradients. In addition, they optimize kernel weights and threshold values simultaneously. For a given tight search time, this method is shown to be more effective than the other methods (Stamoulis et al., 2020). ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_21", "text": " Moreover, we can vary the number of channels by varying the expansion ratio of each block: we can use only the first half of the channels of $\mathbf{w}_{5,6}$ and $\mathbf{w}_{3,6}$ as $\mathbf{w}_{5,3}$ and $\mathbf{w}_{3,3}$, respectively. By defining another set of trainable thresholds, the following formula is defined to determine the expansion ratio: (3) $\mathbf{w}_{*,*}(t_{e=3},t_{e=6},t_{k=5})=\mathbb{1}(\lVert\mathbf{w}_{*,3}(t_{k=5})\rVert^{2}>t_{e=3})\cdot\mathbf{w}_{*,3}(t_{k=5})+\mathbb{1}(\lVert\mathbf{w}_{*,3}(t_{k=5})\rVert^{2}>t_{e=3})\cdot\mathbb{1}(\lVert\mathbf{w}_{*,6\backslash 3}(t_{k=5})\rVert^{2}>t_{e=6})\cdot\mathbf{w}_{*,6\backslash 3}(t_{k=5})$, where $\mathbf{w}_{k,6\backslash 3}$ means the remaining half of the channels, $\mathbf{w}_{k,6}-\mathbf{w}_{k,3}$. Note that if $t_{e=3}$ is sufficiently large, all channels can be removed to make the block a plain skip connection. Thus, they replace the original depthwise convolution kernel of MBConv5,6 with $\mathbf{w}_{*,*}$, yielding a differentiable and searchable MBConv with respect to the kernel size and expansion ratio. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" },
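The following NumPy sketch mirrors the forward weight selection of eqs. (2)-(3) for one superkernel. The (5, 5, C) weight layout, the hard 0/1 indicators, and the scalar thresholds are simplifying assumptions; during search the indicator is relaxed to a sigmoid so that the thresholds receive gradients, as noted above.

```python
import numpy as np

def effective_superkernel(w56, t_k5, t_e3, t_e6):
    """Forward selection of eqs. (2)-(3) for one Single-Path NAS superkernel.

    w56: (5, 5, C) depthwise superkernel of the largest option MBConv5,6,
         where the first C//2 channels play the role of the expansion-ratio-3 subset.
    t_k5, t_e3, t_e6: thresholds (trainable in the real method, plain floats here).
    Returns the effective kernel with the non-selected parts zeroed out.
    (Hard 0/1 indicators are shown; training relaxes 1(x > t) to sigmoid(x - t).)
    """
    C = w56.shape[-1]
    inner = np.zeros_like(w56); inner[1:4, 1:4, :] = w56[1:4, 1:4, :]   # w_{3,6}
    outer = w56 - inner                                                  # w_{5\3,6}

    use_k5 = float(np.sum(outer ** 2) > t_k5)             # eq. (2): keep the 5x5 ring?
    w_star6 = inner + use_k5 * outer                       # w_{*,6}(t_{k=5})

    w_star3 = w_star6.copy(); w_star3[:, :, C // 2:] = 0   # w_{*,3}: first half of channels
    w_rest = w_star6 - w_star3                             # w_{*,6\3}: remaining channels

    use_e3 = float(np.sum(w_star3 ** 2) > t_e3)            # eq. (3): keep the block at all?
    use_e6 = float(np.sum(w_rest ** 2) > t_e6)             #          grow to expansion 6?
    return use_e3 * (w_star3 + use_e6 * w_rest)
```

Setting t_e3 very large zeroes the whole kernel, which reproduces the plain skip-connection case mentioned above.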
{ "id": "2009.02009_all_22", "text": " They also design a differentiable latency-aware loss function to consider hardware latency in the search algorithm. To this end, they define a function to estimate latency as follows: ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_23", "text": " (4) $L^{l}_{e}=\mathbb{1}(\lVert\mathbf{w}_{*,3}\rVert^{2}>t_{e=3})\cdot\left(P^{l}_{5,3}+\mathbb{1}(\lVert\mathbf{w}_{*,6\backslash 3}\rVert^{2}>t_{e=6})\cdot(P^{l}_{5,6}-P^{l}_{5,3})\right)$ ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_24", "text": " (5) $L^{l}=P^{l}_{3,6}/P^{l}_{5,6}\cdot L^{l}_{e}+\mathbb{1}(\lVert\mathbf{w}_{5\backslash 3,6}\rVert^{2}>t_{k=5})\cdot L^{l}_{e}\cdot(1-P^{l}_{3,6}/P^{l}_{5,6})$, where $P^{l}_{k,e}$ is a profiled latency value of MBConv$k$,$e$ for the $l$th block in the supernet. Note that they used $P^{l}_{3,6}$, $P^{l}_{5,3}$, and $P^{l}_{5,6}$ only to formulate $L^{l}$, and the latency for MBConv3,3 is approximated using these values. The latency-aware loss function is designed as: ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_25", "text": " (6) $CE+\lambda\cdot\log(\sum_{l}L^{l})$. Finally, they search for a neural architecture in two phases. First, they train the supernet by randomly choosing one of the candidate subgraphs in each training step. In this phase, they use the CrossEntropy loss only. Next, they enable the latency-aware loss function and train the supernet with it, to decide the threshold values. By doing this, they could get a high-quality neural architecture with only eight epochs of the ImageNet training set. (In our implementation, we changed the probability of selecting each candidate MBConv to be equal.) ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" },
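To tie the pieces together, here is a small sketch of the per-block latency estimate of eqs. (4)-(5) and the latency-aware objective of eq. (6). Hard indicators and the placeholder lambda are illustrative simplifications of the relaxed, differentiable version used during the search.

```python
import numpy as np

def block_latency(P36, P53, P56, w_star3_sq, w_rest_sq, w_outer_sq,
                  t_e3, t_e6, t_k5):
    """Latency estimate of eqs. (4)-(5) for one block.

    P36, P53, P56: profiled latencies of MBConv3,6 / MBConv5,3 / MBConv5,6.
    w_*_sq: squared norms of the corresponding weight subsets; t_*: thresholds.
    Hard indicators are used here; the search relaxes them to sigmoids.
    """
    ind = lambda x, t: float(x > t)
    L_e = ind(w_star3_sq, t_e3) * (P53 + ind(w_rest_sq, t_e6) * (P56 - P53))    # eq. (4)
    return (P36 / P56) * L_e + ind(w_outer_sq, t_k5) * L_e * (1.0 - P36 / P56)  # eq. (5)

def total_loss(cross_entropy, per_block_latencies, lam=0.1):
    """Latency-aware objective of eq. (6): CE + lambda * log(sum_l L^l).
    lambda = 0.1 is an arbitrary placeholder, not a value from the paper."""
    return cross_entropy + lam * np.log(np.sum(per_block_latencies))
```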
{ "id": "2009.02009_all_26", "text": " Even though the proposed methodology can be applied to any type of NPU, the current implementation is made for an adder-tree type NPU, called MIDAP (Kang et al., 2019). It has a fully-pipelined micro-architecture that consists of separate hardware modules and memory modules for convolution, activation function, and various reduction operations. Since it enables us to make a fully static schedule of operations without resource contention in the data path, we can estimate the end-to-end latency of a CNN quite accurately in an analytical way. Unexpected delays may still occur from off-chip DRAM accesses that are not fully hidden by double buffering. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_27", "text": " Another good feature of MIDAP is that it efficiently supports the following operations that would lower the MAC (multiply-accumulate) utilization in other NPUs that have many MAC units: pooling, DWConv, and squeeze-and-excitation (SE). For the DWConv operation, it does not use an adder tree but an alternative hardware logic that consists of a set of individual accumulators connected to the multiply units. For pooling and SE operations, reduction logic is included in the pipeline. Note that MIDAP has not been implemented as a real hardware chip yet but as a virtual prototype with a cycle-accurate simulator. Thanks to the cycle-accurate simulator, which considers DRAM access contention and a parametrized DRAM access delay, we could build an accurate analytical model for end-to-end latency estimation based on the profiling results from the simulator. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_28", "text": " Inverted bottleneck with depth-wise convolution (MBConv) (Sandler et al., 2018) is a popular building block in recent mobile-friendly networks. However, it is not efficiently supported in existing NPUs that do not have specialized hardware units for DWConv (Gholami et al., 2018; Gupta and Akin, 2020). Thus Gupta et al. (Gupta and Akin, 2020) replaced an MBConv block with a fused building block that fuses the expansion layer and the DWConv in MBConv into a single full convolution. Even though the fused block increases the number of multiplications significantly, it improves the MAC utilization so much that the fused block is observed to be faster than MBConv on their target NPU, EdgeTPU. By adding this building block to their search space, they could successfully obtain neural architectures for EdgeTPU that differ from those for GPUs. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_29", "text": " Since DWConv is efficiently supported in MIDAP, however, the improvement of MAC utilization by fusing does not outweigh the increased computation complexity, which is observed in preliminary experiments. The experiment setup is similar to the main experiment setup that will be explained in section 5.2. The experimental result is shown in Table 1. The latency constraint for the fused block experiment is set to 7.0ms, while the others are set to 2.15ms. In the combined experiment, we use the fused block in the 1st and the 2nd stages, and MBConv for the remaining stages, since the latency gap between the two building blocks is too high. As shown in the table, the MBConv block shows the best tradeoff between accuracy and latency. Hence we prefer MBConv to the fused building block as the basic building block in the supernet for MIDAP. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_30", "text": " In this section, we explain the proposed S3NAS methodology that consists of three steps as displayed in Figure 2. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_31", "text": " The number of blocks is one of the key parameters in neural networks. It is observed that the total number of blocks affects the accuracy of a neural architecture (He et al., 2016; Tan and Le, 2019a). In conventional One-Shot NAS methods, each stage in the supernet has the same number of blocks (Cai et al., 2018; Stamoulis et al., 2019; Wu et al., 2019). On the other hand, some recent studies (Meng et al., 2020; Radosavovic et al., 2020) report that the way of assigning the number of blocks in each stage has a noticeable impact on the accuracy, even with the same number of blocks in total. Hence we allow stages in the supernet to have a different number of blocks. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" },
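As a purely illustrative helper (not code from the paper), the sketch below shows one way to allocate per-stage block counts so that the cumulative depth grows roughly linearly with the stage width, the assignment rule that the following entries motivate and adopt; the widths and the block budget are made-up numbers.

```python
import numpy as np

def assign_stage_depths(stage_widths, total_blocks):
    """Choose per-stage block counts so that the cumulative depth up to each
    stage is roughly proportional to that stage's width (wider stage -> deeper).
    stage_widths and total_blocks are illustrative inputs, not paper values."""
    widths = np.asarray(stage_widths, dtype=float)
    # target cumulative depth after stage S: total_blocks * width(S) / width(last)
    cum = np.round(total_blocks * widths / widths[-1]).astype(int)
    cum = np.maximum.accumulate(np.maximum(cum, 1))    # non-decreasing, at least 1
    depths = np.diff(np.concatenate(([0], cum)))
    return np.maximum(depths, 1)                       # every stage keeps at least one block

# e.g. five stages with growing widths and a budget of roughly 20 blocks
print(assign_stage_depths([24, 40, 80, 112, 192], 20))  # -> [2 2 4 4 8]
```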
", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_32", "text": " We investigate the impact of assigning the number of blocks in the supernet with another preliminary experiment. We construct a network based on MobileNetV2, which has four blocks in every stage, and observe the change of accuracy as we reduce two blocks in a different stage in each experiment. Figure 5 shows that MBConvs with larger width has more impact on accuracy. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_33", "text": " As the number of multiplications in a DWConv is W×H×C×K2𝑊𝐻𝐶superscript𝐾2W\\times H\\times C\\times K^{2}, the later stage of DWConv tends to have shorter latency since the reduction of H×W𝐻𝑊H\\times W is larger than the increase of C𝐶C. Thus the impact on the latency by increasing the number of blocks in a later stage is not significant as displayed in Figure 5. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_34", "text": " Thus, we place more blocks to stages with larger width in the supernet, making the cumulative depth up to a specific stage is proportional to the width of the stage, which is similar to PyramidNet (Han et al., 2017). A recent study (Radosavovic et al., 2020) also claims that neural architectures with a linear relationship between the cumulative depth and the width tend to have higher accuracy with a similar amount of computation complexity. Our experiment shows that our modification to supernet enhances the efficiency of the search result in terms of accuracy as well as latency (Table 4). ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_35", "text": " Another feature of the proposed supernet is to use mixed convolution (MixConv) that mixes different kernel sizes in the depth-wise convolution layer (Tan and Le, 2019b). Some recent NAS methods (Mei et al., 2019; Chu et al., 2020) also broaden their search space using DWConv with various kernel sizes and could find better neural architectures. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_36", "text": " Figure 6 depicts our building block structure. This block starts and ends with 1×1 convolution, with N𝑁N searchable superkernels in the middle. Each searchable superkernel is designed similarly to Eq. (3), while we may use different threshold values in each superkernel. The kernel sizes and expansion ratios are selected among predetermined values. If the j𝑗j-th searchable superkernel chooses an expansion ratio ejsubscript𝑒𝑗e_{j}, the j𝑗j-th kernel has ejsubscript𝑒𝑗e_{j} times more channels than the first 1×1 convolution. Compared with the original MixConv suggested in (Tan and Le, 2019b), the proposed building block supports more diverse combinations of kernel sizes and expansion ratios. It enhances the efficiency of search results on our target NPU (Table 5). ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_37", "text": " We finish this subsection by highlighting the merit of Single-Path NAS on building a MixConv-based differentiable NAS. Conventional multi-path NAS methods would have difficulties when adding inverted bottleneck convolution with MixConv to their search space. 
Since the number of possible choices of such blocks grows proportionally to the partition number, multi-path NAS methods would introduce a significant increase in memory requirements and the search time. On the contrary, MixConv can be efficiently supported in Single-Path NAS, as explained below. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_38", "text": " We use a different latency estimation model, and a loss formula from the original SinglePath NAS technique explained in section 3.1. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_39", "text": " Suppose we concatenate N𝑁N searchable superkernels to build a MixConv-based building block, and let k→=(k1,⋯,kN),e→=(e1,⋯,eN)formulae-sequence→𝑘subscript𝑘1⋯subscript𝑘𝑁→𝑒subscript𝑒1⋯subscript𝑒𝑁\\vec{k}=(k_{1},\\cdots,k_{N}),\\vec{e}=(e_{1},\\cdots,e_{N}) where kj,ejsubscript𝑘𝑗subscript𝑒𝑗k_{j},e_{j} denote the kernel size and the expansion ratio of the j𝑗jth searchable superkernel. The estimated latency of a DWConv operation depends on the kernel size and the expansion ratio. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_40", "text": " For latency formulation, we first define two condition variables, Fj,kjsubscript𝐹𝑗subscript𝑘𝑗F_{j,k_{j}} and Gj,ejsubscript𝐺𝑗subscript𝑒𝑗G_{j,e_{j}}, that denote whether the j𝑗jth searchable superkernel chooses the kernel size kjsubscript𝑘𝑗k_{j} and the expansion ratio ejsubscript𝑒𝑗e_{j}, respectively; For example, Fj,kjsubscript𝐹𝑗subscript𝑘𝑗F_{j,k_{j}} is 1 if and only if the j𝑗jth searchable superkernel chooses kjsubscript𝑘𝑗k_{j}, and 0 otherwise. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_41", "text": " Let κ1<⋯<κKsubscript𝜅1⋯subscript𝜅𝐾\\kappa_{1}<\\cdots<\\kappa_{K} be the candidate kernel sizes, and 0=ϵ1<⋯<ϵE0subscriptitalic-ϵ1⋯subscriptitalic-ϵ𝐸0=\\epsilon_{1}<\\cdots<\\epsilon_{E} denote the candidate expansion ratios of the j𝑗jth searchable superkernel, respectively. 
Suppose $k_{j}=\\kappa_{c}$, then $F_{j,k_{j}}$ can be formulated as follows: ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_42", "text": " (7) $F_{j,k_{j}}=\\left(\\prod_{2\\leq i\\leq c}\\mathbbm{1}(\\lVert\\mathbf{w}_{j,\\kappa_{i}\\backslash\\kappa_{i-1},\\epsilon_{E}}\\rVert^{2}>t_{j,\\kappa_{i}})\\right)\\cdot f_{j,k_{j}},\\quad f_{j,k_{j}}=\\begin{cases}\\mathbbm{1}(\\lVert\\mathbf{w}_{j,\\kappa_{c+1}\\backslash\\kappa_{c},\\epsilon_{E}}\\rVert^{2}<t_{j,\\kappa_{c+1}}),&\\text{if }c<K\\\\ 1,&\\text{if }c=K\\end{cases}$ ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_43", "text": " Figure 7 depicts an example of this formula when the $j$th searchable superkernel that has four candidate kernel sizes $\\kappa_{1}<\\cdots<\\kappa_{4}$ chooses $\\kappa_{2}$ as the kernel size: $k_{j}=\\kappa_{2}$. It means that weight $\\mathbf{w}_{j,\\kappa_{1},\\epsilon_{E}}$ and $\\mathbf{w}_{j,\\kappa_{2}\\backslash\\kappa_{1},\\epsilon_{E}}$ are used, but the remaining weights starting from $\\mathbf{w}_{j,\\kappa_{3}\\backslash\\kappa_{2},\\epsilon_{E}}$ are not used. Since $\\mathbf{w}_{j,\\kappa_{1},\\epsilon_{E}}$ is always used, it is not included in the formula. To use $\\mathbf{w}_{j,\\kappa_{2}\\backslash\\kappa_{1},\\epsilon_{E}}$, the norm of it has to be larger than $t_{j,\\kappa_{2}}$ while the norm of $\\mathbf{w}_{j,\\kappa_{3}\\backslash\\kappa_{2},\\epsilon_{E}}$ should not be larger than $t_{j,\\kappa_{3}}$ to avoid the use of larger kernel sizes.
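As a concrete reading of Eq. (7), the following is a minimal Python/NumPy sketch of the kernel-size condition variable. It assumes the per-ring weight slices (kernel size kappa_{i} minus kappa_{i-1}, taken at the largest expansion ratio) and the learned thresholds are already available as arrays; the function name and argument layout are illustrative and not taken from the authors' code.

```python
import numpy as np

def kernel_size_indicator(ring_weights, thresholds, c):
    """F_{j,kappa_c} from Eq. (7): 1 iff the j-th superkernel selects kernel size kappa_c.

    ring_weights[i] : weight ring between kappa_{i+2} and kappa_{i+1} at the largest
                      expansion ratio epsilon_E (i = 0 .. K-2); kappa_1 is always used.
    thresholds[i]   : learned threshold t_{j, kappa_{i+2}} paired with that ring.
    c               : 1-based index of the chosen kernel size kappa_c.
    """
    K = len(ring_weights) + 1                 # number of candidate kernel sizes
    norms_sq = [float(np.sum(w ** 2)) for w in ring_weights]
    # every ring up to kappa_c must be kept: its squared norm exceeds its threshold
    keep = all(norms_sq[i] > thresholds[i] for i in range(c - 1))
    # the next larger ring, if it exists, must be dropped (norm below its threshold)
    drop_next = (c == K) or (norms_sq[c - 1] < thresholds[c - 1])
    return int(keep and drop_next)
```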
", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_44", "text": " We can formulate Gj,ejsubscript𝐺𝑗subscript𝑒𝑗G_{j,e_{j}} similarly: Gj,ejsubscript𝐺𝑗subscript𝑒𝑗\\displaystyle G_{j,e_{j}} =(∏2≤i≤d𝟙​(∥𝐰j,∗,ϵi\\ϵi−1∥2>tj,ϵi))⋅gj,ej​, whereabsent⋅subscriptproduct2𝑖𝑑1superscriptdelimited-∥∥subscript𝐰𝑗\\subscriptitalic-ϵ𝑖subscriptitalic-ϵ𝑖12subscript𝑡𝑗subscriptitalic-ϵ𝑖subscript𝑔𝑗subscript𝑒𝑗, where\\displaystyle=\\left(\\prod_{2\\leq i\\leq d}\\mathbbm{1}(\\lVert\\mathbf{w}_{j,*,\\epsilon_{i}\\backslash\\epsilon_{i-1}}\\rVert^{2}>t_{j,\\epsilon_{i}})\\right)\\cdot g_{j,e_{j}}\\text{, where} gj,ejsubscript𝑔𝑗subscript𝑒𝑗\\displaystyle g_{j,e_{j}} ={𝟙​(∥𝐰j,∗,ϵd+1\\ϵd∥2<tj,ϵd+1),if ​d<E1,if ​d=Eabsentcases1superscriptdelimited-∥∥subscript𝐰𝑗\\subscriptitalic-ϵ𝑑1subscriptitalic-ϵ𝑑2subscript𝑡𝑗subscriptitalic-ϵ𝑑1if 𝑑𝐸1if 𝑑𝐸\\displaystyle=\\begin{cases}\\mathbbm{1}(\\lVert\\mathbf{w}_{j,*,\\epsilon_{d+1}\\backslash\\epsilon_{d}}\\rVert^{2}<t_{j,\\epsilon_{d+1}}),&\\text{if }d<E\\\\ 1,&\\text{if }d=E\\end{cases} when ej=ϵdsubscript𝑒𝑗subscriptitalic-ϵ𝑑e_{j}=\\epsilon_{d}. Then the condition for a MixConv-based building block to choose k→,e→→𝑘→𝑒\\vec{k},\\vec{e} can be expressed as ∏jNFj,kj​Gj,ejsuperscriptsubscriptproduct𝑗𝑁subscript𝐹𝑗subscript𝑘𝑗subscript𝐺𝑗subscript𝑒𝑗\\prod_{j}^{N}F_{j,k_{j}}G_{j,e_{j}}. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_45", "text": " Now, the estimated latency of a single block is formulated as follows: (8) L=∑k→,e→(P​(k→,e→)​∏jNFj,kj​Gj,ej)𝐿subscript→𝑘→𝑒𝑃→𝑘→𝑒superscriptsubscriptproduct𝑗𝑁subscript𝐹𝑗subscript𝑘𝑗subscript𝐺𝑗subscript𝑒𝑗L=\\sum_{\\vec{k},\\vec{e}}(P(\\vec{k},\\vec{e})\\prod_{j}^{N}F_{j,k_{j}}G_{j,e_{j}}) where P​(k→,e→)𝑃→𝑘→𝑒P(\\vec{k},\\vec{e}) denotes the profiled latency value of a MixConv-based building block corresponding to k→,e→→𝑘→𝑒\\vec{k},\\vec{e}. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_46", "text": " Unlike the original Single-Path NAS that approximates the latency in Eq. (5) in some cases, we use the profiled latency value in all cases. Note that an expansion ratio can be zero, and if only one superkernel has a nonzero expansion ratio, the MixConv block is reduced to a plain MBConv block. Finally, we can estimate the latency by summing up these estimated latencies for all superkernels in the block, ∑L𝐿\\sum L. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_47", "text": " Since each superkernel is treated independently, some superkernels may have the same kernel size and expansion ratio. Then, even if two superkernel configurations express an equivalent block, as illustrated in Figure 8, they may have different estimated latency values, which is an artifact of the proposed profiling-based latency estimation method. To avoid this artifact, we enforce that there is only one kernel for each kernel size in the MixConv block. That is, we merge two kernels of the same size into one; For instance, the left MixConv is translated to the right MixConv in Figure 8 before latency estimation. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_48", "text": " Figure 9 shows the estimated latency and simulated latency of randomly generated 100 models on our search space. It validates the accuracy of the proposed latency model, whose mean absolute percentage error(MAPE) is about 0.16%. 
", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_49", "text": " The existing hardware-aware differentiable NAS methods mostly define some hyperparameters to balance between accuracy and latency, including SinglePath NAS, whose loss function is defined as Eq. (6). Since there is no information on the target latency in the loss function, in case there is a strict latency constraint, they have to pay additional search costs for the hyperparameters to let the final architecture have no larger latency than the constraint. In addition, this process needs to be repeated whenever the target latency is changed. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_50", "text": " We propose to modify the loss function to activate the latency-aware loss term only when the estimated latency is larger than the latency constraint as follows: (9) C​E+λ1⋅l​o​g​(1+λ2⋅R​e​L​U​((∑L)−T))𝐶𝐸⋅subscript𝜆1𝑙𝑜𝑔1⋅subscript𝜆2𝑅𝑒𝐿𝑈𝐿𝑇CE+\\lambda_{1}\\cdot log(1+\\lambda_{2}\\cdot ReLU((\\sum L)-T)) Although this is not a panacea, this modification significantly eases the search process, which will be discussed in section 5.2 with various experiments. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_51", "text": " In the second step, we intentionally use shorter latency to reduce the search space for the baseline network. After finding the baseline network with a shorter latency, we apply compound scaling to find an architecture with the final latency constraint. In this step, we conduct post-processing to add SE block and h-swish activation function if beneficial. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_52", "text": " It is well known that increasing depth (He et al., 2016), width (Zagoruyko and Komodakis, 2016), or input image size improves accuracy while it increases latency. However, if only one of these three factors is increased, the accuracy improvement is quickly saturated. Observing this fact, Tan et al. (Tan and Le, 2019a) proposed a compound scaling method that increases all three factors together. A scaling coefficient is defined for each factor. By judiciously assigning the scaling coefficients in a balanced fashion, they could improve the accuracy much larger than scaling a single factor only. Adopting this approach, we apply the compound scaling to the baseline architecture obtained in the previous step. Based on the ratio between the true latency constraint and the assumed latency constraint in the second step, we find the scaling coefficients considering the estimated latency increment. To keep the linear relationship between the width and cumulative depth, we use the same scaling coefficient for width and depth, differently from (Tan and Le, 2019a). Note that how to realize scaling depends on the baseline architecture. While the baseline architecture assumed in (Tan and Le, 2019a) has a series of identical blocks in each stage, a stage consists of heterogeneous blocks in our baseline architecture. Thus depth scaling is not realized by merely adding new blocks in each stage. We need to choose what types of blocks to add in each stage. We increase the number of blocks with more parameters first. To compute how many blocks to add in a stage, we multiply the depth of the stage by depth coefficient and round the multiplication result. Width scaling is applied to all blocks equally. 
Finally, we consider latency when we scale. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_53", "text": " In addition to compound scaling, we add two components in the post-processing step: h-swish activation function and squeeze-and-excitation (SE) block. A recent study (Park and Yoo, 2020) reports that SE and the h-swish activation function are no hurdles for 8-bit quantization. They could quantize a network with SE and h-swish without noticeable accuracy loss. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_54", "text": " Extensive studies have been conducted to find a better activation function than ReLU, and the swish activation function (Ramachandran et al., 2017) was found. Several neural networks (Tan and Le, 2019b; Mei et al., 2019; Tan and Le, 2019a) use swish activation function instead of ReLU to improve accuracy. Howard et al. (Howard et al., 2019) proposed a quantization-friendly version of the swish activation function called h-swish that has a similar impact on accuracy. So, we replace ReLU with h-swish (Howard et al., 2019) activation function. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_55", "text": " Squeeze-and-Excitation(SE) is a lightweight operation which is shown to be beneficial to accuracy (Hu et al., 2018). Figure 10 depicts the structure of a SE block. For a given input feature map, it first computes the importance of the feature channels a representative value for global spatial information of each feature channel by global average pooling. After such squeeze operation generates channel-wise statistics, excitation operation captures channel-wise dependencies by two cascaded fully-connected layers to produce activation values, which represents the importance of each feature channel. Finally, channel-wise multiplication is performed between the activation values induced by the excitation operation and the input feature map for each channel. SE block is used in many recent architectures (Tan and Le, 2019a; Howard et al., 2019; Radosavovic et al., 2020). By adding SE blocks to the baseline network, we also observe the accuracy improvement. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_56", "text": " Figure 11 depicts an example distribution of activation values produced by two different SE blocks for three different images. The authors of the original paper (Hu et al., 2018) conjectured that if such distribution from a SE block does not differ widely between image classes, the SE block is not important. Thus, after training, they obtained averaged activation values of a SE block over multiple images in the same class. They compared the distributions of the averaged values over different image classes. They observed that removing the SE blocks that have similar distributions over different image classes incurs only a marginal loss in accuracy. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_57", "text": " Inspired by this observation, we propose to remove SE blocks selectively to minimize the additional computation cost caused by SE blocks. We obtain activation values from a SE block for each input image and measure how the distribution of activation values varies over different input images. 
For each channel c, we calculate the standard deviation σcsubscript𝜎𝑐\\sigma_{c} of activation values over different images. If σcsubscript𝜎𝑐\\sigma_{c} is small in most channels, the activation values from the SE block does not differ much over images. Conceptually, it implies that the SE block does not help to discriminate further which channel is more influential. From the engineering perspective, it means that channel-wise multiplication of a SE block is similar to constant multiplication, which can be handled by the following convolutional layer. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_58", "text": " We define a metric as the average of standard deviation values σcsubscript𝜎𝑐\\sigma_{c} over all channels that represent the diverseness of the activation distribution over different images. If the metric value is small, we remove the SE block. For example, in Figure 11, our metric of the SE block on the left side has a value of 0.021, while the right side has a value of 0.118, more than 5x larger than the left side; The left side is a better candidate for SE block removal. When we remove SE blocks according to this metric, the accuracy is found to be similar, while the latency got shorter (Table 6). ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_59", "text": " We evaluate the proposed NAS technique for image classification with the ImageNet dataset. The current implementation is made for MIDAP (Kang et al., 2019) that can perform DWConv and SE operations efficiently so that MBConv is preferred to full 3-D convolution as the basic building block, as explained above. Latencies on the target NPU are obtained with the cycle-accurate simulator222https://github.com/cap-lab/MidapSim. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_60", "text": " A superkernel has two parameters to search: expansion ratio and kernel size. To limit the search space, we choose the expansion ratio among 0, 2, 4, and 6, and the kernel size between 3 and 5 when MBConv or full convolution is used as the building block. In the case of the MixConv-based building block, we use N𝑁N=3 superkenels whose expansion ratio is 0 or 2; The sum of the expansion ratio of three superkernels has the same range as the expansion ratio of a single MBConv block. To allow three superkernels to have different kernel sizes, we let one of three superkernels be able to have 7 as the kernel size. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_61", "text": " In the first phase of the neural architecture search, we train the supernet by randomly choosing one of the candidate subgraphs in each training step. We train the supernet for 8 epochs, with λ1=0subscript𝜆10\\lambda_{1}=0 in the loss function of Eq. 9, focusing only on the accuracy. We decrease the learning rate by 0.97 every 2.4 epochs, starting from 0.064. The other setting for network training is displayed in Table 4. Gradient clipping with a value of 10 is used in this phase. In the second phase, we set λ1=15,λ2=100formulae-sequencesubscript𝜆115subscript𝜆2100\\lambda_{1}=15,\\lambda_{2}=100 to consider latency in the loss function, and optimize the weights and threshold values of supernet for 2 epochs. After this second phase finishes, the final architecture topology is decided. 
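As a compact illustration of the SE block of Figure 10 and the removal metric of Sec. 4.3.3 described above, the following PyTorch sketch shows the squeeze/excitation path and the diverseness score. The reduction ratio of the two fully-connected layers and all names are assumptions for illustration, not values from the paper.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: global average pooling, two FC layers, channel-wise rescaling."""

    def __init__(self, channels, reduction=4):          # reduction ratio is an assumption
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                                # x: (batch, C, H, W)
        squeezed = x.mean(dim=(2, 3))                    # squeeze: per-channel global average
        act = self.fc(squeezed)                          # excitation: channel importance in (0, 1)
        return x * act.unsqueeze(-1).unsqueeze(-1), act  # rescaled features and activations

def se_diverseness(activations):
    """Removal metric: average over channels of the per-channel standard deviation
    of SE activation values collected over different images.
    activations: tensor of shape (num_images, C)."""
    return activations.std(dim=0).mean().item()
```

SE blocks with a small diverseness score would be the first candidates for removal, as in the 0.021 vs. 0.118 example of Figure 11.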
", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_62", "text": " Next, we train the final architecture again to determine the filter weights for 350 epochs with the ImageNet again, using the same setting described in Table 4. Unlike the search phase, the learning rate is increased from 0 to 0.064 in the first 5 epochs, then decayed by 0.97 every 2.4 epochs. Since we observed that the batch size is critical to accuracy when using the EfficientNet training code, we use a large batch size. Both network architecture search and final training are conducted on Google Cloud TPUs. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_63", "text": " In the proposed NAS technique, two major extensions are made to the supernet, compared with the original SinglePath NAS technique. Table 3 shows the proposed supernet architecture with configuration parameters, block types and depths. It starts with a 7x7 convolution layer, followed by 5 stages that have a different number of blocks for feature extraction and 2 fully-connected networks for classification. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_64", "text": " The first extension is to allow stages to have a different number of blocks. To verify the goodness of this extension, we design two kinds of MBConv-based supernet with 20 blocks in total: a supernet with constant depth(baseline), a supernet with linear depth where the cumulative depth up to a specific stage is proportional to the width of the stage. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_65", "text": " As shown in Table 4, a supernet with linear depth outperforms a supernet with constant depth in terms of accuracy with similar latency. It confirms that this simple change of block assignment in supernet gives notable accuracy boost with the same latency constraint, without any additional optimization techniques. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_66", "text": " The second extension is to use multiple parallel superkernels in an MBConv block. To verify the benefit of it, we compare two different supernets with the same number of blocks in each stage. The accuracy and latency performance of the baseline supernet is the same as the previous experimental result shown in Table 4. Table 5 shows that the extended supernet with MixConv-based building blocks gives a better accuracy-latency tradeoff. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_67", "text": " We apply the proposed NAS method with the supernet architecture described above. The depth of 5 stages is set to 3,4,7,4,113474113,4,7,4,11, respectively. The latency constraint is set to 2.5 ms that corresponds to the latency of EfficientNet-B1 on our target NPU, MIDAP. Table 6 compares our search results with the state-of-the-art models: EdgeTPU (Gupta and Akin, 2020), EfficientNet (Tan and Le, 2019a), Once-For-All (Cai et al., 2019). The latency of the other models is obtained by running the network on the MIDAP cycle-accurate simulator. We compare the accuracy without quantization, assuming that quantization effects will be similar to all models. 
", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_68", "text": " As shown in Table 6, the baseline model, ours-M, found by the proposed NAS technique has higher accuracy than the other models on our target NPU; ours-M achieves more than 1.7% higher top-1 accuracy than EfficientNet-lite2 with similar latency. Moreover, it is 0.5% higher than EfficientNet-B1, even without using SE and h-swish activation function. Note that the number of parameters and the number of FLOPS in ours-M is larger than EfficientNet-B1. It implies that the complexity of the network is not a direct indicator of the end-to-end latency of the network. The end-to-end latency depends on the NPU architecture, and the proposed NAS technique could find a larger network with shorter latency by adding the latency factor to the loss function directly. The main benefit comes from different block assignment to stages. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_69", "text": " We improve the baseline network by adding the h-swish activation function and squeeze-and-excitation(SE) block to get the ours-M+ model. Figure 12 shows the topology of ours-M+ architecture in which the height of each block is proportional to the expansion ratio of the block. Compared with the baseline network, ours-M, we achieve around 1% accuracy boost with ours-M+, paying the cost of 16% latency increase. This model outperforms the other models, 0.5% higher accuracy and 14% faster than EfficientNet-B2. Since EfficientNet-B2 is too large to run with the default configuration on MIDAP, we increase the memory size for filter weights. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_70", "text": " Next, we applied compound scaling (Tan and Le, 2019a) to ours-M+ to obtain ours-L+ and ours-XL+. When we determine scaling coefficients, we keep the linear relationship between the cumulative depth and width of each stage, and scale the input image size more aggressively than (Tan and Le, 2019a). We make the number of filters to be multiples of 16 to maximize the MAC unit utilization on MIDAP. When we train our scaled model, we set the dropout ratio to 0.4, similar to EfficientNet-B4 training. The accuracy of ours-L+ is higher than EfficientNet-B3 and EfficientNet-lite4, while the accuracy of ours-XL+ is similar to EfficientNet-B4. Note that the difference between the searched network and the EfficientNet decreases as the network size increases. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_71", "text": " Finally, we selectively removed SE blocks from ours-XL+, resulting in ours-XL-rmSE+. We collected the activation values using randomly sampled 10K images from the training dataset and calculated the metric explained in Sec. 4.3.3. After removing SE blocks from ours-XL+ based on the metric, only about 60% of the blocks in the network have SE blocks. As a result, we could make the latency shorter, while the accuracy was slightly improved than ours-XL+. This model achieves 82.72% top-1 accuracy with only 11.66ms latency. It is much better than EfficientNet-EdgeTPU-L (Gupta and Akin, 2020) that achieves 80.62% FP32 top-1 accuracy with more than 20ms on EdgeTPU. Our architecture on MIDAP is about 2 times faster with 2.1% higher accuracy. 
", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_72", "text": " Finally, we compare the search time. Since the TPU is faster than GPU, we report the wall clock time and the estimated GPU time (in parenthesis) that is 10 times longer than the wall clock time in the last column of Table 6 Our method takes 3 hours, which is much faster than the other methods. Note that we compare the total time to get one architecture from scratch without trained weights. Once-For-All (Cai et al., 2019) would require only short fine-tuning time after a neural architecture is searched. In contrast, we need to train the network after a network architecture is found. It took 40 hours on TPUv3 to train ours-M+. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_73", "text": " While most NAS techniques are not compared with a random search method, the authors (Li and Talwalkar, 2019) reported that a random search method is highly competitive. So we conducted an experiment to compare the proposed NAS technique with two random search methods, exploring the same search space defined by the supernet structure of ours-M. First, we designed a simple random search method that has the similar time complexity of the proposed technique. In this method, we randomly generate 15 models having a similar latency with ours-M, from the same search space. Then we train each of them for 1 epoch with cosine learning rate decay. After evaluating each of them, we choose the architecture with the topmost top-1 accuracy and fully train it. In the second method, called random selection, we randomly generate 20 models having a similar latency with ours-M and train them fully and take the architecture with the highest top-1 accuracy. Since the random selection method performs search and training simultaneously, it is slower than the proposed technique by the number of randomly generated models. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_74", "text": " Comparison results are reported in Table 6. It is confirmed that both random selection and random search are quite competitive, but noticeably inferior to ours-M in terms of accuracy. In detail, the worst case of random selection showed 0.8% lower accuracy than ours-M. The best performance obtained from 20 randomly generated models is 79.19%, still lower than the accuracy of ours-M. Note that random search and random selection show similar performance that is no smaller than the other networks. It means that the search space defined by the supernet architecture has a more significant effect on the accuracy than the search method. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_75", "text": " There are two methods to find an architecture with a loose latency constraint. One is to use compound scaling that scales a small network with shorter latency, and the other is to search a network directly. To compare these two methods, we first scaled ours-M using the same scaling coefficients that we used to scale ours-M+ to ours-L+ and trained it. When conducting a direct search, we scaled the depth and width of the supernet and the input image size first and applied the proposed NAS technique for the scaled supernet. We used batch size 512 instead of 1024 during the architecture search due to the memory limitation of TPU. 
The comparison result is shown in Table 7 in terms of top-1 accuracy(%) and the latency on the target NPU(ms). Two results were similar while direct search needed 10 hours on TPUv3; It means that compound scaling is an effective method to find a large network fast. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_76", "text": " To examine how SE and h-swish impact accuracy individually, we compare four combinations as displayed in Table 8. The baseline is ours-M that does not use SE and h-swish activation function. Replacing ReLU with h-swish gives a marginal improvement on accuracy while adding SE blocks improves the accuracy noticeably. Adding both SE and h-swish activation function improves the accuracy by around 1%. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_77", "text": " In this work, we propose a fast NPU-aware NAS methodology extending the Single-Path NAS technique (Stamoulis et al., 2019). We modify the supernet architecture by varying the number of blocks in stages and adding mixed depthwise convolution (Tan and Le, 2019b) to the search space. By modifying the loss function to directly include the target latency estimated by a cycle-accurate simulator of the target NPU, we could find a better baseline architecture with a shorter latency than the latency constraint. Using a tight latency constraint, we can reduce the search space to find the baseline network fast. Afterward, we apply compound scaling to find a larger network than the baseline network, and add SE blocks and h-swish activation functions in the post-processing step. Through the proposed NAS methodology, we could obtain a network with 82.72% accuracy with 11.66ms latency on our target NPU, without special data augmentation in training. It dominates the existing network models on the target NPU. It confirms the importance of supernet architecture design for a given NPU and effectiveness of the three-step approach in the proposed NAS methodology: supernet design, SinglePath NAS with a tighter latency constraint, and compound scaling and post-processing. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" } ]
Why did the dot scoring function perform well for global attention while the general scoring function performed well for local attention?
The authors only note that it is interesting to observe that "dot" works well for the global attention and "general" is better for the local attention; they do not give a reason, so this question cannot be answered from the information in this paper [38].
[ 38 ]
[ { "id": "1508.04025_all_0", "text": " Neural Machine Translation (NMT) achieved state-of-the-art performances in large-scale translation tasks such as from English to French (Luong et al., 2015) and English to German (Jean et al., 2015). NMT is appealing since it requires minimal domain knowledge and is conceptually simple. The model by ?) reads through all the source words until the end-of-sentence symbol <<eos>> is reached. It then starts emitting one target word at a time, as illustrated in Figure 1. NMT is often a large neural network that is trained in an end-to-end fashion and has the ability to generalize well to very long word sequences. This means the model does not have to explicitly store gigantic phrase tables and language models as in the case of standard MT; hence, NMT has a small memory footprint. Lastly, implementing NMT decoders is easy unlike the highly intricate decoders in standard MT (Koehn et al., 2003). ", "title": "Effective Approaches to Attention-based Neural Machine Translation" }, { "id": "1508.04025_all_1", "text": " In parallel, the concept of “attention” has gained popularity recently in training neural networks, allowing models to learn alignments between different modalities, e.g., between image objects and agent actions in the dynamic control problem (Mnih et al., 2014), between speech frames and text in the speech recognition task (jan14), or between visual features of a picture and its text description in the image caption generation task (Xu et al., 2015). In the context of NMT, ?) has successfully applied such attentional mechanism to jointly translate and align words. To the best of our knowledge, there has not been any other work exploring the use of attention-based architectures for NMT. ", "title": "Effective Approaches to Attention-based Neural Machine Translation" }, { "id": "1508.04025_all_2", "text": " In this work, we design, with simplicity and effectiveness in mind, two novel types of attention-based models: a global approach in which all source words are attended and a local one whereby only a subset of source words are considered at a time. The former approach resembles the model of (Bahdanau et al., 2015) but is simpler architecturally. The latter can be viewed as an interesting blend between the hard and soft attention models proposed in (Xu et al., 2015): it is computationally less expensive than the global model or the soft attention; at the same time, unlike the hard attention, the local attention is differentiable almost everywhere, making it easier to implement and train.222There is a recent work by ?), which is very similar to our local attention and applied to the image generation task. However, as we detail later, our model is much simpler and can achieve good performance for NMT. Besides, we also examine various alignment functions for our attention-based models. ", "title": "Effective Approaches to Attention-based Neural Machine Translation" }, { "id": "1508.04025_all_3", "text": " Experimentally, we demonstrate that both of our approaches are effective in the WMT translation tasks between English and German in both directions. Our attentional models yield a boost of up to 5.0 BLEU over non-attentional systems which already incorporate known techniques such as dropout. For English to German translation, we achieve new state-of-the-art (SOTA) results for both WMT’14 and WMT’15, outperforming previous SOTA systems, backed by NMT models and n𝑛n-gram LM rerankers, by more than 1.0 BLEU. 
We conduct extensive analysis to evaluate our models in terms of learning, the ability to handle long sentences, choices of attentional architectures, alignment quality, and translation outputs. ", "title": "Effective Approaches to Attention-based Neural Machine Translation" }, { "id": "1508.04025_all_4", "text": " A neural machine translation system is a neural network that directly models the conditional probability p​(y|x)𝑝conditional𝑦𝑥p(y|x) of translating a source sentence, x1,…,xnsubscript𝑥1…subscript𝑥𝑛x_{1},\\ldots,x_{n}, to a target sentence, y1,…,ymsubscript𝑦1…subscript𝑦𝑚y_{1},\\ldots,y_{m}.333All sentences are assumed to terminate with a special “end-of-sentence” token <<eos>>. A basic form of NMT consists of two components: (a) an encoder which computes a representation 𝒔𝒔s for each source sentence and (b) a decoder which generates one target word at a time and hence decomposes the conditional probability as: log⁡p​(y|x)=∑j=1mlog⁡p​(yj|y<j,𝒔)𝑝conditional𝑦𝑥superscriptsubscript𝑗1𝑚𝑝conditionalsubscript𝑦𝑗subscript𝑦absent𝑗𝒔\\log p(y|x)=\\sum_{j=1}^{m}\\nolimits\\log p\\left(y_{j}|y_{<j},\\mbox{\\boldmath{$s$}}\\right) (1) ", "title": "Effective Approaches to Attention-based Neural Machine Translation" }, { "id": "1508.04025_all_5", "text": " A natural choice to model such a decomposition in the decoder is to use a recurrent neural network (RNN) architecture, which most of the recent NMT work such as (Kalchbrenner and Blunsom, 2013, Sutskever et al., 2014, Cho et al., 2014, Bahdanau et al., 2015, Luong et al., 2015, Jean et al., 2015) have in common. They, however, differ in terms of which RNN architectures are used for the decoder and how the encoder computes the source sentence representation 𝒔𝒔s. ", "title": "Effective Approaches to Attention-based Neural Machine Translation" }, { "id": "1508.04025_all_6", "text": " ?) used an RNN with the standard hidden unit for the decoder and a convolutional neural network for encoding the source sentence representation. On the other hand, both ?) and ?) stacked multiple layers of an RNN with a Long Short-Term Memory (LSTM) hidden unit for both the encoder and the decoder. ?), ?), and ?) all adopted a different version of the RNN with an LSTM-inspired hidden unit, the gated recurrent unit (GRU), for both components.444They all used a single RNN layer except for the latter two works which utilized a bidirectional RNN for the encoder. ", "title": "Effective Approaches to Attention-based Neural Machine Translation" }, { "id": "1508.04025_all_7", "text": " In more detail, one can parameterize the probability of decoding each word yjsubscript𝑦𝑗y_{j} as: p​(yj|y<j,𝒔)=softmax⁡(g​(𝒉j))𝑝conditionalsubscript𝑦𝑗subscript𝑦absent𝑗𝒔softmax𝑔subscript𝒉𝑗p\\left(y_{j}|y_{<j},\\mbox{\\boldmath{$s$}}\\right)=\\operatorname{softmax}\\left(g\\left(\\mbox{\\boldmath{$h$}}_{j}\\right)\\right) (2) with g𝑔g being the transformation function that outputs a vocabulary-sized vector.555One can provide g𝑔g with other inputs such as the currently predicted word yjsubscript𝑦𝑗y_{j} as in (Bahdanau et al., 2015). Here, 𝒉jsubscript𝒉𝑗\\mbox{\\boldmath{$h$}}_{j} is the RNN hidden unit, abstractly computed as: 𝒉j=f​(𝒉j−1,𝒔),subscript𝒉𝑗𝑓subscript𝒉𝑗1𝒔\\mbox{\\boldmath{$h$}}_{j}=f(\\mbox{\\boldmath{$h$}}_{j-1},\\mbox{\\boldmath{$s$}}), (3) where f𝑓f computes the current hidden state given the previous hidden state and can be either a vanilla RNN unit, a GRU, or an LSTM unit. 
In (Kalchbrenner and Blunsom, 2013, Sutskever et al., 2014, Cho et al., 2014, Luong et al., 2015), the source representation 𝒔𝒔s is only used once to initialize the decoder hidden state. On the other hand, in (Bahdanau et al., 2015, Jean et al., 2015) and this work, 𝒔𝒔s, in fact, implies a set of source hidden states which are consulted throughout the entire course of the translation process. Such an approach is referred to as an attention mechanism, which we will discuss next. ", "title": "Effective Approaches to Attention-based Neural Machine Translation" }, { "id": "1508.04025_all_8", "text": " In this work, following (Sutskever et al., 2014, Luong et al., 2015), we use the stacking LSTM architecture for our NMT systems, as illustrated in Figure 1. We use the LSTM unit defined in (Zaremba et al., 2015). Our training objective is formulated as follows: Jt=∑(x,y)∈𝔻−log⁡p​(y|x)subscript𝐽𝑡subscript𝑥𝑦𝔻𝑝conditional𝑦𝑥J_{t}=\\sum_{(x,y)\\in\\mathbb{D}}\\nolimits-\\log p(y|x) (4) with 𝔻𝔻\\mathbb{D} being our parallel training corpus. ", "title": "Effective Approaches to Attention-based Neural Machine Translation" }, { "id": "1508.04025_all_9", "text": " Our various attention-based models are classifed into two broad categories, global and local. These classes differ in terms of whether the “attention” is placed on all source positions or on only a few source positions. We illustrate these two model types in Figure 2 and 3 respectively. ", "title": "Effective Approaches to Attention-based Neural Machine Translation" }, { "id": "1508.04025_all_10", "text": " Common to these two types of models is the fact that at each time step t𝑡t in the decoding phase, both approaches first take as input the hidden state 𝒉tsubscript𝒉𝑡\\mbox{\\boldmath{$h$}}_{t} at the top layer of a stacking LSTM. The goal is then to derive a context vector 𝒄tsubscript𝒄𝑡\\mbox{\\boldmath{$c$}}_{t} that captures relevant source-side information to help predict the current target word ytsubscript𝑦𝑡y_{t}. While these models differ in how the context vector 𝒄tsubscript𝒄𝑡\\mbox{\\boldmath{$c$}}_{t} is derived, they share the same subsequent steps. ", "title": "Effective Approaches to Attention-based Neural Machine Translation" }, { "id": "1508.04025_all_11", "text": " Specifically, given the target hidden state 𝒉tsubscript𝒉𝑡\\mbox{\\boldmath{$h$}}_{t} and the source-side context vector 𝒄tsubscript𝒄𝑡\\mbox{\\boldmath{$c$}}_{t}, we employ a simple concatenation layer to combine the information from both vectors to produce an attentional hidden state as follows: 𝒉~t=tanh⁡(𝑾𝒄​(𝒄t;𝒉t))subscriptbold-~𝒉𝑡subscript𝑾𝒄subscript𝒄𝑡subscript𝒉𝑡\\mbox{\\boldmath{$\\tilde{h}$}}_{t}=\\tanh(\\mbox{\\boldmath{$W_{c}$}}(\\mbox{\\boldmath{$c$}}_{t};\\mbox{\\boldmath{$h$}}_{t})) (5) ", "title": "Effective Approaches to Attention-based Neural Machine Translation" }, { "id": "1508.04025_all_12", "text": " The attentional vector 𝒉~tsubscriptbold-~𝒉𝑡\\mbox{\\boldmath{$\\tilde{h}$}}_{t} is then fed through the softmax layer to produce the predictive distribution formulated as: p​(yt|y<t,x)=softmax⁡(𝑾𝒔𝒉~t)𝑝conditionalsubscript𝑦𝑡subscript𝑦absent𝑡𝑥softmaxsubscript𝑾𝒔𝒉~𝑡p(y_{t}|y_{<t},x)=\\operatorname{softmax}(\\mbox{\\boldmath{$W_{s}$}}\\mbox{\\boldmath{$\\tilde{h}$}}_{t}) (6) ", "title": "Effective Approaches to Attention-based Neural Machine Translation" }, { "id": "1508.04025_all_13", "text": " We now detail how each model type computes the source-side context vector 𝒄tsubscript𝒄𝑡\\mbox{\\boldmath{$c$}}_{t}. 
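Before the two ways of computing the context vector are detailed, here is a minimal PyTorch sketch of the shared output path of Eqs. (5) and (6), i.e. the attentional hidden state and the softmax prediction. Layer names and dimensions are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AttentionalOutput(nn.Module):
    """Combine the top-layer decoder state h_t with the context vector c_t (Eq. 5)
    and produce the predictive distribution over the target vocabulary (Eq. 6)."""

    def __init__(self, hidden_size, vocab_size):
        super().__init__()
        self.W_c = nn.Linear(2 * hidden_size, hidden_size, bias=False)
        self.W_s = nn.Linear(hidden_size, vocab_size, bias=False)

    def forward(self, h_t, c_t):                # both: (batch, hidden_size)
        h_tilde = torch.tanh(self.W_c(torch.cat([c_t, h_t], dim=-1)))  # Eq. (5)
        log_probs = torch.log_softmax(self.W_s(h_tilde), dim=-1)       # Eq. (6), in log space
        return h_tilde, log_probs
```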
", "title": "Effective Approaches to Attention-based Neural Machine Translation" }, { "id": "1508.04025_all_14", "text": " The idea of a global attentional model is to consider all the hidden states of the encoder when deriving the context vector ctsubscript𝑐𝑡c_{t}. In this model type, a variable-length alignment vector 𝒂tsubscript𝒂𝑡\\mbox{\\boldmath{$a$}}_{t}, whose size equals the number of time steps on the source side, is derived by comparing the current target hidden state 𝒉tsubscript𝒉𝑡\\mbox{\\boldmath{$h$}}_{t} with each source hidden state 𝒉¯ssubscriptbold-¯𝒉𝑠\\mbox{\\boldmath{$\\bar{h}$}}_{s}: 𝒂t​(s)subscript𝒂𝑡𝑠\\displaystyle\\mbox{\\boldmath{$a$}}_{t}(s) =align⁡(𝒉t,𝒉¯s)absentalignsubscript𝒉𝑡subscriptbold-¯𝒉𝑠\\displaystyle=\\operatorname{align}(\\mbox{\\boldmath{$h$}}_{t},\\mbox{\\boldmath{$\\bar{h}$}}_{s}) (7) =exp⁡(score⁡(𝒉t,𝒉¯s))∑s′exp⁡(score⁡(𝒉t,𝒉¯s′))absentscoresubscript𝒉𝑡subscriptbold-¯𝒉𝑠subscriptsuperscript𝑠′scoresubscript𝒉𝑡subscriptbold-¯𝒉superscript𝑠′\\displaystyle=\\frac{\\exp\\left(\\operatorname{score}(\\mbox{\\boldmath{$h$}}_{t},\\mbox{\\boldmath{$\\bar{h}$}}_{s})\\right)}{\\sum_{s^{\\prime}}\\exp\\left(\\operatorname{score}(\\mbox{\\boldmath{$h$}}_{t},\\mbox{\\boldmath{$\\bar{h}$}}_{s^{\\prime}})\\right)} Here, scorescore\\operatorname{score} is referred as a content-based function for which we consider three different alternatives: score⁡(𝒉t,𝒉¯s)={𝒉t⊤​𝒉¯sdot𝒉t⊤​𝑾𝒂𝒉¯sgeneral𝒗a⊤​tanh⁡(𝑾𝒂​(𝒉t;𝒉¯s))concatscoresubscript𝒉𝑡subscriptbold-¯𝒉𝑠casessuperscriptsubscript𝒉𝑡topsubscriptbold-¯𝒉𝑠dotsuperscriptsubscript𝒉𝑡topsubscript𝑾𝒂𝒉¯𝑠generalsuperscriptsubscript𝒗𝑎topsubscript𝑾𝒂subscript𝒉𝑡subscriptbold-¯𝒉𝑠concat\\operatorname{score}(\\mbox{\\boldmath{$h$}}_{t},\\mbox{\\boldmath{$\\bar{h}$}}_{s})\\!=\\!\\begin{cases}\\mbox{\\boldmath{$h$}}_{t}^{\\top}\\mbox{\\boldmath{$\\bar{h}$}}_{s}&\\mbox{{\\it dot}}\\\\ \\mbox{\\boldmath{$h$}}_{t}^{\\top}\\mbox{\\boldmath{$W_{a}$}}\\mbox{\\boldmath{$\\bar{h}$}}_{s}&\\mbox{{\\it general}}\\\\ \\mbox{\\boldmath{$v$}}_{a}^{\\top}\\tanh\\left(\\mbox{\\boldmath{$W_{a}$}}(\\mbox{\\boldmath{$h$}}_{t};\\mbox{\\boldmath{$\\bar{h}$}}_{s})\\right)&\\mbox{{\\it concat}}\\end{cases} ", "title": "Effective Approaches to Attention-based Neural Machine Translation" }, { "id": "1508.04025_all_15", "text": " Besides, in our early attempts to build attention-based models, we use a location-based function in which the alignment scores are computed from solely the target hidden state 𝒉tsubscript𝒉𝑡\\mbox{\\boldmath{$h$}}_{t} as follows: 𝒂t=softmax⁡(𝑾𝒂𝒉t)​                locationsubscript𝒂𝑡softmaxsubscript𝑾𝒂𝒉𝑡                location\\mbox{\\boldmath{$a$}}_{t}=\\operatorname{softmax}(\\mbox{\\boldmath{$W_{a}$}}\\mbox{\\boldmath{$h$}}_{t})\\mbox{ }\\mbox{ }\\mbox{ }\\mbox{ }\\mbox{ }\\mbox{ }\\mbox{ }\\mbox{ }\\mbox{ }\\mbox{ }\\mbox{ }\\mbox{ }\\mbox{ }\\mbox{ }\\mbox{ }\\mbox{ }\\mbox{{\\it location}} (8) Given the alignment vector as weights, the context vector ctsubscript𝑐𝑡c_{t} is computed as the weighted average over all the source hidden states.666Eq. (8) implies that all alignment vectors 𝒂tsubscript𝒂𝑡\\mbox{\\boldmath{$a$}}_{t} are of the same length. For short sentences, we only use the top part of 𝒂tsubscript𝒂𝑡\\mbox{\\boldmath{$a$}}_{t} and for long sentences, we ignore words near the end. 
", "title": "Effective Approaches to Attention-based Neural Machine Translation" }, { "id": "1508.04025_all_16", "text": " Comparison to (Bahdanau et al., 2015) – While our global attention approach is similar in spirit to the model proposed by ?), there are several key differences which reflect how we have both simplified and generalized from the original model. First, we simply use hidden states at the top LSTM layers in both the encoder and decoder as illustrated in Figure 2. ?), on the other hand, use the concatenation of the forward and backward source hidden states in the bi-directional encoder and target hidden states in their non-stacking uni-directional decoder. Second, our computation path is simpler; we go from 𝒉t→𝒂t→𝒄t→𝒉~t→subscript𝒉𝑡subscript𝒂𝑡→subscript𝒄𝑡→subscriptbold-~𝒉𝑡\\mbox{\\boldmath{$h$}}_{t}\\rightarrow\\mbox{\\boldmath{$a$}}_{t}\\rightarrow\\mbox{\\boldmath{$c$}}_{t}\\rightarrow\\mbox{\\boldmath{$\\tilde{h}$}}_{t} then make a prediction as detailed in Eq. (5), Eq. (6), and Figure 2. On the other hand, at any time t𝑡t, ?) build from the previous hidden state 𝒉t−1→𝒂t→𝒄t→𝒉t→subscript𝒉𝑡1subscript𝒂𝑡→subscript𝒄𝑡→subscript𝒉𝑡\\mbox{\\boldmath{$h$}}_{t-1}\\rightarrow\\mbox{\\boldmath{$a$}}_{t}\\rightarrow\\mbox{\\boldmath{$c$}}_{t}\\rightarrow\\mbox{\\boldmath{$h$}}_{t}, which, in turn, goes through a deep-output and a maxout layer before making predictions.777We will refer to this difference again in Section 3.3. Lastly, ?) only experimented with one alignment function, the concat product; whereas we show later that the other alternatives are better. ", "title": "Effective Approaches to Attention-based Neural Machine Translation" }, { "id": "1508.04025_all_17", "text": " The global attention has a drawback that it has to attend to all words on the source side for each target word, which is expensive and can potentially render it impractical to translate longer sequences, e.g., paragraphs or documents. To address this deficiency, we propose a local attentional mechanism that chooses to focus only on a small subset of the source positions per target word. ", "title": "Effective Approaches to Attention-based Neural Machine Translation" }, { "id": "1508.04025_all_18", "text": " This model takes inspiration from the tradeoff between the soft and hard attentional models proposed by ?) to tackle the image caption generation task. In their work, soft attention refers to the global attention approach in which weights are placed “softly” over all patches in the source image. The hard attention, on the other hand, selects one patch of the image to attend to at a time. While less expensive at inference time, the hard attention model is non-differentiable and requires more complicated techniques such as variance reduction or reinforcement learning to train. ", "title": "Effective Approaches to Attention-based Neural Machine Translation" }, { "id": "1508.04025_all_19", "text": " Our local attention mechanism selectively focuses on a small window of context and is differentiable. This approach has an advantage of avoiding the expensive computation incurred in the soft attention and at the same time, is easier to train than the hard attention approach. In concrete details, the model first generates an aligned position ptsubscript𝑝𝑡p_{t} for each target word at time t𝑡t. 
The context vector 𝒄tsubscript𝒄𝑡\\mbox{\\boldmath{$c$}}_{t} is then derived as a weighted average over the set of source hidden states within the window (pt−D,pt+D)subscript𝑝𝑡𝐷subscript𝑝𝑡𝐷(p_{t}-D,p_{t}+D); D𝐷D is empirically selected.888If the window crosses the sentence boundaries, we simply ignore the outside part and consider words in the window. Unlike the global approach, the local alignment vector 𝒂tsubscript𝒂𝑡\\mbox{\\boldmath{$a$}}_{t} is now fixed-dimensional, i.e., ∈ℝ2​D+1absentsuperscriptℝ2𝐷1\\in\\mathbb{R}^{2D+1}. We consider two variants of the model as below. ", "title": "Effective Approaches to Attention-based Neural Machine Translation" }, { "id": "1508.04025_all_20", "text": " Monotonic alignment (local-m) – we simply set pt=tsubscript𝑝𝑡𝑡p_{t}\\!=\\!t assuming that source and target sequences are roughly monotonically aligned. The alignment vector 𝒂tsubscript𝒂𝑡\\mbox{\\boldmath{$a$}}_{t} is defined according to Eq. (7).999local-m is the same as the global model except that the vector 𝒂tsubscript𝒂𝑡\\mbox{\\boldmath{$a$}}_{t} is fixed-length and shorter. ", "title": "Effective Approaches to Attention-based Neural Machine Translation" }, { "id": "1508.04025_all_21", "text": " Predictive alignment (local-p) – instead of assuming monotonic alignments, our model predicts an aligned position as follows: pt=S⋅sigmoid⁡(𝒗p⊤​tanh⁡(𝑾𝒑𝒉t)),subscript𝑝𝑡⋅𝑆sigmoidsuperscriptsubscript𝒗𝑝topsubscript𝑾𝒑𝒉𝑡p_{t}=S\\cdot\\operatorname{sigmoid}(\\mbox{\\boldmath{$v$}}_{p}^{\\top}\\tanh(\\mbox{\\boldmath{$W_{p}$}}\\mbox{\\boldmath{$h$}}_{t})), (9) 𝑾𝒑subscript𝑾𝒑W_{p} and 𝒗psubscript𝒗𝑝\\mbox{\\boldmath{$v$}}_{p} are the model parameters which will be learned to predict positions. S𝑆S is the source sentence length. As a result of sigmoidsigmoid\\operatorname{sigmoid}, pt∈(0,S)subscript𝑝𝑡0𝑆p_{t}\\in(0,S). To favor alignment points near ptsubscript𝑝𝑡p_{t}, we place a Gaussian distribution centered around ptsubscript𝑝𝑡p_{t} . Specifically, our alignment weights are now defined as: 𝒂t​(s)=align⁡(𝒉t,𝒉¯s)​exp⁡(−(s−pt)22​σ2)subscript𝒂𝑡𝑠alignsubscript𝒉𝑡subscriptbold-¯𝒉𝑠superscript𝑠subscript𝑝𝑡22superscript𝜎2\\mbox{\\boldmath{$a$}}_{t}(s)=\\operatorname{align}(\\mbox{\\boldmath{$h$}}_{t},\\mbox{\\boldmath{$\\bar{h}$}}_{s})\\exp\\left(-\\frac{(s-p_{t})^{2}}{2\\sigma^{2}}\\right) (10) We use the same alignalign\\operatorname{align} function as in Eq. (7) and the standard deviation is empirically set as σ=D2𝜎𝐷2\\sigma\\!=\\!\\frac{D}{2}. Note that ptsubscript𝑝𝑡p_{t} is a real nummber; whereas s𝑠s is an integer within the window centered at ptsubscript𝑝𝑡p_{t}.101010local-p is similar to the local-m model except that we dynamically compute ptsubscript𝑝𝑡p_{t} and use a truncated Gaussian distribution to modify the original alignment weights align⁡(𝒉t,𝒉¯s)alignsubscript𝒉𝑡subscriptbold-¯𝒉𝑠\\operatorname{align}(\\mbox{\\boldmath{$h$}}_{t},\\mbox{\\boldmath{$\\bar{h}$}}_{s}) as shown in Eq. (10). By utilizing ptsubscript𝑝𝑡p_{t} to derive 𝒂tsubscript𝒂𝑡\\mbox{\\boldmath{$a$}}_{t}, we can compute backprop gradients for 𝑾𝒑subscript𝑾𝒑W_{p} and 𝒗psubscript𝒗𝑝\\mbox{\\boldmath{$v$}}_{p}. This model is differentiable almost everywhere. ", "title": "Effective Approaches to Attention-based Neural Machine Translation" }, { "id": "1508.04025_all_22", "text": " Comparison to (Gregor et al., 2015) – have proposed a selective attention mechanism, very similar to our local attention, for the image generation task. Their approach allows the model to select an image patch of varying location and zoom. 
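A minimal PyTorch sketch of the local-p mechanism of Eqs. (9) and (10) follows. For brevity it applies the Gaussian weighting to all source positions and uses the dot score, whereas the paper restricts attention to the window (p_t - D, p_t + D) and can use any align function, so treat this as an approximation with illustrative names.

```python
import torch

def local_p_attention(h_t, h_s, W_p, v_p, D=10):
    """Local attention with predictive alignment (local-p).

    h_t : (batch, dim)       target hidden state
    h_s : (batch, S, dim)    source hidden states
    W_p : Linear(dim, dim);  v_p : (dim,)  -- position-prediction parameters
    """
    batch, S, dim = h_s.shape
    p_t = S * torch.sigmoid(torch.tanh(W_p(h_t)) @ v_p)             # Eq. (9), p_t in (0, S)
    scores = torch.bmm(h_s, h_t.unsqueeze(-1)).squeeze(-1)          # dot score, (batch, S)
    align = torch.softmax(scores, dim=-1)
    s = torch.arange(S, dtype=h_t.dtype, device=h_t.device).unsqueeze(0)
    sigma = D / 2.0                                                 # empirically set as D/2
    a_t = align * torch.exp(-((s - p_t.unsqueeze(1)) ** 2) / (2 * sigma ** 2))  # Eq. (10)
    c_t = torch.bmm(a_t.unsqueeze(1), h_s).squeeze(1)               # context vector
    return p_t, a_t, c_t
```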
We, instead, use the same “zoom” for all target positions, which greatly simplifies the formulation and still achieves good performance. ", "title": "Effective Approaches to Attention-based Neural Machine Translation" }, { "id": "1508.04025_all_23", "text": " In our proposed global and local approaches, the attentional decisions are made independently, which is suboptimal. Whereas, in standard MT, a coverage set is often maintained during the translation process to keep track of which source words have been translated. Likewise, in attentional NMTs, alignment decisions should be made jointly taking into account past alignment information. To address that, we propose an input-feeding approach in which attentional vectors 𝒉~tsubscriptbold-~𝒉𝑡\\mbox{\\boldmath{$\\tilde{h}$}}_{t} are concatenated with inputs at the next time steps as illustrated in Figure 4.111111If n𝑛n is the number of LSTM cells, the input size of the first LSTM layer is 2​n2𝑛2n; those of subsequent layers are n𝑛n. The effects of having such connections are two-fold: (a) we hope to make the model fully aware of previous alignment choices and (b) we create a very deep network spanning both horizontally and vertically. ", "title": "Effective Approaches to Attention-based Neural Machine Translation" }, { "id": "1508.04025_all_24", "text": " Comparison to other work – ?) use context vectors, similar to our 𝒄tsubscript𝒄𝑡\\mbox{\\boldmath{$c$}}_{t}, in building subsequent hidden states, which can also achieve the “coverage” effect. However, there has not been any analysis of whether such connections are useful as done in this work. Also, our approach is more general; as illustrated in Figure 4, it can be applied to general stacking recurrent architectures, including non-attentional models. ", "title": "Effective Approaches to Attention-based Neural Machine Translation" }, { "id": "1508.04025_all_25", "text": " ?) propose a doubly attentional approach with an additional constraint added to the training objective to make sure the model pays equal attention to all parts of the image during the caption generation process. Such a constraint can also be useful to capture the coverage set effect in NMT that we mentioned earlier. However, we chose to use the input-feeding approach since it provides flexibility for the model to decide on any attentional constraints it deems suitable. ", "title": "Effective Approaches to Attention-based Neural Machine Translation" }, { "id": "1508.04025_all_26", "text": " We evaluate the effectiveness of our models on the WMT translation tasks between English and German in both directions. newstest2013 (3000 sentences) is used as a development set to select our hyperparameters. Translation performances are reported in case-sensitive BLEU (Papineni et al., 2002) on newstest2014 (2737 sentences) and newstest2015 (2169 sentences). Following (Luong et al., 2015), we report translation quality using two types of BLEU: (a) tokenized121212All texts are tokenized with tokenizer.perl and BLEU scores are computed with multi-bleu.perl. BLEU to be comparable with existing NMT work and (b) NIST131313With the mteval-v13a script as per WMT guideline. BLEU to be comparable with WMT results. ", "title": "Effective Approaches to Attention-based Neural Machine Translation" }, { "id": "1508.04025_all_27", "text": " All our models are trained on the WMT’14 training data consisting of 4.5M sentences pairs (116M English words, 110M German words). 
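Looking back at the input-feeding approach of Figure 4, a single decoding step can be sketched as below: the previous attentional vector is concatenated with the current target embedding before the stacked LSTM, so the first layer sees an input of size 2n as noted in the footnote. Class and argument names are illustrative, not the authors' MATLAB implementation.

```python
import torch
import torch.nn as nn

class InputFeedingStep(nn.Module):
    """One decoder step with input feeding: past alignment information flows into
    the next time step through the concatenated attentional vector."""

    def __init__(self, embed_size, hidden_size, num_layers=4):
        super().__init__()
        # first LSTM layer input is the embedding plus the previous attentional vector
        self.rnn = nn.LSTM(embed_size + hidden_size, hidden_size,
                           num_layers=num_layers, batch_first=True)

    def forward(self, y_emb, h_tilde_prev, state):
        # y_emb: (batch, embed_size), h_tilde_prev: (batch, hidden_size)
        rnn_in = torch.cat([y_emb, h_tilde_prev], dim=-1).unsqueeze(1)
        out, state = self.rnn(rnn_in, state)
        return out.squeeze(1), state        # top-layer h_t and the updated LSTM state
```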
Similar to (Jean et al., 2015), we limit our vocabularies to be the top 50K most frequent words for both languages. Words not in these shortlisted vocabularies are converted into a universal token <<unk>>. ", "title": "Effective Approaches to Attention-based Neural Machine Translation" }, { "id": "1508.04025_all_28", "text": " When training our NMT systems, following (Bahdanau et al., 2015, Jean et al., 2015), we filter out sentence pairs whose lengths exceed 50 words and shuffle mini-batches as we proceed. Our stacking LSTM models have 4 layers, each with 1000 cells, and 1000-dimensional embeddings. We follow (Sutskever et al., 2014, Luong et al., 2015) in training NMT with similar settings: (a) our parameters are uniformly initialized in (−0.1,0.1)0.10.1(-0.1,0.1), (b) we train for 10 epochs using plain SGD, (c) a simple learning rate schedule is employed – we start with a learning rate of 1; after 5 epochs, we begin to halve the learning rate every epoch, (d) our mini-batch size is 128, and (e) the normalized gradient is rescaled whenever its norm exceeds 5. Additionally, we also use dropout with probability 0.20.20.2 for our LSTMs as suggested by (Zaremba et al., 2015). For dropout models, we train for 12 epochs and start halving the learning rate after 8 epochs. For local attention models, we empirically set the window size D=10𝐷10D=10. ", "title": "Effective Approaches to Attention-based Neural Machine Translation" }, { "id": "1508.04025_all_29", "text": " Our code is implemented in MATLAB. When running on a single GPU device Tesla K40, we achieve a speed of 1K target words per second. It takes 7–10 days to completely train a model. ", "title": "Effective Approaches to Attention-based Neural Machine Translation" }, { "id": "1508.04025_all_30", "text": " We compare our NMT systems in the English-German task with various other systems. These include the winning system in WMT’14 (Buck et al., 2014), a phrase-based system whose language models were trained on a huge monolingual text, the Common Crawl corpus. For end-to-end NMT systems, to the best of our knowledge, (Jean et al., 2015) is the only work experimenting with this language pair and currently the SOTA system. We only present results for some of our attention models and will later analyze the rest in Section 5. ", "title": "Effective Approaches to Attention-based Neural Machine Translation" }, { "id": "1508.04025_all_31", "text": " As shown in Table 1, we achieve progressive improvements when (a) reversing the source sentence, +1.31.31.3 BLEU, as proposed in (Sutskever et al., 2014) and (b) using dropout, +1.41.41.4 BLEU. On top of that, (c) the global attention approach gives a significant boost of +2.82.82.8 BLEU, making our model slightly better than the base attentional system of ?) (row RNNSearch). When (d) using the input-feeding approach, we seize another notable gain of +1.31.31.3 BLEU and outperform their system. The local attention model with predictive alignments (row local-p) proves to be even better, giving us a further improvement of +0.90.90.9 BLEU on top of the global attention model. It is interesting to observe the trend previously reported in (Luong et al., 2015) that perplexity strongly correlates with translation quality. In total, we achieve a significant gain of 5.0 BLEU points over the non-attentional baseline, which already includes known techniques such as source reversing and dropout. 
", "title": "Effective Approaches to Attention-based Neural Machine Translation" }, { "id": "1508.04025_all_32", "text": " The unknown replacement technique proposed in (Luong et al., 2015, Jean et al., 2015) yields another nice gain of +1.91.91.9 BLEU, demonstrating that our attentional models do learn useful alignments for unknown works. Finally, by ensembling 8 different models of various settings, e.g., using different attention approaches, with and without dropout etc., we were able to achieve a new SOTA result of 23.023.023.0{} BLEU, outperforming the existing best system (Jean et al., 2015) by +1.41.41.4 BLEU. ", "title": "Effective Approaches to Attention-based Neural Machine Translation" }, { "id": "1508.04025_all_33", "text": " Latest results in WMT’15 – despite the fact that our models were trained on WMT’14 with slightly less data, we test them on newstest2015 to demonstrate that they can generalize well to different test sets. As shown in Table 2, our best system establishes a new SOTA performance of 25.925.925.9{} BLEU, outperforming the existing best system backed by NMT and a 5-gram LM reranker by +1.01.01.0 BLEU. ", "title": "Effective Approaches to Attention-based Neural Machine Translation" }, { "id": "1508.04025_all_34", "text": " We carry out a similar set of experiments for the WMT’15 translation task from German to English. While our systems have not yet matched the performance of the SOTA system, we nevertheless show the effectiveness of our approaches with large and progressive gains in terms of BLEU as illustrated in Table 3. The attentional mechanism gives us +2.22.22.2 BLEU gain and on top of that, we obtain another boost of up to +1.01.01.0 BLEU from the input-feeding approach. Using a better alignment function, the content-based dot product one, together with dropout yields another gain of +2.72.72.7 BLEU. Lastly, when applying the unknown word replacement technique, we seize an additional +2.12.12.1 BLEU, demonstrating the usefulness of attention in aligning rare words. ", "title": "Effective Approaches to Attention-based Neural Machine Translation" }, { "id": "1508.04025_all_35", "text": " We conduct extensive analysis to better understand our models in terms of learning, the ability to handle long sentences, choices of attentional architectures, and alignment quality. All results reported here are on English-German newstest2014. ", "title": "Effective Approaches to Attention-based Neural Machine Translation" }, { "id": "1508.04025_all_36", "text": " We compare models built on top of one another as listed in Table 1. It is pleasant to observe in Figure 5 a clear separation between non-attentional and attentional models. The input-feeding approach and the local attention model also demonstrate their abilities in driving the test costs lower. The non-attentional model with dropout (the blue + curve) learns slower than other non-dropout models, but as time goes by, it becomes more robust in terms of minimizing test errors. ", "title": "Effective Approaches to Attention-based Neural Machine Translation" }, { "id": "1508.04025_all_37", "text": " We follow (Bahdanau et al., 2015) to group sentences of similar lengths together and compute a BLEU score per group. Figure 6 shows that our attentional models are more effective than the non-attentional one in handling long sentences: the quality does not degrade as sentences become longer. Our best model (the blue + curve) outperforms all other systems in all length buckets. 
", "title": "Effective Approaches to Attention-based Neural Machine Translation" }, { "id": "1508.04025_all_38", "text": " We examine different attention models (global, local-m, local-p) and different alignment functions (location, dot, general, concat) as described in Section 3. Due to limited resources, we cannot run all the possible combinations. However, results in Table 4 do give us some idea about different choices. The location-based function does not learn good alignments: the global (location) model can only obtain a small gain when performing unknown word replacement compared to using other alignment functions.141414There is a subtle difference in how we retrieve alignments for the different alignment functions. At time step t𝑡t in which we receive yt−1subscript𝑦𝑡1y_{t-1} as input and then compute 𝒉t,𝒂t,𝒄tsubscript𝒉𝑡subscript𝒂𝑡subscript𝒄𝑡\\mbox{\\boldmath{$h$}}_{t},\\mbox{\\boldmath{$a$}}_{t},\\mbox{\\boldmath{$c$}}_{t}, and 𝒉~tsubscriptbold-~𝒉𝑡\\mbox{\\boldmath{$\\tilde{h}$}}_{t} before predicting ytsubscript𝑦𝑡y_{t}, the alignment vector 𝒂tsubscript𝒂𝑡\\mbox{\\boldmath{$a$}}_{t} is used as alignment weights for (a) the predicted word ytsubscript𝑦𝑡y_{t} in the location-based alignment functions and (b) the input word yt−1subscript𝑦𝑡1y_{t-1} in the content-based functions. For content-based functions, our implementation concat does not yield good performances and more analysis should be done to understand the reason.151515With concat, the perplexities achieved by different models are 6.7 (global), 7.1 (local-m), and 7.1 (local-p). Such high perplexities could be due to the fact that we simplify the matrix 𝑾𝒂subscript𝑾𝒂W_{a} to set the part that corresponds to 𝒉¯ssubscriptbold-¯𝒉𝑠\\mbox{\\boldmath{$\\bar{h}$}}_{s} to identity. It is interesting to observe that dot works well for the global attention and general is better for the local attention. Among the different models, the local attention model with predictive alignments (local-p) is best, both in terms of perplexities and BLEU. ", "title": "Effective Approaches to Attention-based Neural Machine Translation" }, { "id": "1508.04025_all_39", "text": " A by-product of attentional models are word alignments. While (Bahdanau et al., 2015) visualized alignments for some sample sentences and observed gains in translation quality as an indication of a working attention model, no work has assessed the alignments learned as a whole. In contrast, we set out to evaluate the alignment quality using the alignment error rate (AER) metric. ", "title": "Effective Approaches to Attention-based Neural Machine Translation" }, { "id": "1508.04025_all_40", "text": " Given the gold alignment data provided by RWTH for 508 English-German Europarl sentences, we “force” decode our attentional models to produce translations that match the references. We extract only one-to-one alignments by selecting the source word with the highest alignment weight per target word. Nevertheless, as shown in Table 6, we were able to achieve AER scores comparable to the one-to-many alignments obtained by the Berkeley aligner (Liang et al., 2006).161616We concatenate the 508 sentence pairs with 1M sentence pairs from WMT and run the Berkeley aligner. ", "title": "Effective Approaches to Attention-based Neural Machine Translation" }, { "id": "1508.04025_all_41", "text": " We also found that the alignments produced by local attention models achieve lower AERs than those of the global one. 
The AER obtained by the ensemble, while good, is not better than the local-m AER, suggesting the well-known observation that AER and translation scores are not well correlated (Fraser and Marcu, 2007). We show some alignment visualizations in Appendix A. ", "title": "Effective Approaches to Attention-based Neural Machine Translation" }, { "id": "1508.04025_all_42", "text": " We show in Table 5 sample translations in both directions. It it appealing to observe the effect of attentional models in correctly translating names such as “Miranda Kerr” and “Roger Dow”. Non-attentional models, while producing sensible names from a language model perspective, lack the direct connections from the source side to make correct translations. We also observed an interesting case in the second example, which requires translating the doubly-negated phrase, “not incompatible”. The attentional model correctly produces “nicht ……\\dots unvereinbar”; whereas the non-attentional model generates “nicht vereinbar”, meaning “not compatible”.171717The reference uses a more fancy translation of “incompatible”, which is “im Widerspruch zu etwas stehen”. Both models, however, failed to translate “passenger experience”. The attentional model also demonstrates its superiority in translating long sentences as in the last example. ", "title": "Effective Approaches to Attention-based Neural Machine Translation" }, { "id": "1508.04025_all_43", "text": " In this paper, we propose two simple and effective attentional mechanisms for neural machine translation: the global approach which always looks at all source positions and the local one that only attends to a subset of source positions at a time. We test the effectiveness of our models in the WMT translation tasks between English and German in both directions. Our local attention yields large gains of up to 5.05.05.0{} BLEU over non-attentional models which already incorporate known techniques such as dropout. For the English to German translation direction, our ensemble model has established new state-of-the-art results for both WMT’14 and WMT’15, outperforming existing best systems, backed by NMT models and n𝑛n-gram LM rerankers, by more than 1.0 BLEU. ", "title": "Effective Approaches to Attention-based Neural Machine Translation" }, { "id": "1508.04025_all_44", "text": " We have compared various alignment functions and shed light on which functions are best for which attentional models. Our analysis shows that attention-based NMT models are superior to non-attentional ones in many cases, for example in translating names and handling long sentences. ", "title": "Effective Approaches to Attention-based Neural Machine Translation" } ]
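The passages above compare several content-based alignment (score) functions — dot, general, and concat — without restating their definitions, which appear in Section 3 of the paper. As a reference point, here is a minimal NumPy sketch of those score functions and of how an alignment vector a_t and context vector c_t would be formed from them; all variable names are illustrative and this is not the authors' MATLAB implementation.

```python
import numpy as np

def score_dot(h_t, h_s):
    # dot: h_t^T h_s
    return h_t @ h_s

def score_general(h_t, h_s, W_a):
    # general: h_t^T W_a h_s
    return h_t @ W_a @ h_s

def score_concat(h_t, h_s, W_a, v_a):
    # concat: v_a^T tanh(W_a [h_t; h_s])
    return v_a @ np.tanh(W_a @ np.concatenate([h_t, h_s]))

def alignment_vector(h_t, source_states, score_fn, *params):
    # softmax over source positions gives the alignment weights a_t
    scores = np.array([score_fn(h_t, h_s, *params) for h_s in source_states])
    scores -= scores.max()                      # numerical stability
    return np.exp(scores) / np.exp(scores).sum()

# toy usage: 4 source hidden states of dimension 8
rng = np.random.default_rng(0)
h_t = rng.normal(size=8)
source = rng.normal(size=(4, 8))
a_t = alignment_vector(h_t, source, score_dot)
c_t = a_t @ source                              # context vector c_t
```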
Is it true that the first-order approximation led to roughly 33% speed-up in network computation?
Yes. According to the paper, the first-order approximation removes the need to compute Hessian-vector products in an additional backward pass, which the authors found led to roughly a 33% speed-up in network computation [31].
[ 31 ]
[ { "id": "1703.03400_all_0", "text": " Learning quickly is a hallmark of human intelligence, whether it involves recognizing objects from a few examples or quickly learning new skills after just minutes of experience. Our artificial agents should be able to do the same, learning and adapting quickly from only a few examples, and continuing to adapt as more data becomes available. This kind of fast and flexible learning is challenging, since the agent must integrate its prior experience with a small amount of new information, while avoiding overfitting to the new data. Furthermore, the form of prior experience and new data will depend on the task. As such, for the greatest applicability, the mechanism for learning to learn (or meta-learning) should be general to the task and the form of computation required to complete the task. ", "title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks" }, { "id": "1703.03400_all_1", "text": " In this work, we propose a meta-learning algorithm that is general and model-agnostic, in the sense that it can be directly applied to any learning problem and model that is trained with a gradient descent procedure. Our focus is on deep neural network models, but we illustrate how our approach can easily handle different architectures and different problem settings, including classification, regression, and policy gradient reinforcement learning, with minimal modification. In meta-learning, the goal of the trained model is to quickly learn a new task from a small amount of new data, and the model is trained by the meta-learner to be able to learn on a large number of different tasks. The key idea underlying our method is to train the model’s initial parameters such that the model has maximal performance on a new task after the parameters have been updated through one or more gradient steps computed with a small amount of data from that new task. Unlike prior meta-learning methods that learn an update function or learning rule (Schmidhuber, 1987; Bengio et al., 1992; Andrychowicz et al., 2016; Ravi & Larochelle, 2017), our algorithm does not expand the number of learned parameters nor place constraints on the model architecture (e.g. by requiring a recurrent model (Santoro et al., 2016) or a Siamese network (Koch, 2015)), and it can be readily combined with fully connected, convolutional, or recurrent neural networks. It can also be used with a variety of loss functions, including differentiable supervised losses and non-differentiable reinforcement learning objectives. ", "title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks" }, { "id": "1703.03400_all_2", "text": " The process of training a model’s parameters such that a few gradient steps, or even a single gradient step, can produce good results on a new task can be viewed from a feature learning standpoint as building an internal representation that is broadly suitable for many tasks. If the internal representation is suitable to many tasks, simply fine-tuning the parameters slightly (e.g. by primarily modifying the top layer weights in a feedforward model) can produce good results. In effect, our procedure optimizes for models that are easy and fast to fine-tune, allowing the adaptation to happen in the right space for fast learning. 
From a dynamical systems standpoint, our learning process can be viewed as maximizing the sensitivity of the loss functions of new tasks with respect to the parameters: when the sensitivity is high, small local changes to the parameters can lead to large improvements in the task loss. ", "title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks" }, { "id": "1703.03400_all_3", "text": " The primary contribution of this work is a simple model- and task-agnostic algorithm for meta-learning that trains a model’s parameters such that a small number of gradient updates will lead to fast learning on a new task. We demonstrate the algorithm on different model types, including fully connected and convolutional networks, and in several distinct domains, including few-shot regression, image classification, and reinforcement learning. Our evaluation shows that our meta-learning algorithm compares favorably to state-of-the-art one-shot learning methods designed specifically for supervised classification, while using fewer parameters, but that it can also be readily applied to regression and can accelerate reinforcement learning in the presence of task variability, substantially outperforming direct pretraining as initialization. ", "title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks" }, { "id": "1703.03400_all_4", "text": " We aim to train models that can achieve rapid adaptation, a problem setting that is often formalized as few-shot learning. In this section, we will define the problem setup and present the general form of our algorithm. ", "title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks" }, { "id": "1703.03400_all_5", "text": " The goal of few-shot meta-learning is to train a model that can quickly adapt to a new task using only a few datapoints and training iterations. To accomplish this, the model or learner is trained during a meta-learning phase on a set of tasks, such that the trained model can quickly adapt to new tasks using only a small number of examples or trials. In effect, the meta-learning problem treats entire tasks as training examples. In this section, we formalize this meta-learning problem setting in a general manner, including brief examples of different learning domains. We will discuss two different learning domains in detail in Section 3. ", "title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks" }, { "id": "1703.03400_all_6", "text": " We consider a model, denoted f𝑓f, that maps observations 𝐱𝐱\\mathbf{x} to outputs 𝐚𝐚\\mathbf{a}. During meta-learning, the model is trained to be able to adapt to a large or infinite number of tasks. Since we would like to apply our framework to a variety of learning problems, from classification to reinforcement learning, we introduce a generic notion of a learning task below. Formally, each task 𝒯={ℒ​(𝐱1,𝐚1,…,𝐱H,𝐚H),q​(𝐱1),q​(𝐱t+1|𝐱t,𝐚t),H}𝒯ℒsubscript𝐱1subscript𝐚1…subscript𝐱𝐻subscript𝐚𝐻𝑞subscript𝐱1𝑞conditionalsubscript𝐱𝑡1subscript𝐱𝑡subscript𝐚𝑡𝐻\\mathcal{T}=\\{\\mathcal{L}(\\mathbf{x}_{1},\\mathbf{a}_{1},\\dots,\\mathbf{x}_{H},\\mathbf{a}_{H}),q(\\mathbf{x}_{1}),q(\\mathbf{x}_{t+1}|\\mathbf{x}_{t},\\mathbf{a}_{t}),H\\} consists of a loss function ℒℒ\\mathcal{L}, a distribution over initial observations q​(𝐱1)𝑞subscript𝐱1q(\\mathbf{x}_{1}), a transition distribution q​(𝐱t+1|𝐱t,𝐚t)𝑞conditionalsubscript𝐱𝑡1subscript𝐱𝑡subscript𝐚𝑡q(\\mathbf{x}_{t+1}|\\mathbf{x}_{t},\\mathbf{a}_{t}), and an episode length H𝐻H. In i.i.d. 
supervised learning problems, the length H=1𝐻1H\\!=\\!1. The model may generate samples of length H𝐻H by choosing an output 𝐚tsubscript𝐚𝑡\\mathbf{a}_{t} at each time t𝑡t. The loss ℒ​(𝐱1,𝐚1,…,𝐱H,𝐚H)→ℝ→ℒsubscript𝐱1subscript𝐚1…subscript𝐱𝐻subscript𝐚𝐻ℝ\\mathcal{L}(\\mathbf{x}_{1},\\mathbf{a}_{1},\\dots,\\mathbf{x}_{H},\\mathbf{a}_{H})\\rightarrow\\mathbb{R}, provides task-specific feedback, which might be in the form of a misclassification loss or a cost function in a Markov decision process. ", "title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks" }, { "id": "1703.03400_all_7", "text": " In our meta-learning scenario, we consider a distribution over tasks p​(𝒯)𝑝𝒯p(\\mathcal{T}) that we want our model to be able to adapt to. In the K𝐾K-shot learning setting, the model is trained to learn a new task 𝒯isubscript𝒯𝑖\\mathcal{T}_{i} drawn from p​(𝒯)𝑝𝒯p(\\mathcal{T}) from only K𝐾K samples drawn from qisubscript𝑞𝑖q_{i} and feedback ℒ𝒯isubscriptℒsubscript𝒯𝑖\\mathcal{L}_{\\mathcal{T}_{i}} generated by 𝒯isubscript𝒯𝑖\\mathcal{T}_{i}. During meta-training, a task 𝒯isubscript𝒯𝑖\\mathcal{T}_{i} is sampled from p​(𝒯)𝑝𝒯p(\\mathcal{T}), the model is trained with K𝐾K samples and feedback from the corresponding loss ℒ𝒯isubscriptℒsubscript𝒯𝑖\\mathcal{L}_{\\mathcal{T}_{i}} from 𝒯isubscript𝒯𝑖\\mathcal{T}_{i}, and then tested on new samples from 𝒯isubscript𝒯𝑖\\mathcal{T}_{i}. The model f𝑓f is then improved by considering how the test error on new data from qisubscript𝑞𝑖q_{i} changes with respect to the parameters. In effect, the test error on sampled tasks 𝒯isubscript𝒯𝑖\\mathcal{T}_{i} serves as the training error of the meta-learning process. At the end of meta-training, new tasks are sampled from p​(𝒯)𝑝𝒯p(\\mathcal{T}), and meta-performance is measured by the model’s performance after learning from K𝐾K samples. Generally, tasks used for meta-testing are held out during meta-training. ", "title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks" }, { "id": "1703.03400_all_8", "text": " In contrast to prior work, which has sought to train recurrent neural networks that ingest entire datasets (Santoro et al., 2016; Duan et al., 2016b) or feature embeddings that can be combined with nonparametric methods at test time (Vinyals et al., 2016; Koch, 2015), we propose a method that can learn the parameters of any standard model via meta-learning in such a way as to prepare that model for fast adaptation. The intuition behind this approach is that some internal representations are more transferrable than others. For example, a neural network might learn internal features that are broadly applicable to all tasks in p​(𝒯)𝑝𝒯p(\\mathcal{T}), rather than a single individual task. How can we encourage the emergence of such general-purpose representations? We take an explicit approach to this problem: since the model will be fine-tuned using a gradient-based learning rule on a new task, we will aim to learn a model in such a way that this gradient-based learning rule can make rapid progress on new tasks drawn from p​(𝒯)𝑝𝒯p(\\mathcal{T}), without overfitting. In effect, we will aim to find model parameters that are sensitive to changes in the task, such that small changes in the parameters will produce large improvements on the loss function of any task drawn from p​(𝒯)𝑝𝒯p(\\mathcal{T}), when altered in the direction of the gradient of that loss (see Figure 1). 
We make no assumption on the form of the model, other than to assume that it is parametrized by some parameter vector $\theta$, and that the loss function is smooth enough in $\theta$ that we can use gradient-based learning techniques. ", "title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks" }, { "id": "1703.03400_all_9", "text": " Formally, we consider a model represented by a parametrized function $f_{\theta}$ with parameters $\theta$. When adapting to a new task $\mathcal{T}_{i}$, the model's parameters $\theta$ become $\theta_{i}^{\prime}$. In our method, the updated parameter vector $\theta_{i}^{\prime}$ is computed using one or more gradient descent updates on task $\mathcal{T}_{i}$. For example, when using one gradient update, $\theta_{i}^{\prime}=\theta-\alpha\nabla_{\theta}\mathcal{L}_{\mathcal{T}_{i}}(f_{\theta})$. The step size $\alpha$ may be fixed as a hyperparameter or meta-learned. For simplicity of notation, we will consider one gradient update for the rest of this section, but using multiple gradient updates is a straightforward extension. ", "title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks" }, { "id": "1703.03400_all_10", "text": " The model parameters are trained by optimizing for the performance of $f_{\theta_{i}^{\prime}}$ with respect to $\theta$ across tasks sampled from $p(\mathcal{T})$. More concretely, the meta-objective is as follows: $\min_{\theta}\sum_{\mathcal{T}_{i}\sim p(\mathcal{T})}\mathcal{L}_{\mathcal{T}_{i}}(f_{\theta_{i}^{\prime}})=\sum_{\mathcal{T}_{i}\sim p(\mathcal{T})}\mathcal{L}_{\mathcal{T}_{i}}(f_{\theta-\alpha\nabla_{\theta}\mathcal{L}_{\mathcal{T}_{i}}(f_{\theta})})$. Note that the meta-optimization is performed over the model parameters $\theta$, whereas the objective is computed using the updated model parameters $\theta^{\prime}$. In effect, our proposed method aims to optimize the model parameters such that one or a small number of gradient steps on a new task will produce maximally effective behavior on that task. ", "title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks" }, { "id": "1703.03400_all_11", "text": " The meta-optimization across tasks is performed via stochastic gradient descent (SGD), such that the model parameters $\theta$ are updated as follows: $\theta\leftarrow\theta-\beta\nabla_{\theta}\sum_{\mathcal{T}_{i}\sim p(\mathcal{T})}\mathcal{L}_{\mathcal{T}_{i}}(f_{\theta_{i}^{\prime}})$ (1), where $\beta$ is the meta step size. The full algorithm, in the general case, is outlined in Algorithm 1. ", "title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks" }, { "id": "1703.03400_all_12", "text": " The MAML meta-gradient update involves a gradient through a gradient. 
Computationally, this requires an additional backward pass through f𝑓f to compute Hessian-vector products, which is supported by standard deep learning libraries such as TensorFlow (Abadi et al., 2016). In our experiments, we also include a comparison to dropping this backward pass and using a first-order approximation, which we discuss in Section 5.2. ", "title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks" }, { "id": "1703.03400_all_13", "text": " In this section, we discuss specific instantiations of our meta-learning algorithm for supervised learning and reinforcement learning. The domains differ in the form of loss function and in how data is generated by the task and presented to the model, but the same basic adaptation mechanism can be applied in both cases. ", "title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks" }, { "id": "1703.03400_all_14", "text": " Few-shot learning is well-studied in the domain of supervised tasks, where the goal is to learn a new function from only a few input/output pairs for that task, using prior data from similar tasks for meta-learning. For example, the goal might be to classify images of a Segway after seeing only one or a few examples of a Segway, with a model that has previously seen many other types of objects. Likewise, in few-shot regression, the goal is to predict the outputs of a continuous-valued function from only a few datapoints sampled from that function, after training on many functions with similar statistical properties. ", "title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks" }, { "id": "1703.03400_all_15", "text": " To formalize the supervised regression and classification problems in the context of the meta-learning definitions in Section 2.1, we can define the horizon H=1𝐻1H=1 and drop the timestep subscript on 𝐱tsubscript𝐱𝑡\\mathbf{x}_{t}, since the model accepts a single input and produces a single output, rather than a sequence of inputs and outputs. The task 𝒯isubscript𝒯𝑖\\mathcal{T}_{i} generates K𝐾K i.i.d. observations 𝐱𝐱\\mathbf{x} from qisubscript𝑞𝑖q_{i}, and the task loss is represented by the error between the model’s output for 𝐱𝐱\\mathbf{x} and the corresponding target values 𝐲𝐲\\mathbf{y} for that observation and task. ", "title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks" }, { "id": "1703.03400_all_16", "text": " Two common loss functions used for supervised classification and regression are cross-entropy and mean-squared error (MSE), which we will describe below; though, other supervised loss functions may be used as well. For regression tasks using mean-squared error, the loss takes the form: ℒ𝒯i​(fϕ)=∑𝐱(j),𝐲(j)∼𝒯i∥fϕ​(𝐱(j))−𝐲(j)∥22,subscriptℒsubscript𝒯𝑖subscript𝑓italic-ϕsubscriptsimilar-tosuperscript𝐱𝑗superscript𝐲𝑗subscript𝒯𝑖superscriptsubscriptdelimited-∥∥subscript𝑓italic-ϕsuperscript𝐱𝑗superscript𝐲𝑗22\\displaystyle\\vspace{-0.2cm}\\mathcal{L}_{\\mathcal{T}_{i}}(f_{\\phi})=\\!\\!\\!\\!\\!\\!\\sum_{\\mathbf{x}^{(j)},\\mathbf{y}^{(j)}\\sim\\mathcal{T}_{i}}\\lVert f_{\\phi}(\\mathbf{x}^{(j)})-\\mathbf{y}^{(j)}\\rVert_{2}^{2}, (2) where 𝐱(j),𝐲(j)superscript𝐱𝑗superscript𝐲𝑗\\mathbf{x}^{(j)},\\mathbf{y}^{(j)} are an input/output pair sampled from task 𝒯isubscript𝒯𝑖\\mathcal{T}_{i}. In K𝐾K-shot regression tasks, K𝐾K input/output pairs are provided for learning for each task. 
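Putting the one-step inner update, the meta-objective of Eq. (1), and the MSE loss of Eq. (2) together, the gradient-through-a-gradient update can be written compactly with an automatic-differentiation library. Below is a minimal sketch in Python using JAX on a toy linear-regression task; the toy model, the single hand-built task, and the step sizes are assumptions for illustration, not the authors' TensorFlow code.

```python
import jax
import jax.numpy as jnp

def task_loss(theta, x, y):
    # toy linear model f(x) = theta[0]*x + theta[1] with the MSE loss of Eq. (2)
    pred = theta[0] * x + theta[1]
    return jnp.mean((pred - y) ** 2)

def inner_update(theta, x, y, alpha=0.01):
    # one inner gradient step: theta_i' = theta - alpha * grad L_i(theta)
    return theta - alpha * jax.grad(task_loss)(theta, x, y)

def meta_objective(theta, tasks, alpha=0.01):
    # sum of post-update losses over a batch of sampled tasks
    total = 0.0
    for x_tr, y_tr, x_te, y_te in tasks:
        theta_prime = inner_update(theta, x_tr, y_tr, alpha)
        total = total + task_loss(theta_prime, x_te, y_te)
    return total

# meta-step of Eq. (1): jax.grad differentiates *through* the inner update,
# so the second-order (Hessian-vector product) terms are included automatically
x = jnp.linspace(-1.0, 1.0, 10)
tasks = [(x, 2.0 * x + 0.5, x, 2.0 * x + 0.5)]   # one hand-built toy task
theta = jnp.zeros(2)
beta = 0.001
theta = theta - beta * jax.grad(meta_objective)(theta, tasks)
```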
", "title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks" }, { "id": "1703.03400_all_17", "text": " Similarly, for discrete classification tasks with a cross-entropy loss, the loss takes the form: ℒ𝒯i​(fϕ)=∑𝐱(j),𝐲(j)∼𝒯isubscriptℒsubscript𝒯𝑖subscript𝑓italic-ϕsubscriptsimilar-tosuperscript𝐱𝑗superscript𝐲𝑗subscript𝒯𝑖\\displaystyle\\mathcal{L}_{\\mathcal{T}_{i}}(f_{\\phi})=\\!\\!\\!\\!\\!\\!\\sum_{\\mathbf{x}^{(j)},\\mathbf{y}^{(j)}\\sim\\mathcal{T}_{i}} 𝐲(j)​log⁡fϕ​(𝐱(j))superscript𝐲𝑗subscript𝑓italic-ϕsuperscript𝐱𝑗\\displaystyle\\mathbf{y}^{(j)}\\log f_{\\phi}(\\mathbf{x}^{(j)}) (3) +(1−𝐲(j))​log⁡(1−fϕ​(𝐱(j)))1superscript𝐲𝑗1subscript𝑓italic-ϕsuperscript𝐱𝑗\\displaystyle+(1-\\mathbf{y}^{(j)})\\log(1-f_{\\phi}(\\mathbf{x}^{(j)})) According to the conventional terminology, K𝐾K-shot classification tasks use K𝐾K input/output pairs from each class, for a total of N​K𝑁𝐾NK data points for N𝑁N-way classification. Given a distribution over tasks p​(𝒯i)𝑝subscript𝒯𝑖p(\\mathcal{T}_{i}), these loss functions can be directly inserted into the equations in Section 2.2 to perform meta-learning, as detailed in Algorithm 2. ", "title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks" }, { "id": "1703.03400_all_18", "text": " In reinforcement learning (RL), the goal of few-shot meta-learning is to enable an agent to quickly acquire a policy for a new test task using only a small amount of experience in the test setting. A new task might involve achieving a new goal or succeeding on a previously trained goal in a new environment. For example, an agent might learn to quickly figure out how to navigate mazes so that, when faced with a new maze, it can determine how to reliably reach the exit with only a few samples. In this section, we will discuss how MAML can be applied to meta-learning for RL. ", "title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks" }, { "id": "1703.03400_all_19", "text": " Each RL task 𝒯isubscript𝒯𝑖\\mathcal{T}_{i} contains an initial state distribution qi​(𝐱1)subscript𝑞𝑖subscript𝐱1q_{i}(\\mathbf{x}_{1}) and a transition distribution qi​(𝐱t+1|𝐱t,𝐚t)subscript𝑞𝑖conditionalsubscript𝐱𝑡1subscript𝐱𝑡subscript𝐚𝑡q_{i}(\\mathbf{x}_{t+1}|\\mathbf{x}_{t},\\mathbf{a}_{t}), and the loss ℒ𝒯isubscriptℒsubscript𝒯𝑖\\mathcal{L}_{\\mathcal{T}_{i}} corresponds to the (negative) reward function R𝑅R. The entire task is therefore a Markov decision process (MDP) with horizon H𝐻H, where the learner is allowed to query a limited number of sample trajectories for few-shot learning. Any aspect of the MDP may change across tasks in p​(𝒯)𝑝𝒯p(\\mathcal{T}). The model being learned, fθsubscript𝑓𝜃f_{\\theta}, is a policy that maps from states 𝐱tsubscript𝐱𝑡\\mathbf{x}_{t} to a distribution over actions 𝐚tsubscript𝐚𝑡\\mathbf{a}_{t} at each timestep t∈{1,…,H}𝑡1…𝐻t\\in\\{1,...,H\\}. The loss for task 𝒯isubscript𝒯𝑖\\mathcal{T}_{i} and model fϕsubscript𝑓italic-ϕf_{\\phi} takes the form ℒ𝒯i​(fϕ)=−𝔼𝐱t,𝐚t∼fϕ,q𝒯i​(∑t=1HRi​(𝐱t,𝐚t)).subscriptℒsubscript𝒯𝑖subscript𝑓italic-ϕsubscript𝔼formulae-sequencesimilar-tosubscript𝐱𝑡subscript𝐚𝑡subscript𝑓italic-ϕsubscript𝑞subscript𝒯𝑖delimited-()superscriptsubscript𝑡1𝐻subscript𝑅𝑖subscript𝐱𝑡subscript𝐚𝑡\\displaystyle\\mathcal{L}_{\\mathcal{T}_{i}}(f_{\\phi})=-\\mathbb{E}_{\\mathbf{x}_{t},\\mathbf{a}_{t}\\sim f_{\\phi},q_{\\mathcal{T}_{i}}}\\left(\\sum_{t=1}^{H}R_{i}(\\mathbf{x}_{t},\\mathbf{a}_{t})\\right). 
(4) In K𝐾K-shot reinforcement learning, K𝐾K rollouts from fθsubscript𝑓𝜃f_{\\theta} and task 𝒯isubscript𝒯𝑖\\mathcal{T}_{i}, (𝐱1,𝐚1,…​𝐱H)subscript𝐱1subscript𝐚1…subscript𝐱𝐻(\\mathbf{x}_{1},\\mathbf{a}_{1},...\\mathbf{x}_{H}), and the corresponding rewards R​(𝐱t,𝐚t)𝑅subscript𝐱𝑡subscript𝐚𝑡R(\\mathbf{x}_{t},\\mathbf{a}_{t}), may be used for adaptation on a new task 𝒯isubscript𝒯𝑖\\mathcal{T}_{i}. Since the expected reward is generally not differentiable due to unknown dynamics, we use policy gradient methods to estimate the gradient both for the model gradient update(s) and the meta-optimization. Since policy gradients are an on-policy algorithm, each additional gradient step during the adaptation of fθsubscript𝑓𝜃f_{\\theta} requires new samples from the current policy fθi′subscript𝑓subscript𝜃superscript𝑖′f_{\\theta_{i^{\\prime}}}. We detail the algorithm in Algorithm 3. This algorithm has the same structure as Algorithm 2, with the principal difference being that steps 5 and 8 require sampling trajectories from the environment corresponding to task 𝒯isubscript𝒯𝑖\\mathcal{T}_{i}. Practical implementations of this method may also use a variety of improvements recently proposed for policy gradient algorithms, including state or action-dependent baselines and trust regions (Schulman et al., 2015). ", "title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks" }, { "id": "1703.03400_all_20", "text": " The method that we propose in this paper addresses the general problem of meta-learning (Thrun & Pratt, 1998; Schmidhuber, 1987; Naik & Mammone, 1992), which includes few-shot learning. A popular approach for meta-learning is to train a meta-learner that learns how to update the parameters of the learner’s model (Bengio et al., 1992; Schmidhuber, 1992; Bengio et al., 1990). This approach has been applied to learning to optimize deep networks (Hochreiter et al., 2001; Andrychowicz et al., 2016; Li & Malik, 2017), as well as for learning dynamically changing recurrent networks (Ha et al., 2017). One recent approach learns both the weight initialization and the optimizer, for few-shot image recognition (Ravi & Larochelle, 2017). Unlike these methods, the MAML learner’s weights are updated using the gradient, rather than a learned update; our method does not introduce additional parameters for meta-learning nor require a particular learner architecture. ", "title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks" }, { "id": "1703.03400_all_21", "text": " Few-shot learning methods have also been developed for specific tasks such as generative modeling (Edwards & Storkey, 2017; Rezende et al., 2016) and image recognition (Vinyals et al., 2016). One successful approach for few-shot classification is to learn to compare new examples in a learned metric space using e.g. Siamese networks (Koch, 2015) or recurrence with attention mechanisms (Vinyals et al., 2016; Shyam et al., 2017; Snell et al., 2017). These approaches have generated some of the most successful results, but are difficult to directly extend to other problems, such as reinforcement learning. Our method, in contrast, is agnostic to the form of the model and to the particular learning task. ", "title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks" }, { "id": "1703.03400_all_22", "text": " Another approach to meta-learning is to train memory-augmented models on many tasks, where the recurrent learner is trained to adapt to new tasks as it is rolled out. 
Such networks have been applied to few-shot image recognition (Santoro et al., 2016; Munkhdalai & Yu, 2017) and learning “fast” reinforcement learning agents (Duan et al., 2016b; Wang et al., 2016). Our experiments show that our method outperforms the recurrent approach on few-shot classification. Furthermore, unlike these methods, our approach simply provides a good weight initialization and uses the same gradient descent update for both the learner and meta-update. As a result, it is straightforward to finetune the learner for additional gradient steps. ", "title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks" }, { "id": "1703.03400_all_23", "text": " Our approach is also related to methods for initialization of deep networks. In computer vision, models pretrained on large-scale image classification have been shown to learn effective features for a range of problems (Donahue et al., 2014). In contrast, our method explicitly optimizes the model for fast adaptability, allowing it to adapt to new tasks with only a few examples. Our method can also be viewed as explicitly maximizing sensitivity of new task losses to the model parameters. A number of prior works have explored sensitivity in deep networks, often in the context of initialization (Saxe et al., 2014; Kirkpatrick et al., 2016). Most of these works have considered good random initializations, though a number of papers have addressed data-dependent initializers (Krähenbühl et al., 2016; Salimans & Kingma, 2016), including learned initializations (Husken & Goerick, 2000; Maclaurin et al., 2015). In contrast, our method explicitly trains the parameters for sensitivity on a given task distribution, allowing for extremely efficient adaptation for problems such as K𝐾K-shot learning and rapid reinforcement learning in only one or a few gradient steps. ", "title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks" }, { "id": "1703.03400_all_24", "text": " The goal of our experimental evaluation is to answer the following questions: (1) Can MAML enable fast learning of new tasks? (2) Can MAML be used for meta-learning in multiple different domains, including supervised regression, classification, and reinforcement learning? (3) Can a model learned with MAML continue to improve with additional gradient updates and/or examples? ", "title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks" }, { "id": "1703.03400_all_25", "text": " All of the meta-learning problems that we consider require some amount of adaptation to new tasks at test-time. When possible, we compare our results to an oracle that receives the identity of the task (which is a problem-dependent representation) as an additional input, as an upper bound on the performance of the model. All of the experiments were performed using TensorFlow (Abadi et al., 2016), which allows for automatic differentiation through the gradient update(s) during meta-learning. The code is available online111Code for the regression and supervised experiments is at github.com/cbfinn/maml and code for the RL experiments is at github.com/cbfinn/maml_rl. ", "title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks" }, { "id": "1703.03400_all_26", "text": " We start with a simple regression problem that illustrates the basic principles of MAML. Each task involves regressing from the input to the output of a sine wave, where the amplitude and phase of the sinusoid are varied between tasks. 
Thus, p​(𝒯)𝑝𝒯p(\\mathcal{T}) is continuous, where the amplitude varies within (0.1,5.0)0.15.0(0.1,5.0) and the phase varies within (0,π)0𝜋(0,\\pi), and the input and output both have a dimensionality of 111. During training and testing, datapoints 𝐱𝐱\\mathbf{x} are sampled uniformly from (−5.0,5.0)5.05.0(-5.0,5.0). The loss is the mean-squared error between the prediction f​(𝐱)𝑓𝐱f(\\mathbf{x}) and true value. The regressor is a neural network model with 222 hidden layers of size 404040 with ReLU nonlinearities. When training with MAML, we use one gradient update with K=10𝐾10K=10 examples with a fixed step size α=0.01𝛼0.01\\alpha=0.01, and use Adam as the meta-optimizer (Kingma & Ba, 2015). The baselines are likewise trained with Adam. To evaluate performance, we fine-tune a single meta-learned model on varying numbers of K𝐾K examples, and compare performance to two baselines: (a) pretraining on all of the tasks, which entails training a network to regress to random sinusoid functions and then, at test-time, fine-tuning with gradient descent on the K𝐾K provided points, using an automatically tuned step size, and (b) an oracle which receives the true amplitude and phase as input. In Appendix C, we show comparisons to additional multi-task and adaptation methods. ", "title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks" }, { "id": "1703.03400_all_27", "text": " We evaluate performance by fine-tuning the model learned by MAML and the pretrained model on K={5,10,20}𝐾51020K=\\{5,10,20\\} datapoints. During fine-tuning, each gradient step is computed using the same K𝐾K datapoints. The qualitative results, shown in Figure 2 and further expanded on in Appendix B show that the learned model is able to quickly adapt with only 555 datapoints, shown as purple triangles, whereas the model that is pretrained using standard supervised learning on all tasks is unable to adequately adapt with so few datapoints without catastrophic overfitting. Crucially, when the K𝐾K datapoints are all in one half of the input range, the model trained with MAML can still infer the amplitude and phase in the other half of the range, demonstrating that the MAML trained model f𝑓f has learned to model the periodic nature of the sine wave. Furthermore, we observe both in the qualitative and quantitative results (Figure 3 and Appendix B) that the model learned with MAML continues to improve with additional gradient steps, despite being trained for maximal performance after one gradient step. This improvement suggests that MAML optimizes the parameters such that they lie in a region that is amenable to fast adaptation and is sensitive to loss functions from p​(𝒯)𝑝𝒯p(\\mathcal{T}), as discussed in Section 2.2, rather than overfitting to parameters θ𝜃\\theta that only improve after one step. ", "title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks" }, { "id": "1703.03400_all_28", "text": " To evaluate MAML in comparison to prior meta-learning and few-shot learning algorithms, we applied our method to few-shot image recognition on the Omniglot (Lake et al., 2011) and MiniImagenet datasets. The Omniglot dataset consists of 20 instances of 1623 characters from 50 different alphabets. Each instance was drawn by a different person. The MiniImagenet dataset was proposed by Ravi & Larochelle (2017), and involves 64 training classes, 12 validation classes, and 24 test classes. 
The Omniglot and MiniImagenet image recognition tasks are the most common recently used few-shot learning benchmarks (Vinyals et al., 2016; Santoro et al., 2016; Ravi & Larochelle, 2017). We follow the experimental protocol proposed by Vinyals et al. (2016), which involves fast learning of N𝑁N-way classification with 1 or 5 shots. The problem of N𝑁N-way classification is set up as follows: select N𝑁N unseen classes, provide the model with K𝐾K different instances of each of the N𝑁N classes, and evaluate the model’s ability to classify new instances within the N𝑁N classes. For Omniglot, we randomly select 120012001200 characters for training, irrespective of alphabet, and use the remaining for testing. The Omniglot dataset is augmented with rotations by multiples of 909090 degrees, as proposed by Santoro et al. (2016). ", "title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks" }, { "id": "1703.03400_all_29", "text": " Our model follows the same architecture as the embedding function used by Vinyals et al. (2016), which has 4 modules with a 3×3333\\times 3 convolutions and 646464 filters, followed by batch normalization (Ioffe & Szegedy, 2015), a ReLU nonlinearity, and 2×2222\\times 2 max-pooling. The Omniglot images are downsampled to 28×28282828\\times 28, so the dimensionality of the last hidden layer is 646464. As in the baseline classifier used by Vinyals et al. (2016), the last layer is fed into a softmax. For Omniglot, we used strided convolutions instead of max-pooling. For MiniImagenet, we used 323232 filters per layer to reduce overfitting, as done by (Ravi & Larochelle, 2017). In order to also provide a fair comparison against memory-augmented neural networks (Santoro et al., 2016) and to test the flexibility of MAML, we also provide results for a non-convolutional network. For this, we use a network with 444 hidden layers with sizes 256256256, 128128128, 646464, 646464, each including batch normalization and ReLU nonlinearities, followed by a linear layer and softmax. For all models, the loss function is the cross-entropy error between the predicted and true class. Additional hyperparameter details are included in Appendix A.1. ", "title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks" }, { "id": "1703.03400_all_30", "text": " We present the results in Table 1. The convolutional model learned by MAML compares well to the state-of-the-art results on this task, narrowly outperforming the prior methods. Some of these existing methods, such as matching networks, Siamese networks, and memory models are designed with few-shot classification in mind, and are not readily applicable to domains such as reinforcement learning. Additionally, the model learned with MAML uses fewer overall parameters compared to matching networks and the meta-learner LSTM, since the algorithm does not introduce any additional parameters beyond the weights of the classifier itself. Compared to these prior methods, memory-augmented neural networks (Santoro et al., 2016) specifically, and recurrent meta-learning models in general, represent a more broadly applicable class of methods that, like MAML, can be used for other tasks such as reinforcement learning (Duan et al., 2016b; Wang et al., 2016). However, as shown in the comparison, MAML significantly outperforms memory-augmented networks and the meta-learner LSTM on 5-way Omniglot and MiniImagenet classification, both in the 111-shot and 555-shot case. 
", "title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks" }, { "id": "1703.03400_all_31", "text": " A significant computational expense in MAML comes from the use of second derivatives when backpropagating the meta-gradient through the gradient operator in the meta-objective (see Equation (1)). On MiniImagenet, we show a comparison to a first-order approximation of MAML, where these second derivatives are omitted. Note that the resulting method still computes the meta-gradient at the post-update parameter values θi′superscriptsubscript𝜃𝑖′\\theta_{i}^{\\prime}, which provides for effective meta-learning. Surprisingly however, the performance of this method is nearly the same as that obtained with full second derivatives, suggesting that most of the improvement in MAML comes from the gradients of the objective at the post-update parameter values, rather than the second order updates from differentiating through the gradient update. Past work has observed that ReLU neural networks are locally almost linear (Goodfellow et al., 2015), which suggests that second derivatives may be close to zero in most cases, partially explaining the good performance of the first-order approximation. This approximation removes the need for computing Hessian-vector products in an additional backward pass, which we found led to roughly 33%percent3333\\% speed-up in network computation. ", "title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks" }, { "id": "1703.03400_all_32", "text": " To evaluate MAML on reinforcement learning problems, we constructed several sets of tasks based off of the simulated continuous control environments in the rllab benchmark suite (Duan et al., 2016a). We discuss the individual domains below. In all of the domains, the model trained by MAML is a neural network policy with two hidden layers of size 100100100, with ReLU nonlinearities. The gradient updates are computed using vanilla policy gradient (REINFORCE) (Williams, 1992), and we use trust-region policy optimization (TRPO) as the meta-optimizer (Schulman et al., 2015). In order to avoid computing third derivatives, we use finite differences to compute the Hessian-vector products for TRPO. For both learning and meta-learning updates, we use the standard linear feature baseline proposed by Duan et al. (2016a), which is fitted separately at each iteration for each sampled task in the batch. We compare to three baseline models: (a) pretraining one policy on all of the tasks and then fine-tuning, (b) training a policy from randomly initialized weights, and (c) an oracle policy which receives the parameters of the task as input, which for the tasks below corresponds to a goal position, goal direction, or goal velocity for the agent. The baseline models of (a) and (b) are fine-tuned with gradient descent with a manually tuned step size. Videos of the learned policies can be viewed at sites.google.com/view/maml ", "title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks" }, { "id": "1703.03400_all_33", "text": " 2D Navigation. In our first meta-RL experiment, we study a set of tasks where a point agent must move to different goal positions in 2D, randomly chosen for each task within a unit square. The observation is the current 2D position, and actions correspond to velocity commands clipped to be in the range (−0.1,0.1)0.10.1(-0.1,0.1). 
The reward is the negative squared distance to the goal, and episodes terminate when the agent is within 0.010.010.01 of the goal or at the horizon of H=100𝐻100H=100. The policy was trained with MAML to maximize performance after 111 policy gradient update using 202020 trajectories. Additional hyperparameter settings for this problem and the following RL problems are in Appendix A.2. In our evaluation, we compare adaptation to a new task with up to 4 gradient updates, each with 404040 samples. The results in Figure 4 show the adaptation performance of models that are initialized with MAML, conventional pretraining on the same set of tasks, random initialization, and an oracle policy that receives the goal position as input. The results show that MAML can learn a model that adapts much more quickly in a single gradient update, and furthermore continues to improve with additional updates. ", "title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks" }, { "id": "1703.03400_all_34", "text": " Locomotion. To study how well MAML can scale to more complex deep RL problems, we also study adaptation on high-dimensional locomotion tasks with the MuJoCo simulator  (Todorov et al., 2012). The tasks require two simulated robots – a planar cheetah and a 3D quadruped (the “ant”) – to run in a particular direction or at a particular velocity. In the goal velocity experiments, the reward is the negative absolute value between the current velocity of the agent and a goal, which is chosen uniformly at random between 0.00.00.0 and 2.02.02.0 for the cheetah and between 0.00.00.0 and 3.03.03.0 for the ant. In the goal direction experiments, the reward is the magnitude of the velocity in either the forward or backward direction, chosen at random for each task in p​(𝒯)𝑝𝒯p(\\mathcal{T}). The horizon is H=200𝐻200H=200, with 202020 rollouts per gradient step for all problems except the ant forward/backward task, which used 404040 rollouts per step. The results in Figure 5 show that MAML learns a model that can quickly adapt its velocity and direction with even just a single gradient update, and continues to improve with more gradient steps. The results also show that, on these challenging tasks, the MAML initialization substantially outperforms random initialization and pretraining. In fact, pretraining is in some cases worse than random initialization, a fact observed in prior RL work (Parisotto et al., 2016). ", "title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks" }, { "id": "1703.03400_all_35", "text": " We introduced a meta-learning method based on learning easily adaptable model parameters through gradient descent. Our approach has a number of benefits. It is simple and does not introduce any learned parameters for meta-learning. It can be combined with any model representation that is amenable to gradient-based training, and any differentiable objective, including classification, regression, and reinforcement learning. Lastly, since our method merely produces a weight initialization, adaptation can be performed with any amount of data and any number of gradient steps, though we demonstrate state-of-the-art results on classification with only one or five examples per class. We also show that our method can adapt an RL agent using policy gradients and a very modest amount of experience. 
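For concreteness, the 2D navigation task described above (point agent, velocity actions clipped to (-0.1, 0.1), reward equal to the negative squared distance to the goal, termination within 0.01 of the goal or at horizon H=100) could be sketched as the following toy environment; the class name and reset/step interface are assumptions, not the rllab benchmark code.

```python
import numpy as np

class PointNav2D:
    """Toy sketch of the 2D navigation task: goal sampled per task in the
    unit square, velocity actions clipped to (-0.1, 0.1), reward equal to
    the negative squared distance to the goal."""

    def __init__(self, goal, horizon=100):
        self.goal = np.asarray(goal, dtype=float)
        self.horizon = horizon

    def reset(self):
        self.pos = np.zeros(2)
        self.t = 0
        return self.pos.copy()

    def step(self, action):
        self.pos = self.pos + np.clip(action, -0.1, 0.1)
        self.t += 1
        reward = -float(np.sum((self.pos - self.goal) ** 2))
        done = np.linalg.norm(self.pos - self.goal) < 0.01 or self.t >= self.horizon
        return self.pos.copy(), reward, done

env = PointNav2D(goal=np.random.default_rng(0).uniform(0.0, 1.0, size=2))
obs = env.reset()
obs, reward, done = env.step(np.array([0.05, 0.05]))
```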
", "title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks" }, { "id": "1703.03400_all_36", "text": " Reusing knowledge from past tasks may be a crucial ingredient in making high-capacity scalable models, such as deep neural networks, amenable to fast training with small datasets. We believe that this work is one step toward a simple and general-purpose meta-learning technique that can be applied to any problem and any model. Further research in this area can make multitask initialization a standard ingredient in deep learning and reinforcement learning. ", "title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks" } ]
Why didn't the authors intend a "chord" to represent a more meaningful unit in music, such as a beat?
The authors intend a "chord" to represent simultaneous notes to intuitively models a polyphonic structure of piano performance that is defined by its temporal progression [5]. More fine-grained resolution than the beat-based resolution can reflect trivial changes in expression that varies by simultaneous note groups, such as a syncopation [6].
[ 5, 6 ]
[ { "id": "2208.14867_all_0", "text": " Computational modeling of expressive music performance focuses on mimicking human behaviors that convey the music (1, 2). For piano performance, one common task is to render an expressive performance from a quantized musical score. It aims to reproduce the loudness and timing of musical notes that fits to the given score. Most of the conventional studies have used musical scores of Western piano music that includes sufficient amount of guidelines for musical expressions (3, 4, 5, 6). Recent studies using deep learning methods have successfully rendered plausible piano performances that are comparable to those of professional pianists from the given Classical scores (7, 8, 9). ", "title": "Sketching the Expression: Flexible Rendering of Expressive Piano Performance with Self-Supervised Learning" }, { "id": "2208.14867_all_1", "text": " More recently, it has increased attention to controlling music performance by manipulating one or more disentangled representations from a generative model. These representations are sensitive to the variation of certain factors while invariant to other factors . Maezawa et al. aimed to control a performer’s interpretation through a conditional variational recurrent neural network (CVRNN) . They intended to disentangle a time-variant representation of the personal interpretation. In the acoustic domain, Tan et al. proposed a generative model based on a Gaussian mixture variational autoencoder (GM-VAE) that separately controlled dynamics and articulations of the notes . Their novelty lied in learning multiple representations of high-level attributes from the low-level spectrogram. ", "title": "Sketching the Expression: Flexible Rendering of Expressive Piano Performance with Self-Supervised Learning" }, { "id": "2208.14867_all_2", "text": " However, these studies have constrained musical creativity. Maezawa et al. controlled musical expression only through quantized features from the musical scores. Tan et al. did not consider controlling tempo or timing with a latent representation. These methods may have restricted any potential for rendering piano performances with flexible musical expression. Musical creativity can be expanded not only by composers but also by performers who can elastically choose various strategies to highlight multiple nuances or emotions (13, 14, 15). Moreover, the music generation field can be also broadened if static music created by automatic composition systems can be easily colored with realistic and elastic expression . ", "title": "Sketching the Expression: Flexible Rendering of Expressive Piano Performance with Self-Supervised Learning" }, { "id": "2208.14867_all_3", "text": " Therefore, we attempt a new approach that renders piano performances with flexible musical expressions. We disregard a typical assumption from previous studies that a performer must follow a composer’s intent (17, 18, 19, 4). According to the literature, performers learn to identify or imitate \"expressive models\", or explicit planning, of existing piano performances . We focus on this attribute, defining it as a higher-level sketch of the expressive attributes (i.e. dynamics, articulation, and tempo ) that the performer draws based on a personal interpretation of the musical piece (20, 4, 11). We also assume that the remaining attribute represents common performing strategies that are connected to certain musical patterns, while these strategies slightly differ across performers (22, 23). 
We call this attribute as a structural attribute that belongs to given note structures of a musical piece. ", "title": "Sketching the Expression: Flexible Rendering of Expressive Piano Performance with Self-Supervised Learning" }, { "id": "2208.14867_all_4", "text": " In this study, we propose a generative model that can flexibly control the entire musical expression, or the explicit planning, of symbolic piano performance111https://github.com/rsy1026/sketching_piano_expression. Our system is based on a conditional variational autoencoder (CVAE) that is modified for sequential data (24, 11). The system generates multiple parameters of piano performance from a note structure of a musical passage, using disentangled representations for the explicit planning and structural attribute. ", "title": "Sketching the Expression: Flexible Rendering of Expressive Piano Performance with Self-Supervised Learning" }, { "id": "2208.14867_all_5", "text": " We employ a self-supervised learning framework to force the latent representations to learn our target attributes (25, 26, 24). In addition, we facilitate independent control of the three expressive attributes–dynamics, articulation, and tempo–by utilizing an existing method that aligns the latent code with a target attribute (27, 28). Finally, we design a novel mechanism that intuitively models a polyphonic structure of piano performance. In particular, we insert intermediate steps for chordwise encoding and decoding of the piano performance to our encoder-decoder architecture, where a chord denotes a group of simultaneous notes. ", "title": "Sketching the Expression: Flexible Rendering of Expressive Piano Performance with Self-Supervised Learning" }, { "id": "2208.14867_all_6", "text": " Our approach has several contributions as follows: 1) Our system aims to control musical expression while maintaining any characteristics induced by a given musical structure; 2) We use self-supervised learning where new supervisory signals are involved in regularizing the latent representations effectively; 3) Our system aims to control multiple expressive attributes independently of each other; 4) Lastly, we leverage an intermediate step that projects a notewise representation into the chordwise in the middle of our system to intuitively model the polyphonic structure of piano performance. ", "title": "Sketching the Expression: Flexible Rendering of Expressive Piano Performance with Self-Supervised Learning" }, { "id": "2208.14867_all_7", "text": " We aim to build a generative model that factorizes expressive piano performance as the explicit planning and structural attribute. The model is based on a conditional variational autoencoder (CVAE) that reproduces performance parameters based on a given musical structure. ", "title": "Sketching the Expression: Flexible Rendering of Expressive Piano Performance with Self-Supervised Learning" }, { "id": "2208.14867_all_8", "text": " We extract features that represent a human performance and the corresponding musical score, following the conventional studies (19, 29, 11). ", "title": "Sketching the Expression: Flexible Rendering of Expressive Piano Performance with Self-Supervised Learning" }, { "id": "2208.14867_all_9", "text": " Performance Features. We extract three features that represent the expressive attributes of each performed note, respectively: MIDIVelocity is a MIDI velocity value that ranges from 24 to 104. IOIRatio represents an instantaneous variation in tempo. 
We compute an inter-onset-interval (IOI) between the onset of a note and the mean onset of the previous chord for both a performed note and the corresponding score note. Then, a ratio of performed IOI to score IOI is calculated, clipped between 0.125 and 8, and converted into a logarithmic scale . Articulation represents how much a note is shortened or lengthened compared to the instantaneous tempo. It is a ratio of a performed duration to an IOI value between the onset of a note and mean onset of the next chord . It is clipped between 0.25 and 4 and converted into a logarithmic scale. ", "title": "Sketching the Expression: Flexible Rendering of Expressive Piano Performance with Self-Supervised Learning" }, { "id": "2208.14867_all_10", "text": " Score Features. The features for a musical score represent eight categorical attributes for how the notes are composed: Pitch is a MIDI index number that ranges from 21 to 108. RelDuration and RelIOI are 11-class attributes of a quantized duration and IOI between a note onset and a previous chord, respectively. They range from 1 to 11, and each class represents a multiple of a 16th note’s length with respect to a given tempo (30, 31). IsTopVoice is a binary attribute of whether the note is the uppermost voice. It is heuristically computed regarding pitches and durations of surrounding notes. PositionInChord and NumInChord are 11-class attributes of a positional index of a note within its chord and the total number of notes in that chord, respectively, that range from 1 to 11. An index 1 for PositionInChord denotes the most bottom position. Staff is a binary attribute of the staff of a note, either of the G clef or F clef. IsDownbeat is a binary attribute of whether a note is at a downbeat or not. ", "title": "Sketching the Expression: Flexible Rendering of Expressive Piano Performance with Self-Supervised Learning" }, { "id": "2208.14867_all_11", "text": " Inspired by previous studies (4, 9, 8, 32), we build a two-step encoder and decoder: An encoder models both notewise and chordwise dependencies of the inputs, and a decoder reconstructs the notewise dependency from the chordwise representation and the notewise condition. We denote a chord as a group of notes that are hit simultaneously, regardless of the staff, so that they sound together at an instant time . Thus, learning the chordwise dependency is analogous to direct modeling of the temporal progression of the piano performance. Let ℳ∈ℝC×Nℳsuperscriptℝ𝐶𝑁\\mathcal{M}\\in\\mathbb{R}^{C\\times N} be a matrix that aligns serialized notes to their polyphonic structure, where C𝐶C and N𝑁N are the number of chords and the number of notes, respectively. Within the encoder, the notewise representation is sequentially average-pooled by ℳℳ\\mathcal{M} with dynamic kernel sizes where each size represents the number of notes in each chord. We denote this operation as N2C. In this way, we can directly model chord-level dependency of the note-level expressive parameters . In contrast, the decoder extends the chordwise representation from the encoder back to the notewise using the transposed alignment matrix ℳTsuperscriptℳ𝑇\\mathcal{M}^{T}, of which process we denote as C2N. Along this, the notewise embedding of the score features replenishes the notewise information for the output. 
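As a rough, non-authoritative sketch of how the IOIRatio and Articulation features described at the top of this passage could be computed, the snippet below follows the stated clipping ranges (0.125–8 and 0.25–4); the log base and the function names are our assumptions, not taken from the paper.

```python
import numpy as np

def ioi_ratio(perf_onset, perf_prev_chord_onset, score_onset, score_prev_chord_onset):
    """IOIRatio: performed IOI over score IOI, clipped to [0.125, 8] and log-scaled.
    The base-2 logarithm is an assumption; the paper only says 'logarithmic scale'."""
    perf_ioi = perf_onset - perf_prev_chord_onset
    score_ioi = score_onset - score_prev_chord_onset
    return np.log2(np.clip(perf_ioi / score_ioi, 0.125, 8.0))

def articulation(perf_duration, perf_onset, perf_next_chord_onset):
    """Articulation: performed duration over the IOI to the next chord's mean onset,
    clipped to [0.25, 4] and log-scaled (same log-base assumption as above)."""
    ioi = perf_next_chord_onset - perf_onset
    return np.log2(np.clip(perf_duration / ioi, 0.25, 4.0))

# toy usage: a note played slightly late and slightly detached
print(ioi_ratio(1.05, 0.5, 1.0, 0.5), articulation(0.4, 1.05, 1.55))
```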
Consequently, notes in the same chord share any information of their corresponding chord, while maintaining their differences by the conditional score features: N2C​(e)=ℳ⋅e∑n=1Nℳn,1:C,C2N​(e)=ℳT⋅eformulae-sequenceN2C𝑒⋅ℳ𝑒superscriptsubscript𝑛1𝑁subscriptℳ:𝑛1𝐶C2N𝑒⋅superscriptℳT𝑒\\text{N2C}(e)=\\frac{\\mathcal{M}\\cdot e}{\\textstyle\\sum_{n=1}^{N}\\mathcal{M}_{n,1:C}},\\quad\\text{C2N}(e)=\\mathcal{M}^{\\text{T}}\\cdot e (1) where e𝑒e denotes a notewise or chordwise representation. ", "title": "Sketching the Expression: Flexible Rendering of Expressive Piano Performance with Self-Supervised Learning" }, { "id": "2208.14867_all_12", "text": " Our proposed network is generally based on the conditional VAE framework (34, 35). Concretely, we use the sequential VAE that is modified for generation of sequential data (36, 24, 11). Let x={xn}n=1N𝑥superscriptsubscriptsubscript𝑥𝑛𝑛1𝑁x=\\{x_{n}\\}_{n=1}^{N} be a sequence of the performance features, and y={yn}n=1N𝑦superscriptsubscriptsubscript𝑦𝑛𝑛1𝑁y=\\{y_{n}\\}_{n=1}^{N} be a sequence of the conditional score features. Our network has two chordwise latent variables z(pln)={zc(pln)}c=1C∈ℝC×d(pln)superscript𝑧plnsuperscriptsubscriptsubscriptsuperscript𝑧pln𝑐𝑐1𝐶superscriptℝ𝐶superscript𝑑plnz^{(\\text{pln})}=\\{z^{(\\text{pln})}_{c}\\}_{c=1}^{C}\\in\\mathbb{R}^{C\\times d^{(\\text{pln})}} and z(str)={zc(str)}c=1C∈ℝC×d(str)superscript𝑧strsuperscriptsubscriptsubscriptsuperscript𝑧str𝑐𝑐1𝐶superscriptℝ𝐶superscript𝑑strz^{(\\text{str})}=\\{z^{(\\text{str})}_{c}\\}_{c=1}^{C}\\in\\mathbb{R}^{C\\times d^{(\\text{str})}} that represent explicit planning and structural attribute, where d(pln)superscript𝑑plnd^{(\\text{pln})} and d(str)superscript𝑑strd^{(\\text{str})} are the sizes of z(pln)superscript𝑧plnz^{(\\text{pln})} and z(str)superscript𝑧strz^{(\\text{str})}, respectively. Our network generates notewise performance parameters x𝑥x from these latent variables and given score features y𝑦y. The overall architecture of our proposed system is illustrated in Figure 1. ", "title": "Sketching the Expression: Flexible Rendering of Expressive Piano Performance with Self-Supervised Learning" }, { "id": "2208.14867_all_13", "text": " Generation. A probabilistic generator parameterized by θ𝜃\\theta produces the note-level performance parameters x𝑥x from the two latent variables z(pln)superscript𝑧plnz^{(\\text{pln})} and z(str)superscript𝑧strz^{(\\text{str})} with the given condition y𝑦y. We note that the latent variables are in chord-level. This decreases a computational cost and also enables intuitive modeling of polyphonic piano performance where each time step represents a stack of notes and the simultaneous notes share common characteristics : pθ​(x,y,z(pln),z(str))=pθ​(x|z(pln),z(str),y)pθ​(z(pln))∏c=1Cpθ​(zc(str)|z<c(str),y≤c(chd))subscript𝑝𝜃𝑥𝑦superscript𝑧plnsuperscript𝑧strsubscript𝑝𝜃conditional𝑥superscript𝑧plnsuperscript𝑧str𝑦subscript𝑝𝜃superscript𝑧plnsuperscriptsubscriptproduct𝑐1𝐶subscript𝑝𝜃conditionalsubscriptsuperscript𝑧str𝑐subscriptsuperscript𝑧strabsent𝑐subscriptsuperscript𝑦chdabsent𝑐\\begin{split}p_{\\theta}(x,y,z^{(\\text{pln})},z^{(\\text{str})})=&p_{\\theta}(x|z^{(\\text{pln})},z^{(\\text{str})},y)\\\\ p_{\\theta}(z^{(\\text{pln})})&\\prod_{c=1}^{C}p_{\\theta}(z^{(\\text{str})}_{c}|z^{(\\text{str})}_{<c},y^{(\\text{chd})}_{\\leq c})\\end{split} (2) where y(chd)=N2C​(ey)superscript𝑦chdN2Csubscript𝑒𝑦y^{(\\text{chd})}=\\text{N2C}(e_{y}) is the chordwise embedding, and eysubscript𝑒𝑦e_{y} is the notewise embedding for y𝑦y. 
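To make the N2C and C2N operations of Eq. (1) concrete, here is a minimal NumPy sketch assuming a binary chord-note alignment matrix; `build_alignment` and the toy sizes are illustrative helpers of ours, not the authors' implementation.

```python
import numpy as np

def build_alignment(chord_sizes):
    """Hypothetical helper: binary alignment matrix M (C x N) with M[c, n] = 1
    iff note n belongs to chord c (notes assumed serialized chord by chord)."""
    C, N = len(chord_sizes), sum(chord_sizes)
    M = np.zeros((C, N))
    start = 0
    for c, size in enumerate(chord_sizes):
        M[c, start:start + size] = 1.0
        start += size
    return M

def n2c(M, e_note):
    """Average-pool notewise embeddings (N x d) into chordwise embeddings (C x d), as in Eq. (1)."""
    return (M @ e_note) / M.sum(axis=1, keepdims=True)

def c2n(M, e_chord):
    """Broadcast chordwise embeddings (C x d) back to their notes (N x d), as in Eq. (1)."""
    return M.T @ e_chord

M = build_alignment([3, 1, 2])      # 3 chords containing 3, 1 and 2 notes
e = np.random.randn(6, 8)           # toy notewise representation, d = 8
assert n2c(M, e).shape == (3, 8) and c2n(M, n2c(M, e)).shape == (6, 8)
```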
We assume that the prior of zc(pln)subscriptsuperscript𝑧pln𝑐z^{(\\text{pln})}_{c} is a standard normal distribution. In contrast, zc(str)subscriptsuperscript𝑧str𝑐z^{(\\text{str})}_{c} is sampled from a sequential prior (37, 36, 24), conditioned on both previous latent variables and chordwise score features: zc(str)∼𝒩(μ(prior),diag(σ(prior)2)z^{(\\text{str})}_{c}\\sim\\mathcal{N}(\\mu^{(\\text{prior})},\\text{diag}(\\sigma^{(\\text{prior})^{2}}), where (μ(prior),σ(prior))=f(prior)​(z<c(str),y≤c(chd))superscript𝜇priorsuperscript𝜎priorsuperscript𝑓priorsubscriptsuperscript𝑧strabsent𝑐subscriptsuperscript𝑦chdabsent𝑐(\\mu^{(\\text{prior})},\\sigma^{(\\text{prior})})=f^{(\\text{prior})}(z^{(\\text{str})}_{<c},y^{(\\text{chd})}_{\\leq c}), and f(prior)superscript𝑓priorf^{(\\text{prior})} is a unidirectional recurrent neural network. The latent representations and y(chd)superscript𝑦chdy^{(\\text{chd})} pass through the decoder as shown in Figure 1. During training, the model predicts the intermediate chordwise output that is computed as N2C​(x)N2C𝑥\\text{N2C}(x). This is to enhance reconstruction power of our system, propagating accurate information of chord-level attributes to the final decoder. The intermediate activation is then extended to the notewise through the C2N operation. The note-level parameters are generated autoregressively based on this activation and the notewise score feature. We use teacher forcing during training . ", "title": "Sketching the Expression: Flexible Rendering of Expressive Piano Performance with Self-Supervised Learning" }, { "id": "2208.14867_all_14", "text": " Inference. A probabilistic encoder parameterized by ϕitalic-ϕ\\phi approximates the posterior distibutions of the latent representations z(pln)superscript𝑧plnz^{(\\text{pln})} and z(str)superscript𝑧strz^{(\\text{str})} from the performance input x𝑥x and conditional score input y𝑦y: qϕ​(z(pln),z(str)|x,y)=subscript𝑞italic-ϕsuperscript𝑧plnconditionalsuperscript𝑧str𝑥𝑦absent\\displaystyle q_{\\phi}(z^{(\\text{pln})},z^{(\\text{str})}|x,y)= qϕ​(z(pln)|x(chd))subscript𝑞italic-ϕconditionalsuperscript𝑧plnsuperscript𝑥chd\\displaystyle q_{\\phi}(z^{(\\text{pln})}|x^{(\\text{chd})}) (3) ∏c=1Cqϕ​(zc(str)|x≤c(chd),y≤c(chd))superscriptsubscriptproduct𝑐1𝐶subscript𝑞italic-ϕconditionalsubscriptsuperscript𝑧str𝑐subscriptsuperscript𝑥chdabsent𝑐subscriptsuperscript𝑦chdabsent𝑐\\displaystyle\\prod_{c=1}^{C}q_{\\phi}(z^{(\\text{str})}_{c}|x^{(\\text{chd})}_{\\leq c},y^{(\\text{chd})}_{\\leq c}) where x(chd)=N2C​(ex)superscript𝑥chdN2Csubscript𝑒𝑥x^{(\\text{chd})}=\\text{N2C}(e_{x}) is the chordwise embedding, and exsubscript𝑒𝑥e_{x} is the notewise embedding for x𝑥x. The posterior distributions of zc(pln)subscriptsuperscript𝑧pln𝑐z^{(\\text{pln})}_{c} and zc(str)subscriptsuperscript𝑧str𝑐z^{(\\text{str})}_{c} are approximated by distribution parameters encoded by f(pln)​(x(chd))superscript𝑓plnsuperscript𝑥chdf^{(\\text{pln})}(x^{(\\text{chd})}) and f(str)​(x(chd),y(chd))superscript𝑓strsuperscript𝑥chdsuperscript𝑦chdf^{(\\text{str})}(x^{(\\text{chd})},y^{(\\text{chd})}), where f(pln)superscript𝑓plnf^{(\\text{pln})} and f(str)superscript𝑓strf^{(\\text{str})} are bidirectional and unidirectional recurrent neural networks, respectively. We note that z(pln)superscript𝑧plnz^{(\\text{pln})} is independent of the score features y𝑦y. This allows a flexible transfer of the explicit planning among other musical pieces. 
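The sequential prior over the structural latent is described only as a unidirectional recurrent network that maps previous latents and chordwise score features to Gaussian parameters; the GRU cell, layer sizes, and class name below are therefore our assumptions, shown purely to illustrate the recurrence.

```python
import torch
import torch.nn as nn

class SequentialPriorSketch(nn.Module):
    """Illustrative sketch of p(z_c^(str) | z_<c^(str), y_<=c^(chd)): a unidirectional
    recurrent cell consumes the previous latent and the current chordwise score
    embedding, and outputs Gaussian parameters for the next chord's latent."""
    def __init__(self, d_str=16, d_y=64, d_hidden=128):
        super().__init__()
        self.rnn = nn.GRUCell(d_str + d_y, d_hidden)   # GRU is an assumption; the paper only says "unidirectional RNN"
        self.to_mu = nn.Linear(d_hidden, d_str)
        self.to_logvar = nn.Linear(d_hidden, d_str)

    def forward(self, y_chd):                          # y_chd: (C, d_y) chordwise score embeddings
        h = torch.zeros(1, self.rnn.hidden_size)
        z_prev = torch.zeros(1, self.to_mu.out_features)
        samples = []
        for c in range(y_chd.size(0)):
            h = self.rnn(torch.cat([z_prev, y_chd[c:c + 1]], dim=-1), h)
            mu, logvar = self.to_mu(h), self.to_logvar(h)
            z_prev = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterized sample
            samples.append(z_prev)
        return torch.cat(samples, dim=0)               # (C, d_str)

z_str = SequentialPriorSketch()(torch.randn(10, 64))   # prior samples for 10 chords
```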
On the other hand, $z^{(\text{str})}$ is constrained by $y$ since the structural attributes are dependent on the note structure. ", "title": "Sketching the Expression: Flexible Rendering of Expressive Piano Performance with Self-Supervised Learning" }, { "id": "2208.14867_all_15", "text": " Training. We train the models $p_{\theta}$ and $q_{\phi}$ by approximating the marginal distribution of the performance features $x$ conditioned on the score features $y$. This requires maximizing the evidence lower bound (ELBO), which includes a regularization term given by the Kullback–Leibler divergence: $\mathcal{L}_{\text{VAE}}=\mathbb{E}_{q_{\phi}(z^{(\text{pln})},z^{(\text{str})}|x,y)}\left(\log p_{\theta}(x|z^{(\text{pln})},z^{(\text{str})},y)\right)+\mathbb{E}_{q_{\phi}(z^{(\text{pln})},z^{(\text{str})}|x,y)}\left(\log p_{\theta}(k|z^{(\text{pln})},z^{(\text{str})},y)\right)-\text{KL}(q_{\phi}(z^{(\text{pln})}|x)\|p_{\theta}(z^{(\text{pln})}))-\sum_{c=1}^{C}\text{KL}(q_{\phi}(z^{(\text{str})}_{c}|x^{(\text{chd})}_{\leq c},y^{(\text{chd})}_{\leq c})\|p_{\theta}(z^{(\text{str})}_{c}|z^{(\text{str})}_{<c},y^{(\text{chd})}_{\leq c}))$ (4) where $k=\text{N2C}(x)$ denotes the chordwise performance features. ", "title": "Sketching the Expression: Flexible Rendering of Expressive Piano Performance with Self-Supervised Learning" }, { "id": "2208.14867_all_16", "text": " We enhance disentanglement of the latent representations $z^{(\text{pln})}$ and $z^{(\text{str})}$ using four regularization tasks. ", "title": "Sketching the Expression: Flexible Rendering of Expressive Piano Performance with Self-Supervised Learning" }, { "id": "2208.14867_all_17", "text": " Prediction Tasks. We extract new supervisory signals for additional prediction tasks from the input data. We define a signal of explicit planning $I^{(\text{pln})}$ as a set of smoothed contours of the expressive parameters. It is extracted as a polynomial function predicted from the chordwise performance parameters $k$. We also derive a signal of the structural attribute as $I^{(\text{str})}=\text{sign}(k-I^{(\text{pln})})$, which represents normalized directions of the performance parameters. We train two discriminators $D^{(\text{pln})}$ and $D^{(\text{str})}$ that directly receive $z^{(\text{pln})}$ and $z^{(\text{str})}$, respectively. 
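A small sketch, under our own naming, of how the two supervisory signals just defined could be derived from a chordwise parameter curve $k$: a polynomial fit for the planning signal (degree 4, following the setting reported later in the paper) and its signed residual for the structural signal.

```python
import numpy as np

def planning_signal(k, degree=4):
    """Smoothed contour of one chordwise expressive parameter k (length C): a
    polynomial fit used as the explicit-planning signal I^(pln)."""
    t = np.arange(len(k))
    return np.polyval(np.polyfit(t, k, deg=degree), t)

def structure_signal(k, i_pln):
    """Normalized direction of the parameter around its smooth contour: I^(str) = sign(k - I^(pln))."""
    return np.sign(k - i_pln)

k = np.sin(np.linspace(0, 3, 32)) + 0.1 * np.random.randn(32)  # toy chordwise dynamics curve
i_pln = planning_signal(k)
i_str = structure_signal(k, i_pln)
```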
D(pln)superscript𝐷plnD^{(\\text{pln})} is composed of A𝐴A sub-discriminators where each discriminator Da(pln)subscriptsuperscript𝐷pln𝑎D^{(\\text{pln})}_{a} predicts a signal Ia(pln)subscriptsuperscript𝐼pln𝑎I^{(\\text{pln})}_{a} for each expressive attribute a𝑎a from za(pln)∈ℝC×(d(pln)/A)subscriptsuperscript𝑧pln𝑎superscriptℝ𝐶superscript𝑑pln𝐴z^{(\\text{pln})}_{a}\\in\\mathbb{R}^{C\\times(d^{(\\text{pln})}/A)}, where za(pln)subscriptsuperscript𝑧pln𝑎z^{(\\text{pln})}_{a} is a constituent part of z(pln)superscript𝑧plnz^{(\\text{pln})}, and A𝐴A is the number of expressive attributes. This setting is for a clear disentanglement among the expressive attributes. On the other hand, D(str)superscript𝐷strD^{(\\text{str})} predicts the signal I(str)superscript𝐼strI^{(\\text{str})} at once for all expressive attributes that belong to the same musical structure. All discriminators are jointly trained with the generative model, and the costs ℒplnsubscriptℒpln\\mathcal{L}_{\\text{pln}} and ℒstrsubscriptℒstr\\mathcal{L}_{\\text{str}} are minimized as ℒpln=1A​∑aMSE​(Da(pln)​(za(pln)),Ia(pln))subscriptℒpln1𝐴subscript𝑎MSEsubscriptsuperscript𝐷pln𝑎subscriptsuperscript𝑧pln𝑎subscriptsuperscript𝐼pln𝑎\\mathcal{L}_{\\text{pln}}=\\frac{1}{A}\\sum_{a}\\text{MSE}(D^{(\\text{pln})}_{a}(z^{(\\text{pln})}_{a}),I^{(\\text{pln})}_{a}) and ℒstr=MSE​(D(str)​(z(str)),I(str))subscriptℒstrMSEsuperscript𝐷strsuperscript𝑧strsuperscript𝐼str\\mathcal{L}_{\\text{str}}=\\text{MSE}(D^{(\\text{str})}(z^{(\\text{str})}),I^{(\\text{str})}), respectively. ", "title": "Sketching the Expression: Flexible Rendering of Expressive Piano Performance with Self-Supervised Learning" }, { "id": "2208.14867_all_18", "text": " Factorizing Latent Variables. We further constrain a generator to guarantee that z(pln)superscript𝑧plnz^{(\\text{pln})} delivers correct information regardless of z(str)superscript𝑧strz^{(\\text{str})} . During training, we sample a new output x~~𝑥\\tilde{x} using z(pln)∼qϕ​(z(pln)|x)similar-tosuperscript𝑧plnsubscript𝑞italic-ϕconditionalsuperscript𝑧pln𝑥z^{(\\text{pln})}\\sim q_{\\phi}(z^{(\\text{pln})}|x) and z~(str)∼pθ​(z(str))similar-tosuperscript~𝑧strsubscript𝑝𝜃superscript𝑧str\\tilde{z}^{(\\text{str})}\\sim p_{\\theta}(z^{(\\text{str})}). Then, we re-infer z~(pln)∼qϕ​(z~(pln)|x~)similar-tosuperscript~𝑧plnsubscript𝑞italic-ϕconditionalsuperscript~𝑧pln~𝑥\\tilde{z}^{(\\text{pln})}\\sim q_{\\phi}(\\tilde{z}^{(\\text{pln})}|\\tilde{x}) to estimate the superversory signal I(pln)superscript𝐼plnI^{(\\text{pln})}. This prediction loss is backpropagated only through the generator: ℒfac=1A​∑aMSE​(Da(pln)​(z~a(pln)),Ia(pln))subscriptℒfac1𝐴subscript𝑎MSEsubscriptsuperscript𝐷pln𝑎subscriptsuperscript~𝑧pln𝑎subscriptsuperscript𝐼pln𝑎\\mathcal{L}_{\\text{fac}}=\\frac{1}{A}\\sum_{a}\\text{MSE}(D^{(\\text{pln})}_{a}(\\tilde{z}^{(\\text{pln})}_{a}),I^{(\\text{pln})}_{a}) (5) ", "title": "Sketching the Expression: Flexible Rendering of Expressive Piano Performance with Self-Supervised Learning" }, { "id": "2208.14867_all_19", "text": " Aligning Latent Variables with Factors. Finally, we enable the \"sliding-fader\" control of the expressive attributes . To this end, we employ the regularization loss proposed by Pati et al. that aligns specific dimensions of z(pln)superscript𝑧plnz^{(\\text{pln})} with the target expressive attributes. This method assumes that a latent representation can be disentangled through its monotonic relationship with a target attribute. 
Let $d_{i}$ and $d_{j}$ be the target dimension $d$ of the $i$th and $j$th latent representations, respectively, where $d\in z^{(\text{pln})}_{a}$, $i,j\in(1,M)$, and $M$ is the size of a mini-batch. A distance matrix $\mathcal{D}_{d}$ is computed between $d_{i}$ and $d_{j}$ within a mini-batch, where $\mathcal{D}_{d}=d_{i}-d_{j}$. A similar distance matrix $\mathcal{D}_{a}$ is computed for the two target attribute values $a_{i}$ and $a_{j}$. We minimize an MSE between $\mathcal{D}_{d}$ and $\mathcal{D}_{a}$ as follows: $\mathcal{L}_{\text{reg}}=\text{MSE}(\text{tanh}(\mathcal{D}_{d}),\text{sign}(\mathcal{D}_{a}))$ (6) ", "title": "Sketching the Expression: Flexible Rendering of Expressive Piano Performance with Self-Supervised Learning" }, { "id": "2208.14867_all_20", "text": " The overall objective of our proposed network aims to generate realistic performance features with properly disentangled representations for the intended factors: $\mathcal{L}=\mathcal{L}_{\text{VAE}}+\lambda_{\text{pln}}\mathcal{L}_{\text{pln}}+\lambda_{\text{str}}\mathcal{L}_{\text{str}}+\lambda_{\text{fac}}\mathcal{L}_{\text{fac}}+\lambda_{\text{reg}}\mathcal{L}_{\text{reg}}$ (7) where $\lambda_{\text{pln}}$, $\lambda_{\text{str}}$, $\lambda_{\text{fac}}$, and $\lambda_{\text{reg}}$ are hyperparameters for balancing the importance of the loss terms. ", "title": "Sketching the Expression: Flexible Rendering of Expressive Piano Performance with Self-Supervised Learning" }, { "id": "2208.14867_all_21", "text": " We use the Yamaha e-Competition Dataset and the Vienna 4x22 Piano Corpus. From these datasets, we collect 356 performances of 34 pieces by Frédéric Chopin, which have been representative research subjects for analyzing Western musical expression (22, 41, 6, 42). We use 30 pieces (108,738 batches) for training and the rest for testing. To verify the generality of model performance, we also collect an external dataset from the ASAP dataset. We use 116 performances of 23 pieces by 10 composers who represent various eras of Western music. For subjective evaluation, we collect 42 non-Classical songs from an online source (http://www.ambrosepianotabs.com/page/library), which are less constrained by written expression than most Classical excerpts. ", "title": "Sketching the Expression: Flexible Rendering of Expressive Piano Performance with Self-Supervised Learning" }, { "id": "2208.14867_all_22", "text": " We basically follow Jeong et al. to compute the input features from the aligned pairs of performance and score data. We set the MIDI velocities and Beats Per Minute (BPM) of all notes in the score data to 64 and 120, respectively. We also remove any grace notes for simplicity and manually correct any errors. The performance features are further normalized into a range from -1 to 1 for training. We use an ADAM optimizer with an initial learning rate of 1e-5, which is reduced by 5% every epoch during backpropagation. 
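The following is a minimal PyTorch sketch of the attribute-regularization loss in Eq. (6) above, assuming each call receives one latent dimension and one attribute value per mini-batch element; the function name and batch size are illustrative.

```python
import torch
import torch.nn.functional as F

def attribute_regularization_loss(latent_dim_values, attribute_values):
    """Eq. (6): encourage a monotonic relationship between one latent dimension and
    its target attribute. Both inputs are 1-D tensors of length M (the mini-batch)."""
    d = latent_dim_values.unsqueeze(1) - latent_dim_values.unsqueeze(0)  # D_d[i, j] = d_i - d_j
    a = attribute_values.unsqueeze(1) - attribute_values.unsqueeze(0)    # D_a[i, j] = a_i - a_j
    return F.mse_loss(torch.tanh(d), torch.sign(a))

loss = attribute_regularization_loss(torch.randn(16), torch.randn(16))   # toy mini-batch of 16
```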
We empirically set λplnsubscript𝜆pln\\lambda_{\\text{pln}}, λstrsubscript𝜆str\\lambda_{\\text{str}}, λfacsubscript𝜆fac\\lambda_{\\text{fac}}, and λregsubscript𝜆reg\\lambda_{\\text{reg}} to be 1000, 100, 1, 10, respectively. We set a degree of the polynomial function computing I(pln)superscript𝐼plnI^{(\\text{pln})} as 4 through an ablation study described in the supplementary material. ", "title": "Sketching the Expression: Flexible Rendering of Expressive Piano Performance with Self-Supervised Learning" }, { "id": "2208.14867_all_23", "text": " To the best of our knowledge, there is no existing method that does not intentionally follow the written guidelines in the musical score. Therefore, we use variants of our proposed network as comparing methods that differ in model architecture: Notewise denotes the proposed model without the hierarchical learning. CVAE denotes a variant of Notewise where z(pln)superscript𝑧plnz^{(\\text{pln})} is substituted with the supervisory signal I(pln)superscript𝐼plnI^{(\\text{pln})}. We also conduct an ablation study that investigates necessity of the four loss terms. ", "title": "Sketching the Expression: Flexible Rendering of Expressive Piano Performance with Self-Supervised Learning" }, { "id": "2208.14867_all_24", "text": " We evaluate the proposed network in terms of both objective and subjective criteria. ", "title": "Sketching the Expression: Flexible Rendering of Expressive Piano Performance with Self-Supervised Learning" }, { "id": "2208.14867_all_25", "text": " We compute Pearson’s correlation coefficients between the reconstructed or generated samples and human piano performances (19, 6, 9, 11). We first measure the reconstruction quality of the test samples (\"RreconsubscriptRrecon\\text{R}_{\\text{recon}}\"). Then, we evaluate the samples generated from z~(str)∼pθ​(z(str))similar-tosuperscript~𝑧strsubscript𝑝𝜃superscript𝑧str\\tilde{z}^{(\\text{str})}\\sim p_{\\theta}(z^{(\\text{str})}) and either of : 1) z(pln)∼qϕ​(z(pln)|x)similar-tosuperscript𝑧plnsubscript𝑞italic-ϕconditionalsuperscript𝑧pln𝑥z^{(\\text{pln})}\\sim q_{\\phi}(z^{(\\text{pln})}|x) (\"Rx|plnsubscriptRconditional𝑥pln\\text{R}_{x|\\text{pln}}\") and 2) z0(pln)∼qϕ​(z0(pln)|x0)similar-tosubscriptsuperscript𝑧pln0subscript𝑞italic-ϕconditionalsubscriptsuperscript𝑧pln0subscript𝑥0z^{(\\text{pln})}_{0}\\sim q_{\\phi}(z^{(\\text{pln})}_{0}|x_{0}) (\"Rx|pln0subscriptRconditional𝑥subscriptpln0\\text{R}_{x|\\text{pln}_{0}}\"), where x0subscript𝑥0x_{0} is a zero matrix. ", "title": "Sketching the Expression: Flexible Rendering of Expressive Piano Performance with Self-Supervised Learning" }, { "id": "2208.14867_all_26", "text": " The results are shown in Table 1. Notewise shows the best scores in both datasets, and our method outperforms CVAE in RreconsubscriptRrecon\\text{R}_{\\text{recon}}. It indicates that our proposed architecture where a latent representation is used instead of a direct condition is generally good at reconstructing the human data. When using the randomly sampled z~(str)superscript~𝑧str\\tilde{z}^{(\\text{str})}, our method and the model without ℒregsubscriptℒreg\\mathcal{L}_{\\text{reg}} show stable scores compared to other baseline models. The model without ℒregsubscriptℒreg\\mathcal{L}_{\\text{reg}} also shows the highest scores in Rx|plnsubscriptRconditional𝑥pln\\text{R}_{x|\\text{pln}} for both datasets. It indicates that ℒregsubscriptℒreg\\mathcal{L}_{\\text{reg}} may contribute the least to generation power among other loss terms. 
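Since the objective evaluation relies on Pearson's correlation between generated and human expressive parameters, a minimal sketch of that metric (our own helper, not the authors' evaluation code) is:

```python
import numpy as np

def pearson_r(generated, human):
    """Pearson correlation between a generated expressive-parameter sequence and the
    human one, as used for the R_recon-style scores. Both inputs are 1-D arrays."""
    return float(np.corrcoef(generated, human)[0, 1])

human = np.sin(np.linspace(0, 6, 200))            # toy 'human' velocity curve
model = human + 0.1 * np.random.randn(200)        # noisy reconstruction of it
print(round(pearson_r(model, human), 3))          # close to 1 for a faithful reconstruction
```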
CVAE and the model only with ℒ(pln)superscriptℒpln\\mathcal{L}^{(\\text{pln})} also show high scores in Rx|pln0subscriptRconditional𝑥subscriptpln0\\text{R}_{x|\\text{pln}_{0}}. This may be due to the posterior collapse that makes the decoder depends mostly on the score condition , which is demonstrated in the supplementary material. ", "title": "Sketching the Expression: Flexible Rendering of Expressive Piano Performance with Self-Supervised Learning" }, { "id": "2208.14867_all_27", "text": " We verify whether the latent representations are well-disentangled by appropriate information. To this end, each model infers the latent representations z(pln)superscript𝑧plnz^{(\\text{pln})} and z(str)superscript𝑧strz^{(\\text{str})} from the test sets. Each model also randomly samples z~(str)superscript~𝑧str\\tilde{z}^{(\\text{str})} and infers z0(pln)∼qϕ​(z(pln)|x0)similar-tosubscriptsuperscript𝑧pln0subscript𝑞italic-ϕconditionalsuperscript𝑧plnsubscript𝑥0z^{(\\text{pln})}_{0}\\sim q_{\\phi}(z^{(\\text{pln})}|x_{0}). We use z0(pln)subscriptsuperscript𝑧pln0z^{(\\text{pln})}_{0} to measure the structural attribute, since z0(pln)subscriptsuperscript𝑧pln0z^{(\\text{pln})}_{0} represents a flat expression where the structural attribute can be solely exposed. Each model generates new outputs as x(pln)∼pθ​(x(pln)|z(pln),z~(str),y)similar-tosuperscript𝑥plnsubscript𝑝𝜃conditionalsuperscript𝑥plnsuperscript𝑧plnsuperscript~𝑧str𝑦x^{(\\text{pln})}\\sim p_{\\theta}(x^{(\\text{pln})}|z^{(\\text{pln})},\\tilde{z}^{(\\text{str})},y) and x(str)∼pθ​(x(str)|z0(pln),z(str),y)similar-tosuperscript𝑥strsubscript𝑝𝜃conditionalsuperscript𝑥strsubscriptsuperscript𝑧pln0superscript𝑧str𝑦x^{(\\text{str})}\\sim p_{\\theta}(x^{(\\text{str})}|z^{(\\text{pln})}_{0},z^{(\\text{str})},y). Then, we compute a new signal I~(pln)superscript~𝐼pln\\tilde{I}^{(\\text{pln})} from x(pln)superscript𝑥plnx^{(\\text{pln})} using the polynomial regression. The MSE values are calculated as MSEp=MSE​(I~(pln),I(pln))subscriptMSEpMSEsuperscript~𝐼plnsuperscript𝐼pln\\text{MSE}_{\\text{p}}=\\text{MSE}(\\tilde{I}^{(\\text{pln})},I^{(\\text{pln})}) and MSEs=MSE​(x(str),k−I(pln))subscriptMSEsMSEsuperscript𝑥str𝑘superscript𝐼pln\\text{MSE}_{\\text{s}}=\\text{MSE}(x^{(\\text{str})},k-I^{(\\text{pln})}). ", "title": "Sketching the Expression: Flexible Rendering of Expressive Piano Performance with Self-Supervised Learning" }, { "id": "2208.14867_all_28", "text": " Table 2 shows that our method achieves the best scores in all metrics for both datasets. This confirms that our proposed system can learn the latent representations that reflect the intended attributes. Notewise and the model without ℒregsubscriptℒreg\\mathcal{L}_{\\text{reg}} also show the robust scores compared to other baseline models. It indicates that using the notewise modeling alone is still relevant for achieving appropriate representations. It also implies that ℒregsubscriptℒreg\\mathcal{L}_{\\text{reg}} may not contribute to the disentanglement as much as other loss terms. ", "title": "Sketching the Expression: Flexible Rendering of Expressive Piano Performance with Self-Supervised Learning" }, { "id": "2208.14867_all_29", "text": " We sample a new input x¯¯𝑥\\bar{x} where entries of each feature are constant across time. Then, each model infers z¯(pln)∼qϕ​(z¯(pln)|x¯)similar-tosuperscript¯𝑧plnsubscript𝑞italic-ϕconditionalsuperscript¯𝑧pln¯𝑥\\bar{z}^{(\\text{pln})}\\sim q_{\\phi}(\\bar{z}^{(\\text{pln})}|\\bar{x}). 
We control each attribute by varying dimension values of z¯(pln)superscript¯𝑧pln\\bar{z}^{(\\text{pln})} following Tan et al. and examine the new samples generated from z¯(pln)superscript¯𝑧pln\\bar{z}^{(\\text{pln})}. We leverage the existing metrics to measure the controllability of each model : Consistency (\"C\") measures consistency across samples in terms of their controlled attributes; restrictiveness (\"R\") measures how much the uncontrolled attributes maintain their flatness over time; and linearity (\"L\") measures how much the controlled attributes are correlated with the corresponding latent dimensions. We average over the three expressive attributes–dynamics, articulation, and tempo–into one score for each metric. ", "title": "Sketching the Expression: Flexible Rendering of Expressive Piano Performance with Self-Supervised Learning" }, { "id": "2208.14867_all_30", "text": " Table 3 demonstrates that our system shows the best scores in consistency and linearity in both internal and external datasets. This indicates that our proposed method can robustly control the latent representation z(pln)superscript𝑧plnz^{(\\text{pln})} in intended way. The model without ℒregsubscriptℒreg\\mathcal{L}_{\\text{reg}} outperforms our method in restrictiveness. It indicates that the uncontrolled attributes by this model are the least interfered by the controlled attribute. However, its scores on consistency and linearity are lower than ours. It confirms that ℒregsubscriptℒreg\\mathcal{L}_{\\text{reg}} promotes linear control of the target attributes. ", "title": "Sketching the Expression: Flexible Rendering of Expressive Piano Performance with Self-Supervised Learning" }, { "id": "2208.14867_all_31", "text": " We conduct a listening test to compare the proposed model architecture to Notewise and CVAE. We qualitatively evaluate the base quality of the samples that have flat expressions, so that quality judgments are independent of any preference of arbitrary explicit planning. We generate each sample using z0(pln)subscriptsuperscript𝑧pln0z^{(\\text{pln})}_{0}. A listening test is composed of 30 trials where each participant chooses a more \"human-like\" sample out of the generated sample and its plain MIDI . Both samples have the same length which is a maximum of 15 seconds, rendered with TiMidity++333https://sourceforge.net/projects/timidity/ without any pedal effect. Human-likeness denotes how similar the sample is to an actual piano performance that commonly appears in popular music. A total of 28 participants are involved, and 6 participants are professionally trained in music. ", "title": "Sketching the Expression: Flexible Rendering of Expressive Piano Performance with Self-Supervised Learning" }, { "id": "2208.14867_all_32", "text": " The results are demonstrated in Table 4. We measure a winning rate, a rate of winning over the plain MIDI, and a top-ranking rate, a rate of being the highest rank among the three models in terms of winning rate. These metrics are further explained in the supplementary material. The results show that musically trained (\"T\") and untrained (\"UT\") groups show the different tendency of each other: in the trained group, CVAE shows the best winning rate, and our method gets the best top-ranking rate; in the untrained group, our method shows the highest winning rate, whereas Notewise is top-ranked most frequently. 
We note that our system reveals smaller variances than those of CVAE and Notewise of the musically trained and untrained groups in the winning rate, respectively. Moreover, our system receives the highest overall scores for both metrics. It indicates that our system can be stably perceived more human-like than the plain MIDI compared to other baseline models. ", "title": "Sketching the Expression: Flexible Rendering of Expressive Piano Performance with Self-Supervised Learning" }, { "id": "2208.14867_all_33", "text": " Our system can render new piano performances from the scratch given a musical score. It can directly generate expressive parameters from the randomly sampled z~(pln)∼pθ​(z(pln))similar-tosuperscript~𝑧plnsubscript𝑝𝜃superscript𝑧pln\\tilde{z}^{(\\text{pln})}\\sim p_{\\theta}(z^{(\\text{pln})}) and z~(str)∼pθ​(z(str))similar-tosuperscript~𝑧strsubscript𝑝𝜃superscript𝑧str\\tilde{z}^{(\\text{str})}\\sim p_{\\theta}(z^{(\\text{str})}). We note that z~(pln)superscript~𝑧pln\\tilde{z}^{(\\text{pln})} does not have temporal dependency: each z~c(pln)subscriptsuperscript~𝑧pln𝑐\\tilde{z}^{(\\text{pln})}_{c} is sampled independently of z~c−1(pln)subscriptsuperscript~𝑧pln𝑐1\\tilde{z}^{(\\text{pln})}_{c-1}. Hence, we need to insert specific values {α(c)}c=1Csuperscriptsubscriptsuperscript𝛼𝑐𝑐1𝐶\\{\\alpha^{(c)}\\}_{c=1}^{C}, which we call as \"smooth sketches\", into the target dimensions of z(pln)superscript𝑧plnz^{(\\text{pln})} if any temporal dependency of explicit planning is necessary. Figure 2 shows that the controlled parameters are greatly correlated with α𝛼\\alpha, while their local characteristics follow those of the ground truth. In addition, the black and orange lines together demonstrate granular variety in the parameters induced by different z~(str)superscript~𝑧str\\tilde{z}^{(\\text{str})} for the same musical structure. Moreover, Figure 3 shows that our system can estimate explicit planning from arbitrary human performances, indicating that our system can derive relevant information on explicit planning from the unseen data. ", "title": "Sketching the Expression: Flexible Rendering of Expressive Piano Performance with Self-Supervised Learning" }, { "id": "2208.14867_all_34", "text": " We propose a system that can render expressive piano performance with flexible control of musical expression. We attempt to achieve representations for the explicit planning and structural attribute through self-supervised learning objectives. We also leverage the two-step modeling of two hierarchical units for an intuitive generation. Experimental results confirm that our system shows stable generation quality, disentangles the target representations, and controls all expressive attributes independently of each other. Future work can be improving our system using a larger dataset for various genres and composers. We can also further compare our system with recent piano-rendering models to investigate any connections between a performer’s explicit planning and a composer’s intent. ", "title": "Sketching the Expression: Flexible Rendering of Expressive Piano Performance with Self-Supervised Learning" }, { "id": "2208.14867_all_35", "text": " We deeply appreciate Dasaem Jeong, Taegyun Kwon, and Juhan Nam for giving technical support to initiate this research. We also especially appreciate Hyeong-Seok Choi for providing critical feedback on the model architecture and evaluation. We greatly appreciate You Jin Choi and all of my colleagues who gave great help with respect to the listening test. 
", "title": "Sketching the Expression: Flexible Rendering of Expressive Piano Performance with Self-Supervised Learning" } ]
Who collected the queries from MSMARCO-Passage dataset to make MSMARCO-TRAIN query set?
The MSMARCO-Passage collection is a large-scale, publicly available corpus, and two query sets derived from this corpus are used in the paper: MSMARCO-TRAIN and MSMARCO-DEV [40].
[ 40 ]
[ { "id": "2204.11673_all_0", "text": " Passage Re-ranking is a crucial stage in modern information retrieval systems, which aims to reorder a small set of candidate passages to be presented to users. To put the most relevant passages on top of a ranking list, a re-ranker is usually designed with powerful capacity in modeling semantic relevance, which attracted a wealth of research studies in the past decade (Guo et al., 2020). Recently, large-scale pre-trained language models (PLMs), e.g. BERT (Devlin et al., 2018), ERNIE (Sun et al., 2019) and RoBERTa (Liu et al., 2019), have dominated many natural language processing tasks, and have also achieved remarkable success on passage re-ranking. For example, PLM based re-rankers (MacAvaney et al., 2019; Li et al., 2020; Dong and Niu, 2021; Dong et al., 2022) have achieved state-of-the-art performance, which takes the concatenation of query-passage pair as input, and applies multi-layer full-attention to model their semantic relevance. Their superiority can be attributed to the expressive transformer structure and the pretrain-then-finetune paradigm, which allow the model to learn useful implicit knowledge (i.e., semantic relevance in the latent space) from massive textual corpus (Fan et al., 2021). ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_1", "text": " However, implicit knowledge still has some inherent weaknesses, which limits the applicability of PLMs based re-rankers. First, queries and passages are usually created by different persons and have different expression ways (Nogueira et al., 2019b), such as word usage and language style. Worse still, the data distributions of search queries and web contents are highly heterogeneous (Liu et al., 2021), where various specialized domains (e.g., bio-medical) may only have few training examples in a general corpus. Domain-specific knowledge can hardly be revealed and captured by the model, and thus the processing of domain-specific queries is often inaccurate. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_2", "text": " To overcome the limitations, it is essential to incorporate the knowledge graph as explicit knowledge to PLM based re-rankers. Thus we propose Knowledge Enhanced Re-ranking Model (KERM), which utilizes external knowledge to explicitly enhance the semantic matching process in PLM based re-rankers. Intuitively, the difference in expression ways can be mitigated by the triplet with \"synonymy\" as relation in knowledge graph, and all the triplets can enrich the domain knowledge. The overall workflow of KERM is depicted in Fig. 1. To the the best of our knowledge, this is the first attempt for knowledge enhanced PLMs for passage re-ranking. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_3", "text": " Despite the knowledge graph is a desirable source of explicit knowledge, it is non-trivial to take advantage of explicit knowledge directly for passage re-ranking due to the following two challenges: ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_4", "text": " • Challenge 1. Existing knowledge graph are not constructed for re-ranking task. They usually contain trivial factual triples, which can hardly bring information gain. 
The inappropriate selection of external knowledge could even jeopardize the re-ranker performance. How to utilize existing knowledge graph to re-ranking task is remain a challenge. • Challenge 2. The explicit knowledge and implicit knowledge are highly heterogeneous due to the different sources, which makes the aggregation of the two difficult. How to mutually refine each other and effectively aggregate explicit knowledge into implicit knowledge to alleviate the semantic gap between query and passage is still a challenge. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_5", "text": " In general, the workflow of KERM can be divided into knowledge graph distillation and knowledge aggregation to tackle the above challenges. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_6", "text": " For knowledge graph distillation, we propose a novel pipeline to establish knowledge meta graphs, which only retain informative knowledge for passage re-ranking. Specifically, we first distill a graph globally for passage re-ranking scenario from an existing knowledge graph by pruning some unreliable or noisy relations based on TransE embedding. Then for a specific query-passage pair, we extract entities from both the query and passage, and construct a query-document bipartite entity graph based on query and passage entities and their k-hop neighbors, namely knowledge meta graph. Challenge 1. could be addressed in this distillation process. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_7", "text": " For knowledge aggregation, we design a novel interaction module between text and knowledge graph to combine the implicit and explicit knowledge. To derive implicit knowledge from text, we employ PLM as text encoder. To be aligned with implicit knowledge, knowledge meta graph is encoded with a multi-layer graph neural network (i.e. k-hop), namely Graph Meta Network (GMN). Each transformer layer outputs word representations. Each graph meta network layer outputs entity representations. Both word and entity representations are aggregated as the input of the following transformer and GMN layer, respectively in a novelly designed module, namely knowledge injector. Therefore through knowledge aggregation, implicit knowledge from text corpus and explicit knowledge from existing knowledge graph can mutually boost each other to achieve a better re-ranking performance, in which the issues in Challenge 2. could be mitigated. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_8", "text": " Overall, our contributions can be summarized as follows: • It is the first attempt to solve the knowledge enhanced PLMs problem for passage re-ranking. The key motivation lies in that bridging the semantic gap between the query and passage with the help of both kinds of knowledge. • We design a novel knowledge graph distillation method. It refines a reliable knowledge graph from the existing one globally and constructs a knowledge meta graph based on the refined graph locally. • We propose a novel aggregation of PLM and graph neural network framework to model the interaction between explicit knowledge and implicit knowledge. 
• Experimental results show the effectiveness of KERM on both general and domain specific data, achieving state-of-the-art performance for passage re-ranking. We also conduct a comprehensive study for the effects of each module in our method. The code is available at https://github.com/DQ0408 /KERM. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_9", "text": " In this section, we introduce several recently-proposed PLMs based re-rankers and retrievers. Moreover, we also present the general background of the related techniques involved in this paper, i.e. Knowledge Enhanced Pre-trained Language Models (KE-PLMs) and Graph Neural Network. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_10", "text": " Existing PLMs based re-rankers typically improve ranking performance from two aspects: (1) By optimizing the ranking procedure: monoBERT (Nogueira and Cho, 2019) is the first work that re-purposed BERT as a passage re-ranker and achieves state-of-the-art results. duoBERT (Nogueira et al., 2019a) integrates monoBERT in a multistage ranking architecture and adopts a pairwise classification approach to passage relevance computation. UED (Yan et al., 2021) proposes a cascade pre-training manner that can jointly enhance the retrieval stage through passage expansion with a pre-trained query generator and thus elevate the re-ranking stage with a pre-trained transformer encoder. The two stages can facilitate each other in a unified pre-training framework. H-ERNIE (Chu et al., 2022) proposes a multi-granularity PLM for web search. (2) By designing rational distillation procedure: LM Distill + Fine-Tuning (Gao et al., 2020) explores a variety of distillation methods to equip a smaller re-ranker with both general-purpose language modeling knowledge learned in pre-training and search- s​p​e​c​i​f​i​c𝑠𝑝𝑒𝑐𝑖𝑓𝑖𝑐specific relevance modeling knowledge learned in fine-tuning, and produces a faster re-ranker with better ranking performance. CAKD (Hofstätter et al., 2020) proposes a cross-architecture knowledge distillation procedure with a Margin-MSE loss, which can distill knowledge from multiple teachers at the same time. RocketQAv1 (Qu et al., 2021) trains dual-encoder and cross-encoder in a cascade manner, which leverages the powerful cross-encoder to empower the dual-encoder. RocketQAv2 (Ren et al., 2021) proposes a novel approach that jointly trains the dense passage retriever and passage re-ranker. The parameters of RocketQAv2 are inherited from RocketQAv1. Besides, RocketQAv2 utilizes a large PLM for data augmentation and denoising, which can also be regarded as a distillation procedure. Notably, these two types of studies anticipate more insightful information to be captured by the advanced ranking and training procedures, while neglecting the limitations of implicit knowledge extracted from noisy and heterogeneous data. Therefore, in this paper, we proposed the first knowledge-enhanced PLM based re-ranker, which thoughtfully leverages explicit external knowledge that improve the effectiveness of the model. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_11", "text": " The low-dimensional dense representations for query and passage are computed by PLMs based retrievers from the dual-encoder architecture. 
Afterward, the candidate passage set could be retrieved efficiently via approximate nearest neighbor algorithms. Existing studies could be categorized into two parts: (1) By optimizing the matching stage: DPR (Karpukhin et al., 2020) is the first study to leverage PLM to empower the retriever by a single vector. Other researches, such as RepBERT (Zhan et al., 2020), ColBERT (Khattab and Zaharia, 2020), COIL (Gao et al., 2021) and Interactor (Ye et al., 2022), obtain multiple vectors for query and passage for matching. (2) By optimizing the representation learning module: RocketQAv1 (Qu et al., 2021) and RocketQAv2 (Ren et al., 2021) boost the representation learning of retriever by leveraging the power of cross-encoder in a cascade or joint manner. Other studies boost the representation learning by designed IR-oriented pre-training tasks. ICT (Lee et al., 2019) treats sentences as pseudo-queries and matched them to the passage they originate from. Condenser (Gao and Callan, 2021) utilizes a novel pre-training task, which can produces an information-rich representation to condense an input sequence. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_12", "text": " Existing KE-PLMs can be categorized by the granularity of knowledge they incorporate from knowledge graph (KG), as text-based knowledge, entity knowledge and KG meta-graphs. To integrate text-based knowledge, RAG (Lewis et al., 2020) and KIF (Fan et al., 2020) first retrieve top-k documents from Wikipedia using KNN-based retrieval, and the PLM model is employed to generate the output conditioned on these retrieved documents. Entity-level information can be highly useful for a variety of natural language understanding tasks. Hence, many existing KE-PLMs target this type of simple yet powerful knowledge. ERNIE(BAIDU) (Sun et al., 2019) introduces a new pre-training strategy of language model which masking phrases or entities in order to implicitly learn both synaptic and semantic knowledge from these units. ERNIE(THU) (Zhang et al., 2019) integrates informative entity representations in the knowledge module into the underlying layers of the semantic module based on the alignments between text and entity to equip the model with the ability of knowledge awareness. As knowledge graphs provide richer information than simply entity, more and more researchers start to explore integration of more sophisticated knowledge, such as meta-graphs in KG. CokeBERT (Su et al., 2021) proposes a novel semantic-driven Graph Neural Network (GNN) to dynamically select contextual knowledge and embed knowledge context according to textual context for PLMs, which can avoid the effect of redundant and ambiguous knowledge in KGs that cannot match the input text. CoLake (Sun et al., 2020a) also uses GNN to aggregate information from the constructed meta-graph in both pre-training and inference. CoLake converts the meta-graph into token sequence and appends it to input sequence for PLMs, which is distinctive to CokeBERT. Although extensive research has been proposed up to now to address the knowledge-aware problem, none exists which constrained on how to use knowledge to empower PLMs particularly for re-ranking tasks. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_13", "text": " Existing Graph Neural Networks (GNNs) mainly fall into two categories: graph-based and path-based. 
Graph-based GNNs learn the structured information by directly passing nodes massage on the graph structure. GCNs (Kipf and Welling, 2016) introduce a novel approach on graph-structured data by aggregating messages from its direct neighbors to learn the graph-structured feature efficiently and effectively. R-GCNs (Schlichtkrull et al., 2018) are developed specifically to encode the highly multi-relational graphs by defining relation-specific weight matrix for each edge type. In contrast, path-based GNNs first decompose the graph into paths and then pass nodes massage on the path level, which can naturally utilize the relationship between neighbors to transmit messages. RNs (Santoro et al., 2017) use MLPs to encode all paths in a graph and then pool the representation of paths to generate a global representation for the graph. KagNet (Lin et al., 2019) is a combination of GCNs, LSTMs and a hierarchical path-based attention mechanism, which forms an architecture for modeling nondegenerate paths in a graph. In this work, we use path-based GNNs to formulate our GMN module for its good scalability on modeling relationship information in heterogeneous graphs. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_14", "text": " Given a query q, passage re-ranking aims at ordering a set of ϰitalic-ϰ\\varkappa passages, i.e., 𝒫={pκ}κ=1ϰ𝒫superscriptsubscriptsubscriptp𝜅𝜅1italic-ϰ\\mathcal{P}=\\left\\{\\textbf{p}_{\\kappa}\\right\\}_{\\kappa=1}^{\\varkappa}, which is usually retrieved from a large-scale passage collection by a retriever, e.g. BM25 (Yang et al., 2017), DPR (Karpukhin et al., 2020) etc. In particular, a passage is a sequence of words p={wp}p=1|p|psuperscriptsubscriptsubscript𝑤𝑝𝑝1p\\textbf{p}=\\{w_{p}\\}_{p=1}^{|\\textbf{p}|}, where |p|p|\\textbf{p}| is the length of passage p. Similarly, a query is a sequence of words q={wq}q=1|q|qsuperscriptsubscriptsubscript𝑤𝑞𝑞1q\\textbf{q}=\\{w_{q}\\}_{q=1}^{|\\textbf{q}|}. Note that a passage p consists of T𝑇T sentences p={sτ}τ=1Tpsuperscriptsubscriptsubscripts𝜏𝜏1𝑇\\textbf{p}=\\{\\textbf{s}_{\\tau}\\}_{\\tau=1}^{T}. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_15", "text": " Following a previous study (Zou et al., 2021), a desirable re-ranker is a scoring function f∗​(⋅,⋅)superscript𝑓⋅⋅f^{*}(\\cdot,\\cdot) that maximizes the consistency between its predictions (denoted as Y^q,𝒫={f​(𝐪,𝐩κ)|𝐩κ∈𝒫}subscript^𝑌q𝒫conditional-set𝑓𝐪subscript𝐩𝜅subscript𝐩𝜅𝒫\\hat{Y}_{\\textbf{q},\\mathcal{P}}=\\{f(\\mathbf{q},\\mathbf{p}_{\\kappa})\\leavevmode\\nobreak\\ |\\leavevmode\\nobreak\\ \\mathbf{p}_{\\kappa}\\in\\mathcal{P}\\}) and the ground truth labels (denoted as Y={yκ}κ=1ϰ𝑌superscriptsubscriptsubscript𝑦𝜅𝜅1italic-ϰY=\\{y_{\\kappa}\\}_{\\kappa=1}^{\\varkappa}), i.e., ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_16", "text": " (1) f∗=maxf⁡𝔼{q,𝒫,Y}​ϑ​(Y,Y^q,𝒫),superscript𝑓subscript𝑓subscript𝔼q𝒫𝑌italic-ϑ𝑌subscript^𝑌q𝒫f^{*}=\\max_{f}\\mathbb{E}_{\\{\\textbf{q},\\mathcal{P},Y\\}}{\\vartheta(Y,\\hat{Y}_{\\textbf{q},\\mathcal{P}})}, where ϑitalic-ϑ\\vartheta is a ranking metric (e.g., MRR@10) that measures the consistency between the predictions and the labels. 
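Since MRR@10 is named as the example ranking metric, here is a small reference implementation under an assumed input layout (per-query binary relevance lists ordered by predicted score); this is a generic sketch, not code from the paper.

```python
def mrr_at_10(ranked_relevance_lists):
    """Mean Reciprocal Rank at cutoff 10: reciprocal rank of the first relevant
    passage within the top 10, averaged over queries."""
    total = 0.0
    for rels in ranked_relevance_lists:
        rr = 0.0
        for rank, rel in enumerate(rels[:10], start=1):
            if rel:
                rr = 1.0 / rank
                break
        total += rr
    return total / len(ranked_relevance_lists)

# toy: first relevant passage at rank 2 for query 1 and rank 4 for query 2
print(mrr_at_10([[0, 1, 0], [0, 0, 0, 1]]))  # (0.5 + 0.25) / 2 = 0.375
```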
", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_17", "text": " A knowledge base is usually represented as a directed graph 𝒢={ℰ,ℛ}𝒢ℰℛ\\mathcal{G}=\\{\\mathcal{E},\\mathcal{R}\\}, where the node set ℰℰ\\mathcal{E} represents entities, and the edge set ℛℛ\\mathcal{R} is composed of relations between entities. A triplet (eh,r,et)subscript𝑒ℎ𝑟subscript𝑒𝑡(e_{h},r,e_{t}) is the basic unit in the knowledge graph, where eh,et∈ℰsubscript𝑒ℎsubscript𝑒𝑡ℰe_{h},e_{t}\\in\\mathcal{E} are head and tail entity respectively, and r∈ℛ𝑟ℛr\\in\\mathcal{R} refers to their relations. For example, (a​p​p​l​e,u​s​e​d​_​f​o​r,e​a​t​i​n​g)𝑎𝑝𝑝𝑙𝑒𝑢𝑠𝑒𝑑_𝑓𝑜𝑟𝑒𝑎𝑡𝑖𝑛𝑔(apple,used\\_{for},eating) means that \"apple is used for eating\". ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_18", "text": " To leverage explicit knowledge in 𝒢𝒢\\mathcal{G} for passage re-ranking, we anticipate building a novel knowledge-enhanced passage re-ranker, whose objective can be defined as (2) f∗=maxf⁡𝔼{q,𝒫,Y}​ϑ​(Y,Y^q,𝒫,𝒢),superscript𝑓subscript𝑓subscript𝔼q𝒫𝑌italic-ϑ𝑌subscript^𝑌q𝒫𝒢f^{*}=\\max_{f}\\mathbb{E}_{\\{\\textbf{q},\\mathcal{P},Y\\}}{\\vartheta(Y,\\hat{Y}_{\\textbf{q},\\mathcal{P},\\mathcal{G}})}, where Y^q,𝒫,𝒢={f​(𝐪,𝐩κ|𝒢)|𝐩κ∈𝒫}subscript^𝑌q𝒫𝒢conditional𝑓𝐪conditionalsubscript𝐩𝜅𝒢subscript𝐩𝜅𝒫\\hat{Y}_{\\textbf{q},\\mathcal{P},\\mathcal{G}}=\\{f(\\mathbf{q},\\mathbf{p}_{\\kappa}\\leavevmode\\nobreak\\ |\\leavevmode\\nobreak\\ \\mathcal{G})\\leavevmode\\nobreak\\ |\\leavevmode\\nobreak\\ \\mathbf{p}_{\\kappa}\\in\\mathcal{P}\\}, and f​(𝐪,𝐩κ|𝒢)𝑓𝐪conditionalsubscript𝐩𝜅𝒢f(\\mathbf{q},\\mathbf{p}_{\\kappa}\\leavevmode\\nobreak\\ |\\leavevmode\\nobreak\\ \\mathcal{G}) represents the ranking score that is aware of the explicit knowledge extracted from 𝒢𝒢\\mathcal{G}. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_19", "text": " In this section, we introduce Knowledge Enhanced Re-ranking Model (KERM), which leverages explicit knowledge that improves conventional cross-encoder for passage re-ranking. Notably, the main challenges of incorporating explicit knowledge are to 1) distill a knowledge graph that is useful for re-ranking task, and 2) aggregate the explicit knowledge with the current implicit knowledge in an appropriate manner that can improve the overall performance. Hence our proposed approach is mainly composed of two parts, i.e., knowledge graph distillation and knowledge aggregation, to tackle two challenges respectively. In the rest of this section, we first describe how to distill a reliable knowledge graph globally and build a knowledge meta graph locally from it for a specific query-passage pair. Then, we present how to combine the distilled knowledge graph and existing text corpus to derive a knowledge enhanced passage re-ranker. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_20", "text": " Existing knowledge graphs are usually incomplete and noisy. It is unsuitable for direct introduction of them to the current model. Specially, there is no knowledge base particularly for passage re-ranking task. For example, ConceptNet (Speer et al., 2017) is a general knowledge graph that contains common sense knowledge, where the information might not be useful for our passage re-ranking task. 
Therefore, it is critical for us to propose a knowledge graph distillation process from both global and local perspectives. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_21", "text": " Given a global knowledge graph 𝒢𝒢\\mathcal{G}, the first step is to eliminate those knowledge that might be noisy to be applied. To achieve this, we use TransE (Bordes et al., 2013) to measure the reliability of a given knowledge triplet. In particular, TransE is an unsupervised learning method that learns latent representations for a knowledge triplet (eh,r,et)subscript𝑒ℎ𝑟subscript𝑒𝑡(e_{h},r,e_{t}). Intuitively, it models the latent distribution of knowledge in a given knowledge graph, and those who are out of this distribution can be viewed as less informative knowledge, which should not be used. Based on this, we use the entity embeddings pre-trained by TransE to calculate a distance metric between two linked entities as (3) R​e​le​(eh,r,et)=𝐄​(eh)⋅𝐄​(r)+𝐄​(eh)⋅𝐄​(et)+𝐄​(r)⋅𝐄​(et),𝑅𝑒subscript𝑙𝑒subscript𝑒ℎ𝑟subscript𝑒𝑡⋅𝐄subscript𝑒ℎ𝐄𝑟⋅𝐄subscript𝑒ℎ𝐄subscript𝑒𝑡⋅𝐄𝑟𝐄subscript𝑒𝑡Rel_{e}(e_{h},r,e_{t})=\\mathbf{E}({e_{h}})\\cdot\\mathbf{E}(r)+\\mathbf{E}({e_{h}})\\cdot\\mathbf{E}({e_{t}})+\\mathbf{E}({r})\\cdot\\mathbf{E}({e_{t}}), (4) D​i​s​t​(eh,et)=1R​e​le​(eh,r,et),𝐷𝑖𝑠𝑡subscript𝑒ℎsubscript𝑒𝑡1𝑅𝑒subscript𝑙𝑒subscript𝑒ℎ𝑟subscript𝑒𝑡Dist(e_{h},e_{t})=\\frac{1}{Rel_{e}(e_{h},r,e_{t})}, where 𝐄​(e)𝐄𝑒\\mathbf{E}({e}) and 𝐄​(r)𝐄𝑟\\mathbf{E}({r}) are the TransE embeddings of entity and relation, respectively, and the inner product measures the relevance between two vectors. As the objective of TranE is aligned with minimizing the distance shown in Eq.(4), we can consider those knowledge triplets with small distance values as informative knowledge. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_22", "text": " After measuring the reliability of knowledge, we prune 𝒢𝒢\\mathcal{G} by only keep the top-ΠΠ\\Pi neighboring entities 𝒩​(eh)𝒩subscript𝑒ℎ\\mathcal{N}(e_{h}) of a given entity ehsubscript𝑒ℎe_{h}, which can formally be defined as (5) 𝒩​(eh)=∪π=1Π{etπ},w​h​e​r​e​D​i​s​t​(eh,etπ)≤D​i​s​t​(eh,etπ+1).formulae-sequence𝒩subscript𝑒ℎsuperscriptsubscript𝜋1Πsuperscriptsubscript𝑒𝑡𝜋𝑤ℎ𝑒𝑟𝑒𝐷𝑖𝑠𝑡subscript𝑒ℎsuperscriptsubscript𝑒𝑡𝜋𝐷𝑖𝑠𝑡subscript𝑒ℎsuperscriptsubscript𝑒𝑡𝜋1\\mathcal{N}(e_{h})=\\cup_{\\pi=1}^{\\Pi}\\{e_{t}^{\\pi}\\},where\\,Dist(e_{h},e_{t}^{\\pi})\\leq Dist(e_{h},e_{t}^{\\pi+1}). Thus, the pruned global graph 𝒢¯¯𝒢\\overline{\\mathcal{G}} can be denoted as (6) 𝒢¯={(eh,r,et)|eh,et∈ℰ∧r∈ℛ∧et∈𝒩​(eh)}.¯𝒢conditional-setsubscript𝑒ℎ𝑟subscript𝑒𝑡subscript𝑒ℎsubscript𝑒𝑡ℰ𝑟ℛsubscript𝑒𝑡𝒩subscript𝑒ℎ\\overline{\\mathcal{G}}=\\{(e_{h},r,e_{t})|e_{h},e_{t}\\in\\mathcal{E}\\wedge r\\in\\mathcal{R}\\wedge e_{t}\\in\\mathcal{N}(e_{h})\\}. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_23", "text": " Fig. 2 shows a real case of our global graph pruning method on ConceptNet, i.e., a general knowledge graph. In this case, the entity hepatitis has various relations to disease, infectious disease, adult, etc. From the distance of nodes in Fig. 2, we can clearly observe that the knowledge hepatitis is an infectious disease is more reliable and informative than hepatitis is located at adult. To hepatitis, the concept adult is more general than infectious disease. 
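To illustrate the TransE-based reliability score and the top-neighbor pruning of Eqs. (3)-(5), here is a sketch under an assumed data layout (triplets as string tuples, embeddings as a name-to-vector dict); the helper names and the toy hepatitis example are ours.

```python
import numpy as np

def reliability(E_h, E_r, E_t):
    """Rel_e(e_h, r, e_t), Eq. (3): sum of pairwise inner products of TransE embeddings."""
    return E_h @ E_r + E_h @ E_t + E_r @ E_t

def prune_neighbors(head, triplets, emb, top_k):
    """Keep only the top_k neighbors of `head` with the smallest distance 1/Rel_e, Eqs. (4)-(5)."""
    scored = []
    for h, r, t in triplets:
        if h == head:
            scored.append((1.0 / reliability(emb[h], emb[r], emb[t]), (h, r, t)))
    return [trip for _, trip in sorted(scored, key=lambda x: x[0])[:top_k]]

emb = {w: np.random.randn(16) for w in ["hepatitis", "is_a", "located_at", "infectious_disease", "adult"]}
triples = [("hepatitis", "is_a", "infectious_disease"), ("hepatitis", "located_at", "adult")]
print(prune_neighbors("hepatitis", triples, emb, top_k=1))
```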
This indicates that our pruning method can effectively eliminate less informative knowledge. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_24", "text": " Different from existing knowledge-enhanced PLMs for other NLP tasks, our aim for the re-ranking task focuses particularly on relevance modeling between query and passage. Thus, we further leverage the knowledge in the global graph $\overline{\mathcal{G}}$ to construct “bridges” between query and passage, which alleviates the semantic gap and improves semantic modeling. More specifically, for a given query-passage pair (i.e., $(\mathbf{q},\mathbf{p})$), we propose to construct a bipartite meta-graph that connects the entities in $\mathbf{q}$ and those in $\mathbf{p}$. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_25", "text": " The construction process is shown in Alg. 1, which contains three sub-steps: key sentence selection, target entity recognition and path discovery. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_26", "text": " (1) Key sentence selection. The actual information need of a user usually concentrates on a small part of a relevant passage (Guo et al., 2020). To this end, we mimic human judgment and only focus on the sentence of each passage that is the most related to a query (Zou et al., 2021). In particular, we define the relevance score between a query $\mathbf{q}$ and a sentence $\mathbf{s}_{i}$ as (7) $Rel_{qs}(\mathbf{q},\mathbf{s}_{i})=\frac{\sum_{q=1}^{|\mathbf{q}|}\mathbf{E}(w_{q})}{|\mathbf{q}|}\cdot\frac{\sum_{s=1}^{|\mathbf{s}_{i}|}\mathbf{E}(w_{s})}{|\mathbf{s}_{i}|}$. For the sake of efficiency, we initialize $\mathbf{E}(w)$ from Word2Vec (Mikolov et al., 2013) embeddings. Based on Eq.(7), we select the most relevant sentence $\mathbf{s}^{*}$ in $\mathbf{p}$ to build the meta-graph for $\mathbf{q}$ and $\mathbf{p}$. (2) Target entity recognition. Next, we select the entities in $\mathbf{q}$ and $\mathbf{s}^{*}$ to construct the meta-graph. Specifically, we only consider the entities that exactly match in $\mathcal{E}$. Meanwhile, we omit entity phrases that are sub-sequences of other recognized entities. For example, in the query \"what causes low liver enzymes\", both \"liver\" and \"liver enzyme\" are entities, but \"liver enzyme\" is the more informative one to recognize as the target entity, so \"liver\" should be omitted. (3) Path discovery. Finally, given the target entities of $\mathbf{q}$ and $\mathbf{s}^{*}$ (denoted as $\phi_{\mathbf{q}}$ and $\phi_{\mathbf{s}^{*}}$, respectively), we perform Breadth First Search (BFS) on $\overline{\mathcal{G}}$ to discover the paths within $K$ hops between $\phi_{\mathbf{q}}$ and $\phi_{\mathbf{s}^{*}}$. Note that we only keep the within-$K$-hop paths that might be the most useful for the downstream re-ranking task. Meanwhile, the knowledge can be complemented by the $K$-hop paths.
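The three construction sub-steps above can be sketched roughly as follows. This is a simplified illustration in which target entities are assumed to be already recognized, word embeddings are plain numpy vectors, and the pruned graph is an adjacency dictionary; none of the helper names come from the paper.

```python
import numpy as np
from collections import deque

def rel_qs(query_vecs, sent_vecs):
    # Eq. (7): dot product of the mean word embeddings of query and sentence
    return np.mean(query_vecs, axis=0) @ np.mean(sent_vecs, axis=0)

def select_key_sentence(query_vecs, sentences_vecs):
    # step (1): pick the sentence of the passage most related to the query
    scores = [rel_qs(query_vecs, s) for s in sentences_vecs]
    return int(np.argmax(scores))

def bfs_paths(graph, query_entities, sentence_entities, max_hops):
    """Step (3): collect all within-K-hop paths between query and key-sentence entities.

    graph: dict mapping entity -> list of (relation, neighbor) in the pruned global graph
    """
    targets, paths = set(sentence_entities), []
    for src in query_entities:
        queue = deque([(src, [src])])
        while queue:
            node, path = queue.popleft()
            if node in targets and len(path) > 1:
                paths.append(path)
            if len(path) - 1 < max_hops:
                for _, nbr in graph.get(node, []):
                    if nbr not in path:            # avoid cycles
                        queue.append((nbr, path + [nbr]))
    return paths
```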
", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_27", "text": " After taking the series of processes, the meta-graph 𝐆𝐪,𝐩={ℰ𝐪,𝐩,ℛ𝐪,𝐩}subscript𝐆𝐪𝐩subscriptℰ𝐪𝐩subscriptℛ𝐪𝐩\\mathbf{G}_{\\mathbf{q},\\mathbf{p}}=\\{\\mathcal{E}_{\\mathbf{q},\\mathbf{p}},\\mathcal{R}_{\\mathbf{q},\\mathbf{p}}\\} is constructed with the multi-hop paths discovered between ϕ𝐪subscriptitalic-ϕ𝐪\\phi_{\\mathbf{q}} and ϕ𝐬∗subscriptitalic-ϕsuperscript𝐬\\phi_{\\mathbf{s}^{*}}. Fig. 3 shows an example of the meta-graph, which contains rich knowledge about the semantic relevance between the query and passage. Notably, a better key sentence selector or entity linker such as Sentence-BERT (Reimers and Gurevych, 2019) and DER (Wu et al., 2019) may benefit the ranking performance, but can burden the entire model inference time, which is infeasible to a qualified re-ranker. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_28", "text": " Given a meta-graph 𝐆𝐪,𝐩subscript𝐆𝐪𝐩\\mathbf{G}_{\\mathbf{q},\\mathbf{p}}, we propose a PLM based re-ranker that performs knowledge-enhanced relevance computation, i.e., f​(𝐪,𝐩|𝒢)𝑓𝐪conditional𝐩𝒢f(\\mathbf{q},\\mathbf{p}\\leavevmode\\nobreak\\ |\\leavevmode\\nobreak\\ \\mathcal{G}). In the following, we first introduce the text encoder, and then present how we inject explicit knowledge from 𝐆𝐪,𝐩subscript𝐆𝐪𝐩\\mathbf{G}_{\\mathbf{q},\\mathbf{p}} into the encoder. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_29", "text": " We adopt the commonly-used cross-encoder as the text encoder. The input is formulated as the concatenation of a query-passage pair and the input layer converts the token indexes to a set of token embeddings (Vaswani et al., 2017) (i.e., 𝐎0subscript𝐎0\\mathbf{O}_{0}) as (8) 𝐎0=InputLayer​(((C​L​S),{wq}q=1|𝐪|,(S​E​P),{wp}p=1|𝐩|,(S​E​P))).subscript𝐎0InputLayerdelimited-()𝐶𝐿𝑆superscriptsubscriptsubscript𝑤𝑞𝑞1𝐪delimited-()𝑆𝐸𝑃superscriptsubscriptsubscript𝑤𝑝𝑝1𝐩delimited-()𝑆𝐸𝑃\\mathbf{O}_{0}=\\text{InputLayer}(((CLS),\\{w_{q}\\}_{q=1}^{|\\mathbf{q}|},(SEP),\\{w_{p}\\}_{p=1}^{|\\mathbf{p}|},(SEP))). In the l𝑙l-th transformer layer, text context features are extracted via multi-head self-attention and Feed Forward Network (FFN) as (9) 𝐇^l=MultiHeadAttention⁡(𝐎l−1),subscript^𝐇𝑙MultiHeadAttentionsubscript𝐎𝑙1\\hat{\\mathbf{H}}_{l}=\\operatorname{MultiHeadAttention}(\\mathbf{O}_{l-1}), (10) 𝐎l=σ​(𝐇l^​𝐖l1+bl1)​𝐖l2+bl2,subscript𝐎𝑙𝜎^subscript𝐇𝑙superscriptsubscript𝐖𝑙1superscriptsubscript𝑏𝑙1superscriptsubscript𝐖𝑙2superscriptsubscript𝑏𝑙2\\mathbf{O}_{l}=\\sigma\\left(\\hat{\\mathbf{H}_{l}}\\mathbf{W}_{l}^{1}+b_{l}^{1}\\right)\\mathbf{W}_{l}^{2}+b_{l}^{2}, where 𝐖lsubscript𝐖𝑙\\mathbf{W}_{l} and blsubscript𝑏𝑙b_{l} are the parameters of FFN and σ𝜎\\sigma is an activation function, and 𝐎lsubscript𝐎𝑙\\mathbf{O}_{l} is the output of layer l𝑙l. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_30", "text": " Based on the text encoder, we develop a knowledge injector that can seamlessly integrate explicit knowledge. 
Moreover, inspired by CokeBERT (Su et al., 2021), our knowledge injector is equipped with a GMN module to dynamically refine the knowledge context on the basis of the text context features learned by the text encoder, which further improves the flexibility and usability of the knowledge enhancement. Besides, our method allows the text context and knowledge context to interact and mutually boost each other. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_31", "text": " Knowledge injection. As shown in Fig. 4, the knowledge injector consists of multiple transformer layers, the same as the text encoder. Given a query-passage pair $(\mathbf{q},\mathbf{p})$, we first find the entities in $\mathbf{G}_{\mathbf{q},\mathbf{p}}$ that can be enhanced by external knowledge. For these entities, we define $\mathbf{E}$ as the knowledge embeddings to be applied in the knowledge injection layers, where $\mathbf{E}$ is initialized by TransE embeddings extracted from the pruned global graph $\overline{\mathcal{G}}$. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_32", "text": " Next, we align each entity with the first token of the corresponding phrase in the selected key sentence (Zhang et al., 2019), and define the knowledge injection process as (11) $\hat{\mathbf{H}}_{l}=\operatorname{MultiHeadAttention}(\mathbf{O}_{l-1})$, (12) $\mathbf{F}_{l}=\sigma\left((\hat{\mathbf{H}}_{l}\mathbf{W}_{l}^{1}+b_{l}^{1})\oplus\Lambda(\mathbf{E}\mathbf{W}_{l}^{3}+b_{l}^{3})\right)$, (13) $\mathbf{O}_{l}=\mathbf{F}_{l}\mathbf{W}^{2}_{l}+b^{2}_{l}$. In Eq. (12), $\oplus$ means element-wise addition and $\Lambda(\cdot)$ represents the alignment function that maps the entities to the corresponding positions of the tokens. By doing this, the external knowledge $\mathbf{E}$ is integrated into the output $\mathbf{O}_{l}$ of the knowledge injection layer. The final relevance score of this query-passage pair is defined as (14) $f(\mathbf{q},\mathbf{p}\,|\,\mathcal{G})=\sigma\left(\mathbf{O}_{\textrm{M}}^{\textrm{(CLS)}}\mathbf{W}^{4}+b^{4}\right)$. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_33", "text": " Knowledge propagation via meta-graph. It is worth noting that the above-defined knowledge injection process only leverages knowledge embeddings learned by TransE on the global graph $\overline{\mathcal{G}}$. In particular, it does not consider the knowledge that bridges the semantics between query and passage.
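A simplified PyTorch sketch of the injection step in Eqs. (11)-(13): projected entity embeddings are scattered onto the first tokens of their aligned phrases (the role of $\Lambda$) and added element-wise to the token features before the second FFN projection. The hidden sizes, the GELU activation, and the alignment-index format are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class KnowledgeInjectionFFN(nn.Module):
    """One knowledge-injection FFN block, roughly following Eqs. (11)-(13)."""
    def __init__(self, hidden=768, ffn=3072, ent_dim=100):
        super().__init__()
        self.w1 = nn.Linear(hidden, ffn)   # W^1, b^1: token branch
        self.w2 = nn.Linear(ffn, hidden)   # W^2, b^2: output projection
        self.w3 = nn.Linear(ent_dim, ffn)  # W^3, b^3: entity branch
        self.act = nn.GELU()               # sigma (activation choice is an assumption)

    def forward(self, h_hat, ent_emb, align_idx):
        # h_hat: (B, L, hidden) multi-head attention output
        # ent_emb: (B, N, ent_dim) TransE / GMN entity embeddings
        # align_idx: (B, N) long tensor, index of each entity's first phrase token
        tok = self.w1(h_hat)
        ent = self.w3(ent_emb)
        fused = tok.clone()
        # element-wise addition at the aligned token positions (Eq. (12))
        fused.scatter_add_(1, align_idx.unsqueeze(-1).expand_as(ent), ent)
        return self.w2(self.act(fused))    # Eq. (13)
```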
To this end, we introduce a Graph Meta Network (GMN) module that refines knowledge with the constructed meta-graph $\mathbf{G}_{\mathbf{q},\mathbf{p}}$. The multi-hop paths of $\mathbf{G}_{\mathbf{q},\mathbf{p}}$ allow the knowledge to be propagated between query and passage, which can enhance the relevance signal captured by the model, and thus alleviate the semantic gap. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_34", "text": " More specifically, each knowledge injection layer has a multi-layer GMN (as shown in Fig. 4) to propagate knowledge on $\mathbf{G}_{\mathbf{q},\mathbf{p}}$. First, the input of the GMN is formulated from the fused feature $\mathbf{F}_{l}$ as (15) $\hat{\mathbf{E}}_{l}^{(0)}=\Gamma(\mathbf{F}_{l}\mathbf{W}^{5}_{l}+b^{5}_{l})$, where $\Gamma$ represents the slice operation that extracts the fused information of the target entities in $\mathbf{G}_{\mathbf{q},\mathbf{p}}=\{\mathcal{E}_{\mathbf{q},\mathbf{p}},\mathcal{R}_{\mathbf{q},\mathbf{p}}\}$, and thus $\hat{\mathbf{E}}_{l}^{(0)}$ consists of the fused entity representations $\hat{\mathbf{E}}^{(0)}_{e_{1}},\hat{\mathbf{E}}^{(0)}_{e_{2}},...,\hat{\mathbf{E}}^{(0)}_{e_{\Psi}}$, i.e., $\Psi=|\mathcal{E}_{\mathbf{q},\mathbf{p}}|$. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_35", "text": " Next, in the $k$-th layer of the GMN, an entity embedding $e_{h}$ is updated via an attentive aggregation from its neighbors $\mathcal{N}(e_{h})$ as (16) $\hat{\mathbf{E}}_{e_{h}}^{(k)}=\hat{\mathbf{E}}_{e_{h}}^{(k-1)}+\sum_{e_{t}\in\mathcal{N}(e_{h})}\mathbf{a}_{ht}^{(k)}\hat{\mathbf{E}}_{e_{t}}^{(k-1)}$. Here, $\mathbf{a}_{ht}^{(k)}$ is the attention value, which can be defined as (17) $\mathbf{a}_{ht}^{(k)}=\frac{\exp(\mathbf{m}_{ht}^{(k)})}{\sum_{e_{n}\in\mathcal{N}(e_{h})}\exp(\mathbf{m}_{hn}^{(k)})}$, and the logit $\mathbf{m}_{ht}^{(k)}$ is computed as (18) $\mathbf{m}_{ht}^{(k)}=\sigma\left(\alpha\left(\hat{\mathbf{E}}_{e_{h}}^{(k-1)}\|\hat{\mathbf{E}}_{e_{t}}^{(k-1)}\right)+\beta\left(\hat{\mathbf{E}}_{e_{h}}^{(k-1)}\|\hat{\mathbf{E}}_{r_{ht}}^{(k-1)}\right)+\gamma\left(\hat{\mathbf{E}}_{r_{ht}}^{(k-1)}\|\hat{\mathbf{E}}_{e_{t}}^{(k-1)}\right)\right)$.
In Eq. (18), the functions $\alpha(\cdot)$, $\beta(\cdot)$ and $\gamma(\cdot)$ are fully-connected layers, and $\cdot\|\cdot$ represents the concatenation operation. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_36", "text": " By applying a $K$-layer GMN in each layer of the knowledge injector, the output entity representation $\hat{\mathbf{E}}_{e_{h}}^{(K)}$ can aggregate knowledge from all of the $K$-hop neighbors. As described in Section 4.1.2, all the paths of $\mathbf{G}_{\mathbf{q},\mathbf{p}}$ between $\mathbf{q}$ and $\mathbf{p}$ are within $K$ hops, so the GMN module can attentively propagate knowledge along the paths from entities in $\mathbf{p}$ to those in $\mathbf{q}$, and vice versa, which enriches the semantics of the entities and benefits the relevance modeling. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_37", "text": " Subsequently, the updated entity embeddings can be used as the knowledge to be injected in the next layer, i.e., $\mathbf{E}:=\hat{\mathbf{E}}^{(K)}$. In other words, we can re-define Eq. (12) as (19) $\mathbf{F}_{l}=\sigma\left((\hat{\mathbf{H}}_{l}\mathbf{W}_{l}^{1}+b_{l}^{1})\oplus\Lambda(\mathbf{E}_{l}\mathbf{W}_{l}^{3}+b_{l}^{3})\right)$, where $\mathbf{E}_{l}$ is defined as (20) $\mathbf{E}_{l}=\begin{cases}\hat{\mathbf{E}}_{l-1}^{(K)},&l\in(2,M)\\ \text{TransE embeddings},&l=1\end{cases}$ ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_38", "text": " Knowledge-enhanced pre-training. Following previous studies (Nogueira et al., 2019a; Yan et al., 2021; Kim and Ko, 2021), we conduct continual pre-training on the MSMARCO corpus to warm up the parameters of the GMN module. We apply Masked Language Model (MLM) (Devlin et al., 2018) and Sentence Relation Prediction (SRP) (Wang et al., 2019) as the pre-training tasks in KERM. Compared to conventional Next Sentence Prediction (NSP) (Devlin et al., 2018), the task of SRP is to predict whether a given sentence is the next sentence of, the previous sentence of, or unrelated to another sentence. To incorporate knowledge during the pre-training stage, we construct a meta-graph for each sentence pair, and apply the knowledge aggregation process as introduced above. The pre-training loss is defined as $\mathcal{L}_{p}=\mathcal{L}_{MLM}+\mathcal{L}_{SRP}$. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_39", "text": " Knowledge-enhanced fine-tuning.
We adopt a cross-entropy loss to fine-tune KERM: (21) $\mathcal{L}_{f}=-\frac{1}{|\mathcal{Q}|}\sum_{q\in\mathcal{Q}}\log\frac{\exp(f(\mathbf{q},\mathbf{p}^{+}\,|\,\mathcal{G}))}{\exp(f(\mathbf{q},\mathbf{p}^{+}\,|\,\mathcal{G}))+\sum_{p^{-}}\exp(f(\mathbf{q},\mathbf{p}^{-}\,|\,\mathcal{G}))}$, where $|\mathcal{Q}|$ is the number of queries in the training set, and $p^{+}$ and $p^{-}$ denote the positive passage and a negative passage in $\mathbb{P}$ for the current query $\mathbf{q}$, respectively. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_40", "text": " We use a large-scale publicly available corpus, i.e., the MSMARCO-Passage collection (Nguyen et al., 2016), as our passage collection. This collection contains approximately 8.8 million passages extracted from 3.2 million web documents covering multiple fields. We train our model on the MSMARCO-TRAIN query set of 502,939 queries and evaluate KERM on three query sets. Table 1 provides the detailed information of these query sets. The first test set is MSMARCO-DEV, which includes 6,980 sparsely-judged queries mixed from multiple domains. Each query has an average of 1.1 relevant passages with binary relevance labels. The second test set is TREC 2019 DL (Craswell et al., 2020), which contains 43 densely-judged queries with fine-grained relevance labels, i.e., irrelevant, relevant, highly relevant and perfectly relevant. On average, a query has 95.4 relevant passages, and most queries have more than 10 relevant passages. With fine-grained labels and multiple relevant passages per query, TREC 2019 DL can be used to reflect the fine-grained ranking performance between relevant passages. To evaluate KERM on specific domains, we further introduce the Ohsumed query set (http://disi.unitn.it/moschitti/corpora.htm), which contains 63 queries in the bio-medical domain. The collection of Ohsumed is constructed from the first 20,000 passages in the Mesh categories of the year 1991. Following previous work (Joachims, 1998), the test collection including 10,000 passages is utilized for performance comparison on the Ohsumed query set. Each query has an average of 50.9 relevant passages with three graded relevance labels. In Section 6.4, we demonstrate that the external knowledge constructed by KERM for such a domain can be more useful. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_41", "text": " We use ConceptNet (Speer et al., 2017), a general knowledge graph, as our external knowledge base $\mathcal{G}$. Following KagNet (Lin et al., 2019), we merge relation types to increase graph density and construct a multi-relational graph with 17 relation types, including atlocation, causes, createdby, etc.
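The fine-tuning objective in Eq. (21) above reduces to a softmax cross-entropy over one positive and several negative passages per query. A minimal sketch, assuming the ranking scores are already computed and the positive passage's score sits in column 0 (a layout chosen here for illustration):

```python
import torch
import torch.nn.functional as F

def kerm_finetune_loss(scores):
    # scores: (num_queries, 1 + num_negatives) ranking scores f(q, p | G),
    # with the positive passage's score in column 0
    log_probs = F.log_softmax(scores, dim=-1)
    return -log_probs[:, 0].mean()   # Eq. (21)

# usage: loss = kerm_finetune_loss(torch.randn(8, 20))  # e.g., 1 positive + 19 negatives
```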
", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_42", "text": " We include several PLMs based re-rankers in our evaluation, including the state-of-the-art: • monoBERT (Nogueira and Cho, 2019): The first study that re-purposes BERT as a re-ranker and achieves state-of-the-art results. • duoBERT (Nogueira et al., 2019a): This work proposes a pairwise classification approach using BERT, which obtains the ability to be more sensitive to semantics through greater computation. • UED (Yan et al., 2021): A unified pre-training framework that jointly refines re-ranker and query generator. For a fair comparison, we only use the re-ranker in UED without passage expansion. • LM Distill+Fine-Tuning (LDFT) (Gao et al., 2020): A variety of distillation methods are compared in this paper. The experimental results indicate that a proper distillation procedure (i.e. first distill the language model, and then fine-tune on the ranking task) could produce a faster re-ranker with better ranking performance. • CAKD (Hofstätter et al., 2020): This work proposes a cross-architecture knowledge distillation procedure with Margin-MSE loss, which can distill knowledge from multiple teachers. • RocketQAv1 (Qu et al., 2021): This work mainly focuses on the training of PLM based retriever, where the re-ranker is an intermediate product of its training process. • RocketQAv2 (Ren et al., 2021): Based on RocketQAv1, this work proposes a novel approach that jointly trains the PLM based retriever and re-ranker. To compare the performance of different methods, we resort to two ranking metrics. For MSMARCO-DEV, We adopt Mean Reciprocal Rank (i.e., MRR@10). For TREC 2019 DL, we use Mean Average Precision, i.e., MAP@10 and MAP@30. For Ohsumed, both Mean Reciprocal Rank and Mean Average Precision (i.e., MRR@10 and MAP@10) are employed for comprehensive performance analysis in queries requiring in-depth domain knowledge. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_43", "text": " We use the traditional sparse retriever BM25 (Yang et al., 2017) as our first stage method. All experiments are conducted under the same BM25 setting with 1000 retrieved candidates. We conduct experiments with the deep learning framework PaddlePaddle (Ma et al., 2019) on up to 4 NVIDIA Tesla A100 GPUs (with 40G RAM). For the GMN module, we use Paddle Graph Learning (PGL) 222https://github.com/PaddlePaddle/PGL, an efficient and flexible graph learning framework based on PaddlePaddle. For training, we used the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 1e-5 for text encoder and 1e-4 for knowledge injector. The model is trained up to 5 epochs with a batch size of 640 and 240 for base and large models respectively. In our experiments, the PLM small, base and large models have 6, 12 and 24 Transformer layers respectively. The text encoder has 9 layers and 21 layers for base and large model respectively, and the knowledge injector both has 3 layers in our experiment. The dropout rates are set to 0.1. The ratio of the positive to the hard negative is set to 1:19. All transformer layers in KERM’s backbone are initialized from ERNIE-2.0 base (Sun et al., 2020b), which is a BERT-like model pre-trained with a continual pre-training framework on multiple tasks. 
We perform knowledge-enhanced pre-training on the MARCO passage collection to warm up the parameters in the knowledge injector, which runs for 60,000 iterations with a batch size of 256. For a fair comparison, the same pre-training without knowledge enhancement is also conducted on the $\textrm{ERNIE}_{\textrm{base}}$ re-ranker and on all models in the ablation studies. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_44", "text": " Here we compare the ranking performances of KERM and other PLM based re-rankers on the first two widely used query sets. Moreover, ablation studies for each component of KERM are also explored. All experimental results were reported under the same BM25 setting. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_45", "text": " Table 2 shows the ranking performance of KERM and baselines on MSMARCO-DEV and TREC 2019 DL. In the second column, model settings are displayed, including the PLMs used in the models, whether distillation is enabled, and the computing resources required for model training. From Table 2, we observe the following phenomena. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_46", "text": " (1) Compared with the best SOTA methods on the sparsely-judged MARCO-DEV query set, KERM outperforms all other baseline models except RocketQAv2. It utilizes a well-trained cross-encoder $\textrm{ERNIE}_{\textrm{large}}$ in RocketQAv1 to remove the predicted negatives with low confidence scores and include the predicted positives with high confidence scores. This can be regarded as a distillation. Meanwhile, RocketQAv2 achieves promising performance through a very large batch size on enormous computational resources, which is hardly comparable to our technique that only requires up to 4 GPUs. In addition to RocketQAv2, both $\textrm{KERM}_{\textrm{base}}$ and $\textrm{KERM}_{\textrm{large}}$ exceed strong baseline models, including duoBERT with its sophisticated multiple re-ranking stages and CAKD distilled from multiple large models. This demonstrates the effectiveness of external knowledge injection. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_47", "text": " (2) Among both kinds of baselines, $\textrm{KERM}_{\textrm{large}}$ achieves the best performance on the densely-judged TREC 2019 DL query set. MAP@10 and MAP@30 measure the quality of the ranking result over related passages. Baseline models with larger networks usually perform better in MAP, which indicates that a complex structure helps models capture the fine-grained differences between related passages. With the well-designed GMN module and the introduced reliable external knowledge, $\textrm{KERM}_{\textrm{base}}$ achieves the best performance on MAP@10 even compared to various large baseline models. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_48", "text": " (3) Distilled models typically perform better at putting the relevant passage at top positions, but the subtle differences between relevant passages cannot be captured effectively through relatively small distilled models.
On the MARCO-DEV query set, LDFT (Gao et al., 2020) performs better than duoBERT on MRR@10, while the former's model size is much smaller than the latter's. This shows that distillation plays a great role in performance improvement. Because LDFT (Gao et al., 2020) neither releases code nor reports MAP in the original paper, we omit its result on the TREC 2019 DL query set. Additionally, models that perform well on MAP do not lead in MRR and vice versa, demonstrating that the two metrics measure different aspects of ranking quality. KERM shows the most stable performance on both metrics among all baseline models. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_49", "text": " Compared with the $\textrm{ERNIE}_{\textrm{base}}$ we trained, $\textrm{KERM}_{\textrm{base}}$ shows a significant improvement on both query sets. This indicates that the explicit introduction of external knowledge can alleviate the semantic gap and heterogeneity between query and passage, and improve the semantic matching performance. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_50", "text": " The knowledge injector module, including the knowledge injection process and the propagation process realized as the Graph Meta Network (GMN), is mainly responsible for the interaction between text and knowledge graph. To explore their roles in the ranking performance, we remove the knowledge injection process, the aggregation process and the whole module separately, and keep other units unchanged in KERM. Experimental results of the three base models are shown in Table 3. KERM without the knowledge injector module degrades to vanilla ERNIE. KERM without the knowledge propagation process is formally equivalent to ERNIE(THU) (Zhang et al., 2019). KERM without the knowledge injection process takes the text of the query-passage pair and the meta graph as separate inputs, and then concatenates the two outputs and feeds them into a linear layer by redefining Eq.(19) and Eq.(14) respectively as (22) $\mathbf{F}_{l}=\begin{cases}\sigma\left(\hat{\mathbf{H}}_{l}\mathbf{W}_{l}^{1}+b_{l}^{1}\right),&\textrm{for Eq.}(13)\\ \sigma\left(\mathbf{E}_{l}\mathbf{W}_{l}^{3}+b_{l}^{3}\right),&\textrm{for Eq.}(15)\end{cases}$ and (23) $f(\mathbf{q},\mathbf{p}\,|\,\mathcal{G})=\sigma\left(\left(\mathbf{O}_{\textrm{M}}^{\textrm{(CLS)}}\|\mathbf{E}_{\textrm{M}}^{(K)}\right)\mathbf{W}^{6}+b^{6}\right)$. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_51", "text": " Table 3 shows the performance comparisons between different settings of the knowledge injector; the differences are statistically significant. From this table, we can observe the following phenomena. (1) MRR@10 of KERM without the interaction and propagation processes decreases by at least 1%, respectively.
This indicates that both the knowledge interaction and propagation processes play an indispensable role in ranking performance. (2) The performance of KERM without propagation is comparable to vanilla ERNIE. Not only the query and passage entities, but also their multi-hop neighbors are essential for the ranking performance. (3) MRR@10 of KERM without knowledge interaction drops the most. This suggests that the simple and straightforward way of aggregating the knowledge graph with text does not work in the passage re-ranking scenario. The text and knowledge graph need to mutually refine each other through interaction, which will be further analyzed in detail as follows. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_52", "text": " To further explore the influence of text-knowledge interaction on the ranking performance, we compare the ranking performances of KERM with different numbers of knowledge injector layers. All experiments in Table 4 are conducted with the same experimental settings except the number of knowledge injector layers (denoted as $M$). Note that in our setting, the number of text encoder layers $N$ plus $M$ is always 12, i.e. the number of layers in $\textrm{ERNIE}_{\textrm{base}}$. No knowledge injector layer ($M=0$) represents the vanilla $\textrm{ERNIE}_{\textrm{base}}$ re-ranker without explicit knowledge enhancement. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_53", "text": " With the increase of $M$ in Table 4, the ranking performance does not improve linearly. Instead, the performance peaks at $M=3$ and then falls (statistically significant). This trend contradicts our intuition that the more injector layers, the deeper the interaction between text and knowledge, and the more performance improvement is expected. The possible reason is that the knowledge injector layers make the pretrained parameters from $\textrm{ERNIE}_{\textrm{base}}$ not reusable, which means the implicit knowledge learned from large-scale pre-training is not applicable to these layers. Hence the choice of the number of knowledge injector layers is determined by the trade-off between implicit and explicit knowledge. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_54", "text": " Knowledge graph distillation is performed from both global and local perspectives. To explore their roles in the ranking performance, we remove the graph pruning globally and the sentence selection locally respectively, keep other settings unchanged, and derive KERM without graph pruning and without sentence selection respectively. From the results on TREC 2019 DL in Table 5, the observations are listed below. (1) Without global graph pruning, MRR@10 and the average edge score, calculated through Eq.(3), decrease the most, and the time efficiency drops slightly. This indicates that the original knowledge graph contains noisy data that affect performance. (2) Without sentence selection, the time efficiency drops the most and the average edge score decreases slightly, which proves that not every sentence in a passage has a positive effect on semantic matching. Overall, knowledge graph distillation is significant to KERM.
", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_55", "text": " We further investigate the ranking effect of KERM on a specific domain. Specifically, we conduct experiments on OHSUMED from bio-medical field, and a bio-medical query subset of MSMARCO-DEV including 1,11011101,110 queries. This query subset is derived from the mixed domain query set of MSMARCO-DEV by k-means clustering method (Hartigan and Wong, 1979), while the remaining subset with 5,87058705,870 queires is denoted as the general domain subset. Performance comparisons between KERM and BM25, ERNIE are shown in Table 6. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_56", "text": " Results are obtained from Table 6. (1) Poor ranking performances of all models on bio-medical domain indicates that it is more challenging in the data scarcity scenario, where textual data is not covered widely in the PLMs’ pretraining datasets. (2) Compared with ERNIE, KERM has a higher relative improvement in bio-medical domain than general domain. This demonstrates that the incorporation of knowledge graph is more useful for a data scarcity domain. To verify this idea, we compare the size of knowledge meta graph used for different domains as follows. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_57", "text": " We quantify the knowledge desirability as the size of average knowledge meta graph used in one domain. Specifically, we use the average number of edges as the size and average edge score calculated through Eq.(3) as the reliability of the knowledge meta graph. From Table 7, we can see that the meta-graph constructed on Bio-Medical domains is better in terms of quantity and quality. It indicates that the external knowledge found on professional domains contains more information. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_58", "text": " The main goal of this paper is to reasonably introduce external knowledge graph to PLMs for passage re-ranking. We first design a novel knowledge meta graph construction method to distill reliable and query related knowledge from a general and noisy knowledge graph. The knowledge meta graph bridges the semantic gap between each query and passage. Then we propose a knowledge injector layer for mutually updating text and knowledge representations, which transformers word to entity representations for graph meta network, vice versa. Knowledge Enhanced Ranking Model is pretrained with Masked Language Model (MLM) Sentence Relation Prediction (SRP) tasks, and fine-tuned with cross entropy loss function for passage re-ranking task. Experimental results on public benchmark datasets show the effectiveness of the proposed method compared with state-of-the-art baselines without external knowledge due to its first attempt. The role of each module in KERM is also comprehensively analyzed. Since this work was limited to the one-to-one meta-graph of a query-passage pair built online, continued efforts are needed to make knowledge enhancement more efficient for both retrieval and re-ranking stage. 
", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" }, { "id": "2204.11673_all_59", "text": " Despite that the knowledge graph distillation in our method is empirically shown to be effective for the final performance, the implementation of graph pruning and meta-graph construction is still based on simple heuristics. A more promising way of formulating a useful meta-graph is to jointly learn a graph generator with the reranker in an end-to-end fashion, which enables more flexibility. Besides, it is currently infeasible to exploit the external knowledge in the retrieval stage, which needs to exhaustively build massive meta-graphs for a large scale of candidates. A further study could focus on how to use external knowledge in PLM based retriever. ", "title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking" } ]
Which language model is used for vocabulary encoding and why?
CLIP and T5-XXL language models are used for translating noise into image embeddings [1].
[ 1 ]
[ { "id": "2208.12242_all_0", "text": " Can you imagine your own dog traveling around the world, or your favorite bag displayed in the most exclusive showroom in Paris? What about your parrot being the main character of an illustrated storybook? Rendering such imaginary scenes is a challenging task that requires synthesizing instances of specific subjects (e.g., objects, animals) in new contexts such that they naturally and seamlessly blend into the scene. ", "title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation" }, { "id": "2208.12242_all_1", "text": " Recently developed large text-to-image models have shown unprecedented capabilities, by enabling high-quality and diverse synthesis of images based on a text prompt written in natural language (61, 54). One of the main advantages of such models is the strong semantic prior learned from a large collection of image-caption pairs. Such a prior learns, for instance, to bind the word “dog” with various instances of dogs that can appear in different poses and contexts in an image. While the synthesis capabilities of these models are unprecedented, they lack the ability to mimic the appearance of subjects in a given reference set, and synthesize novel renditions of the same subjects in different contexts. The main reason is that the expressiveness of their output domain is limited; even the most detailed textual description of an object may yield instances with different appearances. Furthermore, even models whose text embedding lies in a shared language-vision space cannot accurately reconstruct the appearance of given subjects but only create variations of the image content (Figure 2). ", "title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation" }, { "id": "2208.12242_all_2", "text": " In this work, we present a new approach for “personalization” of text-to-image diffusion models (adapting them to user-specific image generation needs). Our goal is to expand the language-vision dictionary of the model such that it binds new words with specific subjects the user wants to generate. Once the new dictionary is embedded in the model, it can use these words to synthesize novel photorealistic images of the subject, contextualized in different scenes, while preserving their key identifying features. The effect is akin to a “magic photo booth”—once a few images of the subject are taken, the booth generates photos of the subject in different conditions and scenes, as guided by simple and intuitive text prompts (Figure 1). ", "title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation" }, { "id": "2208.12242_all_3", "text": " More formally, given a few images of a subject (∼similar-to\\sim3-5), our objective is to implant the subject into the output domain of the model such that it can be synthesized with a unique identifier. To that end, we propose a technique to represent a given subject with rare token identifiers and fine-tune a pre-trained, diffusion-based text-to-image framework. ", "title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation" }, { "id": "2208.12242_all_4", "text": " We fine-tune the text-to-image model with the input images and text prompts containing a unique identifier followed by the class name of the subject (e.g., “A (V) dog”). The latter enables the model to use its prior knowledge on the subject class while the class-specific instance is bound with the unique identifier. 
In order to prevent language drift (34, 40) that causes the model to associate the class name (e.g., “dog”) with the specific instance, we propose an autogenous, class-specific prior preservation loss, which leverages the semantic prior on the class that is embedded in the model, and encourages it to generate diverse instances of the same class as our subject. ", "title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation" }, { "id": "2208.12242_all_5", "text": " We apply our approach to a myriad of text-based image generation applications including recontextualization of subjects, modification of their properties, original art renditions, and more, paving the way to a new stream of previously unassailable tasks. We highlight the contribution of each component in our method via ablation studies, and compare with alternative baselines and related work. We also conduct a user study to evaluate subject and prompt fidelity in our synthesized images, compared to alternative approaches. ", "title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation" }, { "id": "2208.12242_all_6", "text": " To the best of our knowledge, ours is the first technique that tackles this new challenging problem of subject-driven generation, allowing users, from just a few casually captured images of a subject, synthesize novel renditions of the subject in different contexts while maintaining its distinctive features. ", "title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation" }, { "id": "2208.12242_all_7", "text": " To evaluate this new task, we also construct a new dataset that contains various subjects captured in different contexts, and propose a new evaluation protocol that measures the subject fidelity and prompt fidelity of the generated results. We make our dataset and evaluation protocol publicly available on the project webpage. ", "title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation" }, { "id": "2208.12242_all_8", "text": " Image Composition. Image composition techniques (70, 13, 38) aim to clone a given subject into a new background such that the subject melds into the scene. To consider composition in novel poses, one may apply 3D reconstruction techniques (41, 6, 8, 68, 49) which usually works on rigid objects and require a larger number of views. Some drawbacks include scene integration (lighting, shadows, contact) and the inability to generate novel scenes. In contrast, our approach enable generation of subjects in novel poses and new contexts. ", "title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation" }, { "id": "2208.12242_all_9", "text": " Text-to-Image Editing and Synthesis. Text-driven image manipulation has recently achieved significant progress using GANs  (22, 9, 28, 29, 30) combined with image-text representations such as CLIP , yielding realistic manipulations using text (48, 21, 71, 2, 7, 43). These methods work well on structured scenarios (e.g. human face editing) and can struggle over diverse datasets where subjects are varied. Crowson et al. use VQ-GAN and train over more diverse data to alleviate this concern. Other works (4, 31) exploit the recent diffusion models (25, 63, 65, 25, 64, 58, 45, 66, 60, 62), which achieve state-of-the-art generation quality over highly diverse datasets, often surpassing GANs . While most works that require only text are limited to global editing (14, 33), Bar-Tal et al. 
 proposed a text-based localized editing technique without using masks, showing impressive results. While most of these editing approaches allow modification of global properties or local editing of a given image, none enables generating novel renditions of a given subject in new contexts. ", "title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation" }, { "id": "2208.12242_all_10", "text": " There also exists work on text-to-image synthesis (16, 24, 67, 35, 36, 50, 51, 55, 74, 14, 19, 58, 27). Recent large text-to-image models such as Imagen , DALL-E2 , Parti , CogView2  and Stable Diffusion  demonstrated unprecedented semantic generation. These models do not provide fine-grained control over a generated image and use text guidance only. Specifically, it is challenging or impossible to preserve the identity of a subject consistently across synthesized images. ", "title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation" }, { "id": "2208.12242_all_11", "text": " Controllable Generative Models. There are various approaches to control generative models, where some of them might prove to be viable directions for subject-driven prompt-guided image synthesis. Liu et al.  propose a diffusion-based technique allowing for image variations guided by reference image or text. To overcome subject modification, several works (44, 3) assume a user-provided mask to restrict the modified area. Inversion (12, 15, 54) can be used to preserve a subject while modifying context. Prompt-to-prompt  allows for local and global editing without an input mask. These methods fall short of identity-preserving novel sample generation of a subject. ", "title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation" }, { "id": "2208.12242_all_12", "text": " In the context of GANs, Pivotal Tuning  allows for real image editing by finetuning the model with an inverted latent code anchor, and Nitzan et al.  extended this work to GAN finetuning on faces to train a personalized prior, which requires around 100 images and are limited to the face domain. Casanova et al.  propose an instance conditioned GAN that can generate variations of an instance, although it can struggle with unique subjects and does not preserve all subject details. ", "title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation" }, { "id": "2208.12242_all_13", "text": " Finally, the concurrent work of Gal et al. proposes a method to represent visual concepts, like an object or a style, through new tokens in the embedding space of a frozen text-to-image model, resulting in small personalized token embeddings. While this method is limited by the expressiveness of the frozen diffusion model, our fine-tuning approach enables us to embed the subject within the model’s output domain, resulting in the generation of novel images of the subject which preserve its key visual features. ", "title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation" }, { "id": "2208.12242_all_14", "text": " Given only a few (typically 3-5) casually captured images of a specific subject, without any textual description, our objective is to generate new images of the subject with high detail fidelity and with variations guided by text prompts. 
Example variations include changing the subject location, changing subject properties such as color or shape, modifying the subject’s pose, viewpoint, and other semantic modifications. We do not impose any restrictions on input image capture settings and the subject image can have varying contexts. We next provide some background on text-to-image diffusion models (Sec. 3.1), then present our fine-tuning technique to bind a unique identifier with a subject described in a few images (Sec. 3.2), and finally propose a class-specific prior-preservation loss that enables us to overcome language drift in our fine-tuned model (Sec. 3.3). ", "title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation" }, { "id": "2208.12242_all_15", "text": " Diffusion models are probabilistic generative models that are trained to learn a data distribution by the gradual denoising of a variable sampled from a Gaussian distribution. Specifically, we are interested in a pre-trained text-to-image diffusion model $\hat{\mathbf{x}}_{\theta}$ that, given an initial noise map ${\bm{\epsilon}}\sim\mathcal{N}(\mathbf{0},\mathbf{I})$ and a conditioning vector $\mathbf{c}=\Gamma(\mathbf{P})$ generated using a text encoder $\Gamma$ and a text prompt $\mathbf{P}$, generates an image $\mathbf{x}_{\text{gen}}=\hat{\mathbf{x}}_{\theta}({\bm{\epsilon}},\mathbf{c})$. They are trained using a squared error loss to denoise a variably-noised image or latent code $\mathbf{z}_{t}\coloneqq\alpha_{t}\mathbf{x}+\sigma_{t}{\bm{\epsilon}}$ as follows: (1) $\mathbb{E}_{\mathbf{x},\mathbf{c},{\bm{\epsilon}},t}\left(w_{t}\|\hat{\mathbf{x}}_{\theta}(\alpha_{t}\mathbf{x}+\sigma_{t}{\bm{\epsilon}},\mathbf{c})-\mathbf{x}\|^{2}_{2}\right)$, where $\mathbf{x}$ is the ground-truth image, $\mathbf{c}$ is a conditioning vector (e.g., obtained from a text prompt), and $\alpha_{t},\sigma_{t},w_{t}$ are terms that control the noise schedule and sample quality, and are functions of the diffusion process time $t\sim\mathcal{U}([0,1])$. A more detailed description is given in the supplementary material. ", "title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation" }, { "id": "2208.12242_all_16", "text": " Our first task is to implant the subject instance into the output domain of the model such that we can query the model for varied novel images of the subject. One natural idea is to fine-tune the model using the few-shot dataset of the subject. Care must be taken when fine-tuning generative models such as GANs in a few-shot scenario, as it can cause overfitting and mode-collapse, as well as not capturing the target distribution sufficiently well. There has been research on techniques to avoid these pitfalls (56, 47, 37, 42, 69), although, in contrast to our work, this line of work primarily seeks to generate images that resemble the target distribution but has no requirement of subject preservation.
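A schematic PyTorch sketch of the denoising objective in Eq. (1) above. The model interface (predicting the clean image from the noised input, conditioning vector, and timestep), the discretized noise schedule, and all tensor shapes are simplifying assumptions, not the paper's implementation.

```python
import torch

def diffusion_loss(model, x, c, alphas, sigmas, weights):
    """One training step of Eq. (1): denoise a variably-noised image.

    x: (B, C, H, W) ground-truth images; c: (B, D) conditioning vectors;
    alphas, sigmas, weights: 1-D tensors indexed by discretized timestep."""
    b = x.shape[0]
    t = torch.randint(0, len(alphas), (b,), device=x.device)   # t sampled uniformly
    a = alphas[t].view(b, 1, 1, 1)
    s = sigmas[t].view(b, 1, 1, 1)
    eps = torch.randn_like(x)
    z_t = a * x + s * eps                        # noised input z_t
    x_hat = model(z_t, c, t)                     # model predicts the clean image
    w = weights[t].view(b, 1, 1, 1)
    return (w * (x_hat - x) ** 2).mean()
```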
With regard to these pitfalls, we observe the peculiar finding that, given a careful fine-tuning setup using the diffusion loss from Eq. 1, large text-to-image diffusion models seem to excel at integrating new information into their domain without forgetting the prior or overfitting to a small set of training images. ", "title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation" }, { "id": "2208.12242_all_17", "text": " Our goal is to “implant” a new (unique identifier, subject) pair into the diffusion model’s “dictionary”. In order to bypass the overhead of writing detailed image descriptions for a given image set, we opt for a simpler approach and label all input images of the subject “a (identifier) (class noun)”, where (identifier) is a unique identifier linked to the subject and (class noun) is a coarse class descriptor of the subject (e.g. cat, dog, watch, etc.). The class descriptor can be provided by the user or obtained using a classifier. We use a class descriptor in the sentence in order to tether the prior of the class to our unique subject, and find that using a wrong class descriptor, or no class descriptor, increases training time and language drift while decreasing performance. In essence, we seek to leverage the model’s prior of the specific class and entangle it with the embedding of our subject’s unique identifier so we can leverage the visual prior to generate new poses and articulations of the subject in different contexts. ", "title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation" }, { "id": "2208.12242_all_18", "text": " We generally find existing English words (e.g. “unique”, “special”) suboptimal since the model has to learn to disentangle them from their original meaning and to re-entangle them to reference our subject. This motivates the need for an identifier that has a weak prior in both the language model and the diffusion model. A hazardous way of doing this is to select random characters in the English language and concatenate them to generate a rare identifier (e.g. “xxy5syt00”). In reality, the tokenizer might tokenize each letter separately, and the prior for the diffusion model is strong for these letters. We often find that these tokens incur similar weaknesses as using common English words. Our approach is to find rare tokens in the vocabulary, and then invert these tokens into text space, in order to minimize the probability of the identifier having a strong prior. We perform a rare-token lookup in the vocabulary and obtain a sequence of rare token identifiers $f(\hat{\mathbf{V}})$, where $f$ is a tokenizer, a function that maps character sequences to tokens, and $\hat{\mathbf{V}}$ is the decoded text stemming from the tokens $f(\hat{\mathbf{V}})$. The sequence can be of variable length $k$, and we find that relatively short sequences of $k=\{1,...,3\}$ work well. Then, by inverting the vocabulary using the de-tokenizer on $f(\hat{\mathbf{V}})$ we obtain a sequence of characters that define our unique identifier $\hat{\mathbf{V}}$. For Imagen, we find that uniform random sampling of tokens that correspond to 3 or fewer Unicode characters (without spaces), using tokens in the T5-XXL tokenizer range of $\{5000,...,10000\}$, works well.
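A rough sketch of the rare-identifier heuristic described above, using the Hugging Face T5 tokenizer as a stand-in vocabulary (the paper uses the T5-XXL tokenizer inside Imagen); the function name, the "t5-small" checkpoint, and the exact filtering details are assumptions for illustration only.

```python
import random
from transformers import T5Tokenizer

def sample_rare_identifier(k=1, low=5000, high=10000):
    """Sample k rare token ids and de-tokenize them into a unique identifier string."""
    tok = T5Tokenizer.from_pretrained("t5-small")  # stand-in for the T5-XXL vocabulary
    candidates = []
    for tid in range(low, min(high, tok.vocab_size)):
        piece = tok.decode([tid]).strip()
        # keep short pieces (<= 3 characters, no spaces), mirroring the heuristic above
        if 0 < len(piece) <= 3 and " " not in piece:
            candidates.append(tid)
    chosen = random.sample(candidates, k)
    return tok.decode(chosen)  # inverted back to text: the unique identifier V-hat

# usage (hypothetical): identifier = sample_rare_identifier(k=1)
```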
", "title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation" }, { "id": "2208.12242_all_19", "text": " In our experience, the best results for maximum subject fidelity are achieved by fine-tuning all layers of the model. This includes fine-tuning layers that are conditioned on the text embeddings, which gives rise to the problem of language drift. Language drift has been an observed problem in language models (34, 40), where a model that is pre-trained on a large text corpus and later fine-tuned for a specific task progressively loses syntactic and semantic knowledge of the language. To the best of our knowledge, we are the first to find a similar phenomenon affecting diffusion models, where to model slowly forgets how to generate subjects of the same class as the target subject. ", "title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation" }, { "id": "2208.12242_all_20", "text": " Another problem is the possibility of reduced output diversity. Text-to-image diffusion models naturally posses high amounts of output diversity. When fine-tuning on a small set of images we would like to be able to generate the subject in novel viewpoints, poses and articulations. Yet, there is a risk of reducing the amount of variability in the output poses and views of the subject (e.g. snapping to the few-shot views). We observe that this is often the case, especially when the model is trained for too long. ", "title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation" }, { "id": "2208.12242_all_21", "text": " To mitigate the two aforementioned issues, we propose an autogenous class-specific prior preservation loss that encourages diversity and counters language drift. In essence, our method is to supervise the model with its own generated samples, in order for it to retain the prior once the few-shot fine-tuning begins. This allows it to generate diverse images of the class prior, as well as retain knowledge about the class prior that it can use in conjunction with knowledge about the subject instance. Specifically, we generate data 𝐱pr=𝐱^​(𝐳t1,𝐜pr)subscript𝐱pr^𝐱subscript𝐳subscript𝑡1subscript𝐜pr\\mathbf{x}_{\\text{pr}}=\\hat{\\mathbf{x}}(\\mathbf{z}_{t_{1}},\\mathbf{c}_{\\text{pr}}) by using the ancestral sampler on the frozen pre-trained diffusion model with random initial noise 𝐳t1∼𝒩​(𝟎,𝐈)similar-tosubscript𝐳subscript𝑡1𝒩0𝐈\\mathbf{z}_{t_{1}}\\sim\\mathcal{N}(\\mathbf{0},\\mathbf{I}) and conditioning vector 𝐜pr≔Γ​(f​(”a (class noun)”))≔subscript𝐜prΓ𝑓”a (class noun)”\\mathbf{c}_{\\text{pr}}\\coloneqq\\Gamma(f(\\text{\"a (class noun)\"})). 
The loss becomes: 𝔼𝐱,𝐜,ϵ,ϵ′,t(wt∥𝐱^θ(αt𝐱+σtϵ,𝐜)−𝐱∥22+λwt′∥𝐱^θ(αt′𝐱pr+σt′ϵ′,𝐜pr)−𝐱pr∥22),subscript𝔼𝐱𝐜bold-italic-ϵsuperscriptbold-italic-ϵ′𝑡delimited-()subscript𝑤𝑡subscriptsuperscriptdelimited-∥∥subscript^𝐱𝜃subscript𝛼𝑡𝐱subscript𝜎𝑡bold-italic-ϵ𝐜𝐱22𝜆subscript𝑤superscript𝑡′subscriptsuperscriptdelimited-∥∥subscript^𝐱𝜃subscript𝛼superscript𝑡′subscript𝐱prsubscript𝜎superscript𝑡′superscriptbold-italic-ϵ′subscript𝐜prsubscript𝐱pr22\\mathbb{E}_{\\mathbf{x},\\mathbf{c},{\\bm{\\epsilon}},{\\bm{\\epsilon}}^{\\prime},t}(w_{t}\\|\\hat{\\mathbf{x}}_{\\theta}(\\alpha_{t}\\mathbf{x}+\\sigma_{t}{\\bm{\\epsilon}},\\mathbf{c})-\\mathbf{x}\\|^{2}_{2}+\\\\ \\lambda w_{t^{\\prime}}\\|\\hat{\\mathbf{x}}_{\\theta}(\\alpha_{t^{\\prime}}\\mathbf{x}_{\\text{pr}}+\\sigma_{t^{\\prime}}{\\bm{\\epsilon}}^{\\prime},\\mathbf{c}_{\\text{pr}})-\\mathbf{x}_{\\text{pr}}\\|^{2}_{2}), (2) where the second term is the prior-preservation term that supervises the model with its own generated images, and λ𝜆\\lambda controls for the relative weight of this term. Figure 3 illustrates the model fine-tuning with the class-generated samples and prior-preservation loss. Despite being simple, we find this prior-preservation loss is effective in encouraging output diversity and in overcoming language-drift. We also find that we can train the model for more iterations without risking overfitting. We find that ∼similar-to\\sim 1000 iterations with λ=1𝜆1\\lambda=1 and learning rate 10−5superscript10510^{-5} for Imagen  and 5×10−65superscript1065\\times 10^{-6} for Stable Diffusion , and with a subject dataset size of 3-5 images is enough to achieve good results. During this process, ∼1000similar-toabsent1000\\sim 1000 “a (class noun)” samples are generated - but less can be used. The training process takes about 5 minutes on one TPUv4 for Imagen, and 5 minutes on a NVIDIA A100 for Stable Diffusion. ", "title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation" }, { "id": "2208.12242_all_22", "text": " In this section, we show experiments and applications. Our method enables a large expanse of text-guided semantic modifications of our subject instances, including recontextualization, modification of subject properties such as material and species, art rendition, and viewpoint modification. Importantly, across all of these modifications, we are able to preserve the unique visual features that give the subject its identity and essence. If the task is recontextualization, then the subject features are unmodified, but appearance (e.g., pose) may change. If the task is a stronger semantic modification, such as crossing between our subject and another species/object, then the key features of the subject are preserved after modification. In this section, we reference the subject’s unique identifier using (V). We include specific Imagen and Stable Diffusion implementation details in the supp. material. ", "title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation" }, { "id": "2208.12242_all_23", "text": " We collected a dataset of 30 subjects, including unique objects and pets such as backpacks, stuffed animals, dogs, cats, sunglasses, cartoons, etc. We separate each subject into two categories: objects and live subjects/pets. 21 of the 30 subjects are objects, and 9 are live subjects/pets. We provide one sample image for each of the subjects in Figure 5. Images for this dataset were collected by the authors or sourced from Unsplash . 
We also collected 25 prompts: 20 recontextualization prompts and 5 property modification prompts for objects; 10 recontextualization, 10 accessorization, and 5 property modification prompts for live subjects/pets. The full list of prompts can be found in the supplementary material. ", "title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation" }, { "id": "2208.12242_all_24", "text": " For the evaluation suite we generate four images per subject and per prompt, totaling 3,000 images. This allows us to robustly measure performances and generalization capabilities of a method. We make our dataset and evaluation protocol publicly available on the project webpage for future use in evaluating subject-driven generation. ", "title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation" }, { "id": "2208.12242_all_25", "text": " One important aspect to evaluate is subject fidelity: the preservation of subject details in generated images. For this, we compute two metrics: CLIP-I and DINO . CLIP-I is the average pairwise cosine similarity between CLIP  embeddings of generated and real images. Although this metric has been used in other work , it is not constructed to distinguish between different subjects that could have highly similar text descriptions (e.g. two different yellow clocks). Our proposed DINO metric is the average pairwise cosine similarity between the ViT-S/16 DINO embeddings of generated and real images. This is our preferred metric, since, by construction and in contrast to supervised networks, DINO is not trained to ignore differences between subjects of the same class. Instead, the self-supervised training objective encourages distinction of unique features of a subject or image. The second important aspect to evaluate is prompt fidelity, measured as the average cosine similarity between prompt and image CLIP embeddings. We denote this as CLIP-T. ", "title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation" }, { "id": "2208.12242_all_26", "text": " We compare our results with Textual Inversion, the recent concurrent work of Gal et al. , using the hyperparameters provided in their work. We find that this work is the only comparable work in the literature that is subject-driven, text-guided and generates novel images. We generate images for DreamBooth using Imagen, DreamBooth using Stable Diffusion and Textual Inversion using Stable Diffusion. We compute DINO and CLIP-I subject fidelity metrics and the CLIP-T prompt fidelity metric. In Table 1 we show sizeable gaps in both subject and prompt fidelity metrics for DreamBooth over Textual Inversion. We find that DreamBooth (Imagen) achieves higher scores for both subject and prompt fidelity than DreamBooth (Stable Diffusion), approaching the upper-bound of subject fidelity for real images. We believe that this is due to the larger expressive power and higher output quality of Imagen. ", "title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation" }, { "id": "2208.12242_all_27", "text": " Further, we compare Textual Inversion (Stable Diffusion) and DreamBooth (Stable Diffusion) by conducting a user study. For subject fidelity, we asked 72 users to answer questionnaires of 25 comparative questions (3 users per questionnaire), totaling 1800 answers. Samples are randomly selected from a large pool. 
Each question shows the set of real images for a subject, and one generated image of that subject by each method (with a random prompt). Users are asked to answer the question: “Which of the two images best reproduces the identity (e.g. item type and details) of the reference item?”, and we include a “Cannot Determine / Both Equally” option. Similarly for prompt fidelity, we ask “Which of the two images is best described by the reference text?”. We average results using majority voting and present them in Table 2. We find an overwhelming preference for DreamBooth for both subject fidelity and prompt fidelity. This shines a light on results in Table 1, where DINO differences of around 0.10.10.1 and CLIP-T differences of 0.050.050.05 are significant in terms of user preference. Finally, we show qualitative comparisons in Figure 4. We observe that DreamBooth better preserves subject identity, and is more faithful to prompts. We show samples of the user study in the supp. material. ", "title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation" }, { "id": "2208.12242_all_28", "text": " We fine-tune Imagen on 15 subjects from our dataset, with and without our proposed prior preservation loss (PPL). The prior preservation loss seeks to combat language drift and preserve the prior. We compute a prior preservation metric (PRES) by computing the average pairwise DINO embeddings between generated images of random subjects of the prior class and real images of our specific subject. The higher this metric, the more similar random subjects of the class are to our specific subject, indicating collapse of the prior. We report results in Table 3 and observe that PPL substantially counteracts language drift and helps retain the ability to generate diverse images of the prior class. Additionally, we compute a diversity metric (DIV) using the average LPIPS  cosine similarity between generated images of same subject with same prompt. We observe that our model trained with PPL achieves higher diversity (with slightly diminished subject fidelity), which can also be observed qualitatively in Figure 6, where our model trained with PPL overfits less to the environment of the reference images and can generate the dog in more diverse poses and articulations. ", "title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation" }, { "id": "2208.12242_all_29", "text": " We finetune Imagen on a subset of our dataset subjects (5 subjects) with no class noun, a randomly sampled incorrect class noun, and the correct class noun. With the correct class noun for our subject, we are able to faithfully fit to the subject, take advantage of the class prior, allowing us to generate our subject in various contexts. When an incorrect class noun (e.g. “can” for a backpack) is used, we run into contention between our subject and and the class prior - sometimes obtaining cylindrical backpacks, or otherwise misshapen subjects. If we train with no class noun, the model does not leverage the class prior, has difficulty learning the subject and converging, and can generate erroneous samples. Subject fidelity results are shown in Table 4, with substantially higher subject fidelity for our proposed approach. 
", "title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation" }, { "id": "2208.12242_all_30", "text": " We can generate novel images for a specific subject in different contexts (Figure 7) with descriptive prompts (“a (V) (class noun) (context description)”). Importantly, we are able to generate the subject in new poses and articulations, with previously unseen scene structure and realistic integration of the subject in the scene (e.g. contact, shadows, reflections). ", "title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation" }, { "id": "2208.12242_all_31", "text": " Given a prompt “a painting of a (V) (class noun) in the style of (famous painter)” or “a statue of a (V) (class noun) in the style of (famous sculptor)” we are able to generate artistic renditions of our subject. Unlike style transfer, where the source structure is preserved and only the style is transferred, we are able to generate meaningful, novel variations depending on the artistic style, while preserving subject identity. E.g, as shown in Figure 8, “Michelangelo”, we generated a pose that is novel and not seen in the input images. ", "title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation" }, { "id": "2208.12242_all_32", "text": " We are able to render the subject under novel viewpoints. In Figure 8, we generate new images of the input cat (with consistent complex fur patterns) under new viewpoints. We highlight that the model has not seen this specific cat from behind, below, or above - yet it is able to extrapolate knowledge from the class prior to generate these novel views given only 4 frontal images of the subject. ", "title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation" }, { "id": "2208.12242_all_33", "text": " We are able to modify subject properties. For example, we show crosses between a specific Chow Chow dog and different animal species in the bottom row of Figure 8. We prompt the model with sentences of the following structure: “a cross of a (V) dog and a (target species)”. In particular, we can see in this example that the identity of the dog is well preserved even when the species changes - the face of the dog has certain unique features that are well preserved and melded with the target species. Other property modifications are possible, such as material modification (e.g. “a transparent (V) teapot” in Figure 7). Some are harder than others and depend on the prior of the base generation model. ", "title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation" }, { "id": "2208.12242_all_34", "text": " We illustrate some failure models of our method in Figure 9. The first is related to not being able to accurately generate the prompted context. Possible reasons are a weak prior for these contexts, or difficulty in generating both the subject and specified concept together due to low probability of co-occurrence in the training set. The second is context-appearance entanglement, where the appearance of the subject changes due to the prompted context, exemplified in Figure 9 with color changes of the backpack. Third, we also observe overfitting to the real images that happen when the prompt is similar to the original setting in which the subject was seen. 
", "title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation" }, { "id": "2208.12242_all_35", "text": " Other limitations are that some subjects are easier to learn than others (e.g. dogs and cats). Occasionally, with subjects that are rarer, the model is unable to support as many subject variations. Finally, there is also variability in the fidelity of the subject and some generated images might contain hallucinated subject features, depending on the strength of the model prior, and the complexity of the semantic modification. ", "title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation" }, { "id": "2208.12242_all_36", "text": " We presented an approach for synthesizing novel renditions of a subject using a few images of the subject and the guidance of a text prompt. Our key idea is to embed a given subject instance in the output domain of a text-to-image diffusion model by binding the subject to a unique identifier. Remarkably - this fine-tuning process can work given only 3-5 subject images, making the technique particularly accessible. We demonstrated a variety of applications with animals and objects in generated photorealistic scenes, in most cases indistinguishable from real images. ", "title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation" }, { "id": "2208.12242_all_37", "text": " We thank Rinon Gal, Adi Zicher, Ron Mokady, Bill Freeman, Dilip Krishnan, Huiwen Chang and Daniel Cohen-Or for their valuable inputs that helped improve this work, and to Mohammad Norouzi, Chitwan Saharia and William Chan for providing us with their support and the pretrained Imagen models. Finally, a special thanks to David Salesin for his feedback, advice and for his support for the project. ", "title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation" } ]
How do the authors evaluate the performance of the networks in the experiments?
The authors evaluate the networks by measuring training and testing error rates (top-1 and top-5) for the classification tasks and mean average precision (mAP) for object detection [21].
[ 21 ]
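As a quick, hedged illustration of the metrics named in the answer above (not code from the cited paper), the sketch below computes top-1/top-5 classification error from raw logits and a simple classification-style mAP; detection mAP as reported on PASCAL VOC and COCO additionally requires IoU-based matching of predicted boxes to ground truth, which is omitted here. The random inputs at the end are for demonstration only.

```python
import numpy as np
from sklearn.metrics import average_precision_score

def topk_error(logits, labels, k=1):
    """Fraction of samples whose true label is not among the k highest scores."""
    topk = np.argsort(-logits, axis=1)[:, :k]
    return 1.0 - (topk == labels[:, None]).any(axis=1).mean()

def mean_average_precision(scores, onehot_labels):
    """Classification-style mAP: mean of per-class average precision.
    Detection mAP also needs IoU-based box matching, omitted here."""
    aps = [average_precision_score(onehot_labels[:, c], scores[:, c])
           for c in range(scores.shape[1])]
    return float(np.mean(aps))

# toy usage with random predictions
rng = np.random.default_rng(0)
logits = rng.standard_normal((200, 10))
labels = rng.integers(0, 10, size=200)
print("top-1 error:", topk_error(logits, labels, k=1))
print("top-5 error:", topk_error(logits, labels, k=5))
print("mAP:", mean_average_precision(logits, np.eye(10)[labels]))
```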
[ { "id": "1512.03385_all_0", "text": " Deep convolutional neural networks (22, 21) have led to a series of breakthroughs for image classification (21, 50, 40). Deep networks naturally integrate low/mid/high-level features and classifiers in an end-to-end multi-layer fashion, and the “levels” of features can be enriched by the number of stacked layers (depth). Recent evidence (41, 44) reveals that network depth is of crucial importance, and the leading results (41, 44, 13, 16) on the challenging ImageNet dataset all exploit “very deep” models, with a depth of sixteen to thirty . Many other nontrivial visual recognition tasks (8, 12, 7, 32, 27) have also greatly benefited from very deep models. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_1", "text": " Driven by the significance of depth, a question arises: Is learning better networks as easy as stacking more layers? An obstacle to answering this question was the notorious problem of vanishing/exploding gradients (1, 9), which hamper convergence from the beginning. This problem, however, has been largely addressed by normalized initialization (23, 9, 37, 13) and intermediate normalization layers , which enable networks with tens of layers to start converging for stochastic gradient descent (SGD) with backpropagation . ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_2", "text": " When deeper networks are able to start converging, a degradation problem has been exposed: with the network depth increasing, accuracy gets saturated (which might be unsurprising) and then degrades rapidly. Unexpectedly, such degradation is not caused by overfitting, and adding more layers to a suitably deep model leads to higher training error, as reported in (11, 42) and thoroughly verified by our experiments. Fig. 1 shows a typical example. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_3", "text": " The degradation (of training accuracy) indicates that not all systems are similarly easy to optimize. Let us consider a shallower architecture and its deeper counterpart that adds more layers onto it. There exists a solution by construction to the deeper model: the added layers are identity mapping, and the other layers are copied from the learned shallower model. The existence of this constructed solution indicates that a deeper model should produce no higher training error than its shallower counterpart. But experiments show that our current solvers on hand are unable to find solutions that are comparably good or better than the constructed solution (or unable to do so in feasible time). ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_4", "text": " In this paper, we address the degradation problem by introducing a deep residual learning framework. Instead of hoping each few stacked layers directly fit a desired underlying mapping, we explicitly let these layers fit a residual mapping. Formally, denoting the desired underlying mapping as ℋ​(𝐱)ℋ𝐱\\mathcal{H}(\\mathbf{x}), we let the stacked nonlinear layers fit another mapping of ℱ​(𝐱):=ℋ​(𝐱)−𝐱assignℱ𝐱ℋ𝐱𝐱\\mathcal{F}(\\mathbf{x}):=\\mathcal{H}(\\mathbf{x})-\\mathbf{x}. The original mapping is recast into ℱ​(𝐱)+𝐱ℱ𝐱𝐱\\mathcal{F}(\\mathbf{x})+\\mathbf{x}. We hypothesize that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. 
To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_5", "text": " The formulation of ℱ​(𝐱)+𝐱ℱ𝐱𝐱\\mathcal{F}(\\mathbf{x})+\\mathbf{x} can be realized by feedforward neural networks with “shortcut connections” (Fig. 2). Shortcut connections (2, 34, 49) are those skipping one or more layers. In our case, the shortcut connections simply perform identity mapping, and their outputs are added to the outputs of the stacked layers (Fig. 2). Identity shortcut connections add neither extra parameter nor computational complexity. The entire network can still be trained end-to-end by SGD with backpropagation, and can be easily implemented using common libraries (e.g., Caffe ) without modifying the solvers. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_6", "text": " We present comprehensive experiments on ImageNet to show the degradation problem and evaluate our method. We show that: 1) Our extremely deep residual nets are easy to optimize, but the counterpart “plain” nets (that simply stack layers) exhibit higher training error when the depth increases; 2) Our deep residual nets can easily enjoy accuracy gains from greatly increased depth, producing results substantially better than previous networks. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_7", "text": " Similar phenomena are also shown on the CIFAR-10 set , suggesting that the optimization difficulties and the effects of our method are not just akin to a particular dataset. We present successfully trained models on this dataset with over 100 layers, and explore models with over 1000 layers. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_8", "text": " On the ImageNet classification dataset , we obtain excellent results by extremely deep residual nets. Our 152-layer residual net is the deepest network ever presented on ImageNet, while still having lower complexity than VGG nets . Our ensemble has 3.57% top-5 error on the ImageNet test set, and won the 1st place in the ILSVRC 2015 classification competition. The extremely deep representations also have excellent generalization performance on other recognition tasks, and lead us to further win the 1st places on: ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation in ILSVRC & COCO 2015 competitions. This strong evidence shows that the residual learning principle is generic, and we expect that it is applicable in other vision and non-vision problems. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_9", "text": " Residual Representations. In image recognition, VLAD is a representation that encodes by the residual vectors with respect to a dictionary, and Fisher Vector can be formulated as a probabilistic version of VLAD. Both of them are powerful shallow representations for image retrieval and classification (4, 48). For vector quantization, encoding residual vectors is shown to be more effective than encoding original vectors. 
", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_10", "text": " In low-level vision and computer graphics, for solving Partial Differential Equations (PDEs), the widely used Multigrid method reformulates the system as subproblems at multiple scales, where each subproblem is responsible for the residual solution between a coarser and a finer scale. An alternative to Multigrid is hierarchical basis preconditioning (45, 46), which relies on variables that represent residual vectors between two scales. It has been shown (3, 45, 46) that these solvers converge much faster than standard solvers that are unaware of the residual nature of the solutions. These methods suggest that a good reformulation or preconditioning can simplify the optimization. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_11", "text": " Shortcut Connections. Practices and theories that lead to shortcut connections (2, 34, 49) have been studied for a long time. An early practice of training multi-layer perceptrons (MLPs) is to add a linear layer connected from the network input to the output (34, 49). In (44, 24), a few intermediate layers are directly connected to auxiliary classifiers for addressing vanishing/exploding gradients. The papers of (39, 38, 31, 47) propose methods for centering layer responses, gradients, and propagated errors, implemented by shortcut connections. In , an “inception” layer is composed of a shortcut branch and a few deeper branches. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_12", "text": " Concurrent with our work, “highway networks” (42, 43) present shortcut connections with gating functions . These gates are data-dependent and have parameters, in contrast to our identity shortcuts that are parameter-free. When a gated shortcut is “closed” (approaching zero), the layers in highway networks represent non-residual functions. On the contrary, our formulation always learns residual functions; our identity shortcuts are never closed, and all information is always passed through, with additional residual functions to be learned. In addition, highway networks have not demonstrated accuracy gains with extremely increased depth (e.g., over 100 layers). ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_13", "text": " Let us consider ℋ​(𝐱)ℋ𝐱\\mathcal{H}(\\mathbf{x}) as an underlying mapping to be fit by a few stacked layers (not necessarily the entire net), with 𝐱𝐱\\mathbf{x} denoting the inputs to the first of these layers. If one hypothesizes that multiple nonlinear layers can asymptotically approximate complicated functions222This hypothesis, however, is still an open question. See ., then it is equivalent to hypothesize that they can asymptotically approximate the residual functions, i.e., ℋ​(𝐱)−𝐱ℋ𝐱𝐱\\mathcal{H}(\\mathbf{x})-\\mathbf{x} (assuming that the input and output are of the same dimensions). So rather than expect stacked layers to approximate ℋ​(𝐱)ℋ𝐱\\mathcal{H}(\\mathbf{x}), we explicitly let these layers approximate a residual function ℱ​(𝐱):=ℋ​(𝐱)−𝐱assignℱ𝐱ℋ𝐱𝐱\\mathcal{F}(\\mathbf{x}):=\\mathcal{H}(\\mathbf{x})-\\mathbf{x}. The original function thus becomes ℱ​(𝐱)+𝐱ℱ𝐱𝐱\\mathcal{F}(\\mathbf{x})+\\mathbf{x}. Although both forms should be able to asymptotically approximate the desired functions (as hypothesized), the ease of learning might be different. 
", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_14", "text": " This reformulation is motivated by the counterintuitive phenomena about the degradation problem (Fig. 1, left). As we discussed in the introduction, if the added layers can be constructed as identity mappings, a deeper model should have training error no greater than its shallower counterpart. The degradation problem suggests that the solvers might have difficulties in approximating identity mappings by multiple nonlinear layers. With the residual learning reformulation, if identity mappings are optimal, the solvers may simply drive the weights of the multiple nonlinear layers toward zero to approach identity mappings. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_15", "text": " In real cases, it is unlikely that identity mappings are optimal, but our reformulation may help to precondition the problem. If the optimal function is closer to an identity mapping than to a zero mapping, it should be easier for the solver to find the perturbations with reference to an identity mapping, than to learn the function as a new one. We show by experiments (Fig. 7) that the learned residual functions in general have small responses, suggesting that identity mappings provide reasonable preconditioning. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_16", "text": " We adopt residual learning to every few stacked layers. A building block is shown in Fig. 2. Formally, in this paper we consider a building block defined as: 𝐲=ℱ​(𝐱,{Wi})+𝐱.𝐲ℱ𝐱subscript𝑊𝑖𝐱\\mathbf{y}=\\mathcal{F}(\\mathbf{x},\\{W_{i}\\})+\\mathbf{x}. (1) Here 𝐱𝐱\\mathbf{x} and 𝐲𝐲\\mathbf{y} are the input and output vectors of the layers considered. The function ℱ​(𝐱,{Wi})ℱ𝐱subscript𝑊𝑖\\mathcal{F}(\\mathbf{x},\\{W_{i}\\}) represents the residual mapping to be learned. For the example in Fig. 2 that has two layers, ℱ=W2​σ​(W1​𝐱)ℱsubscript𝑊2𝜎subscript𝑊1𝐱\\mathcal{F}=W_{2}\\sigma(W_{1}\\mathbf{x}) in which σ𝜎\\sigma denotes ReLU and the biases are omitted for simplifying notations. The operation ℱ+𝐱ℱ𝐱\\mathcal{F}+\\mathbf{x} is performed by a shortcut connection and element-wise addition. We adopt the second nonlinearity after the addition (i.e., σ​(𝐲)𝜎𝐲\\sigma(\\mathbf{y}), see Fig. 2). ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_17", "text": " The shortcut connections in Eqn.(1) introduce neither extra parameter nor computation complexity. This is not only attractive in practice but also important in our comparisons between plain and residual networks. We can fairly compare plain/residual networks that simultaneously have the same number of parameters, depth, width, and computational cost (except for the negligible element-wise addition). ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_18", "text": " The dimensions of 𝐱𝐱\\mathbf{x} and ℱℱ\\mathcal{F} must be equal in Eqn.(1). If this is not the case (e.g., when changing the input/output channels), we can perform a linear projection Wssubscript𝑊𝑠W_{s} by the shortcut connections to match the dimensions: 𝐲=ℱ​(𝐱,{Wi})+Ws​𝐱.𝐲ℱ𝐱subscript𝑊𝑖subscript𝑊𝑠𝐱\\mathbf{y}=\\mathcal{F}(\\mathbf{x},\\{W_{i}\\})+W_{s}\\mathbf{x}. (2) We can also use a square matrix Wssubscript𝑊𝑠W_{s} in Eqn.(1). 
But we will show by experiments that the identity mapping is sufficient for addressing the degradation problem and is economical, and thus Wssubscript𝑊𝑠W_{s} is only used when matching dimensions. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_19", "text": " The form of the residual function ℱℱ\\mathcal{F} is flexible. Experiments in this paper involve a function ℱℱ\\mathcal{F} that has two or three layers (Fig. 5), while more layers are possible. But if ℱℱ\\mathcal{F} has only a single layer, Eqn.(1) is similar to a linear layer: 𝐲=W1​𝐱+𝐱𝐲subscript𝑊1𝐱𝐱\\mathbf{y}=W_{1}\\mathbf{x}+\\mathbf{x}, for which we have not observed advantages. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_20", "text": " We also note that although the above notations are about fully-connected layers for simplicity, they are applicable to convolutional layers. The function ℱ​(𝐱,{Wi})ℱ𝐱subscript𝑊𝑖\\mathcal{F}(\\mathbf{x},\\{W_{i}\\}) can represent multiple convolutional layers. The element-wise addition is performed on two feature maps, channel by channel. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_21", "text": " We have tested various plain/residual nets, and have observed consistent phenomena. To provide instances for discussion, we describe two models for ImageNet as follows. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_22", "text": " Plain Network. Our plain baselines (Fig. 3, middle) are mainly inspired by the philosophy of VGG nets (Fig. 3, left). The convolutional layers mostly have 3×\\times3 filters and follow two simple design rules: (i) for the same output feature map size, the layers have the same number of filters; and (ii) if the feature map size is halved, the number of filters is doubled so as to preserve the time complexity per layer. We perform downsampling directly by convolutional layers that have a stride of 2. The network ends with a global average pooling layer and a 1000-way fully-connected layer with softmax. The total number of weighted layers is 34 in Fig. 3 (middle). ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_23", "text": " It is worth noticing that our model has fewer filters and lower complexity than VGG nets (Fig. 3, left). Our 34-layer baseline has 3.6 billion FLOPs (multiply-adds), which is only 18% of VGG-19 (19.6 billion FLOPs). ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_24", "text": " Residual Network. Based on the above plain network, we insert shortcut connections (Fig. 3, right) which turn the network into its counterpart residual version. The identity shortcuts (Eqn.(1)) can be directly used when the input and output are of the same dimensions (solid line shortcuts in Fig. 3). When the dimensions increase (dotted line shortcuts in Fig. 3), we consider two options: (A) The shortcut still performs identity mapping, with extra zero entries padded for increasing dimensions. This option introduces no extra parameter; (B) The projection shortcut in Eqn.(2) is used to match dimensions (done by 1×\\times1 convolutions). For both options, when the shortcuts go across feature maps of two sizes, they are performed with a stride of 2. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_25", "text": " Our implementation for ImageNet follows the practice in (21, 41). 
The image is resized with its shorter side randomly sampled in (256,480)256480(256,480) for scale augmentation . A 224×\\times224 crop is randomly sampled from an image or its horizontal flip, with the per-pixel mean subtracted . The standard color augmentation in is used. We adopt batch normalization (BN) right after each convolution and before activation, following . We initialize the weights as in and train all plain/residual nets from scratch. We use SGD with a mini-batch size of 256. The learning rate starts from 0.1 and is divided by 10 when the error plateaus, and the models are trained for up to 60×10460superscript10460\\times 10^{4} iterations. We use a weight decay of 0.0001 and a momentum of 0.9. We do not use dropout , following the practice in . ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_26", "text": " In testing, for comparison studies we adopt the standard 10-crop testing . For best results, we adopt the fully-convolutional form as in (41, 13), and average the scores at multiple scales (images are resized such that the shorter side is in {224,256,384,480,640}224256384480640\\{224,256,384,480,640\\}). ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_27", "text": " We evaluate our method on the ImageNet 2012 classification dataset that consists of 1000 classes. The models are trained on the 1.28 million training images, and evaluated on the 50k validation images. We also obtain a final result on the 100k test images, reported by the test server. We evaluate both top-1 and top-5 error rates. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_28", "text": " Plain Networks. We first evaluate 18-layer and 34-layer plain nets. The 34-layer plain net is in Fig. 3 (middle). The 18-layer plain net is of a similar form. See Table 1 for detailed architectures. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_29", "text": " The results in Table 2 show that the deeper 34-layer plain net has higher validation error than the shallower 18-layer plain net. To reveal the reasons, in Fig. 4 (left) we compare their training/validation errors during the training procedure. We have observed the degradation problem - the 34-layer plain net has higher training error throughout the whole training procedure, even though the solution space of the 18-layer plain network is a subspace of that of the 34-layer one. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_30", "text": " We argue that this optimization difficulty is unlikely to be caused by vanishing gradients. These plain networks are trained with BN , which ensures forward propagated signals to have non-zero variances. We also verify that the backward propagated gradients exhibit healthy norms with BN. So neither forward nor backward signals vanish. In fact, the 34-layer plain net is still able to achieve competitive accuracy (Table 3), suggesting that the solver works to some extent. We conjecture that the deep plain nets may have exponentially low convergence rates, which impact the reducing of the training error333We have experimented with more training iterations (3×\\times) and still observed the degradation problem, suggesting that this problem cannot be feasibly addressed by simply using more iterations.. The reason for such optimization difficulties will be studied in the future. 
", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_31", "text": " Residual Networks. Next we evaluate 18-layer and 34-layer residual nets (ResNets). The baseline architectures are the same as the above plain nets, expect that a shortcut connection is added to each pair of 3×\\times3 filters as in Fig. 3 (right). In the first comparison (Table 2 and Fig. 4 right), we use identity mapping for all shortcuts and zero-padding for increasing dimensions (option A). So they have no extra parameter compared to the plain counterparts. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_32", "text": " We have three major observations from Table 2 and Fig. 4. First, the situation is reversed with residual learning – the 34-layer ResNet is better than the 18-layer ResNet (by 2.8%). More importantly, the 34-layer ResNet exhibits considerably lower training error and is generalizable to the validation data. This indicates that the degradation problem is well addressed in this setting and we manage to obtain accuracy gains from increased depth. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_33", "text": " Second, compared to its plain counterpart, the 34-layer ResNet reduces the top-1 error by 3.5% (Table 2), resulting from the successfully reduced training error (Fig. 4 right vs. left). This comparison verifies the effectiveness of residual learning on extremely deep systems. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_34", "text": " Last, we also note that the 18-layer plain/residual nets are comparably accurate (Table 2), but the 18-layer ResNet converges faster (Fig. 4 right vs. left). When the net is “not overly deep” (18 layers here), the current SGD solver is still able to find good solutions to the plain net. In this case, the ResNet eases the optimization by providing faster convergence at the early stage. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_35", "text": " Identity vs. Projection Shortcuts. We have shown that parameter-free, identity shortcuts help with training. Next we investigate projection shortcuts (Eqn.(2)). In Table 3 we compare three options: (A) zero-padding shortcuts are used for increasing dimensions, and all shortcuts are parameter-free (the same as Table 2 and Fig. 4 right); (B) projection shortcuts are used for increasing dimensions, and other shortcuts are identity; and (C) all shortcuts are projections. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_36", "text": " Table 3 shows that all three options are considerably better than the plain counterpart. B is slightly better than A. We argue that this is because the zero-padded dimensions in A indeed have no residual learning. C is marginally better than B, and we attribute this to the extra parameters introduced by many (thirteen) projection shortcuts. But the small differences among A/B/C indicate that projection shortcuts are not essential for addressing the degradation problem. So we do not use option C in the rest of this paper, to reduce memory/time complexity and model sizes. Identity shortcuts are particularly important for not increasing the complexity of the bottleneck architectures that are introduced below. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_37", "text": " Deeper Bottleneck Architectures. Next we describe our deeper nets for ImageNet. 
Because of concerns on the training time that we can afford, we modify the building block as a bottleneck design444Deeper non-bottleneck ResNets (e.g., Fig. 5 left) also gain accuracy from increased depth (as shown on CIFAR-10), but are not as economical as the bottleneck ResNets. So the usage of bottleneck designs is mainly due to practical considerations. We further note that the degradation problem of plain nets is also witnessed for the bottleneck designs.. For each residual function ℱℱ\\mathcal{F}, we use a stack of 3 layers instead of 2 (Fig. 5). The three layers are 1×\\times1, 3×\\times3, and 1×\\times1 convolutions, where the 1×\\times1 layers are responsible for reducing and then increasing (restoring) dimensions, leaving the 3×\\times3 layer a bottleneck with smaller input/output dimensions. Fig. 5 shows an example, where both designs have similar time complexity. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_38", "text": " The parameter-free identity shortcuts are particularly important for the bottleneck architectures. If the identity shortcut in Fig. 5 (right) is replaced with projection, one can show that the time complexity and model size are doubled, as the shortcut is connected to the two high-dimensional ends. So identity shortcuts lead to more efficient models for the bottleneck designs. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_39", "text": " 50-layer ResNet: We replace each 2-layer block in the 34-layer net with this 3-layer bottleneck block, resulting in a 50-layer ResNet (Table 1). We use option B for increasing dimensions. This model has 3.8 billion FLOPs. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_40", "text": " 101-layer and 152-layer ResNets: We construct 101-layer and 152-layer ResNets by using more 3-layer blocks (Table 1). Remarkably, although the depth is significantly increased, the 152-layer ResNet (11.3 billion FLOPs) still has lower complexity than VGG-16/19 nets (15.3/19.6 billion FLOPs). ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_41", "text": " The 50/101/152-layer ResNets are more accurate than the 34-layer ones by considerable margins (Table 3 and 5). We do not observe the degradation problem and thus enjoy significant accuracy gains from considerably increased depth. The benefits of depth are witnessed for all evaluation metrics (Table 3 and 5). ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_42", "text": " Comparisons with State-of-the-art Methods. In Table 5 we compare with the previous best single-model results. Our baseline 34-layer ResNets have achieved very competitive accuracy. Our 152-layer ResNet has a single-model top-5 validation error of 4.49%. This single-model result outperforms all previous ensemble results (Table 5). We combine six models of different depth to form an ensemble (only with two 152-layer ones at the time of submitting). This leads to 3.57% top-5 error on the test set (Table 5). This entry won the 1st place in ILSVRC 2015. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_43", "text": " We conducted more studies on the CIFAR-10 dataset , which consists of 50k training images and 10k testing images in 10 classes. We present experiments trained on the training set and evaluated on the test set. 
Our focus is on the behaviors of extremely deep networks, but not on pushing the state-of-the-art results, so we intentionally use simple architectures as follows. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_44", "text": " The plain/residual architectures follow the form in Fig. 3 (middle/right). The network inputs are 32×\\times32 images, with the per-pixel mean subtracted. The first layer is 3×\\times3 convolutions. Then we use a stack of 6​n6𝑛6n layers with 3×\\times3 convolutions on the feature maps of sizes {32,16,8}32168\\{32,16,8\\} respectively, with 2n𝑛n layers for each feature map size. The numbers of filters are {16,32,64}163264\\{16,32,64\\} respectively. The subsampling is performed by convolutions with a stride of 2. The network ends with a global average pooling, a 10-way fully-connected layer, and softmax. There are totally 6n𝑛n+2 stacked weighted layers. The following table summarizes the architecture: output map size 32×\\times32 16×\\times16 8×\\times8 # layers 1+2n𝑛n 2n𝑛n 2n𝑛n # filters 16 32 64 When shortcut connections are used, they are connected to the pairs of 3×\\times3 layers (totally 3​n3𝑛3n shortcuts). On this dataset we use identity shortcuts in all cases (i.e., option A), so our residual models have exactly the same depth, width, and number of parameters as the plain counterparts. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_45", "text": " We use a weight decay of 0.0001 and momentum of 0.9, and adopt the weight initialization in and BN but with no dropout. These models are trained with a mini-batch size of 128 on two GPUs. We start with a learning rate of 0.1, divide it by 10 at 32k and 48k iterations, and terminate training at 64k iterations, which is determined on a 45k/5k train/val split. We follow the simple data augmentation in for training: 4 pixels are padded on each side, and a 32×\\times32 crop is randomly sampled from the padded image or its horizontal flip. For testing, we only evaluate the single view of the original 32×\\times32 image. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_46", "text": " We compare n={3,5,7,9}𝑛3579n=\\{3,5,7,9\\}, leading to 20, 32, 44, and 56-layer networks. Fig. 6 (left) shows the behaviors of the plain nets. The deep plain nets suffer from increased depth, and exhibit higher training error when going deeper. This phenomenon is similar to that on ImageNet (Fig. 4, left) and on MNIST (see ), suggesting that such an optimization difficulty is a fundamental problem. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_47", "text": " Fig. 6 (middle) shows the behaviors of ResNets. Also similar to the ImageNet cases (Fig. 4, right), our ResNets manage to overcome the optimization difficulty and demonstrate accuracy gains when the depth increases. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_48", "text": " We further explore n=18𝑛18n=18 that leads to a 110-layer ResNet. In this case, we find that the initial learning rate of 0.1 is slightly too large to start converging555With an initial learning rate of 0.1, it starts converging (<<90% error) after several epochs, but still reaches similar accuracy.. So we use 0.01 to warm up the training until the training error is below 80% (about 400 iterations), and then go back to 0.1 and continue training. The rest of the learning schedule is as done previously. 
This 110-layer network converges well (Fig. 6, middle). It has fewer parameters than other deep and thin networks such as FitNet and Highway (Table 6), yet is among the state-of-the-art results (6.43%, Table 6). ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_49", "text": " Analysis of Layer Responses. Fig. 7 shows the standard deviations (std) of the layer responses. The responses are the outputs of each 3×\\times3 layer, after BN and before other nonlinearity (ReLU/addition). For ResNets, this analysis reveals the response strength of the residual functions. Fig. 7 shows that ResNets have generally smaller responses than their plain counterparts. These results support our basic motivation (Sec.3.1) that the residual functions might be generally closer to zero than the non-residual functions. We also notice that the deeper ResNet has smaller magnitudes of responses, as evidenced by the comparisons among ResNet-20, 56, and 110 in Fig. 7. When there are more layers, an individual layer of ResNets tends to modify the signal less. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_50", "text": " Exploring Over 1000 layers. We explore an aggressively deep model of over 1000 layers. We set n=200𝑛200n=200 that leads to a 1202-layer network, which is trained as described above. Our method shows no optimization difficulty, and this 103superscript10310^{3}-layer network is able to achieve training error <<0.1% (Fig. 6, right). Its test error is still fairly good (7.93%, Table 6). ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_51", "text": " But there are still open problems on such aggressively deep models. The testing result of this 1202-layer network is worse than that of our 110-layer network, although both have similar training error. We argue that this is because of overfitting. The 1202-layer network may be unnecessarily large (19.4M) for this small dataset. Strong regularization such as maxout or dropout is applied to obtain the best results ((10, 25, 24, 35)) on this dataset. In this paper, we use no maxout/dropout and just simply impose regularization via deep and thin architectures by design, without distracting from the focus on the difficulties of optimization. But combining with stronger regularization may improve results, which we will study in the future. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_52", "text": " Our method has good generalization performance on other recognition tasks. Table 8 and  8 show the object detection baseline results on PASCAL VOC 2007 and 2012 and COCO . We adopt Faster R-CNN as the detection method. Here we are interested in the improvements of replacing VGG-16 with ResNet-101. The detection implementation (see appendix) of using both models is the same, so the gains can only be attributed to better networks. Most remarkably, on the challenging COCO dataset we obtain a 6.0% increase in COCO’s standard metric (mAP@(.5, .95)), which is a 28% relative improvement. This gain is solely due to the learned representations. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_53", "text": " Based on deep residual nets, we won the 1st places in several tracks in ILSVRC & COCO 2015 competitions: ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. The details are in the appendix. ", "title": "Deep Residual Learning for Image Recognition" } ]
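The residual formulation quoted in the passages above (y = F(x) + x, Eqn. (1), with a projection shortcut only when dimensions change) translates directly into a small module. Below is a hedged PyTorch sketch of the basic two-layer residual block; layer choices and the usage line are illustrative and not taken from the authors' implementation.

```python
import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """y = F(x) + x, where F is two 3x3 convs with BN (sketch of Eqn. 1)."""

    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        # projection shortcut (option B) only when dimensions change;
        # otherwise the parameter-free identity shortcut is used
        self.shortcut = nn.Identity()
        if stride != 1 or in_ch != out_ch:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                nn.BatchNorm2d(out_ch),
            )

    def forward(self, x):
        residual = self.relu(self.bn1(self.conv1(x)))
        residual = self.bn2(self.conv2(residual))
        # second nonlinearity applied after the element-wise addition
        return self.relu(residual + self.shortcut(x))

x = torch.randn(2, 64, 56, 56)
print(BasicResidualBlock(64, 128, stride=2)(x).shape)  # torch.Size([2, 128, 28, 28])
```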
What are the reasons and goals behind image sampling?
Image sampling is useful for data augmentation, creating larger datasets for training, and it can also be used to balance the number of samples per class during the augmentation process [1].
[ 1 ]
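To make the answer above concrete, here is a hedged sketch of the kind of sampling it describes: each candidate patch is augmented with random rotations and translations to enlarge the training set, and the over-represented negative class is then randomly subsampled to roughly the number of positives. The helper names, view counts, and jitter ranges are illustrative placeholders, not the protocol of the cited paper.

```python
import numpy as np
from scipy.ndimage import rotate, shift

def augment_views(patch, n_views=20, max_shift=4, rng=None):
    """Create n_views jittered copies of one 2D candidate patch
    (random rotation + translation), i.e. sampling for augmentation."""
    rng = rng or np.random.default_rng()
    views = []
    for _ in range(n_views):
        angle = rng.uniform(0, 360)
        dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
        v = rotate(patch, angle, reshape=False, mode="nearest")
        views.append(shift(v, (dy, dx), mode="nearest"))
    return views

def balanced_training_set(pos_patches, neg_patches, rng=None):
    """Augment both classes, then randomly subsample the negatives to
    roughly the number of positive samples so the classes are balanced."""
    rng = rng or np.random.default_rng()
    pos = [v for p in pos_patches for v in augment_views(p, rng=rng)]
    neg = [v for p in neg_patches for v in augment_views(p, rng=rng)]
    keep = rng.choice(len(neg), size=min(len(neg), len(pos)), replace=False)
    return pos, [neg[i] for i in keep]
```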
[ { "id": "1602.03409_all_0", "text": " Tremendous progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets (i.e. ImageNet (1, 2)) and the recent revival of deep convolutional neural networks (CNN) (3, 4). For data-driven learning, large-scale well-annotated datasets with representative data distribution characteristics are crucial to learning more accurate or generalizable models (5, 4). Unlike previous image datasets used in computer vision, ImageNet offers a very comprehensive database of more than 1.2 million categorized natural images of 1000+ classes. The CNN models trained upon this database serve as the backbone for significantly improving many object detection and image segmentation problems using other datasets (6, 7), e.g., PASCAL and medical image categorization (9, 10, 11, 12). However, there exists no large-scale annotated medical image dataset comparable to ImageNet, as data acquisition is difficult, and quality annotation is costly. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_1", "text": " There are currently three major techniques that successfully employ CNNs to medical image classification: 1) training the “CNN from scratch” (13, 14, 15, 16, 17); 2) using “off-the-shelf CNN” features (without retraining the CNN) as complementary information channels to existing hand-crafted image features, for Chest X-rays and CT lung nodule identification (9, 12); and 3) performing unsupervised pre-training on natural or medical images and fine-tuning on medical target images using CNN or other types of deep learning models (18, 19, 20, 21). A decompositional 2.5D view resampling and an aggregation of random view classification scores are used to eliminate the “curse-of-dimensionality” issue in , in order to acquire a sufficient number of training image samples. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_2", "text": " Previous studies have analyzed three-dimensional patch creation for LN detection (23, 24), atlas creation from chest CT and the extraction of multi-level image features (26, 27). At present, there are several extensions or variations of the decompositional view representation introduced in (22, 28), such as: using a novel vessel-aligned multi-planar image representation for pulmonary embolism detection , fusing unregistered multiview for mammogram analysis and classifying pulmonary peri-fissural nodules via an ensemble of 2D views . ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_3", "text": " Although natural images and medical images differ significantly, conventional image descriptors developed for object recognition in natural images, such as the scale-invariant feature transform (SIFT) and the histogram of oriented gradients (HOG) , have been widely used for object detection and segmentation in medical image analysis. Recently, ImageNet pre-trained CNNs have been used for chest pathology identification and detection in X-ray and CT modalities (10, 9, 12). They have yielded the best performance results by integrating low-level image features (e.g., GIST , bag of visual words (BoVW) and bag-of-frequency ). 
However, the fine-tuning of an ImageNet pre-trained CNN model on medical image datasets has not yet been exploited. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_4", "text": " In this paper, we exploit three important, but previously under-studied factors of employing deep convolutional neural networks to computer-aided detection problems. Particularly, we explore and evaluate different CNN architectures varying in width (ranging from 5 thousand to 160 million parameters) and depth (various numbers of layers), describe the effects of varying dataset scale and spatial image context on performance, and discuss when and why transfer learning from pre-trained ImageNet CNN models can be valuable. We further verify our hypothesis by inheriting and adapting rich hierarchical image features (5, 33) from the large-scale ImageNet dataset for computer aided diagnosis (CAD). We also explore CNN architectures of the most studied seven-layered “AlexNet-CNN” , a shallower “Cifar-CNN” , and a much deeper version of “GoogLeNet-CNN” (with our modifications on CNN structures). This study is partially motivated by recent studies (34, 35) in computer vision. The thorough quantitative analysis and evaluation on deep CNN or sparsity image coding methods elucidate the emerging techniques of the time and provide useful suggestions for their future stages of development, respectively. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_5", "text": " Two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification are studied in this work. On mediastinal LN detection, we surpass all currently reported results. We obtain 86%percent8686\\% sensitivity on 3 false positives (FP) per patient, versus the prior state-of-art sensitivities of 78%percent7878\\% (stacked shallow learning) and 70%percent7070\\% (CNN), as prior state-of-the-art. For the first time, ILD classification results under the patient-level five-fold cross-validation protocol (CV5) are investigated and reported. The ILD dataset contains 905 annotated image slices with 120 patients and 6 ILD labels. Such sparsely annotated datasets are generally difficult for CNN learning, due to the paucity of labeled instances. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_6", "text": " Evaluation protocols and details are critical to deriving significant empirical findings . Our experimental results suggest that different CNN architectures and dataset re-sampling protocols are critical for the LN detection tasks where the amount of labeled training data is sufficient and spatial contexts are local. Since LN images are more flexible than ILD images with respect to resampling and reformatting, LN datasets may be more readily augmented by such image transformations. As a result, LN datasets contain more training and testing data instances (due to data auugmentation) than ILD datasets. They nonetheless remain less comprehensive than natural image datasets, such as ImageNet. 
Fine-tuning ImageNet-trained models for ILD classification is clearly advantageous and yields early promising results, when the amount of labeled training data is highly insufficient and multi-class categorization is used, as opposed to the LN dataset’s binary class categorization. Another significant finding is that CNNs trained from scratch or fine-tuned from ImageNet models consistently outperform CNNs that merely use off-the-shelf CNN features, in both the LN and ILD classification problems. We further analyze, via CNN activation visualizations, when and why transfer learning from non-medical to medical images in CADe problems can be valuable. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_7", "text": " We employ CNNs (with the characteristics defined above) to thoraco-abdominal lymph node (LN) detection (evaluated separately on the mediastinal and abdominal regions) and interstitial lung disease (ILD) detection. For LN detection, we use randomly sampled 2.5D views in CT . We use 2D CT slices (38, 39, 40) for ILD detection. We then evaluate and compare CNN performance results. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_8", "text": " Until the detection aggregation approach (22, 41), thoracoabdominal lymph node (LN) detection via CADe mechanisms has yielded poor performance results. In , each 3D LN candidate produces up to 100 random 2.5D orthogonally sampled images or views which are then used to train an effective CNN model. The best performance on abdominal LN detection is achieved at 83%percent8383\\% recall on 3FP per patient , using a “Cifar-10” CNN. Using the thoracoabdominal LN detection datasets , we aim to surpass this CADe performance level, by testing different CNN architectures, exploring various dataset re-sampling protocols, and applying transfer learning from ImageNet pre-trained CNN models. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_9", "text": " Interstitial lung disease (ILD) comprises more than 150 lung diseases affecting the interstitium, which can severely impair the patient’s ability to breathe. Gao et al. investigate the ILD classification problem in two scenarios: 1) slice-level classification: assigning a holistic two-dimensional axial CT slice image with its occurring ILD disease label(s); and 2) patch-level classification: a/ sampling patches within the 2D ROIs (Regions of Interest provided by ), then b/ classifying patches into seven category labels ( six disease labels and one “healthy” label). Song et al. (38, 39) only address the second sub-task of patch-level classification under the “leave-one-patient-out” (LOO) criterion. By training on the moderate-to-small scale ILD dataset , our main objective is to exploit and benchmark CNN based ILD classification performances under the CV5 metric (which is more realistic and unbiased than LOO (38, 39) and hard-split ), with and without transfer learning. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_10", "text": " Thoracoabdominal Lymph Node Datasets. We use the publicly available dataset from (22, 41). 
There are 388 mediastinal LNs labeled by radiologists in 90 patient CT scans, and 595 abdominal LNs in 86 patient CT scans. To facilitate comparison, we adopt the data preparation protocol of , where positive and negative LN candidates are sampled with the fields-of-view (FOVs) of 30mm to 45mm, surrounding the annotated and detected LN centers (obtained by a candidate generation process). More precisely, (22, 41, 36) follow a coarse-to-fine CADe scheme, partially inspired by , which operates with ∼100%similar-toabsentpercent100\\sim 100\\% detection recalls at the cost of approximately 40 false or negative LN candidates per patient scan. In this work, positive and negative LN candidate are first sampled up to 200 times with translations and rotations. Afterwards, negative LN samples are randomly re-selected at a lower rate close to the total number of positives. LN candidates are randomly extracted from fields-of-view (FOVs) spanning 35mm to 128mm in soft-tissue window (-100, 200HU). This allows us to capture multiple spatial scales of image context (43, 44)). The samples are then rescaled to a 64×64646464\\times 64 pixel resolution via B-spline interpolation. A few examples of LNs with axial, coronal, and sagittal views encoded in RGB color images are shown in Figure 1. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_11", "text": " Unlike the heart or the liver, lymph nodes have no pre-determined anatomic orientation. Hence, the purely random image resampling (with respect to scale, displacement and orientation) and reformatting (the axial, coronal, and sagittal views are in any system randomly resampled coordinates) is a natural choice, which also happens to yield high CNN performance. Although we integrate three channels of information from three orthogonal views for LN detection, the pixel-wise spatial correlations between or among channels are not necessary. The convolutional kernels in the lower level CNN architectures can learn the optimal weights to linearly combine the observations from the axial, coronal, and sagittal channels by computing their dot-products. Transforming axial, coronal, and sagittal representations to RGB also facilitates transfer learning from CNN models trained on ImageNet. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_12", "text": " This learning representation (i.e., “built-in CNN”) is flexible, in that it naturally combines multiple sources or channels of information. In the recent literature , even heterogeneous class-conditional probability maps can be combined with raw images to improve performance. This set-up is similar to that of other works in computer vision, such as , where heterogeneous image information channels are jointly fed into the CNN convolutional layers for high-accuracy human parsing and segmentation. Finally, if there are correlations among CNN input channels, one may observe the corresponding correlated patterns in the learned filters. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_13", "text": " In summary, the assumption that there are or must be pixel-wise spatial correlations among input channels does not apply to the CNN model representation. 
For other medical imaging problems, such as pulmonary embolism detection , in which orientation can be constrained along the attached vessel axis, vessel-aligned multi-planar image representation (MPR) is more effective than randomly aligned MPR. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_14", "text": " Interstitial Lung Disease Dataset. We utilize the publicly available dataset of . It contains 905 image slices from 120 patients, with six lung tissue types annotations containing at least one of the following: healthy (NM), emphysema (EM), ground glass (GG), fibrosis (FB), micronodules (MN) and consolidation (CD) (Figure 3). At the slice level, the objective is to classify the status of “presence/absence” of any of the six ILD classes for an input axial CT slice . Characterizing an arbitrary CT slice against any possible ILD type, without any manual ROI (in contrast to (38, 39)), can be useful for large-scale patient screening. For slice-level ILD classification, we sampled the slices 12 times with random translations and rotations. After this, we balanced the numbers of CT slice samples for the six classes by randomly sampling several instances at various rates. For patch-based classification, we sampled up to 100 patches of size 64×64646464\\times 64 from each ROI. This dataset is divided into five folds with disjoint patient subsets. The average number of CT slices (training instances) per fold is small, as shown in Table I. Slice-level ILD classification is a very challenging task where CNN models need to learn from very small numbers of training examples and predict ILD labels on unseen patients. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_15", "text": " In the publicly available ILD dataset, very few CT slices are labeled as normal or healthy. The remaining CT slices cannot be simply classified as normal, because many ILD disease regions or slices have not yet been labeled. ILD is a partially labeled database; this is one of its main limitations. Research is being conducted to address this issue. In particular, has proposed to fully label the ILD dataset pixel-wise via proposed segmentation label propagation. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_16", "text": " To leverage the CNN architectures designed for color images and to transfer CNN parameters pre-trained on ImageNet, we transform all gray-scale axial CT slice images via three CT window ranges: lung window range (-1400, -200HU), high-attenuation range (-160, 240HU), and low-attenuation range (-1400; -950HU). We then encode the transformed images into RGB channels (to be aligned with the input channels of CNN models (4, 33) pre-trained from natural image datasets ). The low-attenuation CT window is useful for visualizing certain texture patterns of lung diseases (especially emphysema). The usage of different CT attenuation channels improves classification results over the usage of a single CT windowing channel, as demonstrated in . More importantly, these CT windowing processes do not depend on the lung segmentation, which instead is directly defined in the CT HU space. 
Figure 4 shows a representative example of lung, high-attenuation, and low-attenuation CT windowing for an axis lung CT slice. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_17", "text": " As observed in , lung segmentation is crucial to holistic slice-level ILD classification. We empirically compare performance in two scenarios with a rough lung segmentation111This can be achieved by segmenting the lung using simple label-fusion methods . In the first case, we overlay the target image slice with the average lung mask among the training folds. In the second, we perform simple morphology operations to obtain the lung boundary. In order to retain information from the inside of the lung, we apply Gaussian smoothing to the regions outside of the lung boundary. There is no significant difference between two setups. Due to the high precision of CNN based image processing, highly accurate lung segmentation is not necessary . The localization of ILD regions within the lung is simultaneously learned through selectively weighted CNN reception fields in the deepest convolutional layers during the classification based CNN training (49, 50). Some areas outside of the lung appear in both healthy or diseased images. CNN training learns to ignore them by setting very small filter weights around the corresponding regions (Figure 13). This observation is validated by . ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_18", "text": " In this study, we explore, evaluate and analyze the influence of various CNN Architectures, dataset characteristics (when we need more training data or better models for object detection ) and CNN transfer learning from non-medical to medical image domains. These three key elements of building effective deep CNN models for CADe problems are described below. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_19", "text": " We mainly explore three convolutional neural network architectures (CifarNet (5, 22), AlexNet and GoogLeNet ) with different model training parameter values. The current deep learning models (22, 52, 53) in medical image tasks are at least 2∼5similar-to252\\sim 5 orders of magnitude smaller than even AlexNet . More complex CNN models (22, 52) have only about 150K or 15K parameters. Roth et al. adopt the CNN architecture tailored to the Cifar-10 dataset and operate on image windows of 32×32×33232332\\times 32\\times 3 pixels for lymph node detection, while the simplest CNN in has only one convolutional, pooling, and FC layer, respectively. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_20", "text": " We use CifarNet as used in as a baseline for the LN detection. AlexNet and GoogLeNet are also modified to evaluate these state-of-the-art CNN architecture from ImageNet classification task to our CADe problems and datasets. A simplified illustration of three CNN architectures exploited is shown in Figure 5. 
CifarNet always takes 32×32×33232332\\times 32\\times 3 image patches as input while AlexNet and GoogLeNet are originally designed for the fixed image dimension of 256×256×32562563256\\times 256\\times 3 pixels. We also reduced the filter size, stride and pooling parameters of AlexNet and GoogLeNet to accommodate a smaller input size of 64×64×36464364\\times 64\\times 3 pixels. We do so to produce and evaluate “simplified” AlexNet and GoogLeNet versions that are better suited to the smaller scale training datasets common in CADe problems. Throughout the paper, we refer to the models as CifarNet (32x32) or CifarNet (dropping 32x32); AlexNet (256x256) or AlexNet-H (high resolution); AlexNet (64x64) or AlexNet-L (low resolution); GoogLeNet (256x256) or GoogLeNet-H and GoogLeNet (64x64) or GoogLeNet-L (dropping 3 since all image inputs are three channels). ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_21", "text": " CifarNet, introduced in , was the state-of-the-art model for object recognition on the Cifar10 dataset, which consists of 32×32323232\\times 32 images of 10 object classes. The objects are normally centered in the images. Some example images and class categories from the Cifar10 dataset are shown in Figure 7. CifarNet has three convolution layers, three pooling layers, and one fully-connected layer. This CNN architecture, also used in has about 0.15 million free parameters. We adopt it as a baseline model for the LN detection. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_22", "text": " The AlexNet architecture was published in , achieved significantly improved performance over the other non-deep learning methods for ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012. This success has revived the interest in CNNs in computer vision. ImageNet consists of 1.2 million 256×256256256256\\times 256 images belonging to 1000 categories. At times, the objects in the image are small and obscure, and thus pose more challenges for learning a successful classification model. More details about the ImageNet dataset will be discussed in Sec. III-B. AlexNet has five convolution layers, three pooling layers, and two fully-connected layers with approximately 60 million free parameters. AlexNet is our default CNN architecture for evaluation and analysis in the remainder of the paper. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_23", "text": " The GoogLeNet model proposed in , is significantly more complex and deep than all previous CNN architectures. More importantly, it also introduces a new module called “Inception”, which concatenates filters of different sizes and dimensions into a single new filter (refer to Figure 6). Overall, GoogLeNet has two convolution layers, two pooling layers, and nine “Inception” layers. Each “Inception” layer consists of six convolution layers and one pooling layer. An illustration of an “Inception” layer (inception3a) from GoogLeNet is shown in Figure 6. GoogLeNet is the current state-of-the-art CNN architecture for the ILSVRC challenge, where it achieved 5.5% top-5 classification error on the ImageNet challenge, compared to AlexNet’s 15.3% top-5 classification error. 
", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_24", "text": " ImageNet has more than 1.2 million 256×256256256256\\times 256 images categorized under 1000 object class categories. There are more than 1000 training images per class. The database is organized according to the WordNet hierarchy, which currently contains only nouns in 1000 object categories. The image-object labels are obtained largely through crowd-sourcing, e.g., Amazon Mechanical Turk, and human inspection. Some examples of object categories in ImageNet are “sea snake”, “sandwich”, “vase”, “leopard”, etc. ImageNet is currently the largest image dataset among other standard datasets for visual recognition. Indeed, the Caltech101, Caltech256 and Cifar10 dataset merely contain 60000 32×32323232\\times 32 images and 10 object classes. Furthermore, due to the large number (1000+) of object classes, the objects belonging to each ImageNet class category can be occluded, partial and small, relative to those in the previous public image datasets. This significant intra-class variation poses greater challenges to any data-driven learning system that builds a classifier to fit given data and generalize to unseen data. For comparison, some example images of Cifar10 dataset and ImageNet images in the “tennis ball” class category are shown in Figure 7. The ImageNet dataset is publicly available, and the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) has become the standard benchmark for large-scale object recognition. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_25", "text": " When learned from scratch, all the parameters of CNN models are initialized with random Gaussian distributions and trained for 30 epochs with the mini-batch size of 50 image instances. Training convergence can be observed within 30 epochs. The other hyperparameters are momentum: 0.9; weight decay: 0.0005; (base) learning rate: 0.01, decreased by a factor of 10 at every 10 epochs. We use the Caffe framework and NVidia K40 GPUs to train the CNNs. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_26", "text": " AlexNet and GoogLeNet CNN models can be either learned from scratch or fine-tuned from pre-trained models. Girshick et al. find that, by applying ImageNet pre-trained ALexNet to PASCAL dataset , performances of semantic 20-class object detection and segmentation tasks significantly improve over previous methods that use no deep CNNs. AlexNet can be fine-tuned on the PASCAL dataset to surpass the performance of the ImageNet pre-trained AlexNet, although the difference is not as significant as that between the CNN and non-CNN methods. Similarly, (57, 58) also demonstrate that better performing deep models are learned via CNN transfer learning from ImageNet to other datasets of limited scales. 
", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_27", "text": " Our hypothesis on CNN parameter transfer learning is the following: despite the disparity between natural images and natural images, CNNs comprehensively trained on the large scale well-annotated ImageNet may still be transferred to make medical image recognition tasks more effective. Collecting and annotating large numbers of medical images still poses significant challenges. On the other hand, the mainstream deep CNN architectures (e.g., AlexNet and GoogLeNet) contain tens of millions of free parameters to train, and thus require sufficiently large numbers of labeled medical images. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_28", "text": " For transfer learning, we follow the approach of (57, 6) where all CNN layers except the last are fine-tuned at a learning rate 10 times smaller than the default learning rate. The last fully-connected layer is random initialized and freshly trained, in order to accommodate the new object categories in our CADe applications. Its learning rate is kept at the original 0.01. We denote the models with random initialization or transfer learning as AlexNet-RI and AlexNet-TL, and GoogLeNet-RI and GoogLeNet-TL. We found that the transfer learning strategy yields the best performance results. Determining the optimal learning rate for different layers is challenging, especially for very deep networks such as GoogLeNet. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_29", "text": " We also perform experiments using “off-the-shelf” CNN features of AlexNet pre-trained on ImageNet and training only the final classifier layer to complete the new CADe classification tasks. Parameters in the convolutional and fully connected layers are fixed and are used as deep image extractors, as in (10, 9, 12). We refer to this model as AlexNet-ImNet in the remainder of the paper. Note that (10, 9, 12) train support vector machines and random forest classifiers using ImageNet pre-trained CNN features. Our simplified implementation is intended to determine whether fine-tuning the “end-to-end” CNN network is necessary to improve performance, as opposed to merely training the final classification layer. This is a slight modification from the method described in (10, 9, 12). ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_30", "text": " Finally, transfer learning in CNN representation, as empirically verified in previous literature (59, 60, 61, 11, 62), can be effective in various cross-modality imaging settings (RGB images to depth images (59, 60), natural images to general CT and MRI images , and natural images to neuroimaging or ultrasound data). More thorough theoretical studies on cross-modality imaging statistics and transferability will be needed for future studies. 
", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_31", "text": " In this section, we evaluate and compare the performances of nine CNN model configurations (CifarNet, AlexNet-ImNet, AlexNet-RI-H, AlexNet-TL-H, AlexNet-RI-L, GoogLeNet-RI-H, GoogLeNet-TL-H, GoogLeNet-RI-L and combined) on two important CADe problems using publicly available datasets (22, 41, 37). ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_32", "text": " We train and evaluate CNNs using three-fold cross-validation (folds are split into disjoint sets of patients), with the different CNN architectures described above. In testing, each LN candidate has multiple random 2.5D views tested by CNN classifiers to generate LN class probability scores. We follow the random view aggregation by averaging probabilities, as in . ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_33", "text": " We first sample the LN image patches at a 64×64646464\\times 64 pixel resolution. We then up-sample the 64×64646464\\times 64 pixel LN images via bi-linear interpolation to 256×256256256256\\times 256 pixels, in order to accommodate AlexNet-RI-L, AlexNet-TL-H, GoogLeNet-RI-H and GoogLeNet-TL-H. For the modified AlexNet-RI-L at (64×64646464\\times 64) pixel resolution, we reduce the number of first layer convolution filters from 96 to 64 and reduce the stride from 4 to 2. For the modified GoogLeNet-RI (64×64646464\\times 64), we decrease the number of first layer convolution filters from 64 to 32, the pad size from 3 to 2, the kernel size from 7 to 5, stride from 2 to 1 and the stride of the subsequent pooling layer from 2 to 1. We slightly reduce the number of convolutional filters in order to accommodate the smaller input image sizes of target medical image datasets (22, 37), while preventing over-fitting. This eventually improves performance on patch-based classification. CifarNet is used in to detect LN samples of 32×32×33232332\\times 32\\times 3 images. For consistency purposes, we down-sample 64×64×36464364\\times 64\\times 3 resolution LN sample images to the dimension of 32×32×33232332\\times 32\\times 3. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_34", "text": " Results for lymph node detection in the mediastinum and abdomen are reported in Table II. FROC curves are illustrated in Figure 8. The area-under-the-FROC-curve (AUC) and true positive rate (TPR, recall or sensitivity) at three false positives per patient (TPR/3FP) are used as performance metrics. Of the nine investigated CNN models, CifarNet, AlexNet-ImNet and GoogLeNet-RI-H generally yielded the least competitive detection accuracy results. Our LN datasets are significantly more complex (i.e., display much larger within-class appearance variations), especially due to the extracted fields-of-view (FOVs) of (35mm-128mm) compared to (30mm-45mm) in , where CifarNet is also employed. In this experiment, CifarNet is under-trained with respect to our enhanced LN datasets, due to its limited input resolution and parameter complexity. 
The inferior performance of AlexNet-ImNet implies that using the pre-trained ImageNet CNNs alone as “off-the-shelf” deep image feature extractors may not be optimal or adequate for mediastinal and abdominal LN detection tasks. To complement “off-the-shelf” CNN features, (10, 9, 12) all add and integrate various other hand-crafted image features as hybrid inputs for the final CADe classification. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_35", "text": " GoogLeNet-RI-H performs poorly, as it is susceptible to over-fitting. No sufficient data samples are available to train GoogLeNet-RI-H with random initialization. Indeed, due to GoogLeNet-RI-H’s complexity and 22-layer depth, million-image datasets may be required to properly train this model. However, GoogLeNet-TL-H significantly improves upon GoogLeNet-RI-H (0.81 versus 0.61 TPR/3FP in mediastinum; 0.70 versus 0.48 TPR/3FP in abdomen). This indicates that transfer learning offers a much better initialization of CNN parameters than random initialization. Likewise, AlexNet-TL-H consistently outperforms AlexNet-RI-H, though by smaller margins (0.81 versus 0.79 TPR/3FP in mediastinum; 0.69 versus 0.67 TPR/3FP in abdomen). This is also consistent with the findings reported for ILD detection in Table III and Figure 11. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_36", "text": " GoogLeNet-TL-H yields results similar to AlexNet-TL-H’s for the mediastinal LN detection, and slightly outperforms Alex-Net-H for abdominal LN detection. AlexNet-RI-H exhibits less severe over-fitting than GoogLeNet-RI-H. We also evaluate a simple ensemble by averaging the probability scores from five CNNs: AlexNet-RI-H, AlexNet-TL-H, AlexNet-RI-H, GoogLeNet-TL-H and GoogLeNet-RI-L. This combined ensemble outputs the classification accuracies matching or slightly exceeding the best performing individual CNN models on the mediastinal or abdominal LN detection tasks, respectively. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_37", "text": " Many of our CNN models achieve notably better (FROC-AUC and TPR/3FP) results than the previous state-of-the-art models for mediastinal LN detection: GoogLeNet-RI-L obtains an AUC=0.95 and 0.85 TPR/3FP, versus AUC=0.92 and 0.70 TPR/3FP and 0.78 TPR/3FP which uses stacked shallow learning. This difference lies in the fact that annotated lymph node segmentation masks are required to learn a mid-level semantic boundary detector , whereas CNN approaches only need LN locations for training . In abdominal LN detection, obtains the best trade-off between its CNN model complexity and sampled data configuration. Our best performing CNN model is GoogLeNet-TL (256x256) which obtains an AUC=0.92 and 0.70 TPR/3FP. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_38", "text": " The main difference between our dataset preparation protocol and that from is a more aggressive extraction of random views within a much larger range of FOVs. 
The usage of larger FOVs to capture more image spatial context is inspired by deep zoom-out features that improve semantic segmentation. This image sampling scheme contributes to our best reported performance results in both mediastinal LN detection (in this paper) and automated pancreas segmentation . As shown in Figure 1, abdominal LNs are surrounded by many other similar looking objects. Meanwhile, mediastinal LNs are more easily distinguishable, due to the images’ larger spatial contexts. Finally, from the perspective of the data-model trade-off: “Do We Need More Training Data or Better Models?” , more abdomen CT scans from distinct patient populations need to be acquired and annotated, in order to take full advantage of deep CNN models of high capacity. Nevertheless, deeper and wider CNN models (e.g., GoogLeNet-RI-L and GoogLeNet-TL-H versus Cifar-10 ) have shown improved results in the mediastinal LN detection. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_39", "text": " Figure 9 provides examples of misclassified lymph nodes (in axial view) (both false negatives (Left) and false positives(Right)), from the Abdomen and Mediastinum datasets. The overall reported LN detection results are clinically significant, as indicated in . ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_40", "text": " The CNN models evaluated in this experiment are 1) AlexNet-RI (training from scratch on the ILD dataset with random initialization); 2) AlexNet-TL (with transfer learning from ); 3) AlexNet-ImNet: pre-trained ImageNet-CNN model with only the last cost function layer retrained from random initialization, according to the six ILD classes (similar to but without using additional hand-crafted non-deep feature descriptors, such as GIST and BoVW); 4) GoogLeNet-RI (random initialization); 5) GoogLeNet-TL (GoogLeNet with transfer learning from ). All ILD images (patches of 64×64646464\\times 64 and CT axial slices of 512×512512512512\\times 512) are re-sampled to a fixed dimension of 256×256256256256\\times 256 pixels. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_41", "text": " We evaluate the ILD classification task with five-fold CV on patient-level split, as it is more informative for real clinical performance than LOO. The classification accuracy rates for interstitial lung disease detection are shown in Table III. Two sub-tasks on ILD patch and slice classifications are conducted. In general, patch-level ILD classification is less challenging than slice-level classification, as far more data samples can be sampled from the manually annotated ROIs (up to 100 image patches per ROI), available from . From Table III, all five deep models evaluated obtain comparable results within the range of classification accuracy rates (0.74,0.76)0.740.76(0.74,0.76). Their averaged model achieves a slightly better accuracy of 0.79. 
", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_42", "text": " F1-scores (38, 39, 54) and the confusion matrix (Table V) for patch-level ILD classification using GoogLeNet-TL under five-fold cross-validation (we denote as Patch-CV5) are also computed. F1-scores are reported on patch classification only (32×32323232\\times 32 pixel patches extracted from manual ROIs) (38, 39, 54), as shown in Table IV. Both and use the evaluation protocol of “leave-one-patient-out” (LOO), which is arguably much easier and not directly comparable to 10-fold CV or our Patch-CV5. In this study, we classify six ILD classes by adding a consolidation (CD) class to five classes of healthy (normal - NM), emphysema (EM), ground glass (GG), fibrosis (FB), and micronodules (MN) in (38, 39, 54). Patch-CV10 and Patch-CV5 report similar medium to high F-scores. This implies that the ILD dataset (although one of the mainstream public medical image datasets) may not adequately represent ILD disease CT lung imaging patterns, over a population of only 120 patients. Patch-CV5 yields higher F-scores than and classifies the extra consolidation (CD) class. At present, the most pressing task is to drastically expand the dataset or to explore across-dataset deep learning on the combined ILD and LTRC datasets . ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_43", "text": " Recently, Gao et al. have argued that a new CADe protocol on holistic classification of ILD diseases directly, using axial CT slice attenuation patterns and CNN, may be more realistic for clinical applications. We refer to this as slice-level classification, as image patch sampling from manual ROIs can be completely avoided (hence, no manual ROI inputs will be provided). The experimental results in are conducted with a patient-level hard split of 100 (training) and 20 (testing). The method’s testing F-scores (i.e., Slice-Test) are given in Table IV. Note that the F-scores in are not directly comparable to our results, due to different evaluation criteria. Only Slice-Test is evaluated and reported in , and we find that F-scores can change drastically from different rounds of the five-fold CV. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_44", "text": " While it is a more practical CADe scheme, slice-level CNN learning is very challenging, as it is restricted to only 905 CT image slices with tagged ILD labels. We only benchmark the slice-level ILD classification results in this section. Even with the help of data augmentation (described in Sec. II), the classification accuracy of GoogLeNet-TL from Table III is only 0.57. However, transfer learning from ImageNet pre-trained model is consistently beneficial, as evidenced by AlexNet-TL (0.46) versus AlexNet-RI (0.44), and GoogLeNet-TL (0.57) versus GoogLeNet-RI (0.41). It especially prevents GoogLeNet from over-fitting on the limited CADe datasets. Finally, when the cross-validation is conducted by randomly splitting the set of all 905 CT axial slices into five folds, markedly higher F-scores are obtained (Slice-Random in Table IV). This further validates the claim that the dataset poorly generalizes ILDs for different patients. 
Figure 10 shows examples of misclassified ILD patches (in axial view), with their ground truth labels and inaccurately classified labels. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_45", "text": " No existing work has reached the performance requirements for a realistic clinical setting , in which simple ROI-guided image patch extraction and classification (which requires manual ROI selection by clinicians) is implemented. The main goal of this paper is to investigate the three factors (CNN architectures, dataset characteristics and transfer learning) that affect performance on a specific medical image analysis problem and to ultimately deliver clinically relevant results. For ILD classification, the most critical performance bottlenecks are the challenge of cross-dataset learning and the limited patient population size. We attempt to overcome these obstacles by merging the ILD and LTRC datasets. Although the ILD and LTRC datasets (used in ) were generated and annotated separately, they contain many common disease labels. For instance, the ILD disease classes emphysema (EM), ground glass (GG), fibrosis (FB), and micronodules (MN) belong to both datasets, and thus can be jointly trained/tested to form a larger and unified dataset. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_46", "text": " Adapting fully convolutional CNN or FCNN to parse every pixel location in the ILD lung CT images or slices, or adapting other methods from CNN based semantic image segmentation using PASCAL or ImageNet, may improve accuracy and efficiency. However, current FCNN approaches (65, 66) lack adequate spatial resolution in their directly output label space. A segmentation label propagation method was recently proposed to provide full pixel-wise labeling of the ILD data images. In this work, we sample image patches from the slice using the ROIs for the ILD provided in the dataset, in order to be consistent with previous methods in patch-level (38, 39, 54) and slice-level classification . ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_47", "text": " In this work, we mainly focus on AlexNet and GoogLeNet. AlexNet is the first notably successful CNN architecture on the ImageNet challenge and has rekindled significant research interests on CNN. GoogLeNet is the state-of-the-art deep model, which has outperformed other notable models, such as AlexNet, OverFeat, and VGGNet (67, 68) in various computer vision benchmarks. Likewise, a reasonable assumption is that OverFeat and VGGNet may generate quantitative performance results ranked between AlexNet’s and GoogLeNet’s. For completeness, we include the Overfeat and VGGNet in the following evaluations, to bolster our hypothesis. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_48", "text": " OverFeat is described in as an integrated framework for using CNN for classification, localization and detection. 
Its architecture is similar to that of AlexNet, but contains far more parameters (e.g., 1024 convolution filters in both “conv4” and “conv5” layers compared to 384 and 256 convolution kernels in the “conv4” and “conv5” layers of AlexNet), and operates more densely (e.g., smaller kernel size of 2 in “pool2” layer “pool5” compared to the kernel size 3 in “pool2” and “pool5” of AlexNet) on the input image. Overfeat is the winning model of the ILSVRC 2013 in detection and classification tasks. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_49", "text": " The VGGNet architecture is introduced in , where it is designed to significantly increase the depth of the existing CNN architectures with 16 or 19 layers. Very small 3×3333\\times 3 size convolutional filters are used in all convolution layers with a convolutional stride of size 1, in order to reduce the number of parameters in deeper networks. Since VGGNet is substantially deeper than the other CNN models, VGGNet is more susceptible to the vanishing gradient problem (69, 70, 71). Hence, the network may be more difficult to train. Training the network requires far more memory and computation time than AlexNet. We use the 16 layer variant as our default VGGNet model in our study. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_50", "text": " The classification accuracy results for ILD slice and patch level classification of five CNN architectures (CifarNet, AlexNet, Overfeat, VGGNet and GoogLeNet) are shown in Table VI. Based on the analysis in Sec. IV-B, transfer learning is only used for the slice level classification task. From Table VI, quantitative classification accuracy rates increase as the CNN model becomes more complex (CifarNet, AlexNet, Overfeat, VGGNet and GoogLeNet, in ascending order), for both ILD slice and patch level classification problems. The reported results validate our assumption that OverFeat’s and VGGNet’s performance levels fall between AlexNet’s and GoogLeNet‘s (this observation is consistent with the computer vision findings). CifarNet is designed for images with smaller dimensions (32×32323232\\times 32 images), and thus is not catered to classification tasks involving 256×256256256256\\times 256 images. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_51", "text": " To investigate the performance difference between five-fold cross-validation (CV) in Sec. IV-B and leave-one-patient-out (LOO) validation, this experiment is performed under the LOO protocol. By comparing results in Table III (CV-5) to those in Table VI (LOO), one can see that LOO’s quantitative performances are remarkably better than CV-5’s. For example, in ILD slice-level classification, the accuracy level drastically increases from 0.46 to 0.867 using AlexNet-TL, and from 0.57 to 0.902 for GoogLeNet-TL. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_52", "text": " CNN training is implemented with the Caffe deep learning framework, using a NVidia K40 GPU on Ubuntu 14.04 Linux OS. 
All models are trained for up to 90 epochs with early stopping criteria, where a model snapshot with low validation loss is taken for the final model. Other hyper-parameters are fixed as follows: momentum: 0.9; weight decay: 0.0005; and a step learning rate schedule with base learning rate of 0.01, decreased by a factor of 10 every 30 epochs. The image batch size is set to 128, except for GoogLeNet’s (64) and VGG-16’s (32), which are the maximum batch sizes that can fit in the NVidia K40 GPU with 12GB of memory capacity. Table VII illustrates the training time and memory requirements of the five CNN architectures on ILD patch-based classification up to 90 epochs. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_53", "text": " Medical datasets are often “biased”, in that the number of healthy samples is much larger than the number of diseased instances, or that the numbers of images per class are uneven. In ILD dataset, the number of fibrosis samples is about 3.5 times greater than the number of emphysema samples. The number of non-LNs is 3∼4similar-to343\\sim 4 times greater than the number of LNs in lymph node detection. Different sampling or resampling rates are routinely applied to both ILD and LN detection to balance the data sample number or scale per class, as in. We refer this as “Equal Prior”. If we use the same sampling rate, that will lead to a “Biased Prior” across different classes. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_54", "text": " Without loss of generality, after GoogLeNet is trained on the training sets under “Equal” or “Biased” priors, we compare its classification results on the balanced validation sets. Evaluating a classifier on a biased validation set will cause unfair assessment of its performance. For instance, a classifier that predicts every image patch as “non-LN” will still achieve a 70%percent7070\\% accuracy rate on a biased set with 3.53.53.5 times as many non-LN samples as LN samples. The classification accuracy results of GoogLeNet trained under two configurations are shown in Table VIII. Overall, it achieves lower accuracy results when trained with a “biased prior” in both tasks, and the accuracy difference for ILD patch-based classification is small. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_55", "text": " In this section, we determine and analyze, via CNN visualization, the reasons for which transfer learning is beneficial to achieve better performance on CAD applications. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_56", "text": " Thoracoabdominal LN Detection. In Figure 12, the first layer convolution filters from five different CNN architectures are visualized. We notice that without transfer learning (57, 6), somewhat blurry filters are learned (AlexNet-RI (256x256), AlexNet-RI (64x64), GoogLeNet-RI (256x256) and GoogLeNet-RI (64x64)). However, in AlexNet-TL (256x256), many higher orders of contrast- or edge-preserving patterns (that enable capturing image appearance details) are evidently learned through fine-tuning from ImageNet. 
With a smaller input resolution, AlexNet-RI (64x64) and GoogLeNet-RI (64x64) can learn image contrast filters to some degree; whereas, GoogLeNet-RI (256x256) and AlexNet-RI (256x256) have over-smooth low-level filters throughout. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_57", "text": " ILD classification. We focus on analyzing visual CNN optimization traces and activations from the ILD dataset, as its slice-level setting is most similar to ImageNet’s. Indeed, both datasets use full-size images. The traces of the training loss, validation loss and validation accuracy of AlexNet-RI and AlexNet-TL, are shown in Figure 11. For AlexNet-RI in Figure 11 (a), the training loss significantly decreases as the number of training epochs increases, while the validation loss notably increases and the validation accuracy does not improve much before reaching a plateau. With transfer learning and fine-tuning, much better and consistent performances of training loss, validation loss and validation accuracy traces are obtained (see Figure 11 (b)). We begin the optimization problem – that of fine-tuning the ImageNet pre-trained CNN to classify a comprehensive set of images – by initializing the parameters close to an optimal solution. One could compare this process to making adults learn to classify ILDs, as opposed to babies. During the process, the validation loss, having remained at lower values throughout, achieves higher final accuracy levels than the validation loss on a similar problem with random initialization. Meanwhile, the training losses in both cases decrease to values near zero. This indicates that both AlexNet-RI and AlexNet-TL over-fit on the ILD dataset, due to its small instance size. The quantitative results in Table III indicate that AlexNet-TL and GoogLeNet-TL have consistently better classification accuracies than AlexNet-RI and GoogLeNet-RI, respectively. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_58", "text": " The last pooling layer (pool-5) activation maps of the ImageNet pre-trained AlexNet (analogical to AlexNet-ImNet) and AlexNet-TL, obtained by processing two input images of Figure 2 (b,c), are shown in Figure 13 (a,b). The last pooling layer activation map summarizes the entire input image by highlighting which relative locations or neural reception fields relative to the image are activated. There are a total of 256 (6x6) reception fields in AlexNet . Pooling units where the relative image location of the disease region is present in the image are highlighted with green boxes. Next, we reconstruct the original ILD images using the process of de-convolution, back-propagating with convolution and un-pooling from the activation maps of the chosen pooling units . From the reconstructed images (Figure 13 bottom), we observe that with fine-tuning, AlexNet-TL detects and localizes objects of interest (ILD disease regions depicted in in Figure 2 (b) and (c)) better than AlexNet-ImNet. The filters shown in Figure 13 that better localize regions on the input images (Figure 2 (b) and (c)) respectively, produce relatively higher activations (in the top 5%) among all 512 reception field responses in the fine-tuned AlexNet-TL model. 
As observed in , the final CNN classification score can not be driven solely by a single strong activation in the receptions fields, but often by a sparse set of high activations (i.e., varying selective or sparse activations per input image). ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_59", "text": " We summarize our findings as follows. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_60", "text": " • Deep CNN architectures with 8, even 22 layers (4, 33), can be useful even for CADe problems where the available training datasets are limited. Previously, CNN models used in medical image analysis applications have often been 2∼5similar-to252\\sim 5 orders of magnitude smaller. • The trade-off between using better learning models and using more training data should be carefully considered when searching for an optimal solution to any CADe problem (e.g., mediastinal and abdominal LN detection). • Limited datasets can be a bottleneck to further advancement of CADe. Building progressively growing (in scale), well annotated datasets is at least as crucial as developing new algorithms. This has been accomplished, for instance, in the field of computer vision. The well-known scene recognition problem has made tremendous progress, thanks to the steady and continuous development of Scene-15, MIT Indoor-67, SUN-397 and Place datasets . • Transfer learning from the large scale annotated natural image datasets (ImageNet) to CADe problems has been consistently beneficial in our experiments. This sheds some light on cross-dataset CNN learning in the medical image domain, e.g., the union of the ILD and LTRC datasets , as suggested in this paper. • Finally, applications of off-the-shelf deep CNN image features to CADe problems can be improved by either exploring the performance-complementary properties of hand-crafted features (10, 9, 12), or by training CNNs from scratch and better fine-tuning CNNs on the target medical image dataset, as evaluated in this paper. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_61", "text": " In this paper, we exploit and extensively evaluate three important, previously under-studied factors on deep convolutional neural networks (CNN) architecture, dataset characteristics, and transfer learning. We evaluate CNN performance on two different computer-aided diagnosis applications: thoraco-abdominal lymph node detection and interstitial lung disease classification. The empirical evaluation, CNN model visualization, CNN performance analysis, and conclusive insights can be generalized to the design of high performance CAD systems for other medical imaging tasks. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" } ]
Did the authors try other interpolation methods besides the bilinear sampling mechanism?
No; for warping, only the differentiable bilinear sampling mechanism is used [15].
[ 15 ]
[ { "id": "1704.07813_all_0", "text": " Humans are remarkably capable of inferring ego-motion and the 3D structure of a scene even over short timescales. For instance, in navigating along a street, we can easily locate obstacles and react quickly to avoid them. Years of research in geometric computer vision has failed to recreate similar modeling capabilities for real-world scenes (e.g., where non-rigidity, occlusion and lack of texture are present). So why do humans excel at this task? One hypothesis is that we develop a rich, structural understanding of the world through our past visual experience that has largely consisted of moving around and observing vast numbers of scenes and developing consistent modeling of our observations. From millions of such observations, we have learned about the regularities of the world—roads are flat, buildings are straight, cars are supported by roads etc., and we can apply this knowledge when perceiving a new scene, even from a single monocular image. ", "title": "Unsupervised Learning of Depth and Ego-Motion from Video" }, { "id": "1704.07813_all_1", "text": " In this work, we mimic this approach by training a model that observes sequences of images and aims to explain its observations by predicting likely camera motion and the scene structure (as shown in Fig. 1). We take an end-to-end approach in allowing the model to map directly from input pixels to an estimate of ego-motion (parameterized as 6-DoF transformation matrices) and the underlying scene structure (parameterized as per-pixel depth maps under a reference view). We are particularly inspired by prior work that has suggested view synthesis as a metric  and recent work that tackles the calibrated, multi-view 3D case in an end-to-end framework . Our method is unsupervised, and can be trained simply using sequences of images with no manual labeling or even camera motion information. ", "title": "Unsupervised Learning of Depth and Ego-Motion from Video" }, { "id": "1704.07813_all_2", "text": " Our approach builds upon the insight that a geometric view synthesis system only performs consistently well when its intermediate predictions of the scene geometry and the camera poses correspond to the physical ground-truth. While imperfect geometry and/or pose estimation can cheat with reasonable synthesized views for certain types of scenes (e.g., textureless), the same model would fail miserably when presented with another set of scenes with more diverse layout and appearance structures. Thus, our goal is to formulate the entire view synthesis pipeline as the inference procedure of a convolutional neural network, so that by training the network on large-scale video data for the ‘meta’-task of view synthesis the network is forced to learn about intermediate tasks of depth and camera pose estimation in order to come up with a consistent explanation of the visual world. Empirical evaluation on the KITTI  benchmark demonstrates the effectiveness of our approach on both single-view depth and camera pose estimation. Our code will be made available at  https://github.com/tinghuiz/SfMLearner. ", "title": "Unsupervised Learning of Depth and Ego-Motion from Video" }, { "id": "1704.07813_all_3", "text": " The simultaneous estimation of structure and motion is a well studied problem with an established toolchain of techniques (12, 50, 38). 
Whilst the traditional toolchain is effective and efficient in many cases, its reliance on accurate image correspondence can cause problems in areas of low texture, complex geometry/photometry, thin structures, and occlusions. To address these issues, several of the pipeline stages have been recently tackled using deep learning, e.g., feature matching , pose estimation , and stereo (10, 27, 53). These learning-based techniques are attractive in that they are able to leverage external supervision during training, and potentially overcome the above issues when applied to test data. ", "title": "Unsupervised Learning of Depth and Ego-Motion from Video" }, { "id": "1704.07813_all_4", "text": " One important application of geometric scene understanding is the task of novel view synthesis, where the goal is to synthesize the appearance of the scene seen from novel camera viewpoints. A classic paradigm for view synthesis is to first either estimate the underlying 3D geometry explicitly or establish pixel correspondence among input views, and then synthesize the novel views by compositing image patches from the input views (e.g., (4, 55, 43, 6, 9)). Recently, end-to-end learning has been applied to reconstruct novel views by transforming the input based on depth or flow, e.g., DeepStereo , Deep3D  and Appearance Flows . In these methods, the underlying geometry is represented by quantized depth planes (DeepStereo), probabilistic disparity maps (Deep3D) and view-dependent flow fields (Appearance Flows), respectively. Unlike methods that directly map from input views to the target view (e.g., ), warping-based methods are forced to learn intermediate predictions of geometry and/or correspondence. In this work, we aim to distill such geometric reasoning capability from CNNs trained to perform warping-based view synthesis. ", "title": "Unsupervised Learning of Depth and Ego-Motion from Video" }, { "id": "1704.07813_all_5", "text": " Our work is closely related to a line of recent research on learning single-view 3D inference from registered 2D observations. Garg et al.  propose to learn a single-view depth estimation CNN using projection errors to a calibrated stereo twin for supervision. Concurrently, Deep3D  predicts a second stereo viewpoint from an input image using stereoscopic film footage as training data. A similar approach was taken by Godard et al. , with the addition of a left-right consistency constraint, and a better architecture design that led to impressive performance. Like our approach, these techniques only learn from image observations of the world, unlike methods that require explicit depth for training, e.g., (20, 42, 7, 27, 30). ", "title": "Unsupervised Learning of Depth and Ego-Motion from Video" }, { "id": "1704.07813_all_6", "text": " These techniques bear some resemblance to direct methods for structure and motion estimation , where the camera parameters and scene depth are adjusted to minimize a pixel-based error function. However, rather than directly minimizing the error to obtain the estimation, the CNN-based methods only take a gradient step for each batch of input instances, which allows the network to learn an implicit prior from a large corpus of related imagery. Several authors have explored building differentiable rendering operations into their models that are trained in this way, e.g., (19, 29, 34). 
", "title": "Unsupervised Learning of Depth and Ego-Motion from Video" }, { "id": "1704.07813_all_7", "text": " While most of the above techniques (including ours) are mainly focused on inferring depth maps as the scene geometry output, recent work (e.g., (13, 41, 46, 52)) has also shown success in learning 3D volumetric representations from 2D observations based on similar principles of projective geometry. Fouhey et al.  further show that it is even possible to learn 3D inference without 3D labels (or registered 2D views) by utilizing scene regularity. ", "title": "Unsupervised Learning of Depth and Ego-Motion from Video" }, { "id": "1704.07813_all_8", "text": " Another line of related work to ours is visual representation learning from video, where the general goal is to design pretext tasks for learning generic visual features from video data that can later be re-purposed for other vision tasks such as object detection and semantic segmentation. Such pretext tasks include ego-motion estimation (2, 24), tracking , temporal coherence , temporal order verification , and object motion mask prediction . While we focus on inferring the explicit scene geometry and ego-motion in this work, intuitively, the internal representation learned by the deep network (especially the single-view depth CNN) should capture some level of semantics that could generalize to other tasks as well. ", "title": "Unsupervised Learning of Depth and Ego-Motion from Video" }, { "id": "1704.07813_all_9", "text": " Concurrent to our work, Vijayanarasimhan et al.  independently propose a framework for joint training of depth, camera motion and scene motion from videos. While both methods are conceptually similar, ours is focused on the unsupervised aspect, whereas their framework adds the capability to incorporate supervision (e.g., depth, camera motion or scene motion). There are significant differences in how scene dynamics are modeled during training, in which they explicitly solve for object motion whereas our explainability mask discounts regions undergoing motion, occlusion and other factors. ", "title": "Unsupervised Learning of Depth and Ego-Motion from Video" }, { "id": "1704.07813_all_10", "text": " Here we propose a framework for jointly training a single-view depth CNN and a camera pose estimation CNN from unlabeled video sequences. Despite being jointly trained, the depth model and the pose estimation model can be used independently during test-time inference. Training examples to our model consist of short image sequences of scenes captured by a moving camera. While our training procedure is robust to some degree of scene motion, we assume that the scenes we are interested in are mostly rigid, i.e., the scene appearance change across different frames is dominated by the camera motion. ", "title": "Unsupervised Learning of Depth and Ego-Motion from Video" }, { "id": "1704.07813_all_11", "text": " The key supervision signal for our depth and pose prediction CNNs comes from the task of novel view synthesis: given one input view of a scene, synthesize a new image of the scene seen from a different camera pose. We can synthesize a target view given a per-pixel depth in that image, plus the pose and visibility in a nearby view. As we will show next, this synthesis process can be implemented in a fully differentiable manner with CNNs as the geometry and pose estimation modules. 
Visibility can be handled, along with non-rigidity and other non-modeled factors, using an “explainability” mask, which we discuss later (Sec. 3.3). ", "title": "Unsupervised Learning of Depth and Ego-Motion from Video" }, { "id": "1704.07813_all_12", "text": " Let us denote <I_{1},\ldots,I_{N}> as a training image sequence with one of the frames I_{t} being the target view and the rest being the source views I_{s} (1\leq s\leq N, s\neq t). The view synthesis objective can be formulated as \mathcal{L}_{vs}=\sum_{s}\sum_{p}|I_{t}(p)-\hat{I}_{s}(p)|, (1) where p indexes over pixel coordinates, and \hat{I}_{s} is the source view I_{s} warped to the target coordinate frame based on a depth image-based rendering module  (described in Sec. 3.2), taking as input the predicted depth \hat{D}_{t}, the predicted 4\times 4 camera transformation matrix \hat{T}_{t\rightarrow s} (in practice, the CNN estimates the Euler angles and the 3D translation vector, which are then converted to the transformation matrix) and the source view I_{s}. ", "title": "Unsupervised Learning of Depth and Ego-Motion from Video" }, { "id": "1704.07813_all_13", "text": " Note that the idea of view synthesis as supervision has also been recently explored for learning single-view depth estimation (14, 16) and multi-view stereo . However, to the best of our knowledge, all previous work requires posed image sets during training (and testing too in the case of DeepStereo), while our framework can be applied to standard videos without pose information. Furthermore, it predicts the poses as part of the learning framework. See Figure 2 for an illustration of our learning pipeline for depth and pose estimation. ", "title": "Unsupervised Learning of Depth and Ego-Motion from Video" }, { "id": "1704.07813_all_14", "text": " As indicated in Eq. 1, a key component of our learning framework is a differentiable depth image-based renderer that reconstructs the target view I_{t} by sampling pixels from a source view I_{s} based on the predicted depth map \hat{D}_{t} and the relative pose \hat{T}_{t\rightarrow s}. ", "title": "Unsupervised Learning of Depth and Ego-Motion from Video" }, { "id": "1704.07813_all_15", "text": " Let p_{t} denote the homogeneous coordinates of a pixel in the target view, and K denote the camera intrinsics matrix. We can obtain p_{t}'s projected coordinates onto the source view p_{s} by p_{s}\sim K\hat{T}_{t\rightarrow s}\hat{D}_{t}(p_{t})K^{-1}p_{t}. (2) (For notation simplicity, we omit the necessary conversion to homogeneous coordinates along the steps of matrix multiplication.) Notice that the projected coordinates p_{s} are continuous values. 
To obtain I_{s}(p_{s}) for populating the value of \hat{I}_{s}(p_{t}) (see Figure 3), we then use the differentiable bilinear sampling mechanism proposed in the spatial transformer networks  that linearly interpolates the values of the 4-pixel neighbors (top-left, top-right, bottom-left, and bottom-right) of p_{s} to approximate I_{s}(p_{s}), i.e. \hat{I}_{s}(p_{t})=I_{s}(p_{s})=\sum_{i\in\{t,b\},j\in\{l,r\}}w^{ij}I_{s}(p_{s}^{ij}), where w^{ij} is linearly proportional to the spatial proximity between p_{s} and p_{s}^{ij}, and \sum_{i,j}w^{ij}=1. A similar strategy is used in  for learning to directly warp between different views, while here the coordinates for pixel warping are obtained through projective geometry that enables the factorization of depth and camera pose. ", "title": "Unsupervised Learning of Depth and Ego-Motion from Video" }, { "id": "1704.07813_all_16", "text": " Note that when applied to monocular videos the above view synthesis formulation implicitly assumes 1) the scene is static without moving objects; 2) there is no occlusion/disocclusion between the target view and the source views; 3) the surface is Lambertian so that the photo-consistency error is meaningful. If any of these assumptions are violated in a training sequence, the gradients could be corrupted and potentially inhibit training. To improve the robustness of our learning pipeline to these factors, we additionally train an explainability prediction network (jointly and simultaneously with the depth and pose networks) that outputs a per-pixel soft mask \hat{E}_{s} for each target-source pair, indicating the network's belief in where direct view synthesis will be successfully modeled for each target pixel. Based on the predicted \hat{E}_{s}, the view synthesis objective is weighted correspondingly by \mathcal{L}_{vs}=\sum_{<I_{1},\ldots,I_{N}>\in\mathcal{S}}\sum_{p}\hat{E}_{s}(p)|I_{t}(p)-\hat{I}_{s}(p)|. (3) Since we do not have direct supervision for \hat{E}_{s}, training with the above loss would result in a trivial solution of the network always predicting \hat{E}_{s} to be zero, which perfectly minimizes the loss. To resolve this, we add a regularization term \mathcal{L}_{reg}(\hat{E}_{s}) that encourages nonzero predictions by minimizing the cross-entropy loss with constant label 1 at each pixel location. In other words, the network is encouraged to minimize the view synthesis objective, but allowed a certain amount of slack for discounting the factors not considered by the model. 
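For concreteness, the warping described above can be sketched in a few lines of NumPy: project every target pixel into the source view with Eq. (2), then bilinearly interpolate the source image at the resulting continuous coordinates. This is only an illustrative sketch under simplifying assumptions (the function names, the boundary clamping, and the omission of batching, validity masking, and automatic differentiation are ours, not the authors'); the paper's actual implementation is the differentiable spatial-transformer-style sampler inside its training graph.

```python
import numpy as np

def project_to_source(depth_t, K, T_t_to_s):
    """Eq. (2): project target pixels into the source view, assuming a pinhole
    intrinsics matrix K (3x3) and a 4x4 relative pose T_t_to_s.
    depth_t: (H, W) predicted target depth. Returns (H, W, 2) continuous (x, y)."""
    H, W = depth_t.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T  # 3 x HW
    cam = np.linalg.inv(K) @ pix * depth_t.reshape(1, -1)                 # back-project
    cam_h = np.vstack([cam, np.ones((1, cam.shape[1]))])                  # homogeneous
    src = K @ (T_t_to_s @ cam_h)[:3]                                      # re-project
    src = (src[:2] / np.clip(src[2:3], 1e-6, None)).T.reshape(H, W, 2)
    return src

def bilinear_sample(img_s, coords):
    """Bilinearly interpolate img_s (H, W, C) at continuous coords (H, W, 2);
    the four weights are proportional to spatial proximity and sum to 1."""
    H, W, _ = img_s.shape
    x = np.clip(coords[..., 0], 0, W - 1)
    y = np.clip(coords[..., 1], 0, H - 1)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.clip(x0 + 1, 0, W - 1), np.clip(y0 + 1, 0, H - 1)
    wx, wy = x - x0, y - y0
    top    = img_s[y0, x0] * (1 - wx)[..., None] + img_s[y0, x1] * wx[..., None]
    bottom = img_s[y1, x0] * (1 - wx)[..., None] + img_s[y1, x1] * wx[..., None]
    return top * (1 - wy)[..., None] + bottom * wy[..., None]
```

The per-pixel photometric terms in Eqs. (1) and (3) would then compare I_t against bilinear_sample(I_s, project_to_source(D_t, K, T_t_to_s)).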
", "title": "Unsupervised Learning of Depth and Ego-Motion from Video" }, { "id": "1704.07813_all_17", "text": " One remaining issue with the above learning pipeline is that the gradients are mainly derived from the pixel intensity difference between I​(pt)𝐼subscript𝑝𝑡I(p_{t}) and the four neighbors of I​(ps)𝐼subscript𝑝𝑠I(p_{s}), which would inhibit training if the correct pssubscript𝑝𝑠p_{s} (projected using the ground-truth depth and pose) is located in a low-texture region or far from the current estimation. This is a well known issue in motion estimation . Empirically, we found two strategies to be effective for overcoming this issue: 1) using a convolutional encoder-decoder architecture with a small bottleneck for the depth network that implicitly constrains the output to be globally smooth and facilitates gradients to propagate from meaningful regions to nearby regions; 2) explicit multi-scale and smoothness loss (e.g., as in (14, 16)) that allows gradients to be derived from larger spatial regions directly. We adopt the second strategy in this work as it is less sensitive to architectural choices. For smoothness, we minimize the L1subscript𝐿1L_{1} norm of the second-order gradients for the predicted depth maps (similar to ). ", "title": "Unsupervised Learning of Depth and Ego-Motion from Video" }, { "id": "1704.07813_all_18", "text": " Our final objective becomes ℒf​i​n​a​l=∑lℒv​sl+λs​ℒs​m​o​o​t​hl+λe​∑sℒr​e​g​(E^sl),subscriptℒ𝑓𝑖𝑛𝑎𝑙subscript𝑙superscriptsubscriptℒ𝑣𝑠𝑙subscript𝜆𝑠subscriptsuperscriptℒ𝑙𝑠𝑚𝑜𝑜𝑡ℎsubscript𝜆𝑒subscript𝑠subscriptℒ𝑟𝑒𝑔superscriptsubscript^𝐸𝑠𝑙\\mathcal{L}_{final}=\\sum_{l}\\mathcal{L}_{vs}^{l}+\\lambda_{s}\\mathcal{L}^{l}_{smooth}+\\lambda_{e}\\sum_{s}\\mathcal{L}_{reg}(\\hat{E}_{s}^{l})~{}, (4) where l𝑙l indexes over different image scales, s𝑠s indexes over source images, and λssubscript𝜆𝑠\\lambda_{s} and λesubscript𝜆𝑒\\lambda_{e} are the weighting for the depth smoothness loss and the explainability regularization, respectively. ", "title": "Unsupervised Learning of Depth and Ego-Motion from Video" }, { "id": "1704.07813_all_19", "text": " For single-view depth prediction, we adopt the DispNet architecture proposed in  that is mainly based on an encoder-decoder design with skip connections and multi-scale side predictions (see Figure 4). All conv layers are followed by ReLU activation except for the prediction layers, where we use 1/(α∗s​i​g​m​o​i​d​(x)+β)1𝛼𝑠𝑖𝑔𝑚𝑜𝑖𝑑𝑥𝛽1/(\\alpha*sigmoid(x)+\\beta) with α=10𝛼10\\alpha=10 and β=0.01𝛽0.01\\beta=0.01 to constrain the predicted depth to be always positive within a reasonable range. We also experimented with using multiple views as input to the depth network, but did not find this to improve the results. This is in line with the observations in , where optical flow constraints need to be enforced to utilize multiple views effectively. ", "title": "Unsupervised Learning of Depth and Ego-Motion from Video" }, { "id": "1704.07813_all_20", "text": " The input to the pose estimation network is the target view concatenated with all the source views (along the color channels), and the outputs are the relative poses between the target view and each of the source views. The network consists of 777 stride-2 convolutions followed by a 1×1111\\times 1 convolution with 6∗(N−1)6𝑁16*(N-1) output channels (corresponding to 333 Euler angles and 333-D translation for each source view). Finally, global average pooling is applied to aggregate predictions at all spatial locations. 
All conv layers are followed by ReLU except for the last layer where no nonlinear activation is applied. ", "title": "Unsupervised Learning of Depth and Ego-Motion from Video" }, { "id": "1704.07813_all_21", "text": " The explainability prediction network shares the first five feature encoding layers with the pose network, followed by 555 deconvolution layers with multi-scale side predictions. All conv/deconv layers are followed by ReLU except for the prediction layers with no nonlinear activation. The number of output channels for each prediction layer is 2∗(N−1)2𝑁12*(N-1), with every two channels normalized by softmax to obtain the explainability prediction for the corresponding source-target pair (the second channel after normalization is E^ssubscript^𝐸𝑠\\hat{E}_{s} and used in computing the loss in Eq. 3). ", "title": "Unsupervised Learning of Depth and Ego-Motion from Video" }, { "id": "1704.07813_all_22", "text": " Here we evaluate the performance of our system, and compare with prior approaches on single-view depth as well as ego-motion estimation. We mainly use the KITTI dataset  for benchmarking, but also use the Make3D dataset  for evaluating cross-dataset generalization ability. ", "title": "Unsupervised Learning of Depth and Ego-Motion from Video" }, { "id": "1704.07813_all_23", "text": " We implemented the system using the publicly available TensorFlow  framework. For all the experiments, we set λs=0.5/lsubscript𝜆𝑠0.5𝑙\\lambda_{s}=0.5/l (l𝑙l is the downscaling factor for the corresponding scale) and λe=0.2subscript𝜆𝑒0.2\\lambda_{e}=0.2. During training, we used batch normalization  for all the layers except for the output layers, and the Adam  optimizer with β1=0.9subscript𝛽10.9\\beta_{1}=0.9, β2=0.999subscript𝛽20.999\\beta_{2}=0.999, learning rate of 0.00020.00020.0002 and mini-batch size of 444. The training typically converges after about 150​K150𝐾150K iterations. All the experiments are performed with image sequences captured with a monocular camera. We resize the images to 128×416128416128\\times 416 during training, but both the depth and pose networks can be run fully-convolutionally for images of arbitrary size at test time. ", "title": "Unsupervised Learning of Depth and Ego-Motion from Video" }, { "id": "1704.07813_all_24", "text": " We train our system on the split provided by , and exclude all the frames from the testing scenes as well as static sequences with mean optical flow magnitude less than 111 pixel for training. We fix the length of image sequences to be 333 frames, and treat the central frame as the target view and the ±1plus-or-minus1\\pm 1 frames as the source views. We use images captured by both color cameras, but treated them independently when forming training sequences. This results in a total of 44,5404454044,540 sequences, out of which we use 40,1094010940,109 for training and 4,43144314,431 for validation. ", "title": "Unsupervised Learning of Depth and Ego-Motion from Video" }, { "id": "1704.07813_all_25", "text": " To the best of our knowledge, no previous systems exist that learn single-view depth estimation in an unsupervised manner from monocular videos. Nonetheless, here we provide comparison with prior methods with depth supervision  and recent methods that use calibrated stereo images (i.e. with pose supervision) for training (14, 16). Since the depth predicted by our method is defined up to a scale factor, for evaluation we multiply the predicted depth maps by a scalar s^^𝑠\\hat{s} that matches the median with the ground-truth, i.e. 
\hat{s}=median(D_{gt})/median(D_{pred}). ", "title": "Unsupervised Learning of Depth and Ego-Motion from Video" }, { "id": "1704.07813_all_26", "text": " Similar to , we also experimented with first pre-training the system on the larger Cityscapes dataset  (sample predictions are shown in Figure 5), and then fine-tuning on KITTI, which results in a slight performance improvement. ", "title": "Unsupervised Learning of Depth and Ego-Motion from Video" }, { "id": "1704.07813_all_27", "text": " Here we evaluate the single-view depth performance on the 697 images from the test split of . As shown in Table 1, our unsupervised method performs comparably with several supervised methods (e.g. Eigen et al.  and Garg et al. ), but falls short of concurrent work by Godard et al.  that uses calibrated stereo images (i.e. with pose supervision) with left-right cycle consistency loss for training. For future work, it would be interesting to see if incorporating a similar cycle consistency loss into our framework could further improve the results. Figure 6 provides a visual comparison between our results and some supervised baselines over a variety of examples. One can see that although trained in an unsupervised manner, our results are comparable to those of the supervised baselines, and sometimes preserve the depth boundaries and thin structures such as trees and street lights better. ", "title": "Unsupervised Learning of Depth and Ego-Motion from Video" }, { "id": "1704.07813_all_28", "text": " We show sample predictions made by our initial Cityscapes model and the final model (pre-trained on Cityscapes and then fine-tuned on KITTI) in Figure 7. Due to the domain gap between the two datasets, our Cityscapes model sometimes has difficulty in recovering the complete shape of the car/bushes, and mistakes them for distant objects. ", "title": "Unsupervised Learning of Depth and Ego-Motion from Video" }, { "id": "1704.07813_all_29", "text": " We also performed an ablation study of the explainability modeling (see Table 1), which turns out to offer only a modest performance boost. This is likely because 1) most of the KITTI scenes are static without significant scene motions, and 2) the occlusion/visibility effects only occur in small regions in sequences across a short time span (3 frames), which make the explainability modeling less essential to the success of training. Nonetheless, our explainability prediction network does seem to capture factors like scene motion and visibility well (see Sec. 4.3), and could potentially be more important for other more challenging datasets. ", "title": "Unsupervised Learning of Depth and Ego-Motion from Video" }, { "id": "1704.07813_all_30", "text": " To evaluate the generalization ability of our single-view depth model, we directly apply our model trained on Cityscapes + KITTI to the Make3D dataset unseen during training. While there still remains a significant performance gap between our method and others supervised using Make3D ground-truth depth (see Table 2), our predictions are able to capture the global scene layout reasonably well without any training on the Make3D images (see Figure 8). 
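Since monocular depth is only recovered up to scale, the per-image median alignment \hat{s}=median(D_{gt})/median(D_{pred}) used for evaluation is easy to reproduce. The sketch below is plain NumPy; the validity mask and depth clamp range are illustrative choices of ours, not values taken from the paper, and only one of the standard error metrics (absolute relative error) is shown.

```python
import numpy as np

def scale_and_abs_rel(d_pred, d_gt, min_depth=1e-3, max_depth=80.0):
    """Align predicted depth to ground truth by median scaling
    (s = median(D_gt) / median(D_pred)), then report absolute relative error.
    min_depth/max_depth define an assumed valid-depth range for the mask."""
    mask = (d_gt > min_depth) & (d_gt < max_depth)
    s = np.median(d_gt[mask]) / np.median(d_pred[mask])
    d = np.clip(d_pred * s, min_depth, max_depth)
    abs_rel = np.mean(np.abs(d[mask] - d_gt[mask]) / d_gt[mask])
    return s, abs_rel
```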
", "title": "Unsupervised Learning of Depth and Ego-Motion from Video" }, { "id": "1704.07813_all_31", "text": " To evaluate the performance of our pose estimation network, we applied our system to the official KITTI odometry split (containing 111111 driving sequences with ground truth odometry obtained through the IMU/GPS readings, which we use for evaluation purpose only), and used sequences 000000-080808 for training and 090909-101010 for testing. In this experiment, we fix the length of input image sequences to our system to 555 frames. We compare our ego-motion estimation with two variants of monocular ORB-SLAM  (a well-established SLAM system): 1) ORB-SLAM (full), which recovers odometry using all frames of the driving sequence (i.e. allowing loop closure and re-localization), and 2) ORB-SLAM (short), which runs on 555-frame snippets (same as our input setting). Another baseline we compare with is the dataset mean of car motion (using ground-truth odometry) for 555-frame snippets. To resolve scale ambiguity during evaluation, we first optimize the scaling factor for the predictions made by each method to best align with the ground truth, and then measure the Absolute Trajectory Error (ATE)  as the metric. ATE is computed on 555-frame snippets and averaged over the full sequence.333For evaluating ORB-SLAM (full) we break down the trajectory of the full sequence into 555-frame snippets with the reference coordinate frame adjusted to the central frame of each snippet. As shown in Table 3 and Fig. 9, our method outperforms both baselines (mean odometry and ORB-SLAM (short)) that share the same input setting as ours, but falls short of ORB-SLAM (full), which leverages whole sequences (159115911591 for seq. 090909 and 120112011201 for seq. 101010) for loop closure and re-localization. ", "title": "Unsupervised Learning of Depth and Ego-Motion from Video" }, { "id": "1704.07813_all_32", "text": " For better understanding of our pose estimation results, we show in Figure 9 the ATE curve with varying amount of side-rotation by the car between the beginning and the end of a sequence. Figure 9 suggests that our method is significantly better than ORB-SLAM (short) when the side-rotation is small (i.e. car mostly driving forward), and comparable to ORB-SLAM (full) across the entire spectrum. The large performance gap between ours and ORB-SLAM (short) suggests that our learned ego-motion could potentially be used as an alternative to the local estimation modules in monocular SLAM systems. ", "title": "Unsupervised Learning of Depth and Ego-Motion from Video" }, { "id": "1704.07813_all_33", "text": " We visualize example explainability masks predicted by our network in Figure 10. The first three rows suggest that the network has learned to identify dynamic objects in the scene as unexplainable by our model, and similarly, rows 4–5 are examples of objects that disappear from the frame in subsequent views. The last two rows demonstrate the potential downside of explainability-weighted loss: the depth CNN has low confidence in predicting thin structures well, and tends to mask them as unexplainable. ", "title": "Unsupervised Learning of Depth and Ego-Motion from Video" }, { "id": "1704.07813_all_34", "text": " We have presented an end-to-end learning pipeline that utilizes the task of view synthesis for supervision of single-view depth and camera pose estimation. The system is trained on unlabeled videos, and yet performs comparably with approaches that require ground-truth depth or pose for training. 
Despite good performance on the benchmark evaluation, our method is by no means close to solving the general problem of unsupervised learning of 3D scene structure inference. A number of major challenges are yet to be addressed: 1) our current framework does not explicitly estimate scene dynamics and occlusions (although they are implicitly taken into account by the explainability masks), both of which are critical factors in 3D scene understanding. Direct modeling of scene dynamics through motion segmentation (e.g. (48, 40)) could be a potential solution; 2) our framework assumes the camera intrinsics are given, which forbids the use of random Internet videos with unknown camera types/calibration – we plan to address this in future work; 3) depth maps are a simplified representation of the underlying 3D scene. It would be interesting to extend our framework to learn full 3D volumetric representations (e.g.  ). ", "title": "Unsupervised Learning of Depth and Ego-Motion from Video" }, { "id": "1704.07813_all_35", "text": " Another interesting area for future work would be to investigate in more detail the representation learned by our system. In particular, the pose network likely uses some form of image correspondence in estimating the camera motion, whereas the depth estimation network likely recognizes common structural features of scenes and objects. It would be interesting to probe these, and investigate the extent to which our network already performs, or could be re-purposed to perform, tasks such as object detection and semantic segmentation. ", "title": "Unsupervised Learning of Depth and Ego-Motion from Video" }, { "id": "1704.07813_all_36", "text": " We thank our colleagues, Sudheendra Vijayanarasimhan, Susanna Ricco, Cordelia Schmid, Rahul Sukthankar, and Katerina Fragkiadaki for their help. We also thank the anonymous reviewers for their valuable comments. TZ would like to thank Shubham Tulsiani for helpful discussions, and Clement Godard for sharing the evaluation code. This work is also partially funded by Intel/NSF VEC award IIS-1539099. ", "title": "Unsupervised Learning of Depth and Ego-Motion from Video" } ]
What is the value of M?
M is the dimensionality of the embedding (prototype) space: prototypical networks compute an M-dimensional representation \mathbf{c}_{k}\in\mathbb{R}^{M}, or prototype, of each class through an embedding function f_{\bm{\phi}}:\mathbb{R}^{D}\rightarrow\mathbb{R}^{M} with learnable parameters \bm{\phi} [5].
[ 5 ]
[ { "id": "1703.05175_all_0", "text": " Few-shot classification (20, 16, 13) is a task in which a classifier must be adapted to accommodate new classes not seen in training, given only a few examples of each of these classes. A naive approach, such as re-training the model on the new data, would severely overfit. While the problem is quite difficult, it has been demonstrated that humans have the ability to perform even one-shot classification, where only a single example of each new class is given, with a high degree of accuracy . ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_1", "text": " Two recent approaches have made significant progress in few-shot learning. Vinyals et al. proposed matching networks, which uses an attention mechanism over a learned embedding of the labeled set of examples (the support set) to predict classes for the unlabeled points (the query set). Matching networks can be interpreted as a weighted nearest-neighbor classifier applied within an embedding space. Notably, this model utilizes sampled mini-batches called episodes during training, where each episode is designed to mimic the few-shot task by subsampling classes as well as data points. The use of episodes makes the training problem more faithful to the test environment and thereby improves generalization. Ravi and Larochelle take the episodic training idea further and propose a meta-learning approach to few-shot learning. Their approach involves training an LSTM  to produce the updates to a classifier, given an episode, such that it will generalize well to a test-set. Here, rather than training a single model over multiple episodes, the LSTM meta-learner learns to train a custom model for each episode. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_2", "text": " We attack the problem of few-shot learning by addressing the key issue of overfitting. Since data is severely limited, we work under the assumption that a classifier should have a very simple inductive bias. Our approach, prototypical networks, is based on the idea that there exists an embedding in which points cluster around a single prototype representation for each class. In order to do this, we learn a non-linear mapping of the input into an embedding space using a neural network and take a class’s prototype to be the mean of its support set in the embedding space. Classification is then performed for an embedded query point by simply finding the nearest class prototype. We follow the same approach to tackle zero-shot learning; here each class comes with meta-data giving a high-level description of the class rather than a small number of labeled examples. We therefore learn an embedding of the meta-data into a shared space to serve as the prototype for each class. Classification is performed, as in the few-shot scenario, by finding the nearest class prototype for an embedded query point. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_3", "text": " In this paper, we formulate prototypical networks for both the few-shot and zero-shot settings. We draw connections to matching networks in the one-shot setting, and analyze the underlying distance function used in the model. In particular, we relate prototypical networks to clustering in order to justify the use of class means as prototypes when distances are computed with a Bregman divergence, such as squared Euclidean distance. 
We find empirically that the choice of distance is vital, as Euclidean distance greatly outperforms the more commonly used cosine similarity. On several benchmark tasks, we achieve state-of-the-art performance. Prototypical networks are simpler and more efficient than recent meta-learning algorithms, making them an appealing approach to few-shot and zero-shot learning. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_4", "text": " In few-shot classification we are given a small support set of N labeled examples S=\{(\mathbf{x}_{1},y_{1}),\ldots,(\mathbf{x}_{N},y_{N})\} where each \mathbf{x}_{i}\in\mathbb{R}^{D} is the D-dimensional feature vector of an example and y_{i}\in\{1,\ldots,K\} is the corresponding label. S_{k} denotes the set of examples labeled with class k. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_5", "text": " Prototypical networks compute an M-dimensional representation \mathbf{c}_{k}\in\mathbb{R}^{M}, or prototype, of each class through an embedding function f_{\bm{\phi}}:\mathbb{R}^{D}\rightarrow\mathbb{R}^{M} with learnable parameters \bm{\phi}. Each prototype is the mean vector of the embedded support points belonging to its class: \mathbf{c}_{k}=\frac{1}{|S_{k}|}\sum_{(\mathbf{x}_{i},y_{i})\in S_{k}}f_{\bm{\phi}}(\mathbf{x}_{i}) (1) Given a distance function d:\mathbb{R}^{M}\times\mathbb{R}^{M}\rightarrow(0,+\infty), prototypical networks produce a distribution over classes for a query point \mathbf{x} based on a softmax over distances to the prototypes in the embedding space: p_{\bm{\phi}}(y=k\,|\,\mathbf{x})=\frac{\exp(-d(f_{\bm{\phi}}(\mathbf{x}),\mathbf{c}_{k}))}{\sum_{k^{\prime}}\exp(-d(f_{\bm{\phi}}(\mathbf{x}),\mathbf{c}_{k^{\prime}}))} (2) Learning proceeds by minimizing the negative log-probability J(\bm{\phi})=-\log p_{\bm{\phi}}(y=k\,|\,\mathbf{x}) of the true class k via SGD. Training episodes are formed by randomly selecting a subset of classes from the training set, then choosing a subset of examples within each class to act as the support set and a subset of the remainder to serve as query points. Pseudocode to compute the loss J(\bm{\phi}) for a training episode is provided in Algorithm 1. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_6", "text": " For a particular class of distance functions, known as regular Bregman divergences , the prototypical networks algorithm is equivalent to performing mixture density estimation on the support set with an exponential family density. 
A regular Bregman divergence d_{\varphi} is defined as: d_{\varphi}(\mathbf{z},\mathbf{z}^{\prime})=\varphi(\mathbf{z})-\varphi(\mathbf{z}^{\prime})-(\mathbf{z}-\mathbf{z}^{\prime})^{T}\nabla\varphi(\mathbf{z}^{\prime}), (3) where \varphi is a differentiable, strictly convex function of the Legendre type. Examples of Bregman divergences include squared Euclidean distance \|\mathbf{z}-\mathbf{z}^{\prime}\|^{2} and Mahalanobis distance. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_7", "text": " Prototype computation can be viewed in terms of hard clustering on the support set, with one cluster per class and each support point assigned to its corresponding class cluster. It has been shown for Bregman divergences that the cluster representative achieving minimal distance to its assigned points is the cluster mean. Thus the prototype computation in Equation (1) yields optimal cluster representatives given the support set labels when a Bregman divergence is used. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_8", "text": " Moreover, any regular exponential family distribution p_{\psi}(\mathbf{z}|\bm{\theta}) with parameters \bm{\theta} and cumulant function \psi can be written in terms of a uniquely determined regular Bregman divergence : p_{\psi}(\mathbf{z}|\bm{\theta})=\exp\{\mathbf{z}^{T}\bm{\theta}-\psi(\bm{\theta})-g_{\psi}(\mathbf{z})\}=\exp\{-d_{\varphi}(\mathbf{z},\bm{\mu}(\bm{\theta}))-g_{\varphi}(\mathbf{z})\} (4) Consider now a regular exponential family mixture model with parameters \bm{\Gamma}=\{\bm{\theta}_{k},\pi_{k}\}_{k=1}^{K}: p(\mathbf{z}|\bm{\Gamma})=\sum_{k=1}^{K}\pi_{k}p_{\psi}(\mathbf{z}|\bm{\theta}_{k})=\sum_{k=1}^{K}\pi_{k}\exp(-d_{\varphi}(\mathbf{z},\bm{\mu}(\bm{\theta}_{k}))-g_{\varphi}(\mathbf{z})) (5) Given \bm{\Gamma}, inference of the cluster assignment y for an unlabeled point \mathbf{z} becomes: p(y=k|\mathbf{z})=\frac{\pi_{k}\exp(-d_{\varphi}(\mathbf{z},\bm{\mu}(\bm{\theta}_{k})))}{\sum_{k^{\prime}}\pi_{k^{\prime}}\exp(-d_{\varphi}(\mathbf{z},\bm{\mu}(\bm{\theta}_{k^{\prime}})))} (6) For an equally-weighted mixture model with one cluster per class, cluster assignment inference (6) is equivalent to query class prediction (2) with f_{\phi}(\mathbf{x})=\mathbf{z} and \mathbf{c}_{k}=\bm{\mu}(\bm{\theta}_{k}). In this case, prototypical networks are effectively performing mixture density estimation with an exponential family distribution determined by d_{\varphi}. 
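As a quick sanity check on the definition in Eq. (3), the snippet below verifies numerically that squared Euclidean distance is the Bregman divergence generated by \varphi(\mathbf{z})=\|\mathbf{z}\|^{2}. The helper names and the analytic gradient are ours, added purely for illustration; they are not from the paper.

```python
import numpy as np

def bregman(phi, grad_phi, z, zp):
    """Eq. (3): d_phi(z, z') = phi(z) - phi(z') - (z - z')^T grad_phi(z')."""
    return phi(z) - phi(zp) - (z - zp) @ grad_phi(zp)

phi = lambda z: float(z @ z)   # phi(z) = ||z||^2
grad_phi = lambda z: 2.0 * z   # its gradient

rng = np.random.default_rng(1)
z, zp = rng.normal(size=8), rng.normal(size=8)

# The generated divergence coincides with squared Euclidean distance; cosine
# distance admits no such representation, which is the property appealed to later.
assert np.isclose(bregman(phi, grad_phi, z, zp), np.sum((z - zp) ** 2))
```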
The choice of distance therefore specifies modeling assumptions about the class-conditional data distribution in the embedding space. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_9", "text": " A simple analysis is useful in gaining insight into the nature of the learned classifier. When we use Euclidean distance d(\mathbf{z},\mathbf{z}^{\prime})=\|\mathbf{z}-\mathbf{z}^{\prime}\|^{2}, the model in Equation (2) is equivalent to a linear model with a particular parameterization . To see this, expand the term in the exponent: -\|f_{\bm{\phi}}(\mathbf{x})-\mathbf{c}_{k}\|^{2}=-f_{\bm{\phi}}(\mathbf{x})^{\top}f_{\bm{\phi}}(\mathbf{x})+2\mathbf{c}_{k}^{\top}f_{\bm{\phi}}(\mathbf{x})-\mathbf{c}_{k}^{\top}\mathbf{c}_{k} (7) The first term in Equation (7) is constant with respect to the class k, so it does not affect the softmax probabilities. We can write the remaining terms as a linear model as follows: 2\mathbf{c}_{k}^{\top}f_{\bm{\phi}}(\mathbf{x})-\mathbf{c}_{k}^{\top}\mathbf{c}_{k}=\mathbf{w}_{k}^{\top}f_{\bm{\phi}}(\mathbf{x})+b_{k}, where \mathbf{w}_{k}=2\mathbf{c}_{k} and b_{k}=-\mathbf{c}_{k}^{\top}\mathbf{c}_{k} (8) We focus primarily on squared Euclidean distance (corresponding to spherical Gaussian densities) in this work. Our results indicate that Euclidean distance is an effective choice despite the equivalence to a linear model. We hypothesize this is because all of the required non-linearity can be learned within the embedding function. Indeed, this is the approach that modern neural network classification systems currently use, e.g., (14, 28). ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_10", "text": " Prototypical networks differ from matching networks in the few-shot case with equivalence in the one-shot scenario. Matching networks produce a weighted nearest neighbor classifier given the support set, while prototypical networks produce a linear classifier when squared Euclidean distance is used. In the case of one-shot learning, \mathbf{c}_{k}=\mathbf{x}_{k} since there is only one support point per class, and matching networks and prototypical networks become equivalent. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_11", "text": " A natural question is whether it makes sense to use multiple prototypes per class instead of just one. If the number of prototypes per class is fixed and greater than 1, then this would require a partitioning scheme to further cluster the support points within a class. This has been proposed in Mensink et al. and Rippel et al. ; however both methods require a separate partitioning phase that is decoupled from the weight updates, while our approach is simple to learn with ordinary gradient descent methods. 
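The few-shot computation in Eqs. (1)-(2) and the linear-model view in Eqs. (7)-(8) are compact enough to sketch directly in NumPy. The embedding network is replaced here by pre-computed M-dimensional vectors drawn at random, so this illustrates only the episode-level arithmetic, not the paper's trained model; M is simply whatever output width the embedding function f_{\bm{\phi}} has (e.g., the toy value 4 below is arbitrary).

```python
import numpy as np

def prototypes(z_support, y_support, num_classes):
    """Eq. (1): prototype c_k is the mean of the embedded support points of class k.
    z_support: (N, M) embedded support set, y_support: (N,) labels in [0, num_classes)."""
    return np.stack([z_support[y_support == k].mean(axis=0) for k in range(num_classes)])

def log_p_y(z_query, protos):
    """Eq. (2) with squared Euclidean distance: log-softmax over negative distances."""
    d2 = ((z_query[:, None, :] - protos[None, :, :]) ** 2).sum(-1)  # (Q, K)
    logits = -d2
    return logits - np.log(np.exp(logits).sum(-1, keepdims=True))

# Toy episode: K classes, a few shots and queries, M-dimensional embeddings.
rng = np.random.default_rng(0)
M, K, shots, queries = 4, 3, 5, 2
z_s = rng.normal(size=(K * shots, M)); y_s = np.repeat(np.arange(K), shots)
z_q = rng.normal(size=(K * queries, M))
c = prototypes(z_s, y_s, K)
lp = log_p_y(z_q, c)

# Eqs. (7)-(8): up to a per-query constant, the same scores come from a linear
# model with w_k = 2 c_k and b_k = -c_k^T c_k, so the predictions agree.
linear = z_q @ (2 * c).T - (c * c).sum(-1)
assert (lp.argmax(-1) == linear.argmax(-1)).all()
```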
", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_12", "text": " Vinyals et al. propose a number of extensions, including decoupling the embedding functions of the support and query points, and using a second-level, fully-conditional embedding (FCE) that takes into account specific points in each episode. These could likewise be incorporated into prototypical networks, however they increase the number of learnable parameters, and FCE imposes an arbitrary ordering on the support set using a bi-directional LSTM. Instead, we show that it is possible to achieve the same level of performance using simple design choices, which we outline next. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_13", "text": " Vinyals et al. and Ravi and Larochelle apply matching networks using cosine distance. However for both prototypical and matching networks any distance is permissible, and we found that using squared Euclidean distance can greatly improve results for both. We conjecture this is primarily due to cosine distance not being a Bregman divergence, and thus the equivalence to mixture density estimation discussed in Section 2.3 does not hold. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_14", "text": " A straightforward way to construct episodes, used in Vinyals et al. and Ravi and Larochelle , is to choose Ncsubscript𝑁𝑐N_{c} classes and NSsubscript𝑁𝑆N_{S} support points per class in order to match the expected situation at test-time. That is, if we expect at test-time to perform 555-way classification and 111-shot learning, then training episodes could be comprised of Nc=5subscript𝑁𝑐5N_{c}=5, NS=1subscript𝑁𝑆1N_{S}=1. We have found, however, that it can be extremely beneficial to train with a higher Ncsubscript𝑁𝑐N_{c}, or “way”, than will be used at test-time. In our experiments, we tune the training Ncsubscript𝑁𝑐N_{c} on a held-out validation set. Another consideration is whether to match NSsubscript𝑁𝑆N_{S}, or “shot”, at train and test-time. For prototypical networks, we found that it is usually best to train and test with the same “shot” number. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_15", "text": " Zero-shot learning differs from few-shot learning in that instead of being given a support set of training points, we are given a class meta-data vector 𝐯ksubscript𝐯𝑘\\mathbf{v}_{k} for each class. These could be determined in advance, or they could be learned from e.g., raw text . Modifying prototypical networks to deal with the zero-shot case is straightforward: we simply define 𝐜k=gϑ​(𝐯k)subscript𝐜𝑘subscript𝑔bold-italic-ϑsubscript𝐯𝑘\\mathbf{c}_{k}=g_{\\bm{\\vartheta}}(\\mathbf{v}_{k}) to be a separate embedding of the meta-data vector. An illustration of the zero-shot procedure for prototypical networks as it relates to the few-shot procedure is shown in Figure 1. Since the meta-data vector and query point come from different input domains, we found it was helpful empirically to fix the prototype embedding g𝑔g to have unit length, however we do not constrain the query embedding f𝑓f. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_16", "text": " For few-shot learning, we performed experiments on Omniglot and the miniImageNet version of ILSVRC-2012 with the splits proposed by Ravi and Larochelle . We perform zero-shot experiments on the 2011 version of the Caltech UCSD bird dataset (CUB-200 2011) . 
", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_17", "text": " Omniglot is a dataset of 1623 handwritten characters collected from 50 alphabets. There are 20 examples associated with each character, where each example is drawn by a different human subject. We follow the procedure of Vinyals et al. by resizing the grayscale images to 28 ×\\times 28 and augmenting the character classes with rotations in multiples of 90 degrees. We use 1200 characters plus rotations for training (4,800 classes in total) and the remaining classes, including rotations, for test. Our embedding architecture mirrors that used by Vinyals et al. and is composed of four convolutional blocks. Each block comprises a 64-filter 3 ×\\times 3 convolution, batch normalization layer , a ReLU nonlinearity and a 2 ×\\times 2 max-pooling layer. When applied to the 28 ×\\times 28 Omniglot images this architecture results in a 64-dimensional output space. We use the same encoder for embedding both support and query points. All of our models were trained via SGD with Adam . We used an initial learning rate of 10−3superscript10310^{-3} and cut the learning rate in half every 2000 episodes. No regularization was used other than batch normalization. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_18", "text": " We trained prototypical networks using Euclidean distance in the 1-shot and 5-shot scenarios with training episodes containing 60 classes and 5 query points per class. We found that it is advantageous to match the training-shot with the test-shot, and to use more classes (higher “way”) per training episode rather than fewer. We compare against various baselines, including the neural statistician and both the fine-tuned and non-fine-tuned versions of matching networks . We computed classification accuracy for our models averaged over 1000 randomly generated episodes from the test set. The results are shown in Table 1 and to our knowledge they represent the state-of-the-art on this dataset. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_19", "text": " The miniImageNet dataset, originally proposed by Vinyals et al. , is derived from the larger ILSVRC-12 dataset . The splits used by Vinyals et al. consist of 60,000 color images of size 84 ×\\times 84 divided into 100 classes with 600 examples each. For our experiments, we use the splits introduced by Ravi and Larochelle in order to directly compare with state-of-the-art algorithms for few-shot learning. Their splits use a different set of 100 classes, divided into 64 training, 16 validation, and 20 test classes. We follow their procedure by training on the 64 training classes and using the 16 validation classes for monitoring generalization performance only. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_20", "text": " We use the same four-block embedding architecture as in our Omniglot experiments, though here it results in a 1600-dimensional output space due to the increased size of the images. We also use the same learning rate schedule as in our Omniglot experiments and train until validation loss stops improving. We train using 30-way episodes for 1-shot classification and 20-way episodes for 5-shot classification. We match train shot to test shot and each class contains 15 query points per episode. 
We compare to the baselines as reported by Ravi and Larochelle , which include a simple nearest neighbor approach on top of features learned by a classification network on the 64 training classes. The other baselines are two non-fine-tuned variants of matching networks (both ordinary and FCE) and the Meta-Learner LSTM. As can be seen in Table 2, prototypical networks achieves state-of-the-art here by a wide margin. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_21", "text": " We conducted further analysis, to determine the effect of distance metric and the number of training classes per episode on the performance of prototypical networks and matching networks. To make the methods comparable, we use our own implementation of matching networks that utilizes the same embedding architecture as our prototypical networks. In Figure 2 we compare cosine vs. Euclidean distance and 5-way vs. 20-way training episodes in the 1-shot and 5-shot scenarios, with 15 query points per class per episode. We note that 20-way achieves higher accuracy than 5-way and conjecture that the increased difficulty of 20-way classification helps the network to generalize better, because it forces the model to make more fine-grained decisions in the embedding space. Also, using Euclidean distance improves performance substantially over cosine distance. This effect is even more pronounced for prototypical networks, in which computing the class prototype as the mean of embedded support points is more naturally suited to Euclidean distances since cosine distance is not a Bregman divergence. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_22", "text": " In order to assess the suitability of our approach for zero-shot learning, we also run experiments on the Caltech-UCSD Birds (CUB) 200-2011 dataset . The CUB dataset contains 11,788 images of 200 bird species. We closely follow the procedure of Reed et al. in preparing the data. We use their splits to divide the classes into 100 training, 50 validation, and 50 test. For images we use 1,024-dimensional features extracted by applying GoogLeNet to middle, upper left, upper right, lower left, and lower right crops of the original and horizontally-flipped image222Features downloaded from https://github.com/reedscot/cvpr2016.. At test time we use only the middle crop of the original image. For class meta-data we use the 312-dimensional continuous attribute vectors provided with the CUB dataset. These attributes encode various characteristics of the bird species such as their color, shape, and feather patterns. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_23", "text": " We learned a simple linear mapping on top of both the 1024-dimensional image features and the 312-dimensional attribute vectors to produce a 1,024-dimensional output space. For this dataset we found it helpful to normalize the class prototypes (embedded attribute vectors) to be of unit length, since the attribute vectors come from a different domain than the images. Training episodes were constructed with 50 classes and 10 query images per class. The embeddings were optimized via SGD with Adam at a fixed learning rate of 10−4superscript10410^{-4} and weight decay of 10−5superscript10510^{-5}. Early stopping on validation loss was used to determine the optimal number of epochs for retraining on the training plus validation set. 
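Below is a minimal sketch of the zero-shot variant described above, with random linear maps standing in for the learned image and attribute embeddings; the dimensions follow the CUB setup in the text, but everything else (the stand-in weights, the ten dummy queries) is illustrative only. Prototypes are unit-normalized embeddings of the class attribute vectors, and queries are assigned to the nearest prototype exactly as in the few-shot case.

```python
import numpy as np

rng = np.random.default_rng(2)
num_classes, attr_dim, img_dim, emb_dim = 50, 312, 1024, 1024

# Stand-ins for the learned linear embeddings g (attributes) and f (image features).
W_g = rng.normal(size=(attr_dim, emb_dim)) / np.sqrt(attr_dim)
W_f = rng.normal(size=(img_dim, emb_dim)) / np.sqrt(img_dim)

v = rng.normal(size=(num_classes, attr_dim))   # class attribute vectors
x = rng.normal(size=(10, img_dim))             # query image features

c = v @ W_g
c /= np.linalg.norm(c, axis=1, keepdims=True)  # unit-length class prototypes
q = x @ W_f                                    # query embeddings (left unconstrained)

d2 = ((q[:, None, :] - c[None, :, :]) ** 2).sum(-1)
pred = (-d2).argmax(-1)                        # nearest prototype, as in Eq. (2)
```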
", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_24", "text": " Table 3 shows that we achieve state-of-the-art results by a large margin when compared to methods utilizing attributes as class meta-data. We compare our method to other embedding approaches, such as ALE , SJE , and DS-SJE/DA-SJE . We also compare to a recent clustering approach which trains an SVM on a learned feature space obtained by fine-tuning AlexNet . These zero-shot classification results demonstrate that our approach is general enough to be applied even when the data points (images) are from a different domain relative to the classes (attributes). ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_25", "text": " The literature on metric learning is vast (15, 5); we summarize here the work most relevant to our proposed method. Neighborhood Components Analysis (NCA) learns a Mahalanobis distance to maximize K-nearest-neighbor’s (KNN) leave-one-out accuracy in the transformed space. Salakhutdinov and Hinton extend NCA by using a neural network to perform the transformation. Large margin nearest neighbor (LMNN) classification also attempts to optimize KNN accuracy but does so using a hinge loss that encourages the local neighborhood of a point to contain other points with the same label. The DNet-KNN is another margin-based method that improves upon LMNN by utilizing a neural network to perform the embedding instead of a simple linear transformation. Of these, our method is most similar to the non-linear extension of NCA because we use a neural network to perform the embedding and we optimize a softmax based on Euclidean distances in the transformed space, as opposed to a margin loss. A key distinction between our approach and non-linear NCA is that we form a softmax directly over classes, rather than individual points, computed from distances to each class’s prototype representation. This allows each class to have a concise representation independent of the number of data points and obviates the need to store the entire support set to make predictions. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_26", "text": " Our approach is also similar to the nearest class mean approach , where each class is represented by the mean of its examples. This approach was developed to rapidly incorporate new classes into a classifier without retraining, however it relies on a linear embedding and was designed to handle the case where the novel classes come with a large number of examples. In contrast, our approach utilizes neural networks to non-linearly embed points and we couple this with episodic training in order to handle the few-shot scenario. Mensink et al. attempt to extend their approach to also perform non-linear classification, but they do so by allowing classes to have multiple prototypes. They find these prototypes in a pre-processing step by using k𝑘k-means on the input space and then perform a multi-modal variant of their linear embedding. Prototypical networks, on the other hand, learn a non-linear embedding in an end-to-end manner with no such pre-processing, producing a non-linear classifier that still only requires one prototype per class. In addition, our approach naturally generalizes to other distance functions, particularly Bregman divergences. 
", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_27", "text": " Another relevant few-shot learning method is the meta-learning approach proposed in Ravi and Larochelle . The key insight here is that LSTM dynamics and gradient descent can be written in effectively the same way. An LSTM can then be trained to itself train a model from a given episode, with the performance goal of generalizing well on the query points. Matching networks and prototypical networks can also be seen as forms of meta-learning, in the sense that they produce simple classifiers dynamically from new training episodes; however the core embeddings they rely on are fixed after training. The FCE extension to matching nets involves a secondary embedding that depends on the support set. However, in the few-shot scenario the amount of data is so small that a simple inductive bias seems to work well, without the need to learn a custom embedding for each episode. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_28", "text": " Prototypical networks are also related to the neural statistician from the generative modeling literature, which extends the variational autoencoder (12, 24) to learn generative models of datasets rather than individual points. One component of the neural statistician is the “statistic network” which summarizes a set of data points into a statistic vector. It does this by encoding each point within a dataset, taking a sample mean, and applying a post-processing network to obtain an approximate posterior over the statistic vector. Edwards and Storkey test their model for one-shot classification on the Omniglot dataset by considering each character to be a separate dataset and making predictions based on the class whose approximate posterior over the statistic vector has minimal KL-divergence from the posterior inferred by the test point. Like the neural statistician, we also produce a summary statistic for each class. However, ours is a discriminative model, as befits our discriminative task of few-shot classification. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_29", "text": " With respect to zero-shot learning, the use of embedded meta-data in prototypical networks resembles the method of in that both predict the weights of a linear classifier. The DS-SJE and DA-SJE approach of also learns deep multimodal embedding functions for images and class meta-data. Unlike ours, they learn using an empirical risk loss. Neither nor uses episodic training, which allows us to help speed up training and regularize the model. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_30", "text": " We have proposed a simple method called prototypical networks for few-shot learning based on the idea that we can represent each class by the mean of its examples in a representation space learned by a neural network. We train these networks to specifically perform well in the few-shot setting by using episodic training. The approach is far simpler and more efficient than recent meta-learning approaches, and produces state-of-the-art results even without sophisticated extensions developed for matching networks (although these can be applied to prototypical nets as well). We show how performance can be greatly improved by carefully considering the chosen distance metric, and by modifying the episodic learning procedure. 
We further demonstrate how to generalize prototypical networks to the zero-shot setting, and achieve state-of-the-art results on the CUB-200 dataset. A natural direction for future work is to utilize Bregman divergences other than squared Euclidean distance, corresponding to class-conditional distributions beyond spherical Gaussians. We conducted preliminary explorations of this, including learning a variance per dimension for each class. This did not lead to any empirical gains, suggesting that the embedding network has enough flexibility on its own without requiring additional fitted parameters per class. Overall, the simplicity and effectiveness of prototypical networks makes it a promising approach for few-shot learning. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_31", "text": " We would like to thank Marc Law, Sachin Ravi, Hugo Larochelle, Renjie Liao, and Oriol Vinyals for helpful discussions. This work was supported by the Samsung GRP project and the Canadian Institute for Advanced Research. ", "title": "Prototypical Networks for Few-shot Learning" } ]
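The passages above describe the prototypical classification rule: each class prototype is the mean of its embedded support points, and queries are classified by a softmax over negative squared Euclidean distances to the prototypes. The following is a minimal PyTorch sketch of that rule, assuming the embeddings have already been produced by some encoder; the function name and signature are illustrative, not the authors' released code.

```python
# Minimal sketch of the prototypical-networks classification rule (illustrative only).
import torch

def prototypical_log_probs(support_emb, support_labels, query_emb, n_classes):
    """support_emb: (N, d), support_labels: (N,) ints in [0, n_classes), query_emb: (Q, d)."""
    # Each class prototype is the mean of its embedded support points.
    prototypes = torch.stack(
        [support_emb[support_labels == k].mean(dim=0) for k in range(n_classes)]
    )                                                    # (K, d)
    # Squared Euclidean distance from every query point to every prototype.
    dists = torch.cdist(query_emb, prototypes) ** 2      # (Q, K)
    # Softmax over classes based on negative distances, as described in the passages.
    return torch.log_softmax(-dists, dim=1)              # (Q, K) log-probabilities
```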
What does “cloze-style” mean?
Cloze-style means that words have been removed from a passage and the reader (or model) must fill in the missing word from the surrounding context; in the CNN/DailyMail cloze datasets, the missing word is always an anonymized named entity that must be predicted by referring to the accompanying article [26].
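A tiny illustration (not taken from the dataset; the helper name and placeholder token are assumptions) of how a cloze-style query is formed by blanking out one word of a sentence:

```python
# Illustrative sketch: build a cloze-style query by blanking out one word (here an
# anonymized entity token, as in CNN/DailyMail). The helper name and placeholder
# token are made up for this example, not part of the dataset.
def make_cloze(sentence, answer, placeholder="@placeholder"):
    """Return (query, answer), where the answer span is replaced by a placeholder."""
    assert answer in sentence, "answer must occur in the sentence"
    return sentence.replace(answer, placeholder, 1), answer

query, gold = make_cloze("@entity1 scored the winning goal for @entity2", "@entity1")
# query -> "@placeholder scored the winning goal for @entity2"
# gold  -> "@entity1"
```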
[ 26 ]
[ { "id": "1611.01603_all_0", "text": " The tasks of machine comprehension (MC) and question answering (QA) have gained significant popularity over the past few years within the natural language processing and computer vision communities. Systems trained end-to-end now achieve promising results on a variety of tasks in the text and image domains. One of the key factors to the advancement has been the use of neural attention mechanism, which enables the system to focus on a targeted area within a context paragraph (for MC) or within an image (for Visual QA), that is most relevant to answer the question (Weston et al., 2015; Antol et al., 2015; Xiong et al., 2016a). Attention mechanisms in previous works typically have one or more of the following characteristics. First, the computed attention weights are often used to extract the most relevant information from the context for answering the question by summarizing the context into a fixed-size vector. Second, in the text domain, they are often temporally dynamic, whereby the attention weights at the current time step are a function of the attended vector at the previous time step. Third, they are usually uni-directional, wherein the query attends on the context paragraph or the image. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_1", "text": " In this paper, we introduce the Bi-Directional Attention Flow  (BiDAF) network, a hierarchical multi-stage architecture for modeling the representations of the context paragraph at different levels of granularity (Figure 1). BiDAF includes character-level, word-level, and contextual embeddings, and uses bi-directional attention flow to obtain a query-aware context representation. Our attention mechanism offers following improvements to the previously popular attention paradigms. First, our attention layer is not used to summarize the context paragraph into a fixed-size vector. Instead, the attention is computed for every time step, and the attended vector at each time step, along with the representations from previous layers, is allowed to flow through to the subsequent modeling layer. This reduces the information loss caused by early summarization. Second, we use a memory-less attention mechanism. That is, while we iteratively compute attention through time as in Bahdanau et al. (2015), the attention at each time step is a function of only the query and the context paragraph at the current time step and does not directly depend on the attention at the previous time step. We hypothesize that this simplification leads to the division of labor between the attention layer and the modeling layer. It forces the attention layer to focus on learning the attention between the query and the context, and enables the modeling layer to focus on learning the interaction within the query-aware context representation (the output of the attention layer). It also allows the attention at each time step to be unaffected from incorrect attendances at previous time steps. Our experiments show that memory-less attention gives a clear advantage over dynamic attention. Third, we use attention mechanisms in both directions, query-to-context and context-to-query, which provide complimentary information to each other. 
", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_2", "text": " Our BiDAF model111Our code and interactive demo are available at: allenai.github.io/bi-att-flow/ outperforms all previous approaches on the highly-competitive Stanford Question Answering Dataset (SQuAD) test set leaderboard at the time of submission. With a modification to only the output layer, BiDAF achieves the state-of-the-art results on the CNN/DailyMail cloze test. We also provide an in-depth ablation study of our model on the SQuAD development set, visualize the intermediate feature spaces in our model, and analyse its performance as compared to a more traditional language model for machine comprehension (Rajpurkar et al., 2016). ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_3", "text": " Our machine comprehension model is a hierarchical multi-stage process and consists of six layers (Figure 1): 1. Character Embedding Layer maps each word to a vector space using character-level CNNs. 2. Word Embedding Layer maps each word to a vector space using a pre-trained word embedding model. 3. Contextual Embedding Layer utilizes contextual cues from surrounding words to refine the embedding of the words. These first three layers are applied to both the query and context. 4. Attention Flow Layer couples the query and context vectors and produces a set of query-aware feature vectors for each word in the context. 5. Modeling Layer employs a Recurrent Neural Network to scan the context. 6. Output Layer provides an answer to the query. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_4", "text": " Character embedding layer is responsible for mapping each word to a high-dimensional vector space. Let {𝒙1,…​𝒙T}subscript𝒙1…subscript𝒙𝑇\\{\\bm{x}_{1},\\dots\\bm{x}_{T}\\} and {𝒒1,…​𝒒J}subscript𝒒1…subscript𝒒𝐽\\{\\bm{q}_{1},\\dots\\bm{q}_{J}\\} represent the words in the input context paragraph and query, respectively. Following Kim (2014), we obtain the character-level embedding of each word using Convolutional Neural Networks (CNN). Characters are embedded into vectors, which can be considered as 1D inputs to the CNN, and whose size is the input channel size of the CNN. The outputs of the CNN are max-pooled over the entire width to obtain a fixed-size vector for each word. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_5", "text": " Word embedding layer also maps each word to a high-dimensional vector space. We use pre-trained word vectors, GloVe (Pennington et al., 2014), to obtain the fixed word embedding of each word. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_6", "text": " The concatenation of the character and word embedding vectors is passed to a two-layer Highway Network (Srivastava et al., 2015). The outputs of the Highway Network are two sequences of d𝑑d-dimensional vectors, or more conveniently, two matrices: 𝐗∈ℝd×T𝐗superscriptℝ𝑑𝑇{\\bf X}\\in\\mathbb{R}^{d\\times T} for the context and 𝐐∈ℝd×J𝐐superscriptℝ𝑑𝐽{\\bf Q}\\in\\mathbb{R}^{d\\times J} for the query. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_7", "text": " We use a Long Short-Term Memory Network (LSTM) (Hochreiter & Schmidhuber, 1997) on top of the embeddings provided by the previous layers to model the temporal interactions between words. 
We place an LSTM in both directions, and concatenate the outputs of the two LSTMs. Hence we obtain 𝐇∈ℝ2​d×T𝐇superscriptℝ2𝑑𝑇{\\bf H}\\in\\mathbb{R}^{2d\\times T} from the context word vectors 𝐗𝐗{\\bf X}, and 𝐔∈ℝ2​d×J𝐔superscriptℝ2𝑑𝐽{\\bf U}\\in\\mathbb{R}^{2d\\times J} from query word vectors 𝐐𝐐{\\bf Q}. Note that each column vector of 𝐇𝐇{\\bf H} and 𝐔𝐔{\\bf U} is 2​d2𝑑2d-dimensional because of the concatenation of the outputs of the forward and backward LSTMs, each with d𝑑d-dimensional output. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_8", "text": " It is worth noting that the first three layers of the model are computing features from the query and context at different levels of granularity, akin to the multi-stage feature computation of convolutional neural networks in the computer vision field. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_9", "text": " Attention flow layer is responsible for linking and fusing information from the context and the query words. Unlike previously popular attention mechanisms (Weston et al., 2015; Hill et al., 2016; Sordoni et al., 2016; Shen et al., 2016), the attention flow layer is not used to summarize the query and context into single feature vectors. Instead, the attention vector at each time step, along with the embeddings from previous layers, are allowed to flow through to the subsequent modeling layer. This reduces the information loss caused by early summarization. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_10", "text": " The inputs to the layer are contextual vector representations of the context 𝐇𝐇{\\bf H} and the query 𝐔𝐔{\\bf U}. The outputs of the layer are the query-aware vector representations of the context words, 𝐆𝐆{\\bf G}, along with the contextual embeddings from the previous layer. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_11", "text": " In this layer, we compute attentions in two directions: from context to query as well as from query to context. Both of these attentions, which will be discussed below, are derived from a shared similarity matrix, 𝐒∈ℝT×J𝐒superscriptℝ𝑇𝐽{\\bf S}\\in\\mathbb{R}^{T\\times J}, between the contextual embeddings of the context (𝐇𝐇{\\bf H}) and the query (𝐔𝐔{\\bf U}), where 𝐒t​jsubscript𝐒𝑡𝑗{\\bf S}_{tj} indicates the similarity between t𝑡t-th context word and j𝑗j-th query word. The similarity matrix is computed by 𝐒t​j=α​(𝐇:t,𝐔:j)∈ℝsubscript𝐒𝑡𝑗𝛼subscript𝐇:absent𝑡subscript𝐔:absent𝑗ℝ{\\bf S}_{tj}=\\alpha({\\bf H}_{:t},{\\bf U}_{:j})\\in\\mathbb{R} (1) where α𝛼\\alpha is a trainable scalar function that encodes the similarity between its two input vectors, 𝐇:tsubscript𝐇:absent𝑡{\\bf H}_{:t} is t𝑡t-th column vector of 𝐇𝐇{\\bf H}, and 𝐔:jsubscript𝐔:absent𝑗{\\bf U}_{:j} is j𝑗j-th column vector of 𝐔𝐔{\\bf U}, We choose α​(𝐡,𝐮)=𝐰(𝐒)⊤​(𝐡;𝐮;𝐡∘𝐮)𝛼𝐡𝐮subscriptsuperscript𝐰top𝐒𝐡𝐮𝐡𝐮\\alpha({\\bf h},{\\bf u})={\\bf w}^{\\top}_{({\\bf S})}({\\bf h};{\\bf u};{\\bf h}\\circ{\\bf u}), where 𝐰(𝐒)∈ℝ6​dsubscript𝐰𝐒superscriptℝ6𝑑{\\bf w}_{({\\bf S})}\\in\\mathbb{R}^{6d} is a trainable weight vector, ∘\\circ is elementwise multiplication, (;)(;) is vector concatenation across row, and implicit multiplication is matrix multiplication. Now we use 𝐒𝐒{\\bf S} to obtain the attentions and the attended vectors in both directions. 
", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_12", "text": " Context-to-query Attention. Context-to-query (C2Q) attention signifies which query words are most relevant to each context word. Let 𝐚t∈ℝJsubscript𝐚𝑡superscriptℝ𝐽{\\bf a}_{t}\\in\\mathbb{R}^{J} represent the attention weights on the query words by t𝑡t-th context word, ∑𝐚t​j=1subscript𝐚𝑡𝑗1\\sum{\\bf a}_{tj}=1 for all t𝑡t. The attention weight is computed by 𝐚t=softmax​(𝐒t:)∈ℝJsubscript𝐚𝑡softmaxsubscript𝐒:𝑡absentsuperscriptℝ𝐽{\\bf a}_{t}=\\mathrm{softmax}({\\bf S}_{t:})\\in\\mathbb{R}^{J}, and subsequently each attended query vector is 𝐔~:t=∑j𝐚t​j​𝐔:jsubscript~𝐔:absent𝑡subscript𝑗subscript𝐚𝑡𝑗subscript𝐔:absent𝑗\\tilde{{\\bf U}}_{:t}=\\sum_{j}{\\bf a}_{tj}{\\bf U}_{:j}. Hence 𝐔~~𝐔\\tilde{{\\bf U}} is a 2​d2𝑑2d-by-T𝑇T matrix containing the attended query vectors for the entire context. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_13", "text": " Query-to-context Attention. Query-to-context (Q2C) attention signifies which context words have the closest similarity to one of the query words and are hence critical for answering the query. We obtain the attention weights on the context words by 𝐛=softmax​(maxc​o​l⁡(𝐒))∈ℝT𝐛softmaxsubscript𝑐𝑜𝑙𝐒superscriptℝ𝑇{\\bf b}=\\mathrm{softmax}(\\max_{col}({\\bf S}))\\in\\mathbb{R}^{T}, where the maximum function (maxc​o​lsubscript𝑐𝑜𝑙\\max_{col}) is performed across the column. Then the attended context vector is 𝐡~=∑t𝐛t​𝐇:t∈ℝ2​d~𝐡subscript𝑡subscript𝐛𝑡subscript𝐇:absent𝑡superscriptℝ2𝑑\\tilde{\\bf h}=\\sum_{t}{\\bf b}_{t}{\\bf H}_{:t}\\in\\mathbb{R}^{2d}. This vector indicates the weighted sum of the most important words in the context with respect to the query. 𝐡~~𝐡\\tilde{\\bf h} is tiled T𝑇T times across the column, thus giving 𝐇~∈ℝ2​d×T~𝐇superscriptℝ2𝑑𝑇\\tilde{\\bf H}\\in\\mathbb{R}^{2d\\times T}. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_14", "text": " Finally, the contextual embeddings and the attention vectors are combined together to yield 𝐆𝐆{\\bf G}, where each column vector can be considered as the query-aware representation of each context word. We define 𝐆𝐆{\\bf G} by 𝐆:t=𝜷​(𝐇:t,𝐔~:t,𝐇~:t)∈ℝd𝐆subscript𝐆:absent𝑡𝜷subscript𝐇:absent𝑡subscript~𝐔:absent𝑡subscript~𝐇:absent𝑡superscriptℝsubscript𝑑𝐆{\\bf G}_{:t}={\\bm{\\beta}}({\\bf H}_{:t},\\tilde{\\bf U}_{:t},\\tilde{\\bf H}_{:t})\\in\\mathbb{R}^{d_{\\bf G}} (2) where 𝐆:tsubscript𝐆:absent𝑡{\\bf G}_{:t} is the t𝑡t-th column vector (corresponding to t𝑡t-th context word), 𝜷𝜷{\\bm{\\beta}} is a trainable vector function that fuses its (three) input vectors, and d𝐆subscript𝑑𝐆d_{\\bf G} is the output dimension of the 𝜷𝜷{\\bm{\\beta}} function. While the 𝜷𝜷{\\bm{\\beta}} function can be an arbitrary trainable neural network, such as multi-layer perceptron, a simple concatenation as following still shows good performance in our experiments: 𝜷​(𝐡,𝐮~,𝐡~)=(𝐡;𝐮~;𝐡∘𝐮~;𝐡∘𝐡~)∈ℝ8​d×T𝜷𝐡~𝐮~𝐡𝐡~𝐮𝐡~𝐮𝐡~𝐡superscriptℝ8𝑑𝑇{\\bm{\\beta}}({\\bf h},\\tilde{\\bf u},\\tilde{\\bf h})=({\\bf h};\\tilde{\\bf u};{\\bf h}\\circ\\tilde{\\bf u};{\\bf h}\\circ\\tilde{\\bf h})\\in\\mathbb{R}^{8d\\times T} (i.e., d𝐆=8​dsubscript𝑑𝐆8𝑑d_{\\bf G}=8d). ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_15", "text": " The input to the modeling layer is 𝐆𝐆{\\bf G}, which encodes the query-aware representations of context words. 
The output of the modeling layer captures the interaction among the context words conditioned on the query. This is different from the contextual embedding layer, which captures the interaction among context words independent of the query. We use two layers of bi-directional LSTM, with the output size of d𝑑d for each direction. Hence we obtain a matrix 𝐌∈ℝ2​d×T𝐌superscriptℝ2𝑑𝑇{\\bf M}\\in\\mathbb{R}^{2d\\times T}, which is passed onto the output layer to predict the answer. Each column vector of 𝐌𝐌{\\bf M} is expected to contain contextual information about the word with respect to the entire context paragraph and the query. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_16", "text": " The output layer is application-specific. The modular nature of BiDAF allows us to easily swap out the output layer based on the task, with the rest of the architecture remaining exactly the same. Here, we describe the output layer for the QA task. In section 5, we use a slight modification of this output layer for cloze-style comprehension. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_17", "text": " The QA task requires the model to find a sub-phrase of the paragraph to answer the query. The phrase is derived by predicting the start and the end indices of the phrase in the paragraph. We obtain the probability distribution of the start index over the entire paragraph by 𝐩1=softmax​(𝐰(𝐩1)⊤​(𝐆;𝐌)),superscript𝐩1softmaxsuperscriptsubscript𝐰superscript𝐩1top𝐆𝐌{\\bf p}^{1}=\\mathrm{softmax}({\\bf w}_{({\\bf p}^{1})}^{\\top}({\\bf G};{\\bf M})), (3) where 𝐰(𝐩1)∈ℝ10​dsubscript𝐰superscript𝐩1superscriptℝ10𝑑{\\bf w}_{({\\bf p}^{1})}\\in\\mathbb{R}^{10d} is a trainable weight vector. For the end index of the answer phrase, we pass 𝐌𝐌{\\bf M} to another bidirectional LSTM layer and obtain 𝐌2∈ℝ2​d×Tsuperscript𝐌2superscriptℝ2𝑑𝑇{\\bf M}^{2}\\in\\mathbb{R}^{2d\\times T}. Then we use 𝐌2superscript𝐌2{\\bf M}^{2} to obtain the probability distribution of the end index in a similar manner: 𝐩2=softmax​(𝐰(𝐩2)⊤​(𝐆;𝐌2))superscript𝐩2softmaxsuperscriptsubscript𝐰superscript𝐩2top𝐆superscript𝐌2{\\bf p}^{2}=\\mathrm{softmax}({\\bf w}_{({\\bf p}^{2})}^{\\top}({\\bf G};{\\bf M}^{2})) (4) ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_18", "text": " Training. We define the training loss (to be minimized) as the sum of the negative log probabilities of the true start and end indices by the predicted distributions, averaged over all examples: L​(θ)=−1N​∑iNlog⁡(𝐩yi11)+log⁡(𝐩yi22)𝐿𝜃1𝑁subscriptsuperscript𝑁𝑖subscriptsuperscript𝐩1subscriptsuperscript𝑦1𝑖subscriptsuperscript𝐩2subscriptsuperscript𝑦2𝑖L(\\theta)=-\\frac{1}{N}\\sum^{N}_{i}\\log({\\bf p}^{1}_{y^{1}_{i}})+\\log({\\bf p}^{2}_{y^{2}_{i}}) (5) where θ𝜃\\theta is the set of all trainable weights in the model (the weights and biases of CNN filters and LSTM cells, 𝐰(𝐒)subscript𝐰𝐒{\\bf w}_{({\\bf S})}, 𝐰(𝐩1)subscript𝐰superscript𝐩1{\\bf w}_{({\\bf p}^{1})} and 𝐰(𝐩2)subscript𝐰superscript𝐩2{\\bf w}_{({\\bf p}^{2})}), N𝑁N is the number of examples in the dataset, yi1subscriptsuperscript𝑦1𝑖y^{1}_{i} and yi2subscriptsuperscript𝑦2𝑖y^{2}_{i} are the true start and end indices of the i𝑖i-th example, respectively, and 𝐩ksubscript𝐩𝑘{\\bf p}_{k} indicates the k𝑘k-th value of the vector 𝐩𝐩{\\bf p}. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_19", "text": " Test. 
The answer span (k,l)𝑘𝑙(k,l) where k≤l𝑘𝑙k\\leq l with the maximum value of 𝐩k1​𝐩l2subscriptsuperscript𝐩1𝑘subscriptsuperscript𝐩2𝑙{\\bf p}^{1}_{k}{\\bf p}^{2}_{l} is chosen, which can be computed in linear time with dynamic programming. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_20", "text": " A significant contributor to the advancement of MC models has been the availability of large datasets. Early datasets such as MCTest (Richardson et al., 2013) were too small to train end-to-end neural models. Massive cloze test datasets (CNN/DailyMail by Hermann et al. (2015) and Childrens Book Test by Hill et al. (2016)), enabled the application of deep neural architectures to this task. More recently, Rajpurkar et al. (2016) released the Stanford Question Answering (SQuAD) dataset with over 100,000 questions. We evaluate the performance of our comprehension system on both SQuAD and CNN/DailyMail datasets. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_21", "text": " Previous works in end-to-end machine comprehension use attention mechanisms in three distinct ways. The first group (largely inspired by Bahdanau et al. (2015)) uses a dynamic attention mechanism, in which the attention weights are updated dynamically given the query and the context as well as the previous attention. Hermann et al. (2015) argue that the dynamic attention model performs better than using a single fixed query vector to attend on context words on CNN & DailyMail datasets. Chen et al. (2016) show that simply using bilinear term for computing the attention weights in the same model drastically improves the accuracy. Wang & Jiang (2016) reverse the direction of the attention (attending on query words as the context RNN progresses) for SQuAD. In contrast to these models, BiDAF uses a memory-less attention mechanism. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_22", "text": " The second group computes the attention weights once, which are then fed into an output layer for final prediction (e.g., Kadlec et al. (2016)). Attention-over-attention model (Cui et al., 2016) uses a 2D similarity matrix between the query and context words (similar to Equation 1) to compute the weighted average of query-to-context attention. In contrast to these models, BiDAF does not summarize the two modalities in the attention layer and instead lets the attention vectors flow into the modeling (RNN) layer. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_23", "text": " The third group (considered as variants of Memory Network (Weston et al., 2015)) repeats computing an attention vector between the query and the context through multiple layers, typically referred to as multi-hop (Sordoni et al., 2016; Dhingra et al., 2016). Shen et al. (2016) combine Memory Networks with Reinforcement Learning in order to dynamically control the number of hops. One can also extend our BiDAF model to incorporate multiple hops. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_24", "text": " The task of question answering has also gained a lot of interest in the computer vision community. Early works on visual question answering (VQA) involved encoding the question using an RNN, encoding the image using a CNN and combining them to answer the question (Antol et al., 2015; Malinowski et al., 2015). 
Attention mechanisms have also been successfully employed for the VQA task and can be broadly clustered based on the granularity of their attention and the approach to construct the attention matrix. At the coarse level of granularity, the question attends to different patches in the image (Zhu et al., 2016; Xiong et al., 2016a). At a finer level, each question word attends to each image patch and the highest attention value for each spatial location (Xu & Saenko, 2016) is adopted. A hybrid approach is to combine questions representations at multiple levels of granularity (unigrams, bigrams, trigrams) (Yang et al., 2015). Several approaches to constructing the attention matrix have been used including element-wise product, element-wise sum, concatenation and Multimodal Compact Bilinear Pooling (Fukui et al., 2016). ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_25", "text": " Lu et al. (2016) have recently shown that in addition to attending from the question to image patches, attending from the image back to the question words provides an improvement on the VQA task. This finding in the visual domain is consistent with our finding in the language domain, where our bi-directional attention between the query and context provides improved results. Their model, however, uses the attention weights directly in the output layer and does not take advantage of the attention flow to the modeling layer. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_26", "text": " In this section, we evaluate our model on the task of question answering using the recently released SQuAD (Rajpurkar et al., 2016), which has gained a huge attention over a few months. In the next section, we evaluate our model on the task of cloze-style reading comprehension. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_27", "text": " SQuAD is a machine comprehension dataset on a large set of Wikipedia articles, with more than 100,000 questions. The answer to each question is always a span in the context. The model is given a credit if its answer matches one of the human written answers. Two metrics are used to evaluate models: Exact Match (EM) and a softer metric, F1 score, which measures the weighted average of the precision and recall rate at character level. The dataset consists of 90k/10k train/dev question-context tuples with a large hidden test set. It is one of the largest available MC datasets with human-written questions and serves as a great test bed for our model. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_28", "text": " The model architecture used for this task is depicted in Figure 1. Each paragraph and question are tokenized by a regular-expression-based word tokenizer (PTB Tokenizer) and fed into the model. We use 100 1D filters for CNN char embedding, each with a width of 5. The hidden state size (d𝑑d) of the model is 100. The model has about 2.6 million parameters. We use the AdaDelta (Zeiler, 2012) optimizer, with a minibatch size of 60 and an initial learning rate of 0.50.50.5, for 12 epochs. A dropout (Srivastava et al., 2014) rate of 0.20.20.2 is used for the CNN, all LSTM layers, and the linear transformation before the softmax for the answers. During training, the moving averages of all weights of the model are maintained with the exponential decay rate of 0.9990.9990.999. 
At test time, the moving averages instead of the raw weights are used. The training process takes roughly 20 hours on a single Titan X GPU. We also train an ensemble model consisting of 12 training runs with the identical architecture and hyper-parameters. At test time, we choose the answer with the highest sum of confidence scores amongst the 12 runs for each question. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_29", "text": " The results of our model and competing approaches on the hidden test are summarized in Table 2(a). BiDAF (ensemble) achieves an EM score of 73.3 and an F1 score of 81.1, outperforming all previous approaches. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_30", "text": " Table 2(b) shows the performance of our model and its ablations on the SQuAD dev set. Both char-level and word-level embeddings contribute towards the model’s performance. We conjecture that word-level embedding is better at representing the semantics of each word as a whole, while char-level embedding can better handle out-of-vocab (OOV) or rare words. To evaluate bi-directional attention, we remove C2Q and Q2C attentions. For ablating C2Q attention, we replace the attended question vector 𝐔~~𝐔\\tilde{\\bf U} with the average of the output vectors of the question’s contextual embedding layer (LSTM). C2Q attention proves to be critical with a drop of more than 10 points on both metrics. For ablating Q2C attention, the output of the attention layer, 𝐆𝐆{\\bf G}, does not include terms that have the attended Q2C vectors, 𝐇~~𝐇\\tilde{\\bf H}. To evaluate the attention flow, we study a dynamic attention model, where the attention is dynamically computed within the modeling layer’s LSTM, following previous work (Bahdanau et al., 2015; Wang & Jiang, 2016). This is in contrast with our approach, where the attention is pre-computed before flowing to the modeling layer. Despite being a simpler attention mechanism, our proposed static attention outperforms the dynamically computed attention by more than 3 points. We conjecture that separating out the attention layer results in a richer set of features computed in the first 4 layers which are then incorporated by the modeling layer. We also show the performance of BiDAF with several different definitions of α𝛼\\alpha and 𝜷𝜷{\\bm{\\beta}} functions (Equation 1 and 2) in Appendix B. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_31", "text": " We now provide a qualitative analysis of our model on the SQuAD dev set. First, we visualize the feature spaces after the word and contextual embedding layers. These two layers are responsible for aligning the embeddings between the query and context words which are the inputs to the subsequent attention layer. To visualize the embeddings, we choose a few frequent query words in the dev data and look at the context words that have the highest cosine similarity to the query words (Table 2). At the word embedding layer, query words such as When, Where and Who are not well aligned to possible answers in the context, but this dramatically changes in the contextual embedding layer which has access to context from surrounding words and is just 1 layer below the attention layer. When begins to match years, Where matches locations, and Who matches names. 
", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_32", "text": " We also visualize these two feature spaces using t-SNE in Figure 2. t-SNE is performed on a large fraction of dev data but we only plot data points corresponding to the months of the year. An interesting pattern emerges in the Word space, where May is separated from the rest of the months because May has multiple meanings in the English language. The contextual embedding layer uses contextual cues from surrounding words and is able to separate the usages of the word May. Finally we visualize the attention matrices for some question-context tuples in the dev data in Figure 3. In the first example, Where matches locations and in the second example, many matches quantities and numerical symbols. Also, entities in the question typically attend to the same entities in the context, thus providing a feature for the model to localize possible answers. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_33", "text": " We analyse the performance of our our model with a traditional language-feature-based baseline (Rajpurkar et al., 2016). Figure 2b shows a Venn diagram of the dev set questions correctly answered by the models. Our model is able to answer more than 86% of the questions correctly answered by the baseline. The 14% that are incorrectly answered does not have a clear pattern. This suggests that neural architectures are able to exploit much of the information captured by the language features. We also break this comparison down by the first words in the questions (Figure 2c). Our model outperforms the traditional baseline comfortably in every category. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_34", "text": " We randomly select 50 incorrect questions (based on EM) and categorize them into 6 classes. 50% of errors are due to the imprecise boundaries of the answers, 28% involve syntactic complications and ambiguities, 14% are paraphrase problems, 4% require external knowledge, 2% need multiple sentences to answer, and 2% are due to mistakes during tokenization. See Appendix A for the examples of the error modes. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_35", "text": " We also evaluate our model on the task of cloze-style reading comprehension using the CNN and Daily Mail datasets (Hermann et al., 2015). ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_36", "text": " In a cloze test, the reader is asked to fill in words that have been removed from a passage, for measuring one’s ability to comprehend text. Hermann et al. (2015) have recently compiled a massive Cloze-style comprehension dataset, consisting of 300k/4k/3k and 879k/65k/53k (train/dev/test) examples from CNN and DailyMail news articles, respectively. Each example has a news article and an incomplete sentence extracted from the human-written summary of the article. To distinguish this task from language modeling and force one to refer to the article to predict the correct missing word, the missing word is always a named entity, anonymized with a random ID. Also, the IDs must be shuffled constantly during test, which is also critical for full anonymization. 
", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_37", "text": " The model architecture used for this task is very similar to that for SQuAD (Section 4) with only a few small changes to adapt it to the cloze test. Since each answer in the CNN/DailyMail datasets is always a single word (entity), we only need to predict the start index (𝐩1superscript𝐩1{\\bf p}^{1}); the prediction for the end index (𝐩2superscript𝐩2{\\bf p}^{2}) is omitted from the loss function. Also, we mask out all non-entity words in the final classification layer so that they are forced to be excluded from possible answers. Another important difference from SQuAD is that the answer entity might appear more than once in the context paragraph. To address this, we follow a similar strategy from Kadlec et al. (2016). During training, after we obtain 𝐩1superscript𝐩1{\\bf p}^{1}, we sum all probability values of the entity instances in the context that correspond to the correct answer. Then the loss function is computed from the summed probability. We use a minibatch size of 48 and train for 8 epochs, with early stop when the accuracy on validation data starts to drop. Inspired by the window-based method (Hill et al., 2016), we split each article into short sentences where each sentence is a 19-word window around each entity (hence the same word might appear in multiple sentences). The RNNs in BiDAF are not feed-forwarded or back-propagated across sentences, which speed up the training process by parallelization. The entire training process takes roughly 60 hours on eight Titan X GPUs. The other hyper-parameters are identical to the model described in Section 4. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_38", "text": " The results of our single-run models and competing approaches on the CNN/DailyMail datasets are summarized in Table 3. ∗ indicates ensemble methods. BiDAF outperforms previous single-run models on both datasets for both val and test data. On the DailyMail test, our single-run model even outperforms the best ensemble method. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_39", "text": " In this paper, we introduce BiDAF, a multi-stage hierarchical process that represents the context at different levels of granularity and uses a bi-directional attention flow mechanism to achieve a query-aware context representation without early summarization. The experimental evaluations show that our model achieves the state-of-the-art results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze test. The ablation analyses demonstrate the importance of each component in our model. The visualizations and discussions show that our model is learning a suitable representation for MC and is capable of answering complex questions by attending to correct locations in the given paragraph. Future work involves extending our approach to incorporate multiple hops of the attention layer. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_40", "text": " This research was supported by the NSF (IIS 1616112), NSF (III 1703166), Allen Institute for AI (66-9175), Allen Distinguished Investigator Award, Google Research Faculty Award, and Samsung GRO Award. We thank the anonymous reviewers for their helpful comments. ", "title": "Bidirectional Attention Flow for Machine Comprehension" } ]
What are the differences between plain networks and deep highway networks?
A plain network stacks ordinary non-linear layer transformations, whereas a highway network adds an LSTM-inspired adaptive gating mechanism that creates information highways, i.e. computation paths along which information can flow across many layers without attenuation [17]. Highway networks do not suffer from increasing depth and can be trained directly with SGD, while plain networks become hard to optimize and their performance degrades significantly as depth increases [23]. Structurally, a plain layer consists of computing units where the i-th unit computes y_i = H_i(x); a highway layer consists of blocks, where the i-th block computes a block state H_i(x) and a transform gate output T_i(x) and produces y_i = H_i(x)*T_i(x) + x_i*(1 - T_i(x)) [25].
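As a companion to the answer above, here is a minimal PyTorch sketch of a single highway layer with the carry gate set to C = 1 - T; class and parameter names are illustrative, not the paper's code.

```python
# Minimal sketch of one highway layer: y = H(x) * T(x) + x * (1 - T(x)).
# A plain layer would return only H(x). Names here are illustrative.
import torch
import torch.nn as nn

class HighwayLayer(nn.Module):
    def __init__(self, dim, gate_bias=-1.0):
        super().__init__()
        self.H = nn.Linear(dim, dim)   # block state transform (affine + nonlinearity below)
        self.T = nn.Linear(dim, dim)   # transform gate
        nn.init.constant_(self.T.bias, gate_bias)  # negative bias -> initially carry-like behavior

    def forward(self, x):
        h = torch.relu(self.H(x))      # what a plain layer would output
        t = torch.sigmoid(self.T(x))   # gate values in (0, 1)
        return h * t + x * (1.0 - t)   # gated mix of the transform and carry paths
```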
[ 17, 23, 25 ]
[ { "id": "1507.06228_all_0", "text": " Many recent empirical breakthroughs in supervised machine learning have been achieved through large and deep neural networks. Network depth (the number of successive computational layers) has played perhaps the most important role in these successes. For instance, within just a few years, the top-5 image classification accuracy on the 1000-class ImageNet dataset has increased from ∼similar-to\\sim84% to ∼similar-to\\sim95% (2, 3) using deeper networks with rather small receptive fields (4, 5). Other results on practical machine learning problems have also underscored the superiority of deeper networks in terms of accuracy and/or performance. ", "title": "Training Very Deep Networks" }, { "id": "1507.06228_all_1", "text": " In fact, deep networks can represent certain function classes far more efficiently than shallow ones. This is perhaps most obvious for recurrent nets, the deepest of them all. For example, the n𝑛n bit parity problem can in principle be learned by a large feedforward net with n𝑛n binary input units, 1 output unit, and a single but large hidden layer. But the natural solution for arbitrary n𝑛n is a recurrent net with only 3 units and 5 weights, reading the input bit string one bit at a time, making a single recurrent hidden unit flip its state whenever a new 1 is observed . Related observations hold for Boolean circuits (8, 9) and modern neural networks (10, 11, 12). ", "title": "Training Very Deep Networks" }, { "id": "1507.06228_all_2", "text": " To deal with the difficulties of training deep networks, some researchers have focused on developing better optimizers (e.g. (13, 14, 15)). Well-designed initialization strategies, in particular the normalized variance-preserving initialization for certain activation functions (16, 17), have been widely adopted for training moderately deep networks. Other similarly motivated strategies have shown promising results in preliminary experiments (18, 19). Experiments showed that certain activation functions based on local competition (20, 21) may help to train deeper networks. Skip connections between layers or to output layers (where error is “injected”) have long been used in neural networks, more recently with the explicit aim to improve the flow of information (22, 23, 2, 24). A related recent technique is based on using soft targets from a shallow teacher network to aid in training deeper student networks in multiple stages , similar to the neural history compressor for sequences, where a slowly ticking teacher recurrent net is “distilled” into a quickly ticking student recurrent net by forcing the latter to predict the hidden units of the former . Finally, deep networks can be trained layer-wise to help in credit assignment (26, 27), but this approach is less attractive compared to direct training. ", "title": "Training Very Deep Networks" }, { "id": "1507.06228_all_3", "text": " Very deep network training still faces problems, albeit perhaps less fundamental ones than the problem of vanishing gradients in standard recurrent networks . The stacking of several non-linear transformations in conventional feed-forward network architectures typically results in poor propagation of activations and gradients. Hence it remains hard to investigate the benefits of very deep networks for a variety of problems. ", "title": "Training Very Deep Networks" }, { "id": "1507.06228_all_4", "text": " To overcome this, we take inspiration from Long Short Term Memory (LSTM) recurrent networks (29, 30). 
We propose to modify the architecture of very deep feedforward networks such that information flow across layers becomes much easier. This is accomplished through an LSTM-inspired adaptive gating mechanism that allows for computation paths along which information can flow across many layers without attenuation. We call such paths information highways. They yield highway networks, as opposed to traditional ‘plain’ networks.111This paper expands upon a shorter report on Highway Networks . More recently, a similar LSTM-inspired model was also proposed . ", "title": "Training Very Deep Networks" }, { "id": "1507.06228_all_5", "text": " Our primary contribution is to show that extremely deep highway networks can be trained directly using stochastic gradient descent (SGD), in contrast to plain networks which become hard to optimize as depth increases (Section 3.1). Deep networks with limited computational budget (for which a two-stage training procedure mentioned above was recently proposed ) can also be directly trained in a single stage when converted to highway networks. Their ease of training is supported by experimental results demonstrating that highway networks also generalize well to unseen data. ", "title": "Training Very Deep Networks" }, { "id": "1507.06228_all_6", "text": " We use boldface letters for vectors and matrices, and italicized capital letters to denote transformation functions. 𝟎0\\mathbf{0} and 𝟏1\\mathbf{1} denote vectors of zeros and ones respectively, and 𝐈𝐈\\mathbf{I} denotes an identity matrix. The function σ​(x)𝜎𝑥\\sigma(x) is defined as σ​(x)=11+e−x,x∈ℝformulae-sequence𝜎𝑥11superscript𝑒𝑥𝑥ℝ\\sigma(x)=\\frac{1}{1+e^{-x}},x\\in\\mathbb{R}. The dot operator (⋅⋅\\cdotp) is used to denote element-wise multiplication. ", "title": "Training Very Deep Networks" }, { "id": "1507.06228_all_7", "text": " A plain feedforward neural network typically consists of L𝐿L layers where the lt​hsuperscript𝑙𝑡ℎl^{th} layer (l∈{1,2,…,L}𝑙12…𝐿l\\in\\{1,2,...,L\\}) applies a non-linear transformation H𝐻H (parameterized by 𝐖𝐇,𝐥subscript𝐖𝐇𝐥\\mathbf{W_{H,l}}) on its input 𝐱𝐥subscript𝐱𝐥\\mathbf{x_{l}} to produce its output 𝐲𝐥subscript𝐲𝐥\\mathbf{y_{l}}. Thus, 𝐱𝟏subscript𝐱1\\mathbf{x_{1}} is the input to the network and 𝐲𝐋subscript𝐲𝐋\\mathbf{y_{L}} is the network’s output. Omitting the layer index and biases for clarity, ", "title": "Training Very Deep Networks" }, { "id": "1507.06228_all_8", "text": " 𝐲=H​(𝐱,𝐖𝐇).𝐲𝐻𝐱subscript𝐖𝐇\\mathbf{y}=H(\\mathbf{x},\\mathbf{W_{H}}). (1) ", "title": "Training Very Deep Networks" }, { "id": "1507.06228_all_9", "text": " H𝐻H is usually an affine transform followed by a non-linear activation function, but in general it may take other forms, possibly convolutional or recurrent. For a highway network, we additionally define two non-linear transforms T​(𝐱,𝐖𝐓)𝑇𝐱subscript𝐖𝐓T(\\mathbf{x},\\mathbf{W_{T}}) and C​(𝐱,𝐖𝐂)𝐶𝐱subscript𝐖𝐂C(\\mathbf{x},\\mathbf{W_{C}}) such that ", "title": "Training Very Deep Networks" }, { "id": "1507.06228_all_10", "text": " 𝐲=H​(𝐱,𝐖𝐇)⋅T​(𝐱,𝐖𝐓)+𝐱⋅C​(𝐱,𝐖𝐂).𝐲⋅𝐻𝐱subscript𝐖𝐇𝑇𝐱subscript𝐖𝐓⋅𝐱𝐶𝐱subscript𝐖𝐂\\mathbf{y}=H(\\mathbf{x},\\mathbf{W_{H}})\\cdotp T(\\mathbf{x},\\mathbf{W_{T}})+\\mathbf{x}\\cdot C(\\mathbf{x},\\mathbf{W_{C}}). (2) ", "title": "Training Very Deep Networks" }, { "id": "1507.06228_all_11", "text": " We refer to T𝑇T as the transform gate and C𝐶C as the carry gate, since they express how much of the output is produced by transforming the input and carrying it, respectively. 
For simplicity, in this paper we set C=1−T𝐶1𝑇C=1-T, giving ", "title": "Training Very Deep Networks" }, { "id": "1507.06228_all_12", "text": " 𝐲=H​(𝐱,𝐖𝐇)⋅T​(𝐱,𝐖𝐓)+𝐱⋅(1−T​(𝐱,𝐖𝐓)).𝐲⋅𝐻𝐱subscript𝐖𝐇𝑇𝐱subscript𝐖𝐓⋅𝐱1𝑇𝐱subscript𝐖𝐓\\mathbf{y}=H(\\mathbf{x},\\mathbf{W_{H}})\\cdotp T(\\mathbf{x},\\mathbf{W_{T}})+\\mathbf{x}\\cdot(1-T(\\mathbf{x},\\mathbf{W_{T}})). (3) ", "title": "Training Very Deep Networks" }, { "id": "1507.06228_all_13", "text": " The dimensionality of 𝐱,𝐲,H​(𝐱,𝐖𝐇)𝐱𝐲𝐻𝐱subscript𝐖𝐇\\mathbf{x},\\mathbf{y},H(\\mathbf{x},\\mathbf{W_{H}}) and T​(𝐱,𝐖𝐓)𝑇𝐱subscript𝐖𝐓T(\\mathbf{x},\\mathbf{W_{T}}) must be the same for Equation 3 to be valid. Note that this layer transformation is much more flexible than Equation 1. In particular, observe that for particular values of T𝑇T, ", "title": "Training Very Deep Networks" }, { "id": "1507.06228_all_14", "text": " 𝐲={𝐱,if ​T​(𝐱,𝐖𝐓)=𝟎,H​(𝐱,𝐖𝐇),if ​T​(𝐱,𝐖𝐓)=𝟏.𝐲cases𝐱if 𝑇𝐱subscript𝐖𝐓0𝐻𝐱subscript𝐖𝐇if 𝑇𝐱subscript𝐖𝐓1\\mathbf{y}=\\begin{cases}\\mathbf{x},&\\text{if }T(\\mathbf{x},\\mathbf{W_{T}})=\\mathbf{0},\\\\ H(\\mathbf{x},\\mathbf{W_{H}}),&\\text{if }T(\\mathbf{x},\\mathbf{W_{T}})=\\mathbf{1}.\\end{cases} (4) ", "title": "Training Very Deep Networks" }, { "id": "1507.06228_all_15", "text": " Similarly, for the Jacobian of the layer transform, ", "title": "Training Very Deep Networks" }, { "id": "1507.06228_all_16", "text": " d​𝐲d​𝐱={𝐈,if ​T​(𝐱,𝐖𝐓)=𝟎,H′​(𝐱,𝐖𝐇),if ​T​(𝐱,𝐖𝐓)=𝟏.𝑑𝐲𝑑𝐱cases𝐈if 𝑇𝐱subscript𝐖𝐓0superscript𝐻′𝐱subscript𝐖𝐇if 𝑇𝐱subscript𝐖𝐓1\\frac{d\\mathbf{y}}{d\\mathbf{x}}=\\begin{cases}\\mathbf{I},&\\text{if }T(\\mathbf{x},\\mathbf{W_{T}})=\\mathbf{0},\\\\ H^{\\prime}(\\mathbf{x},\\mathbf{W_{H}}),&\\text{if }T(\\mathbf{x},\\mathbf{W_{T}})=\\mathbf{1}.\\end{cases} (5) ", "title": "Training Very Deep Networks" }, { "id": "1507.06228_all_17", "text": " Thus, depending on the output of the transform gates, a highway layer can smoothly vary its behavior between that of H𝐻H and that of a layer which simply passes its inputs through. Just as a plain layer consists of multiple computing units such that the it​hsuperscript𝑖𝑡ℎi^{th} unit computes yi=Hi​(𝐱)subscript𝑦𝑖subscript𝐻𝑖𝐱y_{i}=H_{i}(\\mathbf{x}), a highway network consists of multiple blocks such that the it​hsuperscript𝑖𝑡ℎi^{th} block computes a block state Hi​(𝐱)subscript𝐻𝑖𝐱H_{i}(\\mathbf{x}) and transform gate output Ti​(𝐱)subscript𝑇𝑖𝐱T_{i}(\\mathbf{x}). Finally, it produces the block output yi=Hi​(𝐱)∗Ti​(𝐱)+xi∗(1−Ti​(𝐱))subscript𝑦𝑖subscript𝐻𝑖𝐱subscript𝑇𝑖𝐱subscript𝑥𝑖1subscript𝑇𝑖𝐱y_{i}=H_{i}(\\mathbf{x})*T_{i}(\\mathbf{x})+x_{i}*(1-T_{i}(\\mathbf{x})), which is connected to the next layer.222Our pilot experiments on training very deep networks were successful with a more complex block design closely resembling an LSTM block “unrolled in time”. Here we report results only for a much simplified form. ", "title": "Training Very Deep Networks" }, { "id": "1507.06228_all_18", "text": " As mentioned earlier, Equation 3 requires that the dimensionality of 𝐱,𝐲,H​(𝐱,𝐖𝐇)𝐱𝐲𝐻𝐱subscript𝐖𝐇\\mathbf{x},\\mathbf{y},H(\\mathbf{x},\\mathbf{W_{H}}) and T​(𝐱,𝐖𝐓)𝑇𝐱subscript𝐖𝐓T(\\mathbf{x},\\mathbf{W_{T}}) be the same. To change the size of the intermediate representation, one can replace 𝐱𝐱\\mathbf{x} with 𝐱^^𝐱\\mathbf{\\hat{x}} obtained by suitably sub-sampling or zero-padding 𝐱𝐱\\mathbf{x}. Another alternative is to use a plain layer (without highways) to change dimensionality, which is the strategy we use in this study. 
", "title": "Training Very Deep Networks" }, { "id": "1507.06228_all_19", "text": " Convolutional highway layers utilize weight-sharing and local receptive fields for both H𝐻H and T𝑇T transforms. We used the same sized receptive fields for both, and zero-padding to ensure that the block state and transform gate feature maps match the input size. ", "title": "Training Very Deep Networks" }, { "id": "1507.06228_all_20", "text": " We use the transform gate defined as T​(𝐱)=σ​(𝐖𝐓T​𝐱+𝐛𝐓)𝑇𝐱𝜎superscriptsubscript𝐖𝐓𝑇𝐱subscript𝐛𝐓T(\\mathbf{x})=\\sigma(\\mathbf{W_{T}}^{T}\\mathbf{x}+\\mathbf{b_{T}}), where 𝐖𝐓subscript𝐖𝐓\\mathbf{W_{T}} is the weight matrix and 𝐛𝐓subscript𝐛𝐓\\mathbf{b_{T}} the bias vector for the transform gates. This suggests a simple initialization scheme which is independent of the nature of H𝐻H: bTsubscript𝑏𝑇b_{T} can be initialized with a negative value (e.g. -1, -3 etc.) such that the network is initially biased towards carry behavior. This scheme is strongly inspired by the proposal to initially bias the gates in an LSTM network, to help bridge long-term temporal dependencies early in learning. Note that σ​(x)∈(0,1),∀x∈ℝformulae-sequence𝜎𝑥01for-all𝑥ℝ\\sigma(x)\\in(0,1),\\forall x\\in\\mathbb{R}, so the conditions in Equation 4 can never be met exactly. ", "title": "Training Very Deep Networks" }, { "id": "1507.06228_all_21", "text": " In our experiments, we found that a negative bias initialization for the transform gates was sufficient for training to proceed in very deep networks for various zero-mean initial distributions of WHsubscript𝑊𝐻W_{H} and different activation functions used by H𝐻H. In pilot experiments, SGD did not stall for networks with more than 1000 layers. Although the initial bias is best treated as a hyperparameter, as a general guideline we suggest values of -1, -2 and -3 for convolutional highway networks of depth approximately 10, 20 and 30. ", "title": "Training Very Deep Networks" }, { "id": "1507.06228_all_22", "text": " All networks were trained using SGD with momentum. An exponentially decaying learning rate was used in Section 3.1. For the rest of the experiments, a simpler commonly used strategy was employed where the learning rate starts at a value λ𝜆\\lambda and decays according to a fixed schedule by a factor γ𝛾\\gamma. λ𝜆\\lambda, γ𝛾\\gamma and the schedule were selected once based on validation set performance on the CIFAR-10 dataset, and kept fixed for all experiments. All convolutional highway networks utilize the rectified linear activation function to compute the block state H𝐻H. To provide a better estimate of the variability of classification results due to random initialization, we report our results in the format Best (mean ±plus-or-minus\\pm std.dev.) based on 5 runs wherever available. Experiments were conducted using Caffe and Brainstorm (https://github.com/IDSIA/brainstorm) frameworks. Source code, hyperparameter search results and related scripts are publicly available at http://people.idsia.ch/~rupesh/very_deep_learning/. ", "title": "Training Very Deep Networks" }, { "id": "1507.06228_all_23", "text": " To support the hypothesis that highway networks do not suffer from increasing depth, we conducted a series of rigorous optimization experiments, comparing them to plain networks with normalized initialization (16, 17). ", "title": "Training Very Deep Networks" }, { "id": "1507.06228_all_24", "text": " We trained both plain and highway networks of varying varying depths on the MNIST digit classification dataset. 
All networks are thin: each layer has 50 blocks for highway networks and 71 units for plain networks, yielding roughly identical numbers of parameters (≈\\approx5000) per layer. In all networks, the first layer is a fully connected plain layer followed by 9, 19, 49, or 99 fully connected plain or highway layers. Finally, the network output is produced by a softmax layer. We performed a random search of 100 runs for both plain and highway networks to find good settings for the following hyperparameters: initial learning rate, momentum, learning rate exponential decay factor & activation function (either rectified linear or tanh). For highway networks, an additional hyperparameter was the initial value for the transform gate bias (between -1 and -10). Other weights were initialized using the same normalized initialization as plain networks. ", "title": "Training Very Deep Networks" }, { "id": "1507.06228_all_25", "text": " The training curves for the best performing networks for each depth are shown in Figure 1. As expected, 10 and 20-layer plain networks exhibit very good performance (mean loss <1​e−4absent1superscript𝑒4<1e^{-4}), which significantly degrades as depth increases, even though network capacity increases. Highway networks do not suffer from an increase in depth, and 50/100 layer highway networks perform similar to 10/20 layer networks. The 100-layer highway network performed more than 2 orders of magnitude better compared to a similarly-sized plain network. It was also observed that highway networks consistently converged significantly faster than plain ones. ", "title": "Training Very Deep Networks" }, { "id": "1507.06228_all_26", "text": " As a sanity check for the generalization capability of highway networks, we trained 10-layer convolutional highway networks on MNIST, using two architectures, each with 9 convolutional layers followed by a softmax output. The number of filter maps (width) was set to 16 and 32 for all the layers. We obtained test set performance competitive with state-of-the-art methods with much fewer parameters, as show in Table 1. ", "title": "Training Very Deep Networks" }, { "id": "1507.06228_all_27", "text": " Maxout networks can cope much better with increased depth than those with traditional activation functions . However, Romero et. al. recently reported that training on CIFAR-10 through plain backpropogation was only possible for maxout networks with a depth up to 5 layers when the number of parameters was limited to ∼similar-to\\sim250K and the number of multiplications to ∼similar-to\\sim30M. Similar limitations were observed for higher computational budgets. Training of deeper networks was only possible through the use of a two-stage training procedure and addition of soft targets produced from a pre-trained shallow teacher network (hint-based training). ", "title": "Training Very Deep Networks" }, { "id": "1507.06228_all_28", "text": " We found that it was easy to train highway networks with numbers of parameters and operations comparable to those of fitnets in a single stage using SGD. As shown in Table 2, Highway A and Highway B, which are based on the architectures of Fitnet A and Fitnet B, respectively, obtain similar or higher accuracy on the test set. We were also able to train thinner and deeper networks: for example a 32-layer highway network consisting of alternating receptive fields of size 3x3 and 1x1 with ∼similar-to\\sim1.25M parameters performs better than the earlier teacher network . 
", "title": "Training Very Deep Networks" }, { "id": "1507.06228_all_29", "text": " It is possible to obtain high performance on the CIFAR-10 and CIFAR-100 datasets by utilizing very large networks and extensive data augmentation. This approach was popularized by Ciresan et. al. and recently extended by Graham . Since our aim is only to demonstrate that deeper networks can be trained without sacrificing ease of training or generalization ability, we only performed experiments in the more common setting of global contrast normalization, small translations and mirroring of images. Following Lin et. al. , we replaced the fully connected layer used in the networks in the previous section with a convolutional layer with a receptive field of size one and a global average pooling layer. The hyperparameters from the last section were re-used for both CIFAR-10 and CIFAR-100, therefore it is quite possible to obtain much better results with better architectures/hyperparameters. The results are tabulated in Table 3. ", "title": "Training Very Deep Networks" }, { "id": "1507.06228_all_30", "text": " Figure 2 illustrates the inner workings of the best333obtained via random search over hyperparameters to minimize the best training set error achieved using each configuration 50 hidden layer fully-connected highway networks trained on MNIST (top row) and CIFAR-100 (bottom row). The first three columns show the bias, the mean activity over all training samples, and the activity for a single random sample for each transform gate respectively. Block outputs for the same single sample are displayed in the last column. ", "title": "Training Very Deep Networks" }, { "id": "1507.06228_all_31", "text": " The transform gate biases of the two networks were initialized to -2 and -4 respectively. It is interesting to note that contrary to our expectations most biases decreased further during training. For the CIFAR-100 network the biases increase with depth forming a gradient. Curiously this gradient is inversely correlated with the average activity of the transform gates, as seen in the second column. This indicates that the strong negative biases at low depths are not used to shut down the gates, but to make them more selective. This behavior is also suggested by the fact that the transform gate activity for a single example (column 3) is very sparse. The effect is more pronounced for the CIFAR-100 network, but can also be observed to a lesser extent in the MNIST network. ", "title": "Training Very Deep Networks" }, { "id": "1507.06228_all_32", "text": " The last column of Figure 2 displays the block outputs and visualizes the concept of “information highways”. Most of the outputs stay constant over many layers forming a pattern of stripes. Most of the change in outputs happens in the early layers (≈15absent15\\approx 15 for MNIST and ≈40absent40\\approx 40 for CIFAR-100). ", "title": "Training Very Deep Networks" }, { "id": "1507.06228_all_33", "text": " One possible advantage of the highway architecture over hard-wired shortcut connections is that the network can learn to dynamically adjust the routing of the information based on the current input. This begs the question: does this behaviour manifest itself in trained networks or do they just learn a static routing that applies to all inputs similarly. A partial answer can be found by looking at the mean transform gate activity (second column) and the single example transform gate outputs (third column) in Figure 2. 
Especially for the CIFAR-100 case, most transform gates are active on average, while they show very selective activity for the single example. This implies that for each sample only a few blocks perform transformation but different blocks are utilized by different samples. ", "title": "Training Very Deep Networks" }, { "id": "1507.06228_all_34", "text": " This data-dependent routing mechanism is further investigated in Figure 3. In each of the columns we show how the average over all samples of one specific class differs from the total average shown in the second column of Figure 2. For MNIST digits 0 and 7 substantial differences can be seen within the first 15 layers, while for CIFAR class numbers 0 and 1 the differences are sparser and spread out over all layers. In both cases it is clear that the mean activity pattern differs between classes. The gating system acts not just as a mechanism to ease training, but also as an important part of the computation in a trained network. ", "title": "Training Very Deep Networks" }, { "id": "1507.06228_all_35", "text": " Since we bias all the transform gates towards being closed, in the beginning every layer mostly copies the activations of the previous layer. Does training indeed change this behaviour, or is the final network still essentially equivalent to a network with a much fewer layers? To shed light on this issue, we investigated the extent to which lesioning a single layer affects the total performance of trained networks from Section 3.1. By lesioning, we mean manually setting all the transform gates of a layer to 0 forcing it to simply copy its inputs. For each layer, we evaluated the network on the full training set with the gates of that layer closed. The resulting performance as a function of the lesioned layer is shown in Figure 4. ", "title": "Training Very Deep Networks" }, { "id": "1507.06228_all_36", "text": " For MNIST (left) it can be seen that the error rises significantly if any one of the early layers is removed, but layers 15−45154515-45 seem to have close to no effect on the final performance. About 60% of the layers don’t learn to contribute to the final result, likely because MNIST is a simple dataset that doesn’t require much depth. ", "title": "Training Very Deep Networks" }, { "id": "1507.06228_all_37", "text": " We see a different picture for the CIFAR-100 dataset (right) with performance degrading noticeably when removing any of the first ≈40absent40\\approx 40 layers. This suggests that for complex problems a highway network can learn to utilize all of its layers, while for simpler problems like MNIST it will keep many of the unneeded layers idle. Such behavior is desirable for deep networks in general, but appears difficult to obtain using plain networks. ", "title": "Training Very Deep Networks" }, { "id": "1507.06228_all_38", "text": " Alternative approaches to counter the difficulties posed by depth mentioned in Section 1 often have several limitations. Learning to route information through neural networks with the help of competitive interactions has helped to scale up their application to challenging problems by improving credit assignment , but they still suffer when depth increases beyond ≈\\approx20 even with careful initialization . Effective initialization methods can be difficult to derive for a variety of activation functions. Deep supervision has been shown to hurt performance of thin deep networks . 
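A hedged sketch of the lesioning probe described above: closing all transform gates of one layer (T = 0) makes that layer copy its input, which is equivalent to skipping it during evaluation. The function and argument names below are hypothetical.

```python
def evaluate_with_lesion(layers, x, lesioned_idx):
    """Forward pass with one highway layer lesioned (its transform gates forced to 0).

    With T(x) = 0 the layer output is H(x) * 0 + x * (1 - 0) = x, i.e. a pure carry,
    so lesioning layer i is equivalent to skipping it entirely.
    """
    for i, layer in enumerate(layers):
        if i == lesioned_idx:
            continue  # pure carry: output equals input
        x = layer(x)
    return x
```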
", "title": "Training Very Deep Networks" }, { "id": "1507.06228_all_39", "text": " Very deep highway networks, on the other hand, can directly be trained with simple gradient descent methods due to their specific architecture. This property does not rely on specific non-linear transformations, which may be complex convolutional or recurrent transforms, and derivation of a suitable initialization scheme is not essential. The additional parameters required by the gating mechanism help in routing information through the use of multiplicative connections, responding differently to different inputs, unlike fixed “skip” connections. ", "title": "Training Very Deep Networks" }, { "id": "1507.06228_all_40", "text": " A possible objection is that many layers might remain unused if the transform gates stay closed. Our experiments show that this possibility does not affect networks adversely—deep and narrow highway networks can match/exceed the accuracy of wide and shallow maxout networks, which would not be possible if layers did not perform useful computations. Additionally, we can exploit the structure of highways to directly evaluate the contribution of each layer as shown in Figure 4. For the first time, highway networks allow us to examine how much computation depth is needed for a given problem, which can not be easily done with plain networks. ", "title": "Training Very Deep Networks" }, { "id": "1507.06228_all_41", "text": " We thank NVIDIA Corporation for their donation of GPUs and acknowledge funding from the EU project NASCENCE (FP7-ICT-317662). We are grateful to Sepp Hochreiter and Thomas Unterthiner for helpful comments and Jan Koutník for help in conducting experiments. ", "title": "Training Very Deep Networks" } ]
How can we rely on this latency prediction model?
Comparing the estimated and simulated latencies of 100 randomly generated models shows that the latency model is accurate, with a mean absolute percentage error of about 0.16% [48].
[ 48 ]
[ { "id": "2009.02009_all_0", "text": " As there are growing needs of deep learning applications based on convolutional neural network(CNN) in embedded systems, improving the accuracy of CNNs under a given set of constraints on latency and energy consumption has brought keen interest to researchers as a challenging problem in various areas. A popular hardware solution is to develop a hardware accelerator, called neural processing unit (NPU), that achieves higher performance per watt than CPUs or GPUs. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_1", "text": " For a given hardware platform, several software techniques have been proposed to accelerate CNNs by approximate computing since deep learning applications can tolerate a certain range of computation inaccuracy. Some examples in this software approach are filter pruning (Li et al., 2016), quantization (Park et al., 2017), low-rank approximation (Kim et al., 2015). Accelerating CNNs is helpful to improve the accuracy by running a more compute-intensive CNN with higher accuracy within a given time budget. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_2", "text": " On the other hand, various algorithmic solutions have been proposed to improve the CNN architecture by introducing new operations, optimizing the hyper-parameters, or searching for better network architecture. New operations such as depth-wise convolution(DWConv) (Chollet, 2017) and mobile inverted bottleneck (MBConv) (Sandler et al., 2018) have been developed to replace the regular full convolution. Recently, automated neural architecture search (NAS) emerges as the default technique to find a CNN architecture with higher accuracy than manually-designed architectures, particularly image classification. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_3", "text": " A NAS technique explores a predefined search space and estimates the performance for each candidate architecture to find an optimal one with the highest accuracy under a given latency constraint. Thus there are three factors that affect the performance of NAS, as shown in Figure 1: search space, search strategy, and performance estimation. The search space of a NAS technique is usually restricted by a supernet that defines the topology of the largest network to explore. Since the performance of a network depends on the hardware platform, the NAS technique needs to be customized to a given hardware platform. While numerous NAS techniques have been proposed with various search strategies recently, their assumed hardware platforms are mostly GPUs. In this paper, we present a customized NAS technique for an NPU, which produces a CNN architecture with a better accuracy-latency tradeoff than existing models. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_4", "text": " One of the most closely related work is the recently proposed NAS technique tailored for Google’s Edge-TPU (Gupta and Akin, 2020). While MBConv is widely used for GPU-aware NAS techniques, they prefer to use a single full convolution by fusing expansion layer and DWConv layer in some parts of the network, observing that the Edge-TPU runs the fused full convolution faster even though the required number of MAC (multiply-accumulate) operations is much larger. 
It confirms that the number of MAC operations is not a proper measure of latency, and platform-specific performance estimation is required. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_5", "text": " Since an NPU is much faster than a GPU, it enables us to explore the wider search space for NAS under a given latency constraint. Since there are many factors to define the search space, such as the number of layers, channels, kernel sizes, and so on, the search space grows exponentially as the allowed computation complexity grows. Hence, reducing the search space, as well as the search time, is very challenging for NPU-aware NAS techniques. While the aforementioned work for Google’s Edge TPU trains each architecture candidate from scratch to estimate the performance, it is not computationally efficient. In contrast, we adopt a fast differentiable hardware-aware One-Shot NAS, called Single-Path NAS (Stamoulis et al., 2019), in order to reduce the search time. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_6", "text": " Figure 2 shows an overview of the proposed NAS methodology that consists of three steps. In the first step, we change the supernet structure of the Single-Path NAS, which has a hierarchical structure based on MobileNetV2 (Sandler et al., 2018): A supernet structure consists of a series of stages that contain a series of blocks containing an MBConv micro-architecture inside. Since the network accuracy depends on the supernet structure, we make two extensions on the supernet structure to widen the search space. First, we allow stages to have a different number of blocks, called depth of the stage, considering the effect of stage depth on the accuracy and the latency. Second, we add parallel layers with different kernel sizes in each block, adopting the idea of mixed depthwise convolution (Tan and Le, 2019b) (MixConv). ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_7", "text": " With the extended supernet structure, we apply the Single-Path NAS, which is also extended to support the extended supernet structure. In this step, we assume a shorter latency constraint than the required to reduce the search space and the search time. The last step is to scale up the baseline CNN adopting the compound scaling technique proposed in  (Tan and Le, 2019a) until the latency constraint is met. The proposed NAS methodology is named as S3NAS since it consists of 3 steps: Supernet design, SinglePath NAS, and Scaling and post-processing. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_8", "text": " For accurate latency estimation, an analytical latency estimator is devised, based on a cycle-level NPU simulator that runs an entire CNN considering the memory access overhead accurately. Since the NPU assumed in this paper can execute depth-wise separable convolution (DWConv), squeeze-and-excitation (SE), and h-swish activation function efficiently, the proposed supernet prefers DWConv to regular convolution. Observing that the accuracy is improved by around 1% if SE and h-swish activation function are used, we add a post-processing phase after a CNN network is found by NAS to add SE layers and to replace ReLU to h-swish activation function. 
", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_9", "text": " Experiments show that the proposed NAS technique could improve the accuracy-latency tradeoff over existing SoTA CNN models. Our best model achieves 82.72% top-1 accuracy on ImageNet with 11.66ms latency without any special data augmentation. Note that the latency is estimated by cycle-accurate simulation. For a fair comparison with the related work, the latency of each compared network is also estimated with the same simulator. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_10", "text": " After an automated NAS technique based on reinforcement learning successfully found a better CNN architecture than manually-designed architectures (Zoph and Le, 2016), extensive research has been conducted to develop various NAS techniques based on reinforcement learning (Zoph et al., 2018; Tan et al., 2019). However, these NAS techniques are computationally intensive because they train each candidate architectures from scratch to estimate the goodness of it. Thus, one-shot neural architecture search approach (Pham et al., 2018) was introduced to reduce the search cost. In this approach, an over-parameterized super-model network is defined, and architecture search is performed by parameter optimization to reduce the complexity of the network. Gradient-based differentiable search has gained increasing popularity, and various NAS techniques have been proposed with different super-models and hyper-parameters (Pham et al., 2018; Guo et al., 2019; Chu et al., 2019; Liu et al., 2018; Cai et al., 2018). ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_11", "text": " Among diverse techniques to decrease the search cost, Single-Path NAS (Stamoulis et al., 2019) was recently proposed to find a good architecture faster than the existing differentiable NAS techniques. This technique is extended to broaden the search space by including the squeeze-and-excitation (SE) block in the search space (Stamoulis et al., 2020). Our work is grounded on the original Single-Path NAS technique. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_12", "text": " Finding a hardware-friendly neural architecture has been facilitated as NAS algorithm improved. MNASNet (Tan et al., 2019) added a latency term in the objective function to discover better architectures with a given latency constraint on their target hardware platform. EfficientNet (Tan and Le, 2019a), whose search method is similar to MNASNet, introduced a novel scaling method, called compound scaling, to find more accurate networks as the latency constraint or FLOPS increases. Instead of finding a network directly for a given long latency constraint, they scale up the depth and the width of a small network with shorter latency and the input image size in a balanced way. They could achieve a set of networks with state-of-the-art performance over a range of latency constraints. They removed SE blocks and swish activation function from their search space for hardware platforms that do not support them efficiently to name the resultant network as EfficientNet-lite. 
", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_13", "text": " While EfficientNet searches a set of networks over a range of latency constraints by scaling up, Once-For-All (Cai et al., 2019) network takes an opposite approach, scaling down. They first train a super-graph architecture by a novel method called progressive shrinking and search a sub-graph network that achieves good accuracy for a given latency constraint without re-training but cheap fine-tuning. They claim that a scaled-down network from the super-graph gives better accuracy than a network that is trained from scratch. They could find more accurate networks than EfficientNet for small latency constraints. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_14", "text": " To explore more efficient neural architectures on specific hardware, some NAS methods have proposed to define the design space of architecture exploration, tailored for the hardware platform. Gupta et al. (Gupta and Akin, 2020) devised a building block named fused inverted bottleneck convolution block and showed that this block is often more efficient than MBConv on their target NPU, Edge-TPU. They adopted compound scaling method to find high-performing architectures on Edge-TPU. Our work is closely related to this method. We devise a building block that consists of parallel DWConv layers with different kernel sizes, based on a preliminary experiment to find that it is better than the other alternative building blocks in terms of performance per latency (Tan and Le, 2019b). And we increase the search space by allowing stages to have a different number of blocks in the baseline supernet. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_15", "text": " A neural network typically consists of multiple stages, a sequence of blocks with the same number of output channels (width). There are studies on how to assign the number of blocks (depth) to each stage. Meng et al. (Meng et al., 2020) observed that the way of assigning depth to each stage affects the accuracy. Moreover, they argued that the good depth assignment of each stage could be inherited from the shallow ones as the total depth is increased, and proposed a layer-growing NAS method that could significantly reduce the search space. Furthermore, Radosavovic et al. (Radosavovic et al., 2020) discovered that among neural architectures with similar computational complexity, the ones whose stage width and depth have a quantized linear relationship tend to have higher accuracy. Based on similar observations, we apply this design principle to change the structure of the conventional One-Shot NAS supernet. In addition, we argue that placing more blocks in a stage with a larger width is beneficial. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_16", "text": " While the original DWConv block uses a single kernel size for depthwise convolution, mixing multiple kernel sizes for depthwise convolution was recently proposed, named as MixConv (Tan and Le, 2019b). Mixing multiple kernel sizes can be understood as having parallel branches inside a block. It is shown that MixConv is more efficient than ordinary DWConv (Tan and Le, 2019b). 
There exist some recent NAS methods (Mei et al., 2019; Chu et al., 2020) that also broaden their search space using DWConv with multiple kernel sizes to find better neural architectures. We adopt this approach in the supernet and formulate a differentiable latency model of this operation, enabling a latency-aware differentiable One-Shot NAS with MixConv. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_17", "text": " In this section, we will briefly review the Single-Path NAS technique and our target NPU. Before going further, we define some terminologies used in this paper, as shown in Figure 3. A neural architecture consists of stages at the top level. A stage consists of a sequence of blocks whose output feature maps have the same dimension. In the proposed supernet, a block is defined as MBConv that typically starts with 1×1 conv (expansion layer) and ends with 1×1 conv. Adopting the MixConv approach, the depthwise convolution layer consists of parallel superkernels whose kernel size will be determined during the NAS process. The width of block denotes the number of channels in the final output feature map of the block, and the width of stage is the width of the final block in the stage. We will call the total number of blocks starting from the very first block in the network up to the last block in a specific stage S, as the cumulative depth up to stage S. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_18", "text": " Differentiable NAS methods usually define architecture parameters to choose which convolution layer to use in the block, training each convolution layer independently. Single-Path NAS (Stamoulis et al., 2019) reduce the search cost by decreasing the number of trainable parameters by sharing the kernel weights between convolution layers. The key idea is designing an over-parameterized depthwise convolution kernel named superkernel, and letting each depthwise convolution kernel of candidate MBConvs directly inherit the weights of this superkernel. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_19", "text": " Let 𝐰k,esubscript𝐰𝑘𝑒\\mathbf{w}_{k,e} denote the depthwise convolution kernel of candidate MBConv with kernel size k and expansion ratio e (MBConvk,e). First, they introduce a large 𝐰5,6subscript𝐰56\\mathbf{w}_{5,6}, which is the DWConv kernel of MBConv5,6. Then, the inner core of 𝐰5,6subscript𝐰56\\mathbf{w}_{5,6} can be considered as 𝐰3,6subscript𝐰36\\mathbf{w}_{3,6}, a DWConv kernel of MBConv3,6. A superkernel containing these two kernel size options can be expressed as Figure 4: (1) 𝐰∗,6=𝐰3,6+𝟙​(use​kernel​size​ 5)⋅𝐰5\\3,6subscript𝐰6subscript𝐰36⋅1usekernelsize5subscript𝐰\\536\\mathbf{w}_{*,6}=\\mathbf{w}_{3,6}+\\mathbbm{1}(\\rm{use\\leavevmode\\nobreak\\ kernel\\leavevmode\\nobreak\\ size\\leavevmode\\nobreak\\ 5})\\cdot\\mathbf{w}_{5\\backslash 3,6} where 𝐰5\\3,esubscript𝐰\\53𝑒\\mathbf{w}_{5\\backslash 3,e} means the outer part, 𝐰5,e−𝐰3,esubscript𝐰5𝑒subscript𝐰3𝑒\\mathbf{w}_{5,e}-\\mathbf{w}_{3,e}. Next, they formulate conditions to determine the kernel size. They define a certain threshold value t𝑡t and compare the norm of the kernel weights with the threshold. If the norm of a subset weight is larger than the threshold, it remains in the supernet. To this end, Eq. 
(1) is changed as follows: (2) \mathbf{w}_{*,6}(t_{k=5})=\mathbf{w}_{3,6}+\mathbbm{1}(\lVert\mathbf{w}_{5\backslash 3,6}\rVert^{2}>t_{k=5})\cdot\mathbf{w}_{5\backslash 3,6} ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_20", "text": " The threshold value is also trainable to be automatically chosen during training. To enable back-propagation, they relax \mathbbm{1}(x>t) to \sigma(x-t) when computing gradients. In addition, they optimize kernel weights and threshold values simultaneously. For a given tight search time, this method is shown to be more effective than the other methods (Stamoulis et al., 2020). ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_21", "text": " Moreover, we can vary the number of channels by varying the expansion ratio of each block: we can use only the first half channels of \mathbf{w}_{5,6} and \mathbf{w}_{3,6} as \mathbf{w}_{5,3} and \mathbf{w}_{3,3}, respectively. By defining another set of trainable thresholds, the following formula is defined to determine the expansion ratio: (3) \mathbf{w}_{*,*}(t_{e=3},t_{e=6},t_{k=5})=\mathbbm{1}(\lVert\mathbf{w}_{*,3}(t_{k=5})\rVert^{2}>t_{e=3})\cdot\mathbf{w}_{*,3}(t_{k=5})+\mathbbm{1}(\lVert\mathbf{w}_{*,3}(t_{k=5})\rVert^{2}>t_{e=3})\cdot\mathbbm{1}(\lVert\mathbf{w}_{*,6\backslash 3}(t_{k=5})\rVert^{2}>t_{e=6})\cdot\mathbf{w}_{*,6\backslash 3}(t_{k=5}) where \mathbf{w}_{k,6\backslash 3} means the remaining half of channels, \mathbf{w}_{k,6}-\mathbf{w}_{k,3}. Note that if t_{e=3} is sufficiently large, all channels can be removed to make the block a plain skip connection. Thus, they replace the original depthwise convolution kernel of MBConv5,6 with \mathbf{w}_{*,*}, yielding a differentiable and searchable MBConv with respect to the kernel size and expansion ratio. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_22", "text": " They also design a differentiable latency-aware loss function to consider hardware latency in the search algorithm.
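The following is a minimal sketch of the thresholded superkernel selection in Eqs. (1)-(2) above, where the outer 5x5 ring w_{5\3,6} is included only if its squared norm exceeds a trainable threshold. The hard/soft switch is a simplification of the paper's scheme (indicator for the architecture decision, sigmoid relaxation for gradients), and the tensor shapes and function name are assumptions.

```python
import torch
import torch.nn.functional as F

def effective_superkernel(w_3x3_core, w_5x5_ring, t_k5, hard=False):
    """Compose the DWConv weights of one searchable superkernel (sketch of Eqs. (1)-(2)).

    w_3x3_core : inner 3x3 weights, shape (C, 1, 3, 3)
    w_5x5_ring : outer ring w_{5\\3,6} = w_{5,6} - w_{3,6}, shape (C, 1, 5, 5) with zeroed core
    t_k5       : trainable threshold deciding whether kernel size 5 is used
    """
    norm_sq = (w_5x5_ring ** 2).sum()
    if hard:
        use_ring = (norm_sq > t_k5).float()        # indicator used to decide the final architecture
    else:
        use_ring = torch.sigmoid(norm_sq - t_k5)   # relaxed form so gradients can flow to t_k5
    core_padded = F.pad(w_3x3_core, (1, 1, 1, 1))  # embed the 3x3 core inside a 5x5 kernel
    return core_padded + use_ring * w_5x5_ring
```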
To this end, they define a function to estimate latency as follows: ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_23", "text": " (4) L^{l}_{e}=\mathbbm{1}(\lVert\mathbf{w}_{*,3}\rVert^{2}>t_{e=3})\cdot(P^{l}_{5,3}+\mathbbm{1}(\lVert\mathbf{w}_{*,6\backslash 3}\rVert^{2}>t_{e=6})\cdot(P^{l}_{5,6}-P^{l}_{5,3})) ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_24", "text": " (5) L^{l}=P^{l}_{3,6}/P^{l}_{5,6}\cdot L^{l}_{e}+\mathbbm{1}(\lVert\mathbf{w}_{5\backslash 3,6}\rVert^{2}>t_{k=5})\cdot L^{l}_{e}\cdot(1-P^{l}_{3,6}/P^{l}_{5,6}) where P^{l}_{k,e} is the profiled latency value of MBConvk,e for the l-th block in the supernet. Note that they used P^{l}_{3,6}, P^{l}_{5,3}, and P^{l}_{5,6} only to formulate L^{l}, and the latency for MBConv3,3 is approximated using these values. Here is the latency-aware loss function designed: ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_25", "text": " (6) CE+\lambda\cdot log(\sum_{l}L^{l}) Finally, they search for a neural architecture in two phases. First, they train the supernet by randomly choosing one of the candidate subgraphs in each training step. In this phase, they use CrossEntropy loss only. Next, they enable the latency-aware loss function and train the supernet with it to decide the threshold values. By doing this, they could get a high-quality neural architecture with only eight epochs of the ImageNet training set. (In our implementation, we changed the probability of selecting each candidate MBConv to be equal.) ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_26", "text": " Even though the proposed methodology can be applied to any type of NPU, the current implementation is made for an adder-tree type NPU, called MIDAP (Kang et al., 2019). It has a fully-pipelined micro-architecture that consists of separate hardware modules and memory modules for convolution, activation function, and various reduction operations. Since it enables us to make a fully static schedule of operations without resource contention in the data path, we can estimate the end-to-end latency of a CNN quite accurately in an analytical way. Unexpected delay may occur from off-chip DRAM accesses that are not fully hidden by double buffering. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_27", "text": " Another good feature of MIDAP is that it efficiently supports the following operations that would lower the MAC (multiply-accumulate) utilization in other NPUs that have many MAC units: pooling, DWConv, and squeeze-and-excitation (SE).
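A minimal sketch of the latency-aware objective in Eq. (6) above: cross-entropy plus the log of the summed per-block latency estimates. The function name and the use of plain 0-dimensional tensors for the per-block estimates are assumptions.

```python
import torch

def latency_aware_loss(ce_loss, block_latencies, lam):
    """Eq. (6)-style loss: CE + lambda * log(sum_l L^l).

    block_latencies: iterable of differentiable scalar tensors, one L^l per block.
    """
    total_latency = torch.stack(list(block_latencies)).sum()
    return ce_loss + lam * torch.log(total_latency)
```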
For DWConv operation, it does not use an adder tree but an alternative hardware logic that consists of a set of individual accumulators connected to the multiply units. For pooling and SE operations, reduction logic is included in the pipeline. Note that MIDAP has not been implemented as a real hardware chip yet but as a virtual prototype with a cycle-accurate simulator. Thanks to the cycle-accurate simulator that considers the DRAM access contention and parametrized DRAM access delay, we could build an accurate analytical model for end-to-end latency estimation, based on the profiling result with the simulator. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_28", "text": " Inverted bottleneck with depth-wise convolution (MBConv) (Sandler et al., 2018) is a popular building block in recent mobile-friendly networks. However, it is not efficiently supported in existing NPUs that do not have specialized hardware units for DWConv (Gholami et al., 2018; Gupta and Akin, 2020). Thus Gupta et al. (Gupta and Akin, 2020) replaced an MBConv block with a fused building block that fuses an expansion layer and DWConv in MBConv into a single full convolution. Even though the fused block increases the number of multiplications significantly, it improves the MAC utilization larger so that the fused block is observed faster than MBConv on their target NPU, EdgeTPU. By adding this building block to their search space, they could successfully obtain different neural architectures for EdgeTPU from those for GPUs. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_29", "text": " Since DWConv is efficiently supported in MIDAP, however, the improvement of MAC utilization by fusing does not outweigh the increased computation complexity, which is observed in preliminary experiments. The experiment setup is similar to main experiment setup that will be explained in section 5.2. The experimental result is shown in Table 1. The latency constraint for fused block experiment is set to 7.0ms, while others are set to 2.15ms. In the combined experiment, we use the fused block in the 1st and the 2nd stages, and MBConv for the remaining stages since the latency gap between two building blocks is too high. As shown in the table, MBConv block shows the best tradeoff between accuracy and latency. Hence we prefer MBConv to the fused building block as the basic building block in the supernet for MIDAP. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_30", "text": " In this section, we explain the proposed S3NAS methodology that consists of three steps as displayed in Figure 2. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_31", "text": " The number of blocks is one of the key parameters in neural networks. It is observed that the total number of blocks affects the accuracy of neural architecture (He et al., 2016; Tan and Le, 2019a). In conventional One-Shot NAS methods, each stage in the supernet has the same number of blocks (Cai et al., 2018; Stamoulis et al., 2019; Wu et al., 2019). On the other hand, some recent studies (Meng et al., 2020; Radosavovic et al., 2020) report that the way of assigning the number of blocks in each stage has a noticeable impact on the accuracy, even with the same number of blocks in total. Hence we allow stages in the supernet to have a different number of blocks. 
", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_32", "text": " We investigate the impact of assigning the number of blocks in the supernet with another preliminary experiment. We construct a network based on MobileNetV2, which has four blocks in every stage, and observe the change of accuracy as we reduce two blocks in a different stage in each experiment. Figure 5 shows that MBConvs with larger width has more impact on accuracy. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_33", "text": " As the number of multiplications in a DWConv is W×H×C×K2𝑊𝐻𝐶superscript𝐾2W\\times H\\times C\\times K^{2}, the later stage of DWConv tends to have shorter latency since the reduction of H×W𝐻𝑊H\\times W is larger than the increase of C𝐶C. Thus the impact on the latency by increasing the number of blocks in a later stage is not significant as displayed in Figure 5. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_34", "text": " Thus, we place more blocks to stages with larger width in the supernet, making the cumulative depth up to a specific stage is proportional to the width of the stage, which is similar to PyramidNet (Han et al., 2017). A recent study (Radosavovic et al., 2020) also claims that neural architectures with a linear relationship between the cumulative depth and the width tend to have higher accuracy with a similar amount of computation complexity. Our experiment shows that our modification to supernet enhances the efficiency of the search result in terms of accuracy as well as latency (Table 4). ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_35", "text": " Another feature of the proposed supernet is to use mixed convolution (MixConv) that mixes different kernel sizes in the depth-wise convolution layer (Tan and Le, 2019b). Some recent NAS methods (Mei et al., 2019; Chu et al., 2020) also broaden their search space using DWConv with various kernel sizes and could find better neural architectures. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_36", "text": " Figure 6 depicts our building block structure. This block starts and ends with 1×1 convolution, with N𝑁N searchable superkernels in the middle. Each searchable superkernel is designed similarly to Eq. (3), while we may use different threshold values in each superkernel. The kernel sizes and expansion ratios are selected among predetermined values. If the j𝑗j-th searchable superkernel chooses an expansion ratio ejsubscript𝑒𝑗e_{j}, the j𝑗j-th kernel has ejsubscript𝑒𝑗e_{j} times more channels than the first 1×1 convolution. Compared with the original MixConv suggested in (Tan and Le, 2019b), the proposed building block supports more diverse combinations of kernel sizes and expansion ratios. It enhances the efficiency of search results on our target NPU (Table 5). ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_37", "text": " We finish this subsection by highlighting the merit of Single-Path NAS on building a MixConv-based differentiable NAS. Conventional multi-path NAS methods would have difficulties when adding inverted bottleneck convolution with MixConv to their search space. 
Since the number of possible choices of such blocks grows proportionally to the partition number, multi-path NAS methods would introduce a significant increase in memory requirements and the search time. On the contrary, MixConv can be efficiently supported in Single-Path NAS, as explained below. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_38", "text": " We use a latency estimation model and a loss formula different from those of the original Single-Path NAS technique explained in Section 3.1. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_39", "text": " Suppose we concatenate N searchable superkernels to build a MixConv-based building block, and let \vec{k}=(k_{1},\cdots,k_{N}), \vec{e}=(e_{1},\cdots,e_{N}), where k_{j}, e_{j} denote the kernel size and the expansion ratio of the j-th searchable superkernel. The estimated latency of a DWConv operation depends on the kernel size and the expansion ratio. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_40", "text": " For latency formulation, we first define two condition variables, F_{j,k_{j}} and G_{j,e_{j}}, that denote whether the j-th searchable superkernel chooses the kernel size k_{j} and the expansion ratio e_{j}, respectively; for example, F_{j,k_{j}} is 1 if and only if the j-th searchable superkernel chooses k_{j}, and 0 otherwise. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_41", "text": " Let \kappa_{1}<\cdots<\kappa_{K} be the candidate kernel sizes, and 0=\epsilon_{1}<\cdots<\epsilon_{E} denote the candidate expansion ratios of the j-th searchable superkernel, respectively.
Suppose k_{j}=\kappa_{c}; then F_{j,k_{j}} can be formulated as follows: ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_42", "text": " (7) F_{j,k_{j}}=\left(\prod_{2\leq i\leq c}\mathbbm{1}(\lVert\mathbf{w}_{j,\kappa_{i}\backslash\kappa_{i-1},\epsilon_{E}}\rVert^{2}>t_{j,\kappa_{i}})\right)\cdot f_{j,k_{j}}, where f_{j,k_{j}}=\mathbbm{1}(\lVert\mathbf{w}_{j,\kappa_{c+1}\backslash\kappa_{c},\epsilon_{E}}\rVert^{2}<t_{j,\kappa_{c+1}}) if c<K, and f_{j,k_{j}}=1 if c=K. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_43", "text": " Figure 7 depicts an example of this formula when the j-th searchable superkernel that has four candidate kernel sizes \kappa_{1}<\cdots<\kappa_{4} chooses \kappa_{2} as the kernel size: k_{j}=\kappa_{2}. It means that the weights \mathbf{w}_{j,\kappa_{1},\epsilon_{E}} and \mathbf{w}_{j,\kappa_{2}\backslash\kappa_{1},\epsilon_{E}} are used, but the remaining weights starting from \mathbf{w}_{j,\kappa_{3}\backslash\kappa_{2},\epsilon_{E}} are not used. Since \mathbf{w}_{j,\kappa_{1},\epsilon_{E}} is always used, it is not included in the formula. To use \mathbf{w}_{j,\kappa_{2}\backslash\kappa_{1},\epsilon_{E}}, its norm has to be larger than t_{j,\kappa_{2}}, while the norm of \mathbf{w}_{j,\kappa_{3}\backslash\kappa_{2},\epsilon_{E}} should not be larger than t_{j,\kappa_{3}} to avoid the use of larger kernel sizes.
", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_44", "text": " We can formulate Gj,ejsubscript𝐺𝑗subscript𝑒𝑗G_{j,e_{j}} similarly: Gj,ejsubscript𝐺𝑗subscript𝑒𝑗\\displaystyle G_{j,e_{j}} =(∏2≤i≤d𝟙​(∥𝐰j,∗,ϵi\\ϵi−1∥2>tj,ϵi))⋅gj,ej​, whereabsent⋅subscriptproduct2𝑖𝑑1superscriptdelimited-∥∥subscript𝐰𝑗\\subscriptitalic-ϵ𝑖subscriptitalic-ϵ𝑖12subscript𝑡𝑗subscriptitalic-ϵ𝑖subscript𝑔𝑗subscript𝑒𝑗, where\\displaystyle=\\left(\\prod_{2\\leq i\\leq d}\\mathbbm{1}(\\lVert\\mathbf{w}_{j,*,\\epsilon_{i}\\backslash\\epsilon_{i-1}}\\rVert^{2}>t_{j,\\epsilon_{i}})\\right)\\cdot g_{j,e_{j}}\\text{, where} gj,ejsubscript𝑔𝑗subscript𝑒𝑗\\displaystyle g_{j,e_{j}} ={𝟙​(∥𝐰j,∗,ϵd+1\\ϵd∥2<tj,ϵd+1),if ​d<E1,if ​d=Eabsentcases1superscriptdelimited-∥∥subscript𝐰𝑗\\subscriptitalic-ϵ𝑑1subscriptitalic-ϵ𝑑2subscript𝑡𝑗subscriptitalic-ϵ𝑑1if 𝑑𝐸1if 𝑑𝐸\\displaystyle=\\begin{cases}\\mathbbm{1}(\\lVert\\mathbf{w}_{j,*,\\epsilon_{d+1}\\backslash\\epsilon_{d}}\\rVert^{2}<t_{j,\\epsilon_{d+1}}),&\\text{if }d<E\\\\ 1,&\\text{if }d=E\\end{cases} when ej=ϵdsubscript𝑒𝑗subscriptitalic-ϵ𝑑e_{j}=\\epsilon_{d}. Then the condition for a MixConv-based building block to choose k→,e→→𝑘→𝑒\\vec{k},\\vec{e} can be expressed as ∏jNFj,kj​Gj,ejsuperscriptsubscriptproduct𝑗𝑁subscript𝐹𝑗subscript𝑘𝑗subscript𝐺𝑗subscript𝑒𝑗\\prod_{j}^{N}F_{j,k_{j}}G_{j,e_{j}}. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_45", "text": " Now, the estimated latency of a single block is formulated as follows: (8) L=∑k→,e→(P​(k→,e→)​∏jNFj,kj​Gj,ej)𝐿subscript→𝑘→𝑒𝑃→𝑘→𝑒superscriptsubscriptproduct𝑗𝑁subscript𝐹𝑗subscript𝑘𝑗subscript𝐺𝑗subscript𝑒𝑗L=\\sum_{\\vec{k},\\vec{e}}(P(\\vec{k},\\vec{e})\\prod_{j}^{N}F_{j,k_{j}}G_{j,e_{j}}) where P​(k→,e→)𝑃→𝑘→𝑒P(\\vec{k},\\vec{e}) denotes the profiled latency value of a MixConv-based building block corresponding to k→,e→→𝑘→𝑒\\vec{k},\\vec{e}. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_46", "text": " Unlike the original Single-Path NAS that approximates the latency in Eq. (5) in some cases, we use the profiled latency value in all cases. Note that an expansion ratio can be zero, and if only one superkernel has a nonzero expansion ratio, the MixConv block is reduced to a plain MBConv block. Finally, we can estimate the latency by summing up these estimated latencies for all superkernels in the block, ∑L𝐿\\sum L. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_47", "text": " Since each superkernel is treated independently, some superkernels may have the same kernel size and expansion ratio. Then, even if two superkernel configurations express an equivalent block, as illustrated in Figure 8, they may have different estimated latency values, which is an artifact of the proposed profiling-based latency estimation method. To avoid this artifact, we enforce that there is only one kernel for each kernel size in the MixConv block. That is, we merge two kernels of the same size into one; For instance, the left MixConv is translated to the right MixConv in Figure 8 before latency estimation. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_48", "text": " Figure 9 shows the estimated latency and simulated latency of randomly generated 100 models on our search space. It validates the accuracy of the proposed latency model, whose mean absolute percentage error(MAPE) is about 0.16%. 
", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_49", "text": " The existing hardware-aware differentiable NAS methods mostly define some hyperparameters to balance between accuracy and latency, including SinglePath NAS, whose loss function is defined as Eq. (6). Since there is no information on the target latency in the loss function, in case there is a strict latency constraint, they have to pay additional search costs for the hyperparameters to let the final architecture have no larger latency than the constraint. In addition, this process needs to be repeated whenever the target latency is changed. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_50", "text": " We propose to modify the loss function to activate the latency-aware loss term only when the estimated latency is larger than the latency constraint as follows: (9) C​E+λ1⋅l​o​g​(1+λ2⋅R​e​L​U​((∑L)−T))𝐶𝐸⋅subscript𝜆1𝑙𝑜𝑔1⋅subscript𝜆2𝑅𝑒𝐿𝑈𝐿𝑇CE+\\lambda_{1}\\cdot log(1+\\lambda_{2}\\cdot ReLU((\\sum L)-T)) Although this is not a panacea, this modification significantly eases the search process, which will be discussed in section 5.2 with various experiments. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_51", "text": " In the second step, we intentionally use shorter latency to reduce the search space for the baseline network. After finding the baseline network with a shorter latency, we apply compound scaling to find an architecture with the final latency constraint. In this step, we conduct post-processing to add SE block and h-swish activation function if beneficial. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_52", "text": " It is well known that increasing depth (He et al., 2016), width (Zagoruyko and Komodakis, 2016), or input image size improves accuracy while it increases latency. However, if only one of these three factors is increased, the accuracy improvement is quickly saturated. Observing this fact, Tan et al. (Tan and Le, 2019a) proposed a compound scaling method that increases all three factors together. A scaling coefficient is defined for each factor. By judiciously assigning the scaling coefficients in a balanced fashion, they could improve the accuracy much larger than scaling a single factor only. Adopting this approach, we apply the compound scaling to the baseline architecture obtained in the previous step. Based on the ratio between the true latency constraint and the assumed latency constraint in the second step, we find the scaling coefficients considering the estimated latency increment. To keep the linear relationship between the width and cumulative depth, we use the same scaling coefficient for width and depth, differently from (Tan and Le, 2019a). Note that how to realize scaling depends on the baseline architecture. While the baseline architecture assumed in (Tan and Le, 2019a) has a series of identical blocks in each stage, a stage consists of heterogeneous blocks in our baseline architecture. Thus depth scaling is not realized by merely adding new blocks in each stage. We need to choose what types of blocks to add in each stage. We increase the number of blocks with more parameters first. To compute how many blocks to add in a stage, we multiply the depth of the stage by depth coefficient and round the multiplication result. Width scaling is applied to all blocks equally. 
Finally, we consider latency when we scale. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_53", "text": " In addition to compound scaling, we add two components in the post-processing step: h-swish activation function and squeeze-and-excitation (SE) block. A recent study (Park and Yoo, 2020) reports that SE and the h-swish activation function are no hurdles for 8-bit quantization. They could quantize a network with SE and h-swish without noticeable accuracy loss. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_54", "text": " Extensive studies have been conducted to find a better activation function than ReLU, and the swish activation function (Ramachandran et al., 2017) was found. Several neural networks (Tan and Le, 2019b; Mei et al., 2019; Tan and Le, 2019a) use swish activation function instead of ReLU to improve accuracy. Howard et al. (Howard et al., 2019) proposed a quantization-friendly version of the swish activation function called h-swish that has a similar impact on accuracy. So, we replace ReLU with h-swish (Howard et al., 2019) activation function. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_55", "text": " Squeeze-and-Excitation(SE) is a lightweight operation which is shown to be beneficial to accuracy (Hu et al., 2018). Figure 10 depicts the structure of a SE block. For a given input feature map, it first computes the importance of the feature channels a representative value for global spatial information of each feature channel by global average pooling. After such squeeze operation generates channel-wise statistics, excitation operation captures channel-wise dependencies by two cascaded fully-connected layers to produce activation values, which represents the importance of each feature channel. Finally, channel-wise multiplication is performed between the activation values induced by the excitation operation and the input feature map for each channel. SE block is used in many recent architectures (Tan and Le, 2019a; Howard et al., 2019; Radosavovic et al., 2020). By adding SE blocks to the baseline network, we also observe the accuracy improvement. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_56", "text": " Figure 11 depicts an example distribution of activation values produced by two different SE blocks for three different images. The authors of the original paper (Hu et al., 2018) conjectured that if such distribution from a SE block does not differ widely between image classes, the SE block is not important. Thus, after training, they obtained averaged activation values of a SE block over multiple images in the same class. They compared the distributions of the averaged values over different image classes. They observed that removing the SE blocks that have similar distributions over different image classes incurs only a marginal loss in accuracy. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_57", "text": " Inspired by this observation, we propose to remove SE blocks selectively to minimize the additional computation cost caused by SE blocks. We obtain activation values from a SE block for each input image and measure how the distribution of activation values varies over different input images. 
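For reference, a minimal squeeze-and-excitation block matching the structure described above (global average pooling, two fully connected layers, channel-wise re-scaling); the reduction ratio of 4 and the class name are illustrative assumptions rather than the exact configuration used in the searched networks.

```python
import torch
import torch.nn as nn

class SqueezeExcite(nn.Module):
    """Minimal SE block: squeeze (global average pool) then excite (two FC layers)."""

    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)
        self.fc2 = nn.Linear(channels // reduction, channels)

    def forward(self, x):                                      # x: (N, C, H, W)
        s = x.mean(dim=(2, 3))                                 # squeeze: per-channel global statistics
        a = torch.sigmoid(self.fc2(torch.relu(self.fc1(s))))   # excitation: channel importance in (0, 1)
        return x * a.unsqueeze(-1).unsqueeze(-1)               # channel-wise re-weighting
```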
For each channel c, we calculate the standard deviation σcsubscript𝜎𝑐\\sigma_{c} of activation values over different images. If σcsubscript𝜎𝑐\\sigma_{c} is small in most channels, the activation values from the SE block does not differ much over images. Conceptually, it implies that the SE block does not help to discriminate further which channel is more influential. From the engineering perspective, it means that channel-wise multiplication of a SE block is similar to constant multiplication, which can be handled by the following convolutional layer. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_58", "text": " We define a metric as the average of standard deviation values σcsubscript𝜎𝑐\\sigma_{c} over all channels that represent the diverseness of the activation distribution over different images. If the metric value is small, we remove the SE block. For example, in Figure 11, our metric of the SE block on the left side has a value of 0.021, while the right side has a value of 0.118, more than 5x larger than the left side; The left side is a better candidate for SE block removal. When we remove SE blocks according to this metric, the accuracy is found to be similar, while the latency got shorter (Table 6). ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_59", "text": " We evaluate the proposed NAS technique for image classification with the ImageNet dataset. The current implementation is made for MIDAP (Kang et al., 2019) that can perform DWConv and SE operations efficiently so that MBConv is preferred to full 3-D convolution as the basic building block, as explained above. Latencies on the target NPU are obtained with the cycle-accurate simulator222https://github.com/cap-lab/MidapSim. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_60", "text": " A superkernel has two parameters to search: expansion ratio and kernel size. To limit the search space, we choose the expansion ratio among 0, 2, 4, and 6, and the kernel size between 3 and 5 when MBConv or full convolution is used as the building block. In the case of the MixConv-based building block, we use N𝑁N=3 superkenels whose expansion ratio is 0 or 2; The sum of the expansion ratio of three superkernels has the same range as the expansion ratio of a single MBConv block. To allow three superkernels to have different kernel sizes, we let one of three superkernels be able to have 7 as the kernel size. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_61", "text": " In the first phase of the neural architecture search, we train the supernet by randomly choosing one of the candidate subgraphs in each training step. We train the supernet for 8 epochs, with λ1=0subscript𝜆10\\lambda_{1}=0 in the loss function of Eq. 9, focusing only on the accuracy. We decrease the learning rate by 0.97 every 2.4 epochs, starting from 0.064. The other setting for network training is displayed in Table 4. Gradient clipping with a value of 10 is used in this phase. In the second phase, we set λ1=15,λ2=100formulae-sequencesubscript𝜆115subscript𝜆2100\\lambda_{1}=15,\\lambda_{2}=100 to consider latency in the loss function, and optimize the weights and threshold values of supernet for 2 epochs. After this second phase finishes, the final architecture topology is decided. 
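A sketch of the SE-removal metric described above: for one SE block, collect its activation values over a set of images, take the standard deviation of each channel across images, and average over channels; blocks with a small value (e.g. the 0.021 case versus the 0.118 case in Figure 11) are candidates for removal. The array layout and function name are assumptions.

```python
import numpy as np

def se_diverseness(activations):
    """activations: array of shape (num_images, num_channels) with one SE block's activation values."""
    per_channel_std = np.std(activations, axis=0)   # how much each channel's activation varies across images
    return float(np.mean(per_channel_std))          # small value -> SE block adds little and may be removed
```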
", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_62", "text": " Next, we train the final architecture again to determine the filter weights for 350 epochs with the ImageNet again, using the same setting described in Table 4. Unlike the search phase, the learning rate is increased from 0 to 0.064 in the first 5 epochs, then decayed by 0.97 every 2.4 epochs. Since we observed that the batch size is critical to accuracy when using the EfficientNet training code, we use a large batch size. Both network architecture search and final training are conducted on Google Cloud TPUs. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_63", "text": " In the proposed NAS technique, two major extensions are made to the supernet, compared with the original SinglePath NAS technique. Table 3 shows the proposed supernet architecture with configuration parameters, block types and depths. It starts with a 7x7 convolution layer, followed by 5 stages that have a different number of blocks for feature extraction and 2 fully-connected networks for classification. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_64", "text": " The first extension is to allow stages to have a different number of blocks. To verify the goodness of this extension, we design two kinds of MBConv-based supernet with 20 blocks in total: a supernet with constant depth(baseline), a supernet with linear depth where the cumulative depth up to a specific stage is proportional to the width of the stage. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_65", "text": " As shown in Table 4, a supernet with linear depth outperforms a supernet with constant depth in terms of accuracy with similar latency. It confirms that this simple change of block assignment in supernet gives notable accuracy boost with the same latency constraint, without any additional optimization techniques. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_66", "text": " The second extension is to use multiple parallel superkernels in an MBConv block. To verify the benefit of it, we compare two different supernets with the same number of blocks in each stage. The accuracy and latency performance of the baseline supernet is the same as the previous experimental result shown in Table 4. Table 5 shows that the extended supernet with MixConv-based building blocks gives a better accuracy-latency tradeoff. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_67", "text": " We apply the proposed NAS method with the supernet architecture described above. The depth of 5 stages is set to 3,4,7,4,113474113,4,7,4,11, respectively. The latency constraint is set to 2.5 ms that corresponds to the latency of EfficientNet-B1 on our target NPU, MIDAP. Table 6 compares our search results with the state-of-the-art models: EdgeTPU (Gupta and Akin, 2020), EfficientNet (Tan and Le, 2019a), Once-For-All (Cai et al., 2019). The latency of the other models is obtained by running the network on the MIDAP cycle-accurate simulator. We compare the accuracy without quantization, assuming that quantization effects will be similar to all models. 
", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_68", "text": " As shown in Table 6, the baseline model, ours-M, found by the proposed NAS technique has higher accuracy than the other models on our target NPU; ours-M achieves more than 1.7% higher top-1 accuracy than EfficientNet-lite2 with similar latency. Moreover, it is 0.5% higher than EfficientNet-B1, even without using SE and h-swish activation function. Note that the number of parameters and the number of FLOPS in ours-M is larger than EfficientNet-B1. It implies that the complexity of the network is not a direct indicator of the end-to-end latency of the network. The end-to-end latency depends on the NPU architecture, and the proposed NAS technique could find a larger network with shorter latency by adding the latency factor to the loss function directly. The main benefit comes from different block assignment to stages. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_69", "text": " We improve the baseline network by adding the h-swish activation function and squeeze-and-excitation(SE) block to get the ours-M+ model. Figure 12 shows the topology of ours-M+ architecture in which the height of each block is proportional to the expansion ratio of the block. Compared with the baseline network, ours-M, we achieve around 1% accuracy boost with ours-M+, paying the cost of 16% latency increase. This model outperforms the other models, 0.5% higher accuracy and 14% faster than EfficientNet-B2. Since EfficientNet-B2 is too large to run with the default configuration on MIDAP, we increase the memory size for filter weights. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_70", "text": " Next, we applied compound scaling (Tan and Le, 2019a) to ours-M+ to obtain ours-L+ and ours-XL+. When we determine scaling coefficients, we keep the linear relationship between the cumulative depth and width of each stage, and scale the input image size more aggressively than (Tan and Le, 2019a). We make the number of filters to be multiples of 16 to maximize the MAC unit utilization on MIDAP. When we train our scaled model, we set the dropout ratio to 0.4, similar to EfficientNet-B4 training. The accuracy of ours-L+ is higher than EfficientNet-B3 and EfficientNet-lite4, while the accuracy of ours-XL+ is similar to EfficientNet-B4. Note that the difference between the searched network and the EfficientNet decreases as the network size increases. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_71", "text": " Finally, we selectively removed SE blocks from ours-XL+, resulting in ours-XL-rmSE+. We collected the activation values using randomly sampled 10K images from the training dataset and calculated the metric explained in Sec. 4.3.3. After removing SE blocks from ours-XL+ based on the metric, only about 60% of the blocks in the network have SE blocks. As a result, we could make the latency shorter, while the accuracy was slightly improved than ours-XL+. This model achieves 82.72% top-1 accuracy with only 11.66ms latency. It is much better than EfficientNet-EdgeTPU-L (Gupta and Akin, 2020) that achieves 80.62% FP32 top-1 accuracy with more than 20ms on EdgeTPU. Our architecture on MIDAP is about 2 times faster with 2.1% higher accuracy. 
", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_72", "text": " Finally, we compare the search time. Since the TPU is faster than GPU, we report the wall clock time and the estimated GPU time (in parenthesis) that is 10 times longer than the wall clock time in the last column of Table 6 Our method takes 3 hours, which is much faster than the other methods. Note that we compare the total time to get one architecture from scratch without trained weights. Once-For-All (Cai et al., 2019) would require only short fine-tuning time after a neural architecture is searched. In contrast, we need to train the network after a network architecture is found. It took 40 hours on TPUv3 to train ours-M+. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_73", "text": " While most NAS techniques are not compared with a random search method, the authors (Li and Talwalkar, 2019) reported that a random search method is highly competitive. So we conducted an experiment to compare the proposed NAS technique with two random search methods, exploring the same search space defined by the supernet structure of ours-M. First, we designed a simple random search method that has the similar time complexity of the proposed technique. In this method, we randomly generate 15 models having a similar latency with ours-M, from the same search space. Then we train each of them for 1 epoch with cosine learning rate decay. After evaluating each of them, we choose the architecture with the topmost top-1 accuracy and fully train it. In the second method, called random selection, we randomly generate 20 models having a similar latency with ours-M and train them fully and take the architecture with the highest top-1 accuracy. Since the random selection method performs search and training simultaneously, it is slower than the proposed technique by the number of randomly generated models. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_74", "text": " Comparison results are reported in Table 6. It is confirmed that both random selection and random search are quite competitive, but noticeably inferior to ours-M in terms of accuracy. In detail, the worst case of random selection showed 0.8% lower accuracy than ours-M. The best performance obtained from 20 randomly generated models is 79.19%, still lower than the accuracy of ours-M. Note that random search and random selection show similar performance that is no smaller than the other networks. It means that the search space defined by the supernet architecture has a more significant effect on the accuracy than the search method. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_75", "text": " There are two methods to find an architecture with a loose latency constraint. One is to use compound scaling that scales a small network with shorter latency, and the other is to search a network directly. To compare these two methods, we first scaled ours-M using the same scaling coefficients that we used to scale ours-M+ to ours-L+ and trained it. When conducting a direct search, we scaled the depth and width of the supernet and the input image size first and applied the proposed NAS technique for the scaled supernet. We used batch size 512 instead of 1024 during the architecture search due to the memory limitation of TPU. 
The comparison result is shown in Table 7 in terms of top-1 accuracy(%) and the latency on the target NPU(ms). Two results were similar while direct search needed 10 hours on TPUv3; It means that compound scaling is an effective method to find a large network fast. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_76", "text": " To examine how SE and h-swish impact accuracy individually, we compare four combinations as displayed in Table 8. The baseline is ours-M that does not use SE and h-swish activation function. Replacing ReLU with h-swish gives a marginal improvement on accuracy while adding SE blocks improves the accuracy noticeably. Adding both SE and h-swish activation function improves the accuracy by around 1%. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_77", "text": " In this work, we propose a fast NPU-aware NAS methodology extending the Single-Path NAS technique (Stamoulis et al., 2019). We modify the supernet architecture by varying the number of blocks in stages and adding mixed depthwise convolution (Tan and Le, 2019b) to the search space. By modifying the loss function to directly include the target latency estimated by a cycle-accurate simulator of the target NPU, we could find a better baseline architecture with a shorter latency than the latency constraint. Using a tight latency constraint, we can reduce the search space to find the baseline network fast. Afterward, we apply compound scaling to find a larger network than the baseline network, and add SE blocks and h-swish activation functions in the post-processing step. Through the proposed NAS methodology, we could obtain a network with 82.72% accuracy with 11.66ms latency on our target NPU, without special data augmentation in training. It dominates the existing network models on the target NPU. It confirms the importance of supernet architecture design for a given NPU and effectiveness of the three-step approach in the proposed NAS methodology: supernet design, SinglePath NAS with a tighter latency constraint, and compound scaling and post-processing. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" } ]
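The S3NAS passages above describe a concrete SE-block pruning criterion: the standard deviation σ_c of each channel's SE activation values across input images, averaged over all channels; blocks whose metric is small are removed. Below is a minimal NumPy sketch of that statistic. The function name, array shapes, and the toy data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def se_diverseness(activations: np.ndarray) -> float:
    """Metric described in the passages: std of each channel's SE activation
    over images, averaged over channels. `activations` has shape
    (num_images, num_channels), one row of SE outputs per input image."""
    per_channel_std = activations.std(axis=0)   # sigma_c for each channel c
    return float(per_channel_std.mean())        # average over channels

# Toy illustration: a nearly constant SE block vs. an image-dependent one.
rng = np.random.default_rng(0)
flat = 0.5 + 0.01 * rng.standard_normal((10_000, 64))   # barely varies over images
varied = rng.uniform(0.0, 1.0, size=(10_000, 64))       # varies widely over images
print(se_diverseness(flat), se_diverseness(varied))     # small value -> removal candidate
```

In the paper's Figure 11 example, metric values of 0.021 versus 0.118 separate a removable block from one worth keeping, so any threshold between those two values would reproduce that decision.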
What is the issue with EMNIST dataset?
To be able to use it seamlessly one needs to not only extend the deep learning framework’s MNIST helpers but also change the underlying deep neural network to classify these extra classes [3].
[ 3 ]
[ { "id": "1708.07747_all_0", "text": " The MNIST dataset comprising of 10-class handwritten digits, was first introduced by LeCun et al. (1998) in 1998. At that time one could not have foreseen the stellar rise of deep learning techniques and their performance. Despite the fact that today deep learning can do so much the simple MNIST dataset has become the most widely used testbed in deep learning, surpassing CIFAR-10 (Krizhevsky and Hinton, 2009) and ImageNet (Deng et al., 2009) in its popularity via Google trends111https://trends.google.com/trends/explore?date=all&q=mnist,CIFAR,ImageNet. Despite its simplicity its usage does not seem to be decreasing despite calls for it in the deep learning community. ", "title": "Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms" }, { "id": "1708.07747_all_1", "text": " The reason MNIST is so popular has to do with its size, allowing deep learning researchers to quickly check and prototype their algorithms. This is also complemented by the fact that all machine learning libraries (e.g. scikit-learn) and deep learning frameworks (e.g. Tensorflow, Pytorch) provide helper functions and convenient examples that use MNIST out of the box. ", "title": "Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms" }, { "id": "1708.07747_all_2", "text": " Our aim with this work is to create a good benchmark dataset which has all the accessibility of MNIST, namely its small size, straightforward encoding and permissive license. We took the approach of sticking to the 101010 classes 70,0007000070,000 grayscale images in the size of 28×28282828\\times 28 as in the original MNIST. In fact, the only change one needs to use this dataset is to change the URL from where the MNIST dataset is fetched. Moreover, Fashion-MNIST poses a more challenging classification task than the simple MNIST digits data, whereas the latter has been trained to accuracies above 99.7% as reported in Wan et al. (2013); Ciregan et al. (2012). ", "title": "Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms" }, { "id": "1708.07747_all_3", "text": " We also looked at the EMNIST dataset provided by Cohen et al. (2017), an extended version of MNIST that extends the number of classes by introducing uppercase and lowercase characters. However, to be able to use it seamlessly one needs to not only extend the deep learning framework’s MNIST helpers, but also change the underlying deep neural network to classify these extra classes. ", "title": "Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms" }, { "id": "1708.07747_all_4", "text": " Fashion-MNIST is based on the assortment on Zalando’s website222Zalando is the Europe’s largest online fashion platform. http://www.zalando.com. Every fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit. The original picture has a light-gray background (hexadecimal color: #fdfdfd) and stored in 762×10007621000762\\times 1000 JPEG format. For efficiently serving different frontend components, the original picture is resampled with multiple resolutions, e.g. large, medium, small, thumbnail and tiny. 
", "title": "Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms" }, { "id": "1708.07747_all_5", "text": " We use the front look thumbnail images of 70,0007000070,000 unique products to build Fashion-MNIST. Those products come from different gender groups: men, women, kids and neutral. In particular, white-color products are not included in the dataset as they have low contrast to the background. The thumbnails (51×73517351\\times 73) are then fed into the following conversion pipeline, which is visualized in Figure 1. ", "title": "Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms" }, { "id": "1708.07747_all_6", "text": " 1. Converting the input to a PNG image. 2. Trimming any edges that are close to the color of the corner pixels. The “closeness” is defined by the distance within 5%percent55\\% of the maximum possible intensity in RGB space. 3. Resizing the longest edge of the image to 282828 by subsampling the pixels, i.e. some rows and columns are skipped over. 4. Sharpening pixels using a Gaussian operator of the radius and standard deviation of 1.01.01.0, with increasing effect near outlines. 5. Extending the shortest edge to 282828 and put the image to the center of the canvas. 6. Negating the intensities of the image. 7. Converting the image to 8-bit grayscale pixels. ", "title": "Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms" }, { "id": "1708.07747_all_7", "text": " For the class labels, we use the silhouette code of the product. The silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando. Each product contains only one silhouette code. Table 2 gives a summary of all class labels in Fashion-MNIST with examples for each class. ", "title": "Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms" }, { "id": "1708.07747_all_8", "text": " Finally, the dataset is divided into a training and a test set. The training set receives a randomly-selected 6,00060006,000 examples from each class. Images and labels are stored in the same file format as the MNIST data set, which is designed for storing vectors and multidimensional matrices. The result files are listed in Table 1. We sort examples by their labels while storing, resulting in smaller label files after compression comparing to the MNIST. It is also easier to retrieve examples with a certain class label. The data shuffling job is therefore left to the algorithm developer. ", "title": "Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms" }, { "id": "1708.07747_all_9", "text": " We provide some classification results in LABEL:tbl:benchmark to form a benchmark on this data set. All algorithms are repeated 555 times by shuffling the training data and the average accuracy on the test set is reported. The benchmark on the MNIST dataset is also included for a side-by-side comparison. A more comprehensive table with explanations on the algorithms can be found on https://github.com/zalandoresearch/fashion-mnist. ", "title": "Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms" }, { "id": "1708.07747_all_10", "text": " This paper introduced Fashion-MNIST, a fashion product images dataset intended to be a drop-in replacement of MNIST and whilst providing a more challenging alternative for benchmarking machine learning algorithm. 
The images in Fashion-MNIST are converted to a format that matches that of the MNIST dataset, making it immediately compatible with any machine learning package capable of working with the original MNIST dataset. ", "title": "Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms" } ]
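Since the Fashion-MNIST passages above stress that the files reuse MNIST's IDX format (so only the download URL changes), a small reader is enough to load them. A minimal sketch follows, assuming the standard gzipped IDX archives have been downloaded locally under the usual MNIST-style file names; those paths are illustrative, not prescribed by the paper.

```python
import gzip
import struct
import numpy as np

def load_idx_images(path):
    # IDX image files begin with a 16-byte big-endian header:
    # magic number (2051), image count, rows, cols.
    with gzip.open(path, "rb") as f:
        magic, n, rows, cols = struct.unpack(">IIII", f.read(16))
        assert magic == 2051, "not an IDX image file"
        return np.frombuffer(f.read(), dtype=np.uint8).reshape(n, rows, cols)

def load_idx_labels(path):
    # IDX label files begin with an 8-byte header: magic number (2049), label count.
    with gzip.open(path, "rb") as f:
        magic, n = struct.unpack(">II", f.read(8))
        assert magic == 2049, "not an IDX label file"
        return np.frombuffer(f.read(), dtype=np.uint8)

# Hypothetical local copies of the Fashion-MNIST archives:
# images = load_idx_images("train-images-idx3-ubyte.gz")   # shape (60000, 28, 28)
# labels = load_idx_labels("train-labels-idx1-ubyte.gz")   # shape (60000,)
```

The same loader works unchanged for the original MNIST files, which is exactly the drop-in property the passages describe.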
How much does the late interaction decrease computational costs, and how close is the performance of the late interaction model to the early interaction model?
When used for re-ranking, ColBERT’s late interaction delivers over 170× speedup and requires 14,000× fewer FLOPs relative to existing BERT-based rankers. In contrast with the usual quality-cost trend, ColBERT (which employs late interaction over BERT) performs no worse than the original adaptation of BERT for ranking and is only marginally less effective than BERT and our training of BERT [55].
[ 55 ]
[ { "id": "2004.12832_all_0", "text": " Over the past few years, the Information Retrieval (IR) community has witnessed the introduction of a host of neural ranking models, including DRMM (Guo et al., 2016), KNRM (Xiong et al., 2017; Dai et al., 2018), and Duet (Mitra et al., 2017; Mitra and Craswell, 2019). In contrast to prior learning-to-rank methods that rely on hand-crafted features, these models employ embedding-based representations of queries and documents and directly model local interactions (i.e., fine-granular relationships) between their contents. Among them, a recent approach has emerged that fine-tunes deep pre-trained language models (LMs) like ELMo (Peters et al., 2018) and BERT (Devlin et al., 2018) for estimating relevance. By computing deeply-contextualized semantic representations of query–document pairs, these LMs help bridge the pervasive vocabulary mismatch (Zhao, 2012; Mitra et al., 2018) between documents and queries (Qiao et al., 2019). Indeed, in the span of just a few months, a number of ranking models based on BERT have achieved state-of-the-art results on various retrieval benchmarks (Nogueira and Cho, 2019; MacAvaney et al., 2019; Dai and Callan, 2019b; Yilmaz et al., 2019) and have been proprietarily adapted for deployment by Google111https://blog.google/products/search/search-language-understanding-bert/ and Bing222https://azure.microsoft.com/en-us/blog/bing-delivers-its-largest-improvement-in-search-experience-using-azure-gpus/. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_1", "text": " However, the remarkable gains delivered by these LMs come at a steep increase in computational cost. Hofstätter et al. (Hofstätter and Hanbury, 2019) and MacAvaney et al. (MacAvaney et al., 2019) observe that BERT-based models in the literature are 100-1000×\\times more computationally expensive than prior models—some of which are arguably not inexpensive to begin with (Ji et al., 2019). This quality–cost tradeoff is summarized by Figure 1, which compares two BERT-based rankers (Nogueira and Cho, 2019; Nogueira et al., 2019b) against a representative set of ranking models. The figure uses MS MARCO Ranking (Nguyen et al., 2016), a recent collection of 9M passages and 1M queries from Bing’s logs. It reports retrieval effectiveness (MRR@10) on the official validation set as well as average query latency (log-scale) using a high-end server that dedicates one Tesla V100 GPU per query for neural re-rankers. Following the re-ranking setup of MS MARCO, ColBERT (re-rank), the Neural Matching Models, and the Deep LMs re-rank the MS MARCO’s official top-1000 documents per query. Other methods, including ColBERT (full retrieval), directly retrieve the top-1000 results from the entire collection. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_2", "text": " As the figure shows, BERT considerably improves search precision, raising MRR@10 by almost 7% against the best previous methods; simultaneously, it increases latency by up to tens of thousands of milliseconds even with a high-end GPU. This poses a challenging tradeoff since raising query response times by as little as 100ms is known to impact user experience and even measurably diminish revenue (Kohavi et al., 2013). 
To tackle this problem, recent work has started exploring using Natural Language Understanding (NLU) techniques to augment traditional retrieval models like BM25 (Robertson et al., 1995). For example, Nogueira et al. (Nogueira et al., 2019c, a) expand documents with NLU-generated queries before indexing with BM25 scores and Dai & Callan (Dai and Callan, 2019a) replace BM25’s term frequency with NLU-estimated term importance. Despite successfully reducing latency, these approaches generally reduce precision substantially relative to BERT. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_3", "text": " To reconcile efficiency and contextualization in IR, we propose ColBERT, a ranking model based on contextualized late interaction over BERT. As the name suggests, ColBERT proposes a novel late interaction paradigm for estimating relevance between a query q𝑞q and a document d𝑑d. Under late interaction, q𝑞q and d𝑑d are separately encoded into two sets of contextual embeddings, and relevance is evaluated using cheap and pruning-friendly computations between both sets—that is, fast computations that enable ranking without exhaustively evaluating every possible candidate. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_4", "text": " Figure 2 contrasts our proposed late interaction approach with existing neural matching paradigms. On the left, Figure 2 (a) illustrates representation-focused rankers, which independently compute an embedding for q𝑞q and another for d𝑑d and estimate relevance as a single similarity score between two vectors (Huang et al., 2013; Zamani et al., 2018). Moving to the right, Figure 2 (b) visualizes typical interaction-focused rankers. Instead of summarizing q𝑞q and d𝑑d into individual embeddings, these rankers model word- and phrase-level relationships across q𝑞q and d𝑑d and match them using a deep neural network (e.g., with CNNs/MLPs (Mitra et al., 2017) or kernels (Xiong et al., 2017)). In the simplest case, they feed the neural network an interaction matrix that reflects the similiarity between every pair of words across q𝑞q and d𝑑d. Further right, Figure 2 (c) illustrates a more powerful interaction-based paradigm, which models the interactions between words within as well as across q𝑞q and d𝑑d at the same time, as in BERT’s transformer architecture (Nogueira and Cho, 2019). ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_5", "text": " These increasingly expressive architectures are in tension. While interaction-based models (i.e., Figure 2 (b) and (c)) tend to be superior for IR tasks (Guo et al., 2019; Mitra et al., 2018), a representation-focused model—by isolating the computations among q𝑞q and d𝑑d—makes it possible to pre-compute document representations offline (Zamani et al., 2018), greatly reducing the computational load per query. In this work, we observe that the fine-grained matching of interaction-based models and the pre-computation of document representations of representation-based models can be combined by retaining yet judiciously delaying the query–document interaction. Figure 2 (d) illustrates an architecture that precisely does so. 
As illustrated, every query embedding interacts with all document embeddings via a MaxSim operator, which computes maximum similarity (e.g., cosine similarity), and the scalar outputs of these operators are summed across query terms. This paradigm allows ColBERT to exploit deep LM-based representations while shifting the cost of encoding documents offline and amortizing the cost of encoding the query once across all ranked documents. Additionally, it enables ColBERT to leverage vector-similarity search indexes (e.g., (Johnson et al., 2017; Abuzaid et al., 2019)) to retrieve the top-k𝑘k results directly from a large document collection, substantially improving recall over models that only re-rank the output of term-based retrieval. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_6", "text": " As Figure 1 illustrates, ColBERT can serve queries in tens or few hundreds of milliseconds. For instance, when used for re-ranking as in “ColBERT (re-rank)”, it delivers over 170×\\times speedup (and requires 14,000×\\times fewer FLOPs) relative to existing BERT-based models, while being more effective than every non-BERT baseline (§4.2 & 4.3). ColBERT’s indexing—the only time it needs to feed documents through BERT—is also practical: it can index the MS MARCO collection of 9M passages in about 3 hours using a single server with four GPUs (§4.5), retaining its effectiveness with a space footprint of as little as few tens of GiBs. Our extensive ablation study (§4.4) shows that late interaction, its implementation via MaxSim operations, and crucial design choices within our BERT-based encoders are all essential to ColBERT’s effectiveness. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_7", "text": " Our main contributions are as follows. (1) We propose late interaction (§3.1) as a paradigm for efficient and effective neural ranking. (2) We present ColBERT (§3.2 & 3.3), a highly-effective model that employs novel BERT-based query and document encoders within the late interaction paradigm. (3) We show how to leverage ColBERT both for re-ranking on top of a term-based retrieval model (§3.5) and for searching a full collection using vector similarity indexes (§3.6). (4) We evaluate ColBERT on MS MARCO and TREC CAR, two recent passage search collections. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_8", "text": " Neural Matching Models. Over the past few years, IR researchers have introduced numerous neural architectures for ranking. In this work, we compare against KNRM (Xiong et al., 2017; Dai et al., 2018), Duet (Mitra et al., 2017; Mitra and Craswell, 2019), ConvKNRM (Dai et al., 2018), and fastText+ConvKNRM (Hofstätter et al., 2019a). KNRM proposes a differentiable kernel-pooling technique for extracting matching signals from an interaction matrix, while Duet combines signals from exact-match-based as well as embedding-based similarities for ranking. Introduced in 2018, ConvKNRM learns to match n𝑛n-grams in the query and the document. Lastly, fastText+ConvKNRM (abbreviated fT+ConvKNRM) tackles the absence of rare words from typical word embeddings lists by adopting sub-word token embeddings. 
", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_9", "text": " In 2018, Zamani et al. (Zamani et al., 2018) introduced SNRM, a representation-focused IR model that encodes each query and each document as a single, sparse high-dimensional vector of “latent terms”. By producing a sparse-vector representation for each document, SNRM is able to use a traditional IR inverted index for representing documents, allowing fast end-to-end retrieval. Despite highly promising results and insights, SNRM’s effectiveness is substantially outperformed by the state of the art on the datasets with which it was evaluated (e.g., see (Yang et al., 2019; MacAvaney et al., 2019)). While SNRM employs sparsity to allow using inverted indexes, we relax this assumption and compare a (dense) BERT-based representation-focused model against our late-interaction ColBERT in our ablation experiments in §4.4. For a detailed overview of existing neural ranking models, we refer the readers to two recent surveys of the literature (Mitra et al., 2018; Guo et al., 2019). ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_10", "text": " Language Model Pretraining for IR. Recent work in NLU emphasizes the importance pre-training language representation models in an unsupervised fashion before subsequently fine-tuning them on downstream tasks. A notable example is BERT (Devlin et al., 2018), a bi-directional transformer-based language model whose fine-tuning advanced the state of the art on various NLU benchmarks. Nogueira et al. (Nogueira and Cho, 2019), MacAvaney et al. (MacAvaney et al., 2019), and Dai & Callan (Dai and Callan, 2019b) investigate incorporating such LMs (mainly BERT, but also ELMo (Peters et al., 2018)) on different ranking datasets. As illustrated in Figure 2 (c), the common approach (and the one adopted by Nogueira et al. on MS MARCO and TREC CAR) is to feed the query–document pair through BERT and use an MLP on top of BERT’s (CLS) output token to produce a relevance score. Subsequent work by Nogueira et al. (Nogueira et al., 2019b) introduced duoBERT, which fine-tunes BERT to compare the relevance of a pair of documents given a query. Relative to their single-document BERT, this gives duoBERT a 1% MRR@10 advantage on MS MARCO while increasing the cost by at least 1.4×\\times. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_11", "text": " BERT Optimizations. As discussed in §1, these LM-based rankers can be highly expensive in practice. While ongoing efforts in the NLU literature for distilling (Jiao et al., 2019; Tang et al., 2019), compressing (Zafrir et al., 2019), and pruning (Michel et al., 2019) BERT can be instrumental in narrowing this gap, they generally achieve significantly smaller speedups than our re-designed architecture for IR, due to their generic nature, and more aggressive optimizations often come at the cost of lower quality. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_12", "text": " Efficient NLU-based Models. Recently, a direction emerged that employs expensive NLU computation offline. This includes doc2query (Nogueira et al., 2019c) and DeepCT (Dai and Callan, 2019a). 
The doc2query model expands each document with a pre-defined number of synthetic queries queries generated by a seq2seq transformer model that is trained to generate queries given a document. It then relies on a BM25 index for retrieval from the (expanded) documents. DeepCT uses BERT to produce the term frequency component of BM25 in a context-aware manner, essentially representing a feasible realization of the term-independence assumption with neural networks (Mitra et al., 2019). Lastly, docTTTTTquery (Nogueira et al., 2019a) is identical to doc2query except that it fine-tunes a pre-trained model (namely, T5 (Raffel et al., 2019)) for generating the predicted queries. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_13", "text": " Concurrently with our drafting of this paper, Hofstätter et al. (Hofstätter et al., 2019b) published their Transformer-Kernel (TK) model. At a high level, TK improves the KNRM architecture described earlier: while KNRM employs kernel pooling on top of word-embedding-based interaction, TK uses a Transformer (Vaswani et al., 2017) component for contextually encoding queries and documents before kernel pooling. TK establishes a new state-of-the-art for non-BERT models on MS MARCO (Dev); however, the best non-ensemble MRR@10 it achieves is 31% while ColBERT reaches up to 36%. Moreover, due to indexing document representations offline and employing a MaxSim-based late interaction mechanism, ColBERT is much more scalable, enabling end-to-end retrieval which is not supported by TK. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_14", "text": " ColBERT prescribes a simple framework for balancing the quality and cost of neural IR, particularly deep language models like BERT. As introduced earlier, delaying the query–document interaction can facilitate cheap neural re-ranking (i.e., through pre-computation) and even support practical end-to-end neural retrieval (i.e., through pruning via vector-similarity search). ColBERT addresses how to do so while still preserving the effectiveness of state-of-the-art models, which condition the bulk of their computations on the joint query–document pair. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_15", "text": " Even though ColBERT’s late-interaction framework can be applied to a wide variety of architectures (e.g., CNNs, RNNs, transformers, etc.), we choose to focus this work on bi-directional transformer-based encoders (i.e., BERT) owing to their state-of-the-art effectiveness yet very high computational cost. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_16", "text": " Figure 3 depicts the general architecture of ColBERT, which comprises: (a) a query encoder fQsubscript𝑓𝑄f_{Q}, (b) a document encoder fDsubscript𝑓𝐷f_{D}, and (c) the late interaction mechanism. Given a query q𝑞q and document d𝑑d, fQsubscript𝑓𝑄f_{Q} encodes q𝑞q into a bag of fixed-size embeddings Eqsubscript𝐸𝑞E_{q} while fDsubscript𝑓𝐷f_{D} encodes d𝑑d into another bag Edsubscript𝐸𝑑E_{d}. Crucially, each embeddings in Eqsubscript𝐸𝑞E_{q} and Edsubscript𝐸𝑑E_{d} is contextualized based on the other terms in q𝑞q or d𝑑d, respectively. We describe our BERT-based encoders in §3.2. 
", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_17", "text": " Using Eqsubscript𝐸𝑞E_{q} and Edsubscript𝐸𝑑E_{d}, ColBERT computes the relevance score between q𝑞q and d𝑑d via late interaction, which we define as a summation of maximum similarity (MaxSim) operators. In particular, we find the maximum cosine similarity of each v∈Eq𝑣subscript𝐸𝑞v\\in E_{q} with vectors in Edsubscript𝐸𝑑E_{d}, and combine the outputs via summation. Besides cosine, we also evaluate squared L2 distance as a measure of vector similarity. Intuitively, this interaction mechanism softly searches for each query term tqsubscript𝑡𝑞t_{q}—in a manner that reflects its context in the query—against the document’s embeddings, quantifying the strength of the “match” via the largest similarity score between tqsubscript𝑡𝑞t_{q} and a document term tdsubscript𝑡𝑑t_{d}. Given these term scores, it then estimates the document relevance by summing the matching evidence across all query terms. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_18", "text": " While more sophisticated matching is possible with other choices such as deep convolution and attention layers (i.e., as in typical interaction-focused models), a summation of maximum similarity computations has two distinctive characteristics. First, it stands out as a particularly cheap interaction mechanism, as we examine its FLOPs in §4.2. Second, and more importantly, it is amenable to highly-efficient pruning for top-k𝑘k retrieval, as we evaluate in §4.3. This enables using vector-similarity algorithms for skipping documents without materializing the full interaction matrix or even considering each document in isolation. Other cheap choices (e.g., a summation of average similarity scores, instead of maximum) are possible; however, many are less amenable to pruning. In §4.4, we conduct an extensive ablation study that empirically verifies the advantage of our MaxSim-based late interaction against alternatives. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_19", "text": " Prior to late interaction, ColBERT encodes each query or document into a bag of embeddings, employing BERT-based encoders. We share a single BERT model among our query and document encoders but distinguish input sequences that correspond to queries and documents by prepending a special token (Q) to queries and another token (D) to documents. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_20", "text": " Query Encoder. Given a textual query q𝑞q, we tokenize it into its BERT-based WordPiece (Wu et al., 2016) tokens q1​q2​…​qlsubscript𝑞1subscript𝑞2…subscript𝑞𝑙q_{1}q_{2}...q_{l}. We prepend the token (Q) to the query. We place this token right after BERT’s sequence-start token (CLS). If the query has fewer than a pre-defined number of tokens Nqsubscript𝑁𝑞N_{q}, we pad it with BERT’s special (mask) tokens up to length Nqsubscript𝑁𝑞N_{q} (otherwise, we truncate it to the first Nqsubscript𝑁𝑞N_{q} tokens). This padded sequence of input tokens is then passed into BERT’s deep transformer architecture, which computes a contextualized representation of each token. 
", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_21", "text": " We denote the padding with masked tokens as query augmentation, a step that allows BERT to produce query-based embeddings at the positions corresponding to these masks. Query augmentation is intended to serve as a soft, differentiable mechanism for learning to expand queries with new terms or to re-weigh existing terms based on their importance for matching the query. As we show in §4.4, this operation is essential for ColBERT’s effectiveness. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_22", "text": " Given BERT’s representation of each token, our encoder passes the contextualized output representations through a linear layer with no activations. This layer serves to control the dimension of ColBERT’s embeddings, producing m𝑚m-dimensional embeddings for the layer’s output size m𝑚m. As we discuss later in more detail, we typically fix m𝑚m to be much smaller than BERT’s fixed hidden dimension. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_23", "text": " While ColBERT’s embedding dimension has limited impact on the efficiency of query encoding, this step is crucial for controlling the space footprint of documents, as we show in §4.5. In addition, it can have a significant impact on query execution time, particularly the time taken for transferring the document representations onto the GPU from system memory (where they reside before processing a query). In fact, as we show in §4.2, gathering, stacking, and transferring the embeddings from CPU to GPU can be the most expensive step in re-ranking with ColBERT. Finally, the output embeddings are normalized so each has L2 norm equal to one. The result is that the dot-product of any two embeddings becomes equivalent to their cosine similarity, falling in the (−1,1)11(-1,1) range. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_24", "text": " Document Encoder. Our document encoder has a very similar architecture. We first segment a document d𝑑d into its constituent tokens d1​d2​…​dmsubscript𝑑1subscript𝑑2…subscript𝑑𝑚d_{1}d_{2}...d_{m}, to which we prepend BERT’s start token (CLS) followed by our special token (D) that indicates a document sequence. Unlike queries, we do not append (mask) tokens to documents. After passing this input sequence through BERT and the subsequent linear layer, the document encoder filters out the embeddings corresponding to punctuation symbols, determined via a pre-defined list. This filtering is meant to reduce the number of embeddings per document, as we hypothesize that (even contextualized) embeddings of punctuation are unnecessary for effectiveness. 
", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_25", "text": " In summary, given q=q0​q1​…​ql𝑞subscript𝑞0subscript𝑞1…subscript𝑞𝑙q=q_{0}q_{1}...q_{l} and d=d0​d1​…​dn𝑑subscript𝑑0subscript𝑑1…subscript𝑑𝑛d=d_{0}d_{1}...d_{n}, we compute the bags of embeddings Eqsubscript𝐸𝑞E_{q} and Edsubscript𝐸𝑑E_{d} in the following manner, where ##\\# refers to the (mask) tokens: (1) Eqsubscript𝐸𝑞\\displaystyle E_{q} :=Normalize​(CNN​(BERT​(`​`​(Q)​q0​q1​…​ql​#​#​…​#​\")))assignabsentNormalizeCNNBERT``delimited-()𝑄subscript𝑞0subscript𝑞1…subscript𝑞𝑙##…#\"\\displaystyle:=\\texttt{Normalize}(\\;\\texttt{CNN}(\\;\\texttt{BERT}(``(Q)q_{0}q_{1}...q_{l}\\#\\#...\\#\")\\;)\\;) (2) Edsubscript𝐸𝑑\\displaystyle E_{d} :=Filter​(Normalize​(CNN​(BERT​(`​`​(D)​d0​d1​…​dn​\"))))assignabsentFilterNormalizeCNNBERT``delimited-()𝐷subscript𝑑0subscript𝑑1…subscript𝑑𝑛\"\\displaystyle:=\\texttt{Filter}(\\;\\texttt{Normalize}(\\;\\texttt{CNN}(\\;\\texttt{BERT}(``(D)d_{0}d_{1}...d_{n}\")\\;)\\;)\\;) ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_26", "text": " Given the representation of a query q𝑞q and a document d𝑑d, the relevance score of d𝑑d to q𝑞q, denoted as Sq,dsubscript𝑆𝑞𝑑S_{q,d}, is estimated via late interaction between their bags of contextualized embeddings. As mentioned before, this is conducted as a sum of maximum similarity computations, namely cosine similarity (implemented as dot-products due to the embedding normalization) or squared L2 distance. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_27", "text": " (3) Sq,dsubscript𝑆𝑞𝑑\\displaystyle S_{q,d} :=∑i∈(|Eq|)maxj∈(|Ed|)⁡Eqi⋅EdjTassignabsentsubscript𝑖delimited-()subscript𝐸𝑞subscript𝑗delimited-()subscript𝐸𝑑⋅subscript𝐸subscript𝑞𝑖superscriptsubscript𝐸subscript𝑑𝑗𝑇\\displaystyle:=\\sum_{i\\in(|E_{q}|)}\\max_{j\\in(|E_{d}|)}E_{q_{i}}\\cdot E_{d_{j}}^{T} ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_28", "text": " ColBERT is differentiable end-to-end. We fine-tune the BERT encoders and train from scratch the additional parameters (i.e., the linear layer and the (Q) and (D) markers’ embeddings) using the Adam (Kingma and Ba, 2014) optimizer. Notice that our interaction mechanism has no trainable parameters. Given a triple ⟨q,d+,d−⟩𝑞superscript𝑑superscript𝑑\\langle q,d^{+},d^{-}\\rangle with query q𝑞q, positive document d+superscript𝑑d^{+} and negative document d−superscript𝑑d^{-}, ColBERT is used to produce a score for each document individually and is optimized via pairwise softmax cross-entropy loss over the computed scores of d+superscript𝑑d^{+} and d−superscript𝑑d^{-}. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_29", "text": " By design, ColBERT isolates almost all of the computations between queries and documents, largely to enable pre-computing document representations offline. At a high level, our indexing procedure is straight-forward: we proceed over the documents in the collection in batches, running our document encoder fDsubscript𝑓𝐷f_{D} on each batch and storing the output embeddings per document. 
Although indexing a set of documents is an offline process, we incorporate a few simple optimizations for enhancing the throughput of indexing. As we show in §4.5, these optimizations can considerably reduce the offline cost of indexing. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_30", "text": " To begin with, we exploit multiple GPUs, if available, for faster encoding of batches of documents in parallel. When batching, we pad all documents to the maximum length of a document within the batch.333The public BERT implementations we saw simply pad to a pre-defined length. To make capping the sequence length on a per-batch basis more effective, our indexer proceeds through documents in groups of B𝐵B (e.g., B=𝐵absentB= 100,000) documents. It sorts these documents by length and then feeds batches of b𝑏b (e.g., b=𝑏absentb= 128) documents of comparable length through our encoder. This length-based bucketing is sometimes refered to as a BucketIterator in some libraries (e.g., allenNLP). Lastly, while most computations occur on the GPU, we found that a non-trivial portion of the indexing time is spent on pre-processing the text sequences, primarily BERT’s WordPiece tokenization. Exploiting that these operations are independent across documents in a batch, we parallelize the pre-processing across the available CPU cores. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_31", "text": " Once the document representations are produced, they are saved to disk using 32-bit or 16-bit values to represent each dimension. As we describe in §3.5 and 3.6, these representations are either simply loaded from disk for ranking or are subsequently indexed for vector-similarity search, respectively. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_32", "text": " Recall that ColBERT can be used for re-ranking the output of another retrieval model, typically a term-based model, or directly for end-to-end retrieval from a document collection. In this section, we discuss how we use ColBERT for ranking a small set of k𝑘k (e.g., k=1000𝑘1000k=1000) documents given a query q𝑞q. Since k𝑘k is small, we rely on batch computations to exhaustively score each document (unlike our approach in §3.6). To begin with, our query serving sub-system loads the indexed documents representations into memory, representing each document as a matrix of embeddings. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_33", "text": " Given a query q𝑞q, we compute its bag of contextualized embeddings Eqsubscript𝐸𝑞E_{q} (Equation 1) and, concurrently, gather the document representations into a 3-dimensional tensor D𝐷D consisting of k𝑘k document matrices. We pad the k𝑘k documents to their maximum length to facilitate batched operations, and move the tensor D𝐷D to the GPU’s memory. On the GPU, we compute a batch dot-product of Eqsubscript𝐸𝑞E_{q} and D𝐷D, possibly over multiple mini-batches. The output materializes a 3-dimensional tensor that is a collection of cross-match matrices between q𝑞q and each document. 
To compute the score of each document, we reduce its matrix across document terms via a max-pool (i.e., representing an exhaustive implementation of our MaxSim computation) and reduce across query terms via a summation. Finally, we sort the k𝑘k documents by their total scores. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_34", "text": " Relative to existing neural rankers (especially, but not exclusively, BERT-based ones), this computation is very cheap that, in fact, its cost is dominated by the cost of gathering and transferring the pre-computed embeddings. To illustrate, ranking k𝑘k documents via typical BERT rankers requires feeding BERT k𝑘k different inputs each of length l=|q|+|di|𝑙𝑞subscript𝑑𝑖l=|q|+|d_{i}| for query q𝑞q and documents disubscript𝑑𝑖d_{i}, where attention has quadratic cost in the length of the sequence. In contrast, ColBERT feeds BERT only a single, much shorter sequence of length l=|q|𝑙𝑞l=|q|. Consequently, ColBERT is not only cheaper, it also scales much better with k𝑘k as we examine in §4.2. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_35", "text": " As mentioned before, ColBERT’s late-interaction operator is specifically designed to enable end-to-end retrieval from a large collection, largely to improve recall relative to term-based retrieval approaches. This section is concerned with cases where the number of documents to be ranked is too large for exhaustive evaluation of each possible candidate document, particularly when we are only interested in the highest scoring ones. Concretely, we focus here on retrieving the top-k𝑘k results directly from a large document collection with N𝑁N (e.g., N=10,000,000𝑁10000000N=10,000,000) documents, where k≪Nmuch-less-than𝑘𝑁k\\ll N. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_36", "text": " To do so, we leverage the pruning-friendly nature of the MaxSim operations at the backbone of late interaction. Instead of applying MaxSim between one of the query embeddings and all of one document’s embeddings, we can use fast vector-similarity data structures to efficiently conduct this search between the query embedding and all document embeddings across the full collection. For this, we employ an off-the-shelf library for large-scale vector-similarity search, namely faiss (Johnson et al., 2017) from Facebook.444https://github.com/facebookresearch/faissIn particular, at the end of offline indexing (§3.4), we maintain a mapping from each embedding to its document of origin and then index all document embeddings into faiss. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_37", "text": " Subsequently, when serving queries, we use a two-stage procedure to retrieve the top-k𝑘k documents from the entire collection. Both stages rely on ColBERT’s scoring: the first is an approximate stage aimed at filtering while the second is a refinement stage. For the first stage, we concurrently issue Nqsubscript𝑁𝑞N_{q} vector-similarity queries (corresponding to each of the embeddings in Eqsubscript𝐸𝑞E_{q}) onto our faiss index. This retrieves the top-k′superscript𝑘′k^{\\prime} (e.g., k′=k/2superscript𝑘′𝑘2k^{\\prime}=k/2) matches for that vector over all document embeddings. 
We map each of those to its document of origin, producing Nq×k′subscript𝑁𝑞superscript𝑘′N_{q}\\times k^{\\prime} document IDs, only K≤Nq×k′𝐾subscript𝑁𝑞superscript𝑘′K\\leq N_{q}\\times k^{\\prime} of which are unique. These K𝐾K documents likely contain one or more embeddings that are highly similar to the query embeddings. For the second stage, we refine this set by exhaustively re-ranking only those K𝐾K documents in the usual manner described in §3.5. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_38", "text": " In our faiss-based implementation, we use an IVFPQ index (“inverted file with product quantization”). This index partitions the embedding space into P𝑃P (e.g., P=1000𝑃1000P=1000) cells based on k𝑘k-means clustering and then assigns each document embedding to its nearest cell based on the selected vector-similarity metric. For serving queries, when searching for the top-k′superscript𝑘′k^{\\prime} matches for a single query embedding, only the nearest p𝑝p (e.g., p=10𝑝10p=10) partitions are searched. To improve memory efficiency, every embedding is divided into s𝑠s (e.g., s=16𝑠16s=16) sub-vectors, each represented using one byte. Moreover, the index conducts the similarity computations in this compressed domain, leading to cheaper computations and thus faster search. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_39", "text": " We now turn our attention to empirically testing ColBERT, addressing the following research questions. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_40", "text": " RQ1: In a typical re-ranking setup, how well can ColBERT bridge the existing gap (highlighted in §1) between highly-efficient and highly-effective neural models? (§4.2) ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_41", "text": " RQ2: Beyond re-ranking, can ColBERT effectively support end-to-end retrieval directly from a large collection? (§4.3) ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_42", "text": " RQ3: What does each component of ColBERT (e.g., late interaction, query augmentation) contribute to its quality? (§4.4) ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_43", "text": " RQ4: What are ColBERT’s indexing-related costs in terms of offline computation and memory overhead? (§4.5) ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_44", "text": " Similar to related work (Nogueira et al., 2019c; Dai and Callan, 2019a; Nogueira et al., 2019b), we conduct our experiments on the MS MARCO Ranking (Nguyen et al., 2016) (henceforth, MS MARCO) and TREC Complex Answer Retrieval (TREC-CAR) (Dietz et al., 2017) datasets. Both of these recent datasets provide large training data of the scale that facilitates training and evaluating deep neural networks. We describe both in detail below. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_45", "text": " MS MARCO. 
MS MARCO is a dataset (and a corresponding competition) introduced by Microsoft in 2016 for reading comprehension and adapted in 2018 for retrieval. It is a collection of 8.8M passages from Web pages, which were gathered from Bing’s results to 1M real-world queries. Each query is associated with sparse relevance judgements of one (or very few) documents marked as relevant and no documents explicitly indicated as irrelevant. Per the official evaluation, we use MRR@10 to measure effectiveness. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_46", "text": " We use three sets of queries for evaluation. The official development and evaluation sets contain roughly 7k queries. However, the relevance judgements of the evaluation set are held-out by Microsoft and effectiveness results can only be obtained by submitting to the competition’s organizers. We submitted our main re-ranking ColBERT model for the results in §4.2. In addition, the collection includes roughly 55k queries (with labels) that are provided as additional validation data. We re-purpose a random sample of 5k queries among those (i.e., ones not in our development or training sets) as a “local” evaluation set. Along with the official development set, we use this held-out set for testing our models as well as baselines in §4.3. We do so to avoid submitting multiple variants of the same model at once, as the organizers discourage too many submissions by the same team. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_47", "text": " TREC CAR. Introduced by Dietz (Dietz et al., 2017) et al. in 2017, TREC CAR is a synthetic dataset based on Wikipedia that consists of about 29M passages. Similar to related work (Nogueira and Cho, 2019), we use the first four of five pre-defined folds for training and the fifth for validation. This amounts to roughly 3M queries generated by concatenating the title of a Wikipedia page with the heading of one of its sections. That section’s passages are marked as relevant to the corresponding query. Our evaluation is conducted on the test set used in TREC 2017 CAR, which contains 2,254 queries. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_48", "text": " Our ColBERT models are implemented using Python 3 and PyTorch 1. We use the popular transformers555https://github.com/huggingface/transformers library for the pre-trained BERT model. Similar to (Nogueira and Cho, 2019), we fine-tune all ColBERT models with learning rate 3×10−63superscript1063\\times 10^{-6} with a batch size 32. We fix the number of embeddings per query at Nq=32subscript𝑁𝑞32N_{q}=32. We set our ColBERT embedding dimension m𝑚m to be 128; §4.5 demonstrates ColBERT’s robustness to a wide range of embedding dimensions. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_49", "text": " For MS MARCO, we initialize the BERT components of the ColBERT query and document encoders using Google’s official pre-trained BERTbasebase{}_{\\textnormal{base}} model. Further, we train all models for 200k iterations. For TREC CAR, we follow related work (Nogueira and Cho, 2019; Dai and Callan, 2019a) and use a different pre-trained model to the official ones. 
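For readers who want to reproduce the fine-tuning setup stated above (learning rate 3e-6, batch size 32, Nq = 32, embedding dimension 128, 200k iterations), a minimal configuration might look as follows. The optimizer choice (AdamW), the checkpoint name, and the linear projection head are assumptions not specified in this excerpt.

```python
import torch
from transformers import BertModel  # huggingface transformers, as cited in the text

# Hyperparameters stated in the excerpt; the optimizer itself is an assumption.
LEARNING_RATE = 3e-6
BATCH_SIZE = 32
N_QUERY_TOKENS = 32      # Nq, fixed number of embeddings per query
EMBEDDING_DIM = 128      # ColBERT output dimension m
TRAIN_STEPS = 200_000    # MS MARCO setting

bert = BertModel.from_pretrained("bert-base-uncased")   # assumed checkpoint name
# Linear head projecting BERT's hidden size down to the ColBERT embedding dimension.
projection = torch.nn.Linear(bert.config.hidden_size, EMBEDDING_DIM, bias=False)

params = list(bert.parameters()) + list(projection.parameters())
optimizer = torch.optim.AdamW(params, lr=LEARNING_RATE)
```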
To explain, the official BERT models were pre-trained on Wikipedia, which is the source of TREC CAR’s training and test sets. To avoid leaking test data into train, Nogueira and Cho’s (Nogueira and Cho, 2019) pre-train a randomly-initialized BERT model on the Wiki pages corresponding to training subset of TREC CAR. They release their BERTlargelarge{}_{\\textnormal{large}} pre-trained model, which we fine-tune for ColBERT’s experiments on TREC CAR. Since fine-tuning this model is significantly slower than BERTbasebase{}_{\\textnormal{base}}, we train on TREC CAR for only 125k iterations. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_50", "text": " In our re-ranking results, unless stated otherwise, we use 4 bytes per dimension in our embeddings and employ cosine as our vector-similarity function. For end-to-end ranking, we use (squared) L2 distance, as we found our faiss index was faster at L2-based retrieval. For our faiss index, we set the number of partitions to P=𝑃absentP=2,000, and search the nearest p=10𝑝10p=10 to each query embedding to retrieve k′=k=1000superscript𝑘′𝑘1000k^{\\prime}=k=1000 document vectors per query embedding. We divide each embedding into s=16𝑠16s=16 sub-vectors, each encoded using one byte. To represent the index used for the second stage of our end-to-end retrieval procedure, we use 16-bit values per dimension. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_51", "text": " To evaluate the latency of neural re-ranking models in §4.2, we use a single Tesla V100 GPU that has 32 GiBs of memory on a server with two Intel Xeon Gold 6132 CPUs, each with 14 physical cores (24 hyperthreads), and 469 GiBs of RAM. For the mostly CPU-based retrieval experiments in §4.3 and the indexing experiments in §4.5, we use another server with the same CPU and system memory specifications but which has four Titan V GPUs attached, each with 12 GiBs of memory. Across all experiments, only one GPU is dedicated per query for retrieval (i.e., for methods with neural computations) but we use up to all four GPUs during indexing. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_52", "text": " In this section, we examine ColBERT’s efficiency and effectiveness at re-ranking the top-k𝑘k results extracted by a bag-of-words retrieval model, which is the most typical setting for testing and deploying neural ranking models. We begin with the MS MARCO dataset. We compare against KNRM, Duet, and fastText+ConvKNRM, a representative set of neural matching models that have been previously tested on MS MARCO. In addition, we compare against the natural adaptation of BERT for ranking by Nogueira and Cho (Nogueira and Cho, 2019), in particular, BERTbasebase{}_{\\textnormal{base}} and its deeper counterpart BERTlargelarge{}_{\\textnormal{large}}. We also report results for “BERTbasebase{}_{\\textnormal{base}} (our training)”, which is based on Nogueira and Cho’s base model (including hyperparameters) but is trained with the same loss function as ColBERT (§3.3) for 200k iterations, allowing for a more direct comparison of the results. 
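The faiss configuration above (P = 2,000 partitions, p = 10 probed cells, s = 16 one-byte sub-vectors, L2 metric) and the two-stage procedure described earlier can be sketched as below. The array sizes and variable names are toy stand-ins; only the faiss calls themselves are real API.

```python
import numpy as np
import faiss

dim = 128            # embedding dimension m
P = 2000             # number of IVF partitions (cells)
s = 16               # sub-vectors per embedding for product quantization
nbits = 8            # one byte per sub-vector code

# Toy stand-ins: all token embeddings across the collection, plus a mapping
# from each embedding row to the id of its document of origin.
doc_embeddings = np.random.rand(100_000, dim).astype("float32")
emb2doc = np.random.randint(0, 10_000, size=len(doc_embeddings))

quantizer = faiss.IndexFlatL2(dim)
index = faiss.IndexIVFPQ(quantizer, dim, P, s, nbits)
index.train(doc_embeddings)
index.add(doc_embeddings)
index.nprobe = 10    # search only the nearest p partitions per query embedding

# Stage 1: issue Nq vector-similarity queries, one per query embedding.
Nq, k_prime = 32, 1000
query_embeddings = np.random.rand(Nq, dim).astype("float32")
_, I = index.search(query_embeddings, k_prime)       # I: (Nq, k_prime) embedding ids
candidate_docs = np.unique(emb2doc[I.ravel()])        # K unique candidate document ids

# Stage 2 (not shown): exhaustively re-rank only these K documents with full MaxSim.
```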
", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_53", "text": " We report the competition’s official metric, namely MRR@10, on the validation set (Dev) and the evaluation set (Eval). We also report the re-ranking latency, which we measure using a single Tesla V100 GPU, and the FLOPs per query for each neural ranking model. For ColBERT, our reported latency subsumes the entire computation from gathering the document representations, moving them to the GPU, tokenizing then encoding the query, and applying late interaction to compute document scores. For the baselines, we measure the scoring computations on the GPU and exclude the CPU-based text preprocessing (similar to (Hofstätter and Hanbury, 2019)). In principle, the baselines can pre-compute the majority of this preprocessing (e.g., document tokenization) offline and parallelize the rest across documents online, leaving only a negligible cost. We estimate the FLOPs per query of each model using the torchprofile666https://github.com/mit-han-lab/torchprofile library. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_54", "text": " We now proceed to study the results, which are reported in Table 1. To begin with, we notice the fast progress from KNRM in 2017 to the BERT-based models in 2019, manifesting itself in over 16% increase in MRR@10. As described in §1, the simultaneous increase in computational cost is difficult to miss. Judging by their rather monotonic pattern of increasingly larger cost and higher effectiveness, these results appear to paint a picture where expensive models are necessary for high-quality ranking. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_55", "text": " In contrast with this trend, ColBERT (which employs late interaction over BERTbasebase{}_{\\textnormal{base}}) performs no worse than the original adaptation of BERTbasebase{}_{\\textnormal{base}} for ranking by Nogueira and Cho (Nogueira and Cho, 2019; Nogueira et al., 2019b) and is only marginally less effective than BERTlargelarge{}_{\\textnormal{large}} and our training of BERTbasebase{}_{\\textnormal{base}} (described above). While highly competitive in effectiveness, ColBERT is orders of magnitude cheaper than BERTbasebase{}_{\\textnormal{base}}, in particular, by over 170×\\times in latency and 13,900×\\times in FLOPs. This highlights the expressiveness of our proposed late interaction mechanism, particularly when coupled with a powerful pre-trained LM like BERT. While ColBERT’s re-ranking latency is slightly higher than the non-BERT re-ranking models shown (i.e., by 10s of milliseconds), this difference is explained by the time it takes to gather, stack, and transfer the document embeddings to the GPU. In particular, the query encoding and interaction in ColBERT consume only 13 milliseconds of its total execution time. We note that ColBERT’s latency and FLOPs can be considerably reduced by padding queries to a shorter length, using smaller vector dimensions (the MRR@10 of which is tested in §4.5), employing quantization of the document vectors, and storing the embeddings on GPU if sufficient memory exists. We leave these directions for future work. 
", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_56", "text": " Diving deeper into the quality–cost tradeoff between BERT and ColBERT, Figure 4 demonstrates the relationships between FLOPs and effectiveness (MRR@10) as a function of the re-ranking depth k𝑘k when re-ranking the top-k𝑘k results by BM25, comparing ColBERT and BERTbasebase{}_{\\textnormal{base}} (our training). We conduct this experiment on MS MARCO (Dev). We note here that as the official top-1000 ranking does not provide the BM25 order (and also lacks documents beyond the top-1000 per query), the models in this experiment re-rank the Anserini (Yang et al., 2018) toolkit’s BM25 output. Consequently, both MRR@10 values at k=1000𝑘1000k=1000 are slightly higher from those reported in Table 1. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_57", "text": " Studying the results in Figure 4, we notice that not only is ColBERT much cheaper than BERT for the same model size (i.e., 12-layer “base” transformer encoder), it also scales better with the number of ranked documents. In part, this is because ColBERT only needs to process the query once, irrespective of the number of documents evaluated. For instance, at k=10𝑘10k=10, BERT requires nearly 180×\\times more FLOPs than ColBERT; at k=1000𝑘1000k=1000, BERT’s overhead jumps to 13,900×\\times. It then reaches 23,000×\\times at k=2000𝑘2000k=2000. In fact, our informal experimentation shows that this orders-of-magnitude gap in FLOPs makes it practical to run ColBERT entirely on the CPU, although CPU-based re-ranking lies outside our scope. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_58", "text": " Having studied our results on MS MARCO, we now consider TREC CAR, whose official metric is MAP. Results are summarized in Table 3, which includes a number of important baselines (BM25, doc2query, and DeepCT) in addition to re-ranking baselines that have been tested on this dataset. These results directly mirror those with MS MARCO. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_59", "text": " Beyond cheap re-ranking, ColBERT is amenable to top-k𝑘k retrieval directly from a full collection. Table 2 considers full retrieval, wherein each model retrieves the top-1000 documents directly from MS MARCO’s 8.8M documents per query. In addition to MRR@10 and latency in milliseconds, the table reports Recall@50, Recall@200, and Recall@1000, important metrics for a full-retrieval model that essentially filters down a large collection on a per-query basis. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_60", "text": " We compare against BM25, in particular MS MARCO’s official BM25 ranking as well as a well-tuned baseline based on the Anserini toolkit.777http://anserini.io/ While many other traditional models exist, we are not aware of any that substantially outperform Anserini’s BM25 implementation (e.g., see RM3 in (Nogueira et al., 2019c), LMDir in (Dai and Callan, 2019a), or Microsoft’s proprietary feature-based RankSVM on the leaderboard). 
", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_61", "text": " We also compare against doc2query, DeepCT, and docTTTTTquery. All three rely on a traditional bag-of-words model (primarily BM25) for retrieval. Crucially, however, they re-weigh the frequency of terms per document and/or expand the set of terms in each document before building the BM25 index. In particular, doc2query expands each document with a pre-defined number of synthetic queries generated by a seq2seq transformer model (which docTTTTquery replaced with a pre-trained language model, T5 (Raffel et al., 2019)). In contrast, DeepCT uses BERT to produce the term frequency component of BM25 in a context-aware manner. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_62", "text": " For the latency of Anserini’s BM25, doc2query, and docTTTTquery, we use the authors’ (Nogueira et al., 2019c, a) Anserini-based implementation. While this implementation supports multi-threading, it only utilizes parallelism across different queries. We thus report single-threaded latency for these models, noting that simply parallelizing their computation over shards of the index can substantially decrease their already-low latency. For DeepCT, we only estimate its latency using that of BM25 (as denoted by (est.) in the table), since DeepCT re-weighs BM25’s term frequency without modifying the index otherwise.888In practice, a myriad of reasons could still cause DeepCT’s latency to differ slightly from BM25’s. For instance, the top-k𝑘k pruning strategy employed, if any, could interact differently with a changed distribution of scores. As discussed in §4.1, we use ColBERTL2L2{}_{\\textnormal{L2}} for end-to-end retrieval, which employs negative squared L2 distance as its vector-similarity function. For its latency, we measure the time for faiss-based candidate filtering and the subsequent re-ranking. In this experiment, faiss uses all available CPU cores. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_63", "text": " Looking at Table 2, we first see Anserini’s BM25 baseline at 18.7 MRR@10, noticing its very low latency as implemented in Anserini (which extends the well-known Lucene system), owing to both very cheap operations and decades of bag-of-words top-k𝑘k retrieval optimizations. The three subsequent baselines, namely doc2query, DeepCT, and docTTTTquery, each brings a decisive enhancement to effectiveness. These improvements come at negligible overheads in latency, since these baselines ultimately rely on BM25-based retrieval. The most effective among these three, docTTTTquery, demonstrates a massive 9% gain over vanilla BM25 by fine-tuning the recent language model T5. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_64", "text": " Shifting our attention to ColBERT’s end-to-end retrieval effectiveness, we see its major gains in MRR@10 over all of these end-to-end models. In fact, using ColBERT in the end-to-end setup is superior in terms of MRR@10 to re-ranking with the same model due to the improved recall. Moving beyond MRR@10, we also see large gains in Recall@k𝑘k for k𝑘k equals to 50, 200, and 1000. 
For instance, its Recall@50 actually exceeds the official BM25’s Recall@1000 and even all but docTTTTTquery’s Recall@200, emphasizing the value of end-to-end retrieval (instead of just re-ranking) with ColBERT. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_65", "text": " The results from §4.2 indicate that ColBERT is highly effective despite the low cost and simplicity of its late interaction mechanism. To better understand the source of this effectiveness, we examine a number of important details in ColBERT’s interaction and encoder architecture. For this ablation, we report MRR@10 on the validation set of MS MARCO in Figure 5, which shows our main re-ranking ColBERT model (E), with MRR@10 of 34.9%. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_66", "text": " Due to the cost of training all models, we train a copy of our main model that retains only the first 5 layers of BERT out of 12 (i.e., model (D)) and similarly train all our ablation models for 200k iterations with five BERT layers. To begin with, we ask if the fine-granular interaction in late interaction is necessary. Model (A) tackles this question: it uses BERT to produce a single embedding vector for the query and another for the document, extracted from BERT’s (CLS) contextualized embedding and expanded through a linear layer to dimension 4096 (which equals Nq×128=32×128subscript𝑁𝑞12832128N_{q}\\times 128=32\\times 128). Relevance is estimated as the inner product of the query’s and the document’s embeddings, which we found to perform better than cosine similarity for single-vector re-ranking. As the results show, this model is considerably less effective than ColBERT, reinforcing the importance of late interaction. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_67", "text": " Subsequently, we ask if our MaxSim-based late interaction is better than other simple alternatives. We test a model (B) that replaces ColBERT’s maximum similarity with average similarity. The results suggest the importance of individual terms in the query paying special attention to particular terms in the document. Similarly, the figure emphasizes the importance of our query augmentation mechanism: without query augmentation (C), ColBERT has a noticeably lower MRR@10. Lastly, we see the impact of end-to-end retrieval not only on recall but also on MRR@10. By retrieving directly from the full collection, ColBERT is able to retrieve to the top-10 documents missed entirely from BM25’s top-1000. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_68", "text": " Lastly, we examine the indexing throughput and space footprint of ColBERT. Figure 6 reports indexing throughput on MS MARCO documents with ColBERT and four other ablation settings, which individually enable optimizations described in §3.4 on top of basic batched indexing. Based on these throughputs, ColBERT can index MS MARCO in about three hours. Note that any BERT-based model must incur the computational cost of processing each document at least once. While ColBERT encodes each document with BERT exactly once, existing BERT-based rankers would repeat similar computations on possibly hundreds of documents for each query. 
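The ablations above compare the MaxSim reduction with an average-similarity variant (model B) and with a single-vector model (model A). A minimal sketch of those variants, with illustrative shapes and details that go beyond the excerpt labeled as such:

```python
import torch

def late_interaction(Q, D, reduction="max"):
    """Q: (Nq, dim), D: (doc_len, dim). Compares the MaxSim reduction with the
    average-similarity ablation (model B in the text)."""
    sim = Q @ D.t()                          # (Nq, doc_len) similarity matrix
    if reduction == "max":                   # ColBERT: each query term attends to
        per_term = sim.max(dim=1).values     # its single best-matching doc term
    else:                                    # ablation: average over doc terms
        per_term = sim.mean(dim=1)
    return per_term.sum()

def single_vector_score(q_vec, d_vec):
    """Model (A)-style scoring: one pooled vector per query and per document,
    relevance estimated as an inner product (details here are illustrative)."""
    return torch.dot(q_vec, d_vec)
```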
", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_69", "text": " Table 4 reports the space footprint of ColBERT under various settings as we reduce the embeddings dimension and/or the bytes per dimension. Interestingly, the most space-efficient setting, that is, re-ranking with cosine similarity with 24-dimensional vectors stored as 2-byte floats, is only 1% worse in MRR@10 than the most space-consuming one, while the former requires only 27 GiBs to represent the MS MARCO collection. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_70", "text": " In this paper, we introduced ColBERT, a novel ranking model that employs contextualized late interaction over deep LMs (in particular, BERT) for efficient retrieval. By independently encoding queries and documents into fine-grained representations that interact via cheap and pruning-friendly computations, ColBERT can leverage the expressiveness of deep LMs while greatly speeding up query processing. In addition, doing so allows using ColBERT for end-to-end neural retrieval directly from a large document collection. Our results show that ColBERT is more than 170×\\times faster and requires 14,000×\\times fewer FLOPs/query than existing BERT-based models, all while only minimally impacting quality and while outperforming every non-BERT baseline. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" }, { "id": "2004.12832_all_71", "text": " Acknowledgments. OK was supported by the Eltoukhy Family Graduate Fellowship at the Stanford School of Engineering. This research was supported in part by affiliate members and other supporters of the Stanford DAWN project—Ant Financial, Facebook, Google, Infosys, NEC, and VMware—as well as Cisco, SAP, and the NSF under CAREER grant CNS-1651570. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. ", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" } ]
How to construct the affine transformation matrix?
The paper predicts an affine transformation matrix by a mini-network, called T-net, and directly applies this transformation to the coordinates of input points [32].
[ 32 ]
[ { "id": "1612.00593_all_0", "text": " In this paper we explore deep learning architectures capable of reasoning about 3D geometric data such as point clouds or meshes. Typical convolutional architectures require highly regular input data formats, like those of image grids or 3D voxels, in order to perform weight sharing and other kernel optimizations. Since point clouds or meshes are not in a regular format, most researchers typically transform such data to regular 3D voxel grids or collections of images (e.g, views) before feeding them to a deep net architecture. This data representation transformation, however, renders the resulting data unnecessarily voluminous — while also introducing quantization artifacts that can obscure natural invariances of the data. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_1", "text": " For this reason we focus on a different input representation for 3D geometry using simply point clouds – and name our resulting deep nets PointNets. Point clouds are simple and unified structures that avoid the combinatorial irregularities and complexities of meshes, and thus are easier to learn from. The PointNet, however, still has to respect the fact that a point cloud is just a set of points and therefore invariant to permutations of its members, necessitating certain symmetrizations in the net computation. Further invariances to rigid motions also need to be considered. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_2", "text": " Our PointNet is a unified architecture that directly takes point clouds as input and outputs either class labels for the entire input or per point segment/part labels for each point of the input. The basic architecture of our network is surprisingly simple as in the initial stages each point is processed identically and independently. In the basic setting each point is represented by just its three coordinates (x,y,z)𝑥𝑦𝑧(x,y,z). Additional dimensions may be added by computing normals and other local or global features. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_3", "text": " Key to our approach is the use of a single symmetric function, max pooling. Effectively the network learns a set of optimization functions/criteria that select interesting or informative points of the point cloud and encode the reason for their selection. The final fully connected layers of the network aggregate these learnt optimal values into the global descriptor for the entire shape as mentioned above (shape classification) or are used to predict per point labels (shape segmentation). ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_4", "text": " Our input format is easy to apply rigid or affine transformations to, as each point transforms independently. Thus we can add a data-dependent spatial transformer network that attempts to canonicalize the data before the PointNet processes them, so as to further improve the results. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_5", "text": " We provide both a theoretical analysis and an experimental evaluation of our approach. We show that our network can approximate any set function that is continuous. 
More interestingly, it turns out that our network learns to summarize an input point cloud by a sparse set of key points, which roughly corresponds to the skeleton of objects according to visualization. The theoretical analysis provides an understanding why our PointNet is highly robust to small perturbation of input points as well as to corruption through point insertion (outliers) or deletion (missing data). ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_6", "text": " On a number of benchmark datasets ranging from shape classification, part segmentation to scene segmentation, we experimentally compare our PointNet with state-of-the-art approaches based upon multi-view and volumetric representations. Under a unified architecture, not only is our PointNet much faster in speed, but it also exhibits strong performance on par or even better than state of the art. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_7", "text": " The key contributions of our work are as follows: • We design a novel deep net architecture suitable for consuming unordered point sets in 3D; • We show how such a net can be trained to perform 3D shape classification, shape part segmentation and scene semantic parsing tasks; • We provide thorough empirical and theoretical analysis on the stability and efficiency of our method; • We illustrate the 3D features computed by the selected neurons in the net and develop intuitive explanations for its performance. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_8", "text": " The problem of processing unordered sets by neural nets is a very general and fundamental problem – we expect that our ideas can be transferred to other domains as well. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_9", "text": " Most existing features for point cloud are handcrafted towards specific tasks. Point features often encode certain statistical properties of points and are designed to be invariant to certain transformations, which are typically classified as intrinsic (2, 24, 3) or extrinsic  (20, 19, 14, 10, 5). They can also be categorized as local features and global features. For a specific task, it is not trivial to find the optimal feature combination. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_10", "text": " 3D data has multiple popular representations, leading to various approaches for learning. Volumetric CNNs: (28, 17, 18) are the pioneers applying 3D convolutional neural networks on voxelized shapes. However, volumetric representation is constrained by its resolution due to data sparsity and computation cost of 3D convolution. FPNN  and Vote3D  proposed special methods to deal with the sparsity problem; however, their operations are still on sparse volumes, it’s challenging for them to process very large point clouds. Multiview CNNs: (23, 18) have tried to render 3D point cloud or shapes into 2D images and then apply 2D conv nets to classify them. With well engineered image CNNs, this line of methods have achieved dominating performance on shape classification and retrieval tasks . However, it’s nontrivial to extend them to scene understanding or other 3D tasks such as point classification and shape completion. 
Spectral CNNs: Some latest works (4, 16) use spectral CNNs on meshes. However, these methods are currently constrained on manifold meshes such as organic objects and it’s not obvious how to extend them to non-isometric shapes such as furniture. Feature-based DNNs: (6, 8) firstly convert the 3D data into a vector, by extracting traditional shape features and then use a fully connected net to classify the shape. We think they are constrained by the representation power of the features extracted. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_11", "text": " From a data structure point of view, a point cloud is an unordered set of vectors. While most works in deep learning focus on regular input representations like sequences (in speech and language processing), images and volumes (video or 3D data), not much work has been done in deep learning on point sets. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_12", "text": " One recent work from Oriol Vinyals et al  looks into this problem. They use a read-process-write network with attention mechanism to consume unordered input sets and show that their network has the ability to sort numbers. However, since their work focuses on generic sets and NLP applications, there lacks the role of geometry in the sets. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_13", "text": " We design a deep learning framework that directly consumes unordered point sets as inputs. A point cloud is represented as a set of 3D points {Pi|i=1,…,n}conditional-setsubscript𝑃𝑖𝑖1…𝑛\\{P_{i}|\\ i=1,...,n\\}, where each point Pisubscript𝑃𝑖P_{i} is a vector of its (x,y,z)𝑥𝑦𝑧(x,y,z) coordinate plus extra feature channels such as color, normal etc. For simplicity and clarity, unless otherwise noted, we only use the (x,y,z)𝑥𝑦𝑧(x,y,z) coordinate as our point’s channels. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_14", "text": " For the object classification task, the input point cloud is either directly sampled from a shape or pre-segmented from a scene point cloud. Our proposed deep network outputs k𝑘k scores for all the k𝑘k candidate classes. For semantic segmentation, the input can be a single object for part region segmentation, or a sub-volume from a 3D scene for object region segmentation. Our model will output n×m𝑛𝑚n\\times m scores for each of the n𝑛n points and each of the m𝑚m semantic sub-categories. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_15", "text": " The architecture of our network (Sec 4.2) is inspired by the properties of point sets in ℝnsuperscriptℝ𝑛\\mathbb{R}^{n} (Sec 4.1). ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_16", "text": " Our input is a subset of points from an Euclidean space. It has three main properties: ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_17", "text": " • Unordered. Unlike pixel arrays in images or voxel arrays in volumetric grids, point cloud is a set of points without specific order. In other words, a network that consumes N𝑁N 3D point sets needs to be invariant to N!𝑁N! permutations of the input set in data feeding order. 
• Interaction among points. The points are from a space with a distance metric. It means that points are not isolated, and neighboring points form a meaningful subset. Therefore, the model needs to be able to capture local structures from nearby points, and the combinatorial interactions among local structures. • Invariance under transformations. As a geometric object, the learned representation of the point set should be invariant to certain transformations. For example, rotating and translating points all together should not modify the global point cloud category nor the segmentation of the points. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_18", "text": " Our full network architecture is visualized in Fig 2, where the classification network and the segmentation network share a great portion of structures. Please read the caption of Fig 2 for the pipeline. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_19", "text": " Our network has three key modules: the max pooling layer as a symmetric function to aggregate information from all the points, a local and global information combination structure, and two joint alignment networks that align both input points and point features. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_20", "text": " We will discuss our reason behind these design choices in separate paragraphs below. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_21", "text": " In order to make a model invariant to input permutation, three strategies exist: 1) sort input into a canonical order; 2) treat the input as a sequence to train an RNN, but augment the training data by all kinds of permutations; 3) use a simple symmetric function to aggregate the information from each point. Here, a symmetric function takes n𝑛n vectors as input and outputs a new vector that is invariant to the input order. For example, ++ and ∗* operators are symmetric binary functions. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_22", "text": " While sorting sounds like a simple solution, in high dimensional space there in fact does not exist an ordering that is stable w.r.t. point perturbations in the general sense. This can be easily shown by contradiction. If such an ordering strategy exists, it defines a bijection map between a high-dimensional space and a 1​d1𝑑1d real line. It is not hard to see, to require an ordering to be stable w.r.t point perturbations is equivalent to requiring that this map preserves spatial proximity as the dimension reduces, a task that cannot be achieved in the general case. Therefore, sorting does not fully resolve the ordering issue, and it’s hard for a network to learn a consistent mapping from input to output as the ordering issue persists. As shown in experiments (Fig 5), we find that applying a MLP directly on the sorted point set performs poorly, though slightly better than directly processing an unsorted input. 
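The argument above for symmetric aggregation can be checked in a few lines of numpy: a shared per-point map followed by a max over points gives the same output for any ordering of the set, whereas a fixed-weight map on the flattened array does not. The weights and sizes below are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.normal(size=(1024, 3))             # an unordered set of 3D points
perm = rng.permutation(len(points))             # a random re-ordering of the set

W = rng.normal(size=(3, 64))                    # shared per-point map h (here ReLU(xW))
per_point = lambda pts: np.maximum(pts @ W, 0.0)

# Symmetric aggregation (max over points): the output ignores input order,
# which is exactly the invariance the paragraph above calls for.
global_feat = per_point(points).max(axis=0)
assert np.allclose(global_feat, per_point(points[perm]).max(axis=0))

# By contrast, any fixed-weight map applied to the flattened 1024x3 array
# (an MLP on sorted or unsorted coordinates) sees a different input whenever
# the points are re-ordered.
```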
", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_23", "text": " The idea to use RNN considers the point set as a sequential signal and hopes that by training the RNN with randomly permuted sequences, the RNN will become invariant to input order. However in “OrderMatters”  the authors have shown that order does matter and cannot be totally omitted. While RNN has relatively good robustness to input ordering for sequences with small length (dozens), it’s hard to scale to thousands of input elements, which is the common size for point sets. Empirically, we have also shown that model based on RNN does not perform as well as our proposed method (Fig 5). ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_24", "text": " Our idea is to approximate a general function defined on a point set by applying a symmetric function on transformed elements in the set: f​({x1,…,xn})≈g​(h​(x1),…,h​(xn)),𝑓subscript𝑥1…subscript𝑥𝑛𝑔ℎsubscript𝑥1…ℎsubscript𝑥𝑛\\displaystyle f(\\{x_{1},\\dots,x_{n}\\})\\approx g(h(x_{1}),\\dots,h(x_{n})), (1) where f:2ℝN→ℝ:𝑓→superscript2superscriptℝ𝑁ℝf:2^{\\mathbb{R}^{N}}\\rightarrow\\mathbb{R}, h:ℝN→ℝK:ℎ→superscriptℝ𝑁superscriptℝ𝐾h:\\mathbb{R}^{N}\\rightarrow\\mathbb{R}^{K} and g:ℝK×⋯×ℝK⏟n→ℝ:𝑔→subscript⏟superscriptℝ𝐾⋯superscriptℝ𝐾𝑛ℝg:\\underbrace{\\mathbb{R}^{K}\\times\\dots\\times\\mathbb{R}^{K}}_{n}\\rightarrow\\mathbb{R} is a symmetric function. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_25", "text": " Empirically, our basic module is very simple: we approximate hℎh by a multi-layer perceptron network and g𝑔g by a composition of a single variable function and a max pooling function. This is found to work well by experiments. Through a collection of hℎh, we can learn a number of f𝑓f’s to capture different properties of the set. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_26", "text": " While our key module seems simple, it has interesting properties (see Sec 5.3) and can achieve strong performace (see Sec 5.1) in a few different applications. Due to the simplicity of our module, we are also able to provide theoretical analysis as in Sec 4.3. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_27", "text": " The output from the above section forms a vector (f1,…,fK)subscript𝑓1…subscript𝑓𝐾(f_{1},\\dots,f_{K}), which is a global signature of the input set. We can easily train a SVM or multi-layer perceptron classifier on the shape global features for classification. However, point segmentation requires a combination of local and global knowledge. We can achieve this by a simple yet highly effective manner. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_28", "text": " Our solution can be seen in Fig 2 (Segmentation Network). After computing the global point cloud feature vector, we feed it back to per point features by concatenating the global feature with each of the point features. Then we extract new per point features based on the combined point features - this time the per point feature is aware of both the local and global information. 
", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_29", "text": " With this modification our network is able to predict per point quantities that rely on both local geometry and global semantics. For example we can accurately predict per-point normals (fig in supplementary), validating that the network is able to summarize information from the point’s local neighborhood. In experiment session, we also show that our model can achieve state-of-the-art performance on shape part segmentation and scene segmentation. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_30", "text": " The semantic labeling of a point cloud has to be invariant if the point cloud undergoes certain geometric transformations, such as rigid transformation. We therefore expect that the learnt representation by our point set is invariant to these transformations. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_31", "text": " A natural solution is to align all input set to a canonical space before feature extraction. Jaderberg et al.  introduces the idea of spatial transformer to align 2D images through sampling and interpolation, achieved by a specifically tailored layer implemented on GPU. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_32", "text": " Our input form of point clouds allows us to achieve this goal in a much simpler way compared with . We do not need to invent any new layers and no alias is introduced as in the image case. We predict an affine transformation matrix by a mini-network (T-net in Fig 2) and directly apply this transformation to the coordinates of input points. The mini-network itself resembles the big network and is composed by basic modules of point independent feature extraction, max pooling and fully connected layers. More details about the T-net are in the supplementary. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_33", "text": " This idea can be further extended to the alignment of feature space, as well. We can insert another alignment network on point features and predict a feature transformation matrix to align features from different input point clouds. However, transformation matrix in the feature space has much higher dimension than the spatial transform matrix, which greatly increases the difficulty of optimization. We therefore add a regularization term to our softmax training loss. We constrain the feature transformation matrix to be close to orthogonal matrix: Lr​e​g=‖I−A​AT‖F2,subscript𝐿𝑟𝑒𝑔superscriptsubscriptnorm𝐼𝐴superscript𝐴𝑇𝐹2L_{reg}=\\|I-AA^{T}\\|_{F}^{2}, (2) where A𝐴A is the feature alignment matrix predicted by a mini-network. An orthogonal transformation will not lose information in the input, thus is desired. We find that by adding the regularization term, the optimization becomes more stable and our model achieves better performance. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_34", "text": " We first show the universal approximation ability of our neural network to continuous set functions. 
By the continuity of set functions, intuitively, a small perturbation to the input point set should not greatly change the function values, such as classification or segmentation scores. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_35", "text": " Formally, let 𝒳={S:S⊆(0,1)m​ and ​|S|=n}𝒳conditional-set𝑆𝑆superscript01𝑚 and 𝑆𝑛\\mathcal{X}=\\{S:S\\subseteq(0,1)^{m}\\text{ and }|S|=n\\}, f:𝒳→ℝ:𝑓→𝒳ℝf:\\mathcal{X}\\rightarrow\\mathbb{R} is a continuous set function on 𝒳𝒳\\mathcal{X} w.r.t to Hausdorff distance dH​(⋅,⋅)subscript𝑑𝐻⋅⋅d_{H}(\\cdot,\\cdot), i.e., ∀ϵ>0,∃δ>0formulae-sequencefor-allitalic-ϵ0𝛿0\\forall\\epsilon>0,\\exists\\delta>0, for any S,S′∈𝒳𝑆superscript𝑆′𝒳S,S^{\\prime}\\in\\mathcal{X}, if dH​(S,S′)<δsubscript𝑑𝐻𝑆superscript𝑆′𝛿d_{H}(S,S^{\\prime})<\\delta, then |f​(S)−f​(S′)|<ϵ𝑓𝑆𝑓superscript𝑆′italic-ϵ|f(S)-f(S^{\\prime})|<\\epsilon. Our theorem says that f𝑓f can be arbitrarily approximated by our network given enough neurons at the max pooling layer, i.e., K𝐾K in (1) is sufficiently large. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_36", "text": " The proof to this theorem can be found in our supplementary material. The key idea is that in the worst case the network can learn to convert a point cloud into a volumetric representation, by partitioning the space into equal-sized voxels. In practice, however, the network learns a much smarter strategy to probe the space, as we shall see in point function visualizations. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_37", "text": " Theoretically and experimentally we find that the expressiveness of our network is strongly affected by the dimension of the max pooling layer, i.e., K𝐾K in (1). Here we provide an analysis, which also reveals properties related to the stability of our model. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_38", "text": " We define 𝐮=MAXxi∈S​{h​(xi)}𝐮subscript𝑥𝑖𝑆MAXℎsubscript𝑥𝑖\\mathbf{u}=\\underset{x_{i}\\in S}{\\mbox{MAX}}\\{h(x_{i})\\} to be the sub-network of f𝑓f which maps a point set in (0,1)msuperscript01𝑚(0,1)^{m} to a K𝐾K-dimensional vector. The following theorem tells us that small corruptions or extra noise points in the input set are not likely to change the output of our network: ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_39", "text": " We explain the implications of the theorem. (a) says that f​(S)𝑓𝑆f(S) is unchanged up to the input corruption if all points in 𝒞Ssubscript𝒞𝑆\\mathcal{C}_{S} are preserved; it is also unchanged with extra noise points up to 𝒩Ssubscript𝒩𝑆\\mathcal{N}_{S}. (b) says that 𝒞Ssubscript𝒞𝑆\\mathcal{C}_{S} only contains a bounded number of points, determined by K𝐾K in (1). In other words, f​(S)𝑓𝑆f(S) is in fact totally determined by a finite subset 𝒞S⊆Ssubscript𝒞𝑆𝑆\\mathcal{C}_{S}\\subseteq S of less or equal to K𝐾K elements. We therefore call 𝒞Ssubscript𝒞𝑆\\mathcal{C}_{S} the critical point set of S𝑆S and K𝐾K the bottleneck dimension of f𝑓f. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_40", "text": " Combined with the continuity of hℎh, this explains the robustness of our model w.r.t point perturbation, corruption and extra noise points. 
The robustness is gained in analogy to the sparsity principle in machine learning models. Intuitively, our network learns to summarize a shape by a sparse set of key points. In experiment section we see that the key points form the skeleton of an object. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_41", "text": " Experiments are divided into four parts. First, we show PointNets can be applied to multiple 3D recognition tasks (Sec 5.1). Second, we provide detailed experiments to validate our network design (Sec 5.2). At last we visualize what the network learns (Sec 5.3) and analyze time and space complexity (Sec 5.4). ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_42", "text": " In this section we show how our network can be trained to perform 3D object classification, object part segmentation and semantic scene segmentation 111More application examples such as correspondence and point cloud based CAD model retrieval are included in supplementary material.. Even though we are working on a brand new data representation (point sets), we are able to achieve comparable or even better performance on benchmarks for several tasks. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_43", "text": " Our network learns global point cloud feature that can be used for object classification. We evaluate our model on the ModelNet40  shape classification benchmark. There are 12,311 CAD models from 40 man-made object categories, split into 9,843 for training and 2,468 for testing. While previous methods focus on volumetric and mult-view image representations, we are the first to directly work on raw point cloud. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_44", "text": " We uniformly sample 1024 points on mesh faces according to face area and normalize them into a unit sphere. During training we augment the point cloud on-the-fly by randomly rotating the object along the up-axis and jitter the position of each points by a Gaussian noise with zero mean and 0.02 standard deviation. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_45", "text": " In Table 1, we compare our model with previous works as well as our baseline using MLP on traditional features extracted from point cloud (point density, D2, shape contour etc.). Our model achieved state-of-the-art performance among methods based on 3D input (volumetric and point cloud). With only fully connected layers and max pooling, our net gains a strong lead in inference speed and can be easily parallelized in CPU as well. There is still a small gap between our method and multi-view based method (MVCNN ), which we think is due to the loss of fine geometry details that can be captured by rendered images. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_46", "text": " Part segmentation is a challenging fine-grained 3D recognition task. Given a 3D scan or a mesh model, the task is to assign part category label (e.g. chair leg, cup handle) to each point or face. 
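The data preparation described above (1024 points normalized into a unit sphere, then on-the-fly rotation about the up-axis and Gaussian jitter with standard deviation 0.02) is easy to reproduce; the sketch below assumes the points have already been sampled from mesh faces in proportion to face area, and that the up-axis is z, which is a dataset-convention assumption.

```python
import numpy as np

def normalize_to_unit_sphere(points):
    """Center the cloud and scale it so the farthest point lies on the unit sphere."""
    points = points - points.mean(axis=0)
    return points / np.linalg.norm(points, axis=1).max()

def augment(points, jitter_sigma=0.02, rng=None):
    """Training-time augmentation described in the text: a random rotation about
    the up-axis plus per-point Gaussian jitter (sigma = 0.02)."""
    rng = rng if rng is not None else np.random.default_rng()
    theta = rng.uniform(0, 2 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])   # rotation about z
    return points @ R.T + rng.normal(scale=jitter_sigma, size=points.shape)

cloud = normalize_to_unit_sphere(np.random.rand(1024, 3))
cloud_aug = augment(cloud)
```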
", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_47", "text": " We evaluate on ShapeNet part data set from , which contains 16,881 shapes from 16 categories, annotated with 50 parts in total. Most object categories are labeled with two to five parts. Ground truth annotations are labeled on sampled points on the shapes. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_48", "text": " We formulate part segmentation as a per-point classification problem. Evaluation metric is mIoU on points. For each shape S of category C, to calculate the shape’s mIoU: For each part type in category C, compute IoU between groundtruth and prediction. If the union of groundtruth and prediction points is empty, then count part IoU as 1. Then we average IoUs for all part types in category C to get mIoU for that shape. To calculate mIoU for the category, we take average of mIoUs for all shapes in that category. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_49", "text": " In this section, we compare our segmentation version PointNet (a modified version of Fig 2, Segmentation Network) with two traditional methods and that both take advantage of point-wise geometry features and correspondences between shapes, as well as our own 3D CNN baseline. See supplementary for the detailed modifications and network architecture for the 3D CNN. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_50", "text": " In Table 2, we report per-category and mean IoU(%) scores. We observe a 2.3% mean IoU improvement and our net beats the baseline methods in most categories. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_51", "text": " We also perform experiments on simulated Kinect scans to test the robustness of these methods. For every CAD model in the ShapeNet part data set, we use Blensor Kinect Simulator  to generate incomplete point clouds from six random viewpoints. We train our PointNet on the complete shapes and partial scans with the same network architecture and training setting. Results show that we lose only 5.3% mean IoU. In Fig 3, we present qualitative results on both complete and partial data. One can see that though partial data is fairly challenging, our predictions are reasonable. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_52", "text": " Our network on part segmentation can be easily extended to semantic scene segmentation, where point labels become semantic object classes instead of object part labels. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_53", "text": " We experiment on the Stanford 3D semantic parsing data set . The dataset contains 3D scans from Matterport scanners in 6 areas including 271 rooms. Each point in the scan is annotated with one of the semantic labels from 13 categories (chair, table, floor, wall etc. plus clutter). ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_54", "text": " To prepare training data, we firstly split points by room, and then sample rooms into blocks with area 1m by 1m. 
We train our segmentation version of PointNet to predict per point class in each block. Each point is represented by a 9-dim vector of XYZ, RGB and normalized location as to the room (from 0 to 1). At training time, we randomly sample 4096 points in each block on-the-fly. At test time, we test on all the points. We follow the same protocol as  to use k-fold strategy for train and test. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_55", "text": " We compare our method with a baseline using handcrafted point features. The baseline extracts the same 9-dim local features and three additional ones: local point density, local curvature and normal. We use standard MLP as the classifier. Results are shown in Table 3, where our PointNet method significantly outperforms the baseline method. In Fig 4, we show qualitative segmentation results. Our network is able to output smooth predictions and is robust to missing points and occlusions. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_56", "text": " Based on the semantic segmentation output from our network, we further build a 3D object detection system using connected component for object proposal (see supplementary for details). We compare with previous state-of-the-art method in Table 4. The previous method is based on a sliding shape method (with CRF post processing) with SVMs trained on local geometric features and global room context feature in voxel grids. Our method outperforms it by a large margin on the furniture categories reported. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_57", "text": " In this section we validate our design choices by control experiments. We also show the effects of our network’s hyperparameters. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_58", "text": " As mentioned in Sec 4.2, there are at least three options for consuming unordered set inputs. We use the ModelNet40 shape classification problem as a test bed for comparisons of those options, the following two control experiment will also use this task. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_59", "text": " The baselines (illustrated in Fig 5) we compared with include multi-layer perceptron on unsorted and sorted points as n×3𝑛3n\\times 3 arrays, RNN model that considers input point as a sequence, and a model based on symmetry functions. The symmetry operation we experimented include max pooling, average pooling and an attention based weighted sum. The attention method is similar to that in , where a scalar score is predicted from each point feature, then the score is normalized across points by computing a softmax. The weighted sum is then computed on the normalized scores and the point features. As shown in Fig 5, max-pooling operation achieves the best performance by a large winning margin, which validates our choice. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_60", "text": " In Table 5 we demonstrate the positive effects of our input and feature transformations (for alignment). It’s interesting to see that the most basic architecture already achieves quite reasonable results. 
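The attention-based aggregation baseline described above (a scalar score per point feature, softmax-normalized across points, then a weighted sum) is sketched below; sizes are illustrative, and in the paper's comparison plain max pooling outperforms this variant.

```python
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """Attention-based symmetric aggregation: score each point feature, normalize
    the scores with a softmax over points, and take the weighted sum."""
    def __init__(self, feat_dim=1024):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)

    def forward(self, per_point):                              # (batch, n_points, feat_dim)
        weights = torch.softmax(self.score(per_point), dim=1)  # (batch, n_points, 1)
        return (weights * per_point).sum(dim=1)                # (batch, feat_dim)

pool = AttentionPool(feat_dim=1024)
global_feat = pool(torch.randn(4, 1024, 1024))
```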
Using input transformation gives a 0.8%percent0.80.8\\% performance boost. The regularization loss is necessary for the higher dimension transform to work. By combining both transformations and the regularization term, we achieve the best performance. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_61", "text": " We show our PointNet, while simple and effective, is robust to various kinds of input corruptions. We use the same architecture as in Fig 5’s max pooling network. Input points are normalized into a unit sphere. Results are in Fig 6. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_62", "text": " As to missing points, when there are 50%percent5050\\% points missing, the accuracy only drops by 2.4%percent2.42.4\\% and 3.8%percent3.83.8\\% w.r.t. furthest and random input sampling. Our net is also robust to outlier points, if it has seen those during training. We evaluate two models: one trained on points with (x,y,z)𝑥𝑦𝑧(x,y,z) coordinates; the other on (x,y,z)𝑥𝑦𝑧(x,y,z) plus point density. The net has more than 80%percent8080\\% accuracy even when 20%percent2020\\% of the points are outliers. Fig 6 right shows the net is robust to point perturbations. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_63", "text": " In Fig 7, we visualize critical point sets 𝒞Ssubscript𝒞𝑆\\mathcal{C}_{S} and upper-bound shapes 𝒩Ssubscript𝒩𝑆\\mathcal{N}_{S} (as discussed in Thm 2) for some sample shapes S𝑆S. The point sets between the two shapes will give exactly the same global shape feature f​(S)𝑓𝑆f(S). ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_64", "text": " We can see clearly from Fig 7 that the critical point sets 𝒞Ssubscript𝒞𝑆\\mathcal{C}_{S}, those contributed to the max pooled feature, summarizes the skeleton of the shape. The upper-bound shapes 𝒩Ssubscript𝒩𝑆\\mathcal{N}_{S} illustrates the largest possible point cloud that give the same global shape feature f​(S)𝑓𝑆f(S) as the input point cloud S𝑆S. 𝒞Ssubscript𝒞𝑆\\mathcal{C}_{S} and 𝒩Ssubscript𝒩𝑆\\mathcal{N}_{S} reflect the robustness of PointNet, meaning that losing some non-critical points does not change the global shape signature f​(S)𝑓𝑆f(S) at all. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_65", "text": " The 𝒩Ssubscript𝒩𝑆\\mathcal{N}_{S} is constructed by forwarding all the points in a edge-length-2 cube through the network and select points p𝑝p whose point function values (h1​(p),h2​(p),⋯,hK​(p))subscriptℎ1𝑝subscriptℎ2𝑝⋯subscriptℎ𝐾𝑝(h_{1}(p),h_{2}(p),\\cdots,h_{K}(p)) are no larger than the global shape descriptor. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_66", "text": " Table 6 summarizes space (number of parameters in the network) and time (floating-point operations/sample) complexity of our classification PointNet. We also compare PointNet to a representative set of volumetric and multi-view based architectures in previous works. 
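Passages 1612.00593_all_63 to _65 describe critical point sets and upper-bound shapes purely in terms of the max-pooled global feature. A minimal NumPy sketch of that construction, assuming the per-point features have already been computed, is:

```python
import numpy as np

def critical_and_upper_bound(point_feats, candidate_feats):
    """Illustration of critical point sets C_S and upper-bound shapes N_S.

    point_feats     : (N, K) per-point features h(p) for the input shape S
    candidate_feats : (M, K) features of densely sampled candidate points,
                      e.g. a grid in the edge-length-2 cube
    """
    global_feat = point_feats.max(axis=0)                 # f(S), the max-pooled descriptor
    critical_idx = np.unique(point_feats.argmax(axis=0))  # points hitting the max in some dimension
    # A candidate belongs to the upper-bound shape if adding it cannot change f(S),
    # i.e. none of its feature values exceeds the global descriptor.
    in_upper_bound = np.all(candidate_feats <= global_feat[None, :], axis=1)
    return critical_idx, in_upper_bound
```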
", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_67", "text": " While MVCNN  and Subvolume (3D CNN)   achieve high performance, PointNet is orders more efficient in computational cost (measured in FLOPs/sample: 141x and 8x more efficient, respectively). Besides, PointNet is much more space efficient than MVCNN in terms of #param in the network (17x less parameters). Moreover, PointNet is much more scalable – it’s space and time complexity is O​(N)𝑂𝑁O(N) – linear in the number of input points. However, since convolution dominates computing time, multi-view method’s time complexity grows squarely on image resolution and volumetric convolution based method grows cubically with the volume size. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_68", "text": " Empirically, PointNet is able to process more than one million points per second for point cloud classification (around 1K objects/second) or semantic segmentation (around 2 rooms/second) with a 1080X GPU on TensorFlow, showing great potential for real-time applications. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_69", "text": " In this work, we propose a novel deep neural network PointNet that directly consumes point cloud. Our network provides a unified approach to a number of 3D recognition tasks including object classification, part segmentation and semantic segmentation, while obtaining on par or better results than state of the arts on standard benchmarks. We also provide theoretical analysis and visualizations towards understanding of our network. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" }, { "id": "1612.00593_all_70", "text": " The authors gratefully acknowledge the support of a Samsung GRO grant, ONR MURI N00014-13-1-0341 grant, NSF grant IIS-1528025, a Google Focused Research Award, a gift from the Adobe corporation and hardware donations by NVIDIA. ", "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation" } ]
What makes the performance of one-stage detectors inferior to that of two-stage detectors?
Two-stage detectors can classify boxes at any position, scale, and aspect ratio using a region pooling operation [4], whereas one-stage detectors are restricted to a fixed sampling grid [48]. Moreover, two-stage detectors can be made fast simply by reducing the input image resolution and the number of proposals, while one-stage methods have trailed in accuracy even with a larger compute budget [9].
[ 4, 48, 9 ]
[ { "id": "1708.02002_all_0", "text": " Current state-of-the-art object detectors are based on a two-stage, proposal-driven mechanism. As popularized in the R-CNN framework , the first stage generates a sparse set of candidate object locations and the second stage classifies each candidate location as one of the foreground classes or as background using a convolutional neural network. Through a sequence of advances (10, 28, 20, 14), this two-stage framework consistently achieves top accuracy on the challenging COCO benchmark . ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_1", "text": " Despite the success of two-stage detectors, a natural question to ask is: could a simple one-stage detector achieve similar accuracy? One stage detectors are applied over a regular, dense sampling of object locations, scales, and aspect ratios. Recent work on one-stage detectors, such as YOLO (26, 27) and SSD (22, 9), demonstrates promising results, yielding faster detectors with accuracy within 10-40% relative to state-of-the-art two-stage methods. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_2", "text": " This paper pushes the envelop further: we present a one-stage object detector that, for the first time, matches the state-of-the-art COCO AP of more complex two-stage detectors, such as the Feature Pyramid Network (FPN) or Mask R-CNN variants of Faster R-CNN . To achieve this result, we identify class imbalance during training as the main obstacle impeding one-stage detector from achieving state-of-the-art accuracy and propose a new loss function that eliminates this barrier. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_3", "text": " Class imbalance is addressed in R-CNN-like detectors by a two-stage cascade and sampling heuristics. The proposal stage (e.g., Selective Search , EdgeBoxes , DeepMask (24, 25), RPN ) rapidly narrows down the number of candidate object locations to a small number (e.g., 1-2k), filtering out most background samples. In the second classification stage, sampling heuristics, such as a fixed foreground-to-background ratio (1:3), or online hard example mining (OHEM) , are performed to maintain a manageable balance between foreground and background. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_4", "text": " In contrast, a one-stage detector must process a much larger set of candidate object locations regularly sampled across an image. In practice this often amounts to enumerating ∼similar-to\\scriptstyle\\sim100k locations that densely cover spatial positions, scales, and aspect ratios. While similar sampling heuristics may also be applied, they are inefficient as the training procedure is still dominated by easily classified background examples. This inefficiency is a classic problem in object detection that is typically addressed via techniques such as bootstrapping (33, 29) or hard example mining (37, 8, 31). ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_5", "text": " In this paper, we propose a new loss function that acts as a more effective alternative to previous approaches for dealing with class imbalance. The loss function is a dynamically scaled cross entropy loss, where the scaling factor decays to zero as confidence in the correct class increases, see Figure 1. 
Intuitively, this scaling factor can automatically down-weight the contribution of easy examples during training and rapidly focus the model on hard examples. Experiments show that our proposed Focal Loss enables us to train a high-accuracy, one-stage detector that significantly outperforms the alternatives of training with the sampling heuristics or hard example mining, the previous state-of-the-art techniques for training one-stage detectors. Finally, we note that the exact form of the focal loss is not crucial, and we show other instantiations can achieve similar results. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_6", "text": " To demonstrate the effectiveness of the proposed focal loss, we design a simple one-stage object detector called RetinaNet, named for its dense sampling of object locations in an input image. Its design features an efficient in-network feature pyramid and use of anchor boxes. It draws on a variety of recent ideas from (22, 6, 28, 20). RetinaNet is efficient and accurate; our best model, based on a ResNet-101-FPN backbone, achieves a COCO test-dev AP of 39.1 while running at 5 fps, surpassing the previously best published single-model results from both one and two-stage detectors, see Figure 2. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_7", "text": " The sliding-window paradigm, in which a classifier is applied on a dense image grid, has a long and rich history. One of the earliest successes is the classic work of LeCun et al. who applied convolutional neural networks to handwritten digit recognition (19, 36). Viola and Jones used boosted object detectors for face detection, leading to widespread adoption of such models. The introduction of HOG and integral channel features gave rise to effective methods for pedestrian detection. DPMs helped extend dense detectors to more general object categories and had top results on PASCAL for many years. While the sliding-window approach was the leading detection paradigm in classic computer vision, with the resurgence of deep learning , two-stage detectors, described next, quickly came to dominate object detection. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_8", "text": " The dominant paradigm in modern object detection is based on a two-stage approach. As pioneered in the Selective Search work , the first stage generates a sparse set of candidate proposals that should contain all objects while filtering out the majority of negative locations, and the second stage classifies the proposals into foreground classes / background. R-CNN upgraded the second-stage classifier to a convolutional network yielding large gains in accuracy and ushering in the modern era of object detection. R-CNN was improved over the years, both in terms of speed (15, 10) and by using learned object proposals (6, 24, 28). Region Proposal Networks (RPN) integrated proposal generation with the second-stage classifier into a single convolution network, forming the Faster R-CNN framework . Numerous extensions to this framework have been proposed, e.g. (20, 31, 32, 16, 14). ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_9", "text": " OverFeat was one of the first modern one-stage object detector based on deep networks. More recently SSD (22, 9) and YOLO (26, 27) have renewed interest in one-stage methods. These detectors have been tuned for speed but their accuracy trails that of two-stage methods. 
SSD has a 10-20% lower AP, while YOLO focuses on an even more extreme speed/accuracy trade-off. See Figure 2. Recent work showed that two-stage detectors can be made fast simply by reducing input image resolution and the number of proposals, but one-stage methods trailed in accuracy even with a larger compute budget . In contrast, the aim of this work is to understand if one-stage detectors can match or surpass the accuracy of two-stage detectors while running at similar or faster speeds. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_10", "text": " The design of our RetinaNet detector shares many similarities with previous dense detectors, in particular the concept of ‘anchors’ introduced by RPN and use of features pyramids as in SSD and FPN . We emphasize that our simple detector achieves top results not based on innovations in network design but due to our novel loss. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_11", "text": " Both classic one-stage object detection methods, like boosted detectors (37, 5) and DPMs , and more recent methods, like SSD , face a large class imbalance during training. These detectors evaluate 104superscript10410^{4}-105superscript10510^{5} candidate locations per image but only a few locations contain objects. This imbalance causes two problems: (1) training is inefficient as most locations are easy negatives that contribute no useful learning signal; (2) en masse, the easy negatives can overwhelm training and lead to degenerate models. A common solution is to perform some form of hard negative mining (33, 37, 8, 31, 22) that samples hard examples during training or more complex sampling/reweighing schemes . In contrast, we show that our proposed focal loss naturally handles the class imbalance faced by a one-stage detector and allows us to efficiently train on all examples without sampling and without easy negatives overwhelming the loss and computed gradients. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_12", "text": " There has been much interest in designing robust loss functions (e.g., Huber loss ) that reduce the contribution of outliers by down-weighting the loss of examples with large errors (hard examples). In contrast, rather than addressing outliers, our focal loss is designed to address class imbalance by down-weighting inliers (easy examples) such that their contribution to the total loss is small even if their number is large. In other words, the focal loss performs the opposite role of a robust loss: it focuses training on a sparse set of hard examples. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_13", "text": " The Focal Loss is designed to address the one-stage object detection scenario in which there is an extreme imbalance between foreground and background classes during training (e.g., 1:1000). We introduce the focal loss starting from the cross entropy (CE) loss for binary classification111Extending the focal loss to the multi-class case is straightforward and works well; for simplicity we focus on the binary loss in this work.: CE​(p,y)={−log⁡(p)if y=1−log⁡(1−p)otherwise.CE𝑝𝑦cases𝑝if y=11𝑝otherwise.\\textrm{CE}(p,y)=\\begin{cases}-\\log(p)&\\text{if $y=1$}\\\\ -\\log(1-p)&\\text{otherwise.}\\end{cases} (1) In the above y∈{±1}𝑦plus-or-minus1y\\in\\{\\pm 1\\} specifies the ground-truth class and p∈(0,1)𝑝01p\\in(0,1) is the model’s estimated probability for the class with label y=1𝑦1y=1. 
For notational convenience, we define $p_{\\textrm{t}}$: $p_{\\textrm{t}}=\\begin{cases}p&\\text{if }y=1\\\\ 1-p&\\text{otherwise,}\\end{cases}$ (2) and rewrite $\\textrm{CE}(p,y)=\\textrm{CE}(p_{\\textrm{t}})=-\\log(p_{\\textrm{t}})$. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_14", "text": " The CE loss can be seen as the blue (top) curve in Figure 1. One notable property of this loss, which can be easily seen in its plot, is that even examples that are easily classified ($p_{\\textrm{t}}\\gg.5$) incur a loss with non-trivial magnitude. When summed over a large number of easy examples, these small loss values can overwhelm the rare class. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_15", "text": " A common method for addressing class imbalance is to introduce a weighting factor $\\alpha\\in(0,1)$ for class $1$ and $1-\\alpha$ for class $-1$. In practice $\\alpha$ may be set by inverse class frequency or treated as a hyperparameter to set by cross validation. For notational convenience, we define $\\alpha_{\\textrm{t}}$ analogously to how we defined $p_{\\textrm{t}}$. We write the $\\alpha$-balanced CE loss as: $\\textrm{CE}(p_{\\textrm{t}})=-\\alpha_{\\textrm{t}}\\log(p_{\\textrm{t}})$. (3) This loss is a simple extension to CE that we consider as an experimental baseline for our proposed focal loss. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_16", "text": " As our experiments will show, the large class imbalance encountered during training of dense detectors overwhelms the cross entropy loss. Easily classified negatives comprise the majority of the loss and dominate the gradient. While $\\alpha$ balances the importance of positive/negative examples, it does not differentiate between easy/hard examples. Instead, we propose to reshape the loss function to down-weight easy examples and thus focus training on hard negatives. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_17", "text": " More formally, we propose to add a modulating factor $(1-p_{\\textrm{t}})^{\\gamma}$ to the cross entropy loss, with tunable focusing parameter $\\gamma\\geq 0$. We define the focal loss as: $\\textrm{FL}(p_{\\textrm{t}})=-(1-p_{\\textrm{t}})^{\\gamma}\\log(p_{\\textrm{t}})$. (4) ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_18", "text": " The focal loss is visualized for several values of $\\gamma\\in(0,5)$ in Figure 1. We note two properties of the focal loss. (1) When an example is misclassified and $p_{\\textrm{t}}$ is small, the modulating factor is near $1$ and the loss is unaffected. As $p_{\\textrm{t}}\\rightarrow 1$, the factor goes to 0 and the loss for well-classified examples is down-weighted. (2) The focusing parameter $\\gamma$ smoothly adjusts the rate at which easy examples are down-weighted. When $\\gamma=0$, FL is equivalent to CE, and as $\\gamma$ is increased the effect of the modulating factor is likewise increased (we found $\\gamma=2$ to work best in our experiments). 
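A minimal NumPy sketch of Eq. (2) and Eq. (4) as stated above; this is an illustration of the formulas only, not the reference implementation.

```python
import numpy as np

def focal_loss(p, y, gamma=2.0):
    """Focal loss of Eq. (4) for binary labels y in {+1, -1}.

    p : model's estimated probability for the class y = 1
    The modulating factor (1 - p_t)^gamma down-weights well-classified
    examples; gamma = 0 recovers the standard cross entropy loss.
    """
    p_t = np.where(y == 1, p, 1.0 - p)            # Eq. (2)
    return -((1.0 - p_t) ** gamma) * np.log(p_t)  # Eq. (4)

# For p_t = 0.9 and gamma = 2, the loss is (1 - 0.9)^2 = 0.01 times the CE
# value, i.e. the 100x reduction quoted later in the paper.
```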
", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_19", "text": " Intuitively, the modulating factor reduces the loss contribution from easy examples and extends the range in which an example receives low loss. For instance, with γ=2𝛾2\\gamma=2, an example classified with pt=0.9subscript𝑝t0.9p_{\\textrm{t}}=0.9 would have 100×100\\times lower loss compared with CE and with pt≈0.968subscript𝑝t0.968p_{\\textrm{t}}\\approx 0.968 it would have 1000×1000\\times lower loss. This in turn increases the importance of correcting misclassified examples (whose loss is scaled down by at most 4×4\\times for pt≤.5subscript𝑝t.5p_{\\textrm{t}}\\leq.5 and γ=2𝛾2\\gamma=2). ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_20", "text": " In practice we use an α𝛼\\alpha-balanced variant of the focal loss: FL​(pt)=−αt​(1−pt)γ​log⁡(pt).FLsubscript𝑝tsubscript𝛼tsuperscript1subscript𝑝t𝛾subscript𝑝t\\textrm{FL}(p_{\\textrm{t}})=-\\alpha_{\\textrm{t}}(1-p_{\\textrm{t}})^{\\gamma}\\log(p_{\\textrm{t}}). (5) We adopt this form in our experiments as it yields slightly improved accuracy over the non-α𝛼\\alpha-balanced form. Finally, we note that the implementation of the loss layer combines the sigmoid operation for computing p𝑝p with the loss computation, resulting in greater numerical stability. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_21", "text": " While in our main experimental results we use the focal loss definition above, its precise form is not crucial. In the appendix we consider other instantiations of the focal loss and demonstrate that these can be equally effective. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_22", "text": " Binary classification models are by default initialized to have equal probability of outputting either y=−1𝑦1y=-1 or 111. Under such an initialization, in the presence of class imbalance, the loss due to the frequent class can dominate total loss and cause instability in early training. To counter this, we introduce the concept of a ‘prior’ for the value of p𝑝p estimated by the model for the rare class (foreground) at the start of training. We denote the prior by π𝜋\\pi and set it so that the model’s estimated p𝑝p for examples of the rare class is low, e.g. 0.010.010.01. We note that this is a change in model initialization (see §4.1) and not of the loss function. We found this to improve training stability for both the cross entropy and focal loss in the case of heavy class imbalance. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_23", "text": " Two-stage detectors are often trained with the cross entropy loss without use of α𝛼\\alpha-balancing or our proposed loss. Instead, they address class imbalance through two mechanisms: (1) a two-stage cascade and (2) biased minibatch sampling. The first cascade stage is an object proposal mechanism (35, 24, 28) that reduces the nearly infinite set of possible object locations down to one or two thousand. Importantly, the selected proposals are not random, but are likely to correspond to true object locations, which removes the vast majority of easy negatives. When training the second stage, biased sampling is typically used to construct minibatches that contain, for instance, a 1:3 ratio of positive to negative examples. This ratio is like an implicit α𝛼\\alpha-balancing factor that is implemented via sampling. 
Our proposed focal loss is designed to address these mechanisms in a one-stage detection system directly via the loss function. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_24", "text": " RetinaNet is a single, unified network composed of a backbone network and two task-specific subnetworks. The backbone is responsible for computing a convolutional feature map over an entire input image and is an off-the-self convolutional network. The first subnet performs convolutional object classification on the backbone’s output; the second subnet performs convolutional bounding box regression. The two subnetworks feature a simple design that we propose specifically for one-stage, dense detection, see Figure 3. While there are many possible choices for the details of these components, most design parameters are not particularly sensitive to exact values as shown in the experiments. We describe each component of RetinaNet next. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_25", "text": " We adopt the Feature Pyramid Network (FPN) from as the backbone network for RetinaNet. In brief, FPN augments a standard convolutional network with a top-down pathway and lateral connections so the network efficiently constructs a rich, multi-scale feature pyramid from a single resolution input image, see Figure 3(a)-(b). Each level of the pyramid can be used for detecting objects at a different scale. FPN improves multi-scale predictions from fully convolutional networks (FCN) , as shown by its gains for RPN and DeepMask-style proposals , as well at two-stage detectors such as Fast R-CNN or Mask R-CNN . ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_26", "text": " Following , we build FPN on top of the ResNet architecture . We construct a pyramid with levels P3subscript𝑃3P_{3} through P7subscript𝑃7P_{7}, where l𝑙l indicates pyramid level (Plsubscript𝑃𝑙P_{l} has resolution 2lsuperscript2𝑙2^{l} lower than the input). As in all pyramid levels have C=256𝐶256C=256 channels. Details of the pyramid generally follow with a few modest differences.222RetinaNet uses feature pyramid levels P3subscript𝑃3P_{3} to P7subscript𝑃7P_{7}, where P3subscript𝑃3P_{3} to P5subscript𝑃5P_{5} are computed from the output of the corresponding ResNet residual stage (C3subscript𝐶3C_{3} through C5subscript𝐶5C_{5}) using top-down and lateral connections just as in , P6subscript𝑃6P_{6} is obtained via a 3×\\times3 stride-2 conv on C5subscript𝐶5C_{5}, and P7subscript𝑃7P_{7} is computed by applying ReLU followed by a 3×\\times3 stride-2 conv on P6subscript𝑃6P_{6}. This differs slightly from : (1) we don’t use the high-resolution pyramid level P2subscript𝑃2P_{2} for computational reasons, (2) P6subscript𝑃6P_{6} is computed by strided convolution instead of downsampling, and (3) we include P7subscript𝑃7P_{7} to improve large object detection. These minor modifications improve speed while maintaining accuracy. While many design choices are not crucial, we emphasize the use of the FPN backbone is; preliminary experiments using features from only the final ResNet layer yielded low AP. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_27", "text": " We use translation-invariant anchor boxes similar to those in the RPN variant in . The anchors have areas of 322superscript32232^{2} to 5122superscript5122512^{2} on pyramid levels P3subscript𝑃3P_{3} to P7subscript𝑃7P_{7}, respectively. 
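The footnote in passage 1708.02002_all_26 specifies how P6 and P7 are obtained from C5. Below is a hedged PyTorch sketch of just those two extra levels; module and argument names are assumptions, and P3 to P5 (top-down plus lateral FPN connections) are assumed to be built elsewhere.

```python
import torch.nn as nn
import torch.nn.functional as F

class ExtraPyramidLevels(nn.Module):
    """P6 is a 3x3 stride-2 conv on C5; P7 is ReLU followed by a 3x3 stride-2
    conv on P6. Channel width C = 256 as stated in the paper."""
    def __init__(self, c5_channels, channels=256):
        super().__init__()
        self.p6 = nn.Conv2d(c5_channels, channels, kernel_size=3, stride=2, padding=1)
        self.p7 = nn.Conv2d(channels, channels, kernel_size=3, stride=2, padding=1)

    def forward(self, c5):
        p6 = self.p6(c5)
        p7 = self.p7(F.relu(p6))
        return p6, p7
```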
As in , at each pyramid level we use anchors at three aspect ratios {1\\{1:2,22, 111:111, 222:1}1\\}. For denser scale coverage than in , at each level we add anchors of sizes {20superscript202^{0}, 21/3superscript2132^{1/3}, 22/3superscript2232^{2/3}} of the original set of 3 aspect ratio anchors. This improve AP in our setting. In total there are A=9𝐴9A=9 anchors per level and across levels they cover the scale range 32 - 813 pixels with respect to the network’s input image. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_28", "text": " Each anchor is assigned a length K𝐾K one-hot vector of classification targets, where K𝐾K is the number of object classes, and a 4-vector of box regression targets. We use the assignment rule from RPN but modified for multi-class detection and with adjusted thresholds. Specifically, anchors are assigned to ground-truth object boxes using an intersection-over-union (IoU) threshold of 0.5; and to background if their IoU is in (0, 0.4). As each anchor is assigned to at most one object box, we set the corresponding entry in its length K𝐾K label vector to 111 and all other entries to 00. If an anchor is unassigned, which may happen with overlap in (0.4, 0.5), it is ignored during training. Box regression targets are computed as the offset between each anchor and its assigned object box, or omitted if there is no assignment. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_29", "text": " The classification subnet predicts the probability of object presence at each spatial position for each of the A𝐴A anchors and K𝐾K object classes. This subnet is a small FCN attached to each FPN level; parameters of this subnet are shared across all pyramid levels. Its design is simple. Taking an input feature map with C𝐶C channels from a given pyramid level, the subnet applies four 3×\\times3 conv layers, each with C𝐶C filters and each followed by ReLU activations, followed by a 3×\\times3 conv layer with K​A𝐾𝐴KA filters. Finally sigmoid activations are attached to output the K​A𝐾𝐴KA binary predictions per spatial location, see Figure 3 (c). We use C=256𝐶256C=256 and A=9𝐴9A=9 in most experiments. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_30", "text": " In contrast to RPN , our object classification subnet is deeper, uses only 3×\\times3 convs, and does not share parameters with the box regression subnet (described next). We found these higher-level design decisions to be more important than specific values of hyperparameters. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_31", "text": " In parallel with the object classification subnet, we attach another small FCN to each pyramid level for the purpose of regressing the offset from each anchor box to a nearby ground-truth object, if one exists. The design of the box regression subnet is identical to the classification subnet except that it terminates in 4​A4𝐴4A linear outputs per spatial location, see Figure 3 (d). For each of the A𝐴A anchors per spatial location, these 444 outputs predict the relative offset between the anchor and the ground-truth box (we use the standard box parameterization from R-CNN ). We note that unlike most recent work, we use a class-agnostic bounding box regressor which uses fewer parameters and we found to be equally effective. The object classification subnet and the box regression subnet, though sharing a common structure, use separate parameters. 
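The classification and box regression subnets described in passages 1708.02002_all_29 and _31 share the same simple template. The sketch below follows the stated hyperparameters (C = 256, A = 9, four 3x3 convs with ReLU, then a final 3x3 conv); the helper name and the K = 80 COCO class count are assumptions.

```python
import torch.nn as nn

def head(in_channels, out_channels, mid_channels=256, num_convs=4):
    """Shared template for the two RetinaNet heads. Weight initialization and
    the sigmoid on the classification outputs are applied elsewhere."""
    layers, c = [], in_channels
    for _ in range(num_convs):
        layers += [nn.Conv2d(c, mid_channels, 3, padding=1), nn.ReLU(inplace=True)]
        c = mid_channels
    layers.append(nn.Conv2d(mid_channels, out_channels, 3, padding=1))
    return nn.Sequential(*layers)

K, A, C = 80, 9, 256                 # classes, anchors per location, FPN channels
cls_subnet = head(C, K * A)          # K*A binary predictions per spatial location
box_subnet = head(C, 4 * A)          # 4 class-agnostic box offsets per anchor
```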
", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_32", "text": " RetinaNet forms a single FCN comprised of a ResNet-FPN backbone, a classification subnet, and a box regression subnet, see Figure 3. As such, inference involves simply forwarding an image through the network. To improve speed, we only decode box predictions from at most 1k top-scoring predictions per FPN level, after thresholding detector confidence at 0.05. The top predictions from all levels are merged and non-maximum suppression with a threshold of 0.5 is applied to yield the final detections. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_33", "text": " We use the focal loss introduced in this work as the loss on the output of the classification subnet. As we will show in §5, we find that γ=2𝛾2\\gamma=2 works well in practice and the RetinaNet is relatively robust to γ∈(0.5,5)𝛾0.55\\gamma\\in(0.5,5). We emphasize that when training RetinaNet, the focal loss is applied to all ∼similar-to\\scriptstyle\\sim100k anchors in each sampled image. This stands in contrast to common practice of using heuristic sampling (RPN) or hard example mining (OHEM, SSD) to select a small set of anchors (e.g., 256) for each minibatch. The total focal loss of an image is computed as the sum of the focal loss over all ∼similar-to\\scriptstyle\\sim100k anchors, normalized by the number of anchors assigned to a ground-truth box. We perform the normalization by the number of assigned anchors, not total anchors, since the vast majority of anchors are easy negatives and receive negligible loss values under the focal loss. Finally we note that α𝛼\\alpha, the weight assigned to the rare class, also has a stable range, but it interacts with γ𝛾\\gamma making it necessary to select the two together (see Tables 1a and 1b). In general α𝛼\\alpha should be decreased slightly as γ𝛾\\gamma is increased (for γ=2𝛾2\\gamma=2, α=0.25𝛼0.25\\alpha=0.25 works best). ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_34", "text": " We experiment with ResNet-50-FPN and ResNet-101-FPN backbones . The base ResNet-50 and ResNet-101 models are pre-trained on ImageNet1k; we use the models released by . New layers added for FPN are initialized as in . All new conv layers except the final one in the RetinaNet subnets are initialized with bias b=0𝑏0b=0 and a Gaussian weight fill with σ=0.01𝜎0.01\\sigma=0.01. For the final conv layer of the classification subnet, we set the bias initialization to b=−log⁡((1−π)/π)𝑏1𝜋𝜋b=-\\log((1-\\pi)/\\pi), where π𝜋\\pi specifies that at the start of training every anchor should be labeled as foreground with confidence of ∼similar-to\\scriptstyle\\simπ𝜋\\pi. We use π=.01𝜋.01\\pi=.01 in all experiments, although results are robust to the exact value. As explained in §3.3, this initialization prevents the large number of background anchors from generating a large, destabilizing loss value in the first iteration of training. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_35", "text": " RetinaNet is trained with stochastic gradient descent (SGD). We use synchronized SGD over 8 GPUs with a total of 16 images per minibatch (2 images per GPU). Unless otherwise specified, all models are trained for 90k iterations with an initial learning rate of 0.01, which is then divided by 10 at 60k and again at 80k iterations. We use horizontal image flipping as the only form of data augmentation unless otherwise noted. 
Weight decay of 0.0001 and momentum of 0.9 are used. The training loss is the sum the focal loss and the standard smooth L1subscript𝐿1L_{1} loss used for box regression . Training time ranges between 10 and 35 hours for the models in Table 1e. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_36", "text": " We present experimental results on the bounding box detection track of the challenging COCO benchmark . For training, we follow common practice (1, 20) and use the COCO trainval35k split (union of 80k images from train and a random 35k subset of images from the 40k image val split). We report lesion and sensitivity studies by evaluating on the minival split (the remaining 5k images from val). For our main results, we report COCO AP on the test-dev split, which has no public labels and requires use of the evaluation server. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_37", "text": " We run numerous experiments to analyze the behavior of the loss function for dense detection along with various optimization strategies. For all experiments we use depth 50 or 101 ResNets with a Feature Pyramid Network (FPN)  constructed on top. For all ablation studies we use an image scale of 600 pixels for training and testing. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_38", "text": " Our first attempt to train RetinaNet uses standard cross entropy (CE) loss without any modifications to the initialization or learning strategy. This fails quickly, with the network diverging during training. However, simply initializing the last layer of our model such that the prior probability of detecting an object is π=.01𝜋.01\\pi=.01 (see §4.1) enables effective learning. Training RetinaNet with ResNet-50 and this initialization already yields a respectable AP of 30.2 on COCO. Results are insensitive to the exact value of π𝜋\\pi so we use π=.01𝜋.01\\pi=.01 for all experiments. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_39", "text": " Our next attempt to improve learning involved using the α𝛼\\alpha-balanced CE loss described in §3.1. Results for various α𝛼\\alpha are shown in Table 1a. Setting α=.75𝛼.75\\alpha=.75 gives a gain of 0.9 points AP. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_40", "text": " Results using our proposed focal loss are shown in Table 1b. The focal loss introduces one new hyperparameter, the focusing parameter γ𝛾\\gamma, that controls the strength of the modulating term. When γ=0𝛾0\\gamma=0, our loss is equivalent to the CE loss. As γ𝛾\\gamma increases, the shape of the loss changes so that “easy” examples with low loss get further discounted, see Figure 1. FL shows large gains over CE as γ𝛾\\gamma is increased. With γ=2𝛾2\\gamma=2, FL yields a 2.9 AP improvement over the α𝛼\\alpha-balanced CE loss. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_41", "text": " For the experiments in Table 1b, for a fair comparison we find the best α𝛼\\alpha for each γ𝛾\\gamma. We observe that lower α𝛼\\alpha’s are selected for higher γ𝛾\\gamma’s (as easy negatives are down-weighted, less emphasis needs to be placed on the positives). Overall, however, the benefit of changing γ𝛾\\gamma is much larger, and indeed the best α𝛼\\alpha’s ranged in just (.25,.75) (we tested α∈(.01,.999)𝛼.01.999\\alpha\\in(.01,.999)). 
We use γ=2.0𝛾2.0\\gamma=2.0 with α=.25𝛼.25\\alpha=.25 for all experiments but α=.5𝛼.5\\alpha=.5 works nearly as well (.4 AP lower). ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_42", "text": " To understand the focal loss better, we analyze the empirical distribution of the loss of a converged model. For this, we take take our default ResNet-101 600-pixel model trained with γ=2𝛾2\\gamma=2 (which has 36.0 AP). We apply this model to a large number of random images and sample the predicted probability for ∼similar-to\\scriptstyle\\sim107superscript10710^{7} negative windows and ∼similar-to\\scriptstyle\\sim105superscript10510^{5} positive windows. Next, separately for positives and negatives, we compute FL for these samples, and normalize the loss such that it sums to one. Given the normalized loss, we can sort the loss from lowest to highest and plot its cumulative distribution function (CDF) for both positive and negative samples and for different settings for γ𝛾\\gamma (even though model was trained with γ=2𝛾2\\gamma=2). ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_43", "text": " Cumulative distribution functions for positive and negative samples are shown in Figure 4. If we observe the positive samples, we see that the CDF looks fairly similar for different values of γ𝛾\\gamma. For example, approximately 20% of the hardest positive samples account for roughly half of the positive loss, as γ𝛾\\gamma increases more of the loss gets concentrated in the top 20% of examples, but the effect is minor. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_44", "text": " The effect of γ𝛾\\gamma on negative samples is dramatically different. For γ=0𝛾0\\gamma=0, the positive and negative CDFs are quite similar. However, as γ𝛾\\gamma increases, substantially more weight becomes concentrated on the hard negative examples. In fact, with γ=2𝛾2\\gamma=2 (our default setting), the vast majority of the loss comes from a small fraction of samples. As can be seen, FL can effectively discount the effect of easy negatives, focusing all attention on the hard negative examples. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_45", "text": " proposed to improve training of two-stage detectors by constructing minibatches using high-loss examples. Specifically, in OHEM each example is scored by its loss, non-maximum suppression (nms) is then applied, and a minibatch is constructed with the highest-loss examples. The nms threshold and batch size are tunable parameters. Like the focal loss, OHEM puts more emphasis on misclassified examples, but unlike FL, OHEM completely discards easy examples. We also implement a variant of OHEM used in SSD : after applying nms to all examples, the minibatch is constructed to enforce a 1:3 ratio between positives and negatives to help ensure each minibatch has enough positives. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_46", "text": " We test both OHEM variants in our setting of one-stage detection which has large class imbalance. Results for the original OHEM strategy and the ‘OHEM 1:3’ strategy for selected batch sizes and nms thresholds are shown in Table 1d. These results use ResNet-101, our baseline trained with FL achieves 36.0 AP for this setting. In contrast, the best setting for OHEM (no 1:3 ratio, batch size 128, nms of .5) achieves 32.8 AP. 
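The loss-CDF analysis in passages 1708.02002_all_42 to _44 (compute FL for the sampled windows, normalize the losses to sum to one, sort from lowest to highest, accumulate) can be reproduced schematically as follows; the clipping constant is an assumption added only for numerical safety.

```python
import numpy as np

def loss_cdf(p_t, gamma=2.0):
    """Cumulative distribution of the normalized focal loss over a set of
    sampled windows, separately computed for positives and negatives."""
    fl = -((1.0 - p_t) ** gamma) * np.log(np.clip(p_t, 1e-12, 1.0))
    fl = np.sort(fl) / fl.sum()
    return np.cumsum(fl)   # fraction of total loss carried by the easiest samples
```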
This is a gap of 3.2 AP, showing FL is more effective than OHEM for training dense detectors. We note that we tried other parameter setting and variants for OHEM but did not achieve better results. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_47", "text": " Finally, in early experiments, we attempted to train with the hinge loss on ptsubscript𝑝tp_{\\textrm{t}}, which sets loss to 0 above a certain value of ptsubscript𝑝tp_{\\textrm{t}}. However, this was unstable and we did not manage to obtain meaningful results. Results exploring alternate loss functions are in the appendix. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_48", "text": " One of the most important design factors in a one-stage detection system is how densely it covers the space of possible image boxes. Two-stage detectors can classify boxes at any position, scale, and aspect ratio using a region pooling operation . In contrast, as one-stage detectors use a fixed sampling grid, a popular approach for achieving high coverage of boxes in these approaches is to use multiple ‘anchors’ at each spatial position to cover boxes of various scales and aspect ratios. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_49", "text": " We sweep over the number of scale and aspect ratio anchors used at each spatial position and each pyramid level in FPN. We consider cases from a single square anchor at each location to 12 anchors per location spanning 4 sub-octave scales (2k/4superscript2𝑘42^{k/4}, for k≤3𝑘3k\\leq 3) and 3 aspect ratios (0.5, 1, 2). Results using ResNet-50 are shown in Table 1c. A surprisingly good AP (30.3) is achieved using just one square anchor. However, the AP can be improved by nearly 4 points (to 34.0) when using 3 scales and 3 aspect ratios per location. We used this setting for all other experiments in this work. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_50", "text": " Finally, we note that increasing beyond 6-9 anchors did not shown further gains. Thus while two-stage systems can classify arbitrary boxes in an image, the saturation of performance w.r.t. density implies the higher potential density of two-stage systems may not offer an advantage. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_51", "text": " Larger backbone networks yield higher accuracy, but also slower inference speeds. Likewise for input image scale (defined by the shorter image side). We show the impact of these two factors in Table 1e. In Figure 2 we plot the speed/accuracy trade-off curve for RetinaNet and compare it to recent methods using public numbers on COCO test-dev. The plot reveals that RetinaNet, enabled by our focal loss, forms an upper envelope over all existing methods, discounting the low-accuracy regime. RetinaNet with ResNet-101-FPN and a 600 pixel image scale (which we denote by RetinaNet-101-600 for simplicity) matches the accuracy of the recently published ResNet-101-FPN Faster R-CNN , while running in 122 ms per image compared to 172 ms (both measured on an Nvidia M40 GPU). Using larger scales allows RetinaNet to surpass the accuracy of all two-stage approaches, while still being faster. For faster runtimes, there is only one operating point (500 pixel input) at which using ResNet-50-FPN improves over ResNet-101-FPN. Addressing the high frame rate regime will likely require special network design, as in , and is beyond the scope of this work. 
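The anchor sweep in passage 1708.02002_all_49 settles on 3 aspect ratios and 3 sub-octave scales per location, i.e. A = 9 anchors whose base areas range from 32^2 on P3 to 512^2 on P7. A sketch of generating those widths and heights is below; the ratio convention (height/width) is an assumption.

```python
import numpy as np

def anchor_shapes(base_size, ratios=(0.5, 1.0, 2.0),
                  scales=(2 ** 0, 2 ** (1 / 3), 2 ** (2 / 3))):
    """Widths and heights of the 9 anchors used at every spatial position of
    one pyramid level (base_size is 32 for P3 up to 512 for P7). Each anchor
    keeps the area (base_size * scale)^2 while varying the aspect ratio."""
    shapes = []
    for s in scales:
        area = (base_size * s) ** 2
        for r in ratios:               # r = height / width
            w = np.sqrt(area / r)
            shapes.append((w, r * w))  # (width, height)
    return np.array(shapes)            # shape (9, 2)
```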
We note that after publication, faster and more accurate results can now be obtained by a variant of Faster R-CNN from . ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_52", "text": " We evaluate RetinaNet on the challenging COCO dataset and compare test-dev results to recent state-of-the-art methods including both one-stage and two-stage models. Results are presented in Table 2 for our RetinaNet-101-800 model trained using scale jitter and for 1.5×\\times longer than the models in Table 1e (giving a 1.3 AP gain). Compared to existing one-stage methods, our approach achieves a healthy 5.9 point AP gap (39.1 vs. 33.2) with the closest competitor, DSSD , while also being faster, see Figure 2. Compared to recent two-stage methods, RetinaNet achieves a 2.3 point gap above the top-performing Faster R-CNN model based on Inception-ResNet-v2-TDM . Plugging in ResNeXt-32x8d-101-FPN as the RetinaNet backbone further improves results another 1.7 AP, surpassing 40 AP on COCO. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_53", "text": " In this work, we identify class imbalance as the primary obstacle preventing one-stage object detectors from surpassing top-performing, two-stage methods. To address this, we propose the focal loss which applies a modulating term to the cross entropy loss in order to focus learning on hard negative examples. Our approach is simple and highly effective. We demonstrate its efficacy by designing a fully convolutional one-stage detector and report extensive experimental analysis showing that it achieves state-of-the-art accuracy and speed. Source code is available at https://github.com/facebookresearch/Detectron . ", "title": "Focal Loss for Dense Object Detection" } ]
Is using 46 images for training and 15 images for testing enough for the model to learn the features well and generalize to new, unseen cases?
The results show that the model's performance drops on test data acquired by centers that did not contribute any data to the training set [60]. This suggests that a more diverse training dataset, or techniques that make the CNN more robust to such acquisition differences, would improve generalization [76].
[ 60, 76 ]
[ { "id": "1603.05959_all_0", "text": " Segmentation and the subsequent quantitative assessment of lesions in medical images provide valuable information for the analysis of neuropathologies and are important for planning of treatment strategies, monitoring of disease progression and prediction of patient outcome. For a better understanding of the pathophysiology of diseases, quantitative imaging can reveal clues about the disease characteristics and effects on particular anatomical structures. For example, the associations of different lesion types, their spatial distribution and extent with acute and chronic sequelae after traumatic brain injury (TBI) are still poorly understood (Maas et al. (2015)). However, there is growing evidence that quantification of lesion burden may add insight into the functional outcome of patients (Ding et al. (2008); Moen et al. (2012)). Additionally, exact locations of injuries relate to particular deficits depending on the brain structure that is affected (Lehtonen et al. (2005); Warner et al. (2010); Sharp et al. (2011)). This is in line with estimates that functional deficits caused by stroke are associated with the extent of damage to particular parts of the brain (Carey et al. (2013)). Lesion burden is commonly quantified by means of volume and number of lesions, biomarkers that have been shown to be related to cognitive deficits. For example, volume of white matter lesions (WML) correlates with cognitive decline and increased risk of dementia (Ikram et al. (2010)). In clinical research on multiple sclerosis (MS), lesion count and volume are used to analyse disease progression and effectiveness of pharmaceutical treatment (Rovira and León (2008); Kappos et al. (2007)). Finally, accurate delineation of the pathology is important in the case of brain tumors, where estimation of the relative volume of a tumor’s sub-components is required for planning radiotherapy and treatment follow-up (Wen et al. (2010)). ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_1", "text": " The quantitative analysis of lesions requires accurate lesion segmentation in multi-modal, three-dimensional images which is a challenging task for a number of reasons. The heterogeneous appearance of lesions including the large variability in location, size, shape and frequency make it difficult to devise effective segmentation rules. It is thus highly non-trivial to delineate contusions, edema and haemorrhages in TBI (Irimia et al. (2012)), or sub-components of brain tumors such as proliferating cells and necrotic core (Menze et al. (2015)). The arguably most accurate segmentation results can be obtained through manual delineation by a human expert which is tedious, expensive, time-consuming, impractical in larger studies, and introduces inter-observer variability. Additionally, for deciding whether a particular region is part of a lesion multiple image sequences with varying contrasts need to be considered, and the level of expert knowledge and experience are important factors that impact segmentation accuracy. Hence, in clinical routine often only qualitative, visual inspection, or at best crude measures like approximate lesion volume and number of lesions are used (Yuh et al. (2012); Wen et al. (2010)). 
In order to capture and better understand the complexity of brain pathologies it is important to conduct large studies with many subjects to gain the statistical power for drawing conclusions across a whole patient population. The development of accurate, automatic segmentation algorithms has therefore become a major research focus in medical image computing with the potential to offer objective, reproducible, and scalable approaches to quantitative assessment of brain lesions. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_2", "text": " Figure 1 illustrates some of the challenges that arise when devising a computational approach for the task of automatic lesion segmentation. The figure summarizes statistics and shows examples of brain lesions in the case of TBI, but is representative of other pathologies such as brain tumors and ischemic stroke. Lesions can occur at multiple sites, with varying shapes and sizes, and their image intensity profiles largely overlap with non-affected, healthy parts of the brain or lesions which are not in the focus of interest. For example, stroke and MS lesions have a similar hyper-intense appearance in FLAIR sequences as other WMLs (Mitra et al. (2014); Schmidt et al. (2012)). It is generally difficult to derive statistical prior information about lesion shape and appearance. On the other hand, in some applications there is an expectation on the spatial configuration of segmentation labels, for example there is a hierarchical layout of sub-components in brain tumors. Ideally, a computational approach is able to adjust itself to application specific characteristics by learning from a set of a few example images. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_3", "text": " A multitude of automatic lesion segmentation methods have been proposed over the last decade, and several main categories of approaches can be identified. One group of methods poses the lesion segmentation task as an abnormality detection problem, for example by employing image registration. The early work of Prastawa et al. (2004) and more recent ones by Schmidt et al. (2012) and Doyle et al. (2013) align the pathological scan to a healthy atlas and lesions are detected based on deviations in tissue appearance between the patient and the atlas image. Lesions, however, may cause large structural deformations that may lead to incorrect segmentation due to incorrect registration. Gooya et al. (2011); Parisot et al. (2012) alleviate this problem by jointly solving the segmentation and registration tasks. Liu et al. (2014) showed that registration together with a low-rank decomposition gives as a by-product the abnormal structures in the sparse components, although, this may not be precise enough for detection of small lesions. Abnormality detection has also been proposed within image synthesis works. Representative approaches are those of Weiss et al. (2013) using dictionary learning and Ye et al. (2013) using a patch-based approach. The idea is to synthesize pseudo-healthy images that when compared to the patient scan allow to highlight abnormal regions. In this context, Cardoso et al. (2015) present a generative model for image synthesis that yields a probabilistic segmentation of abnormalities. Another unsupervised technique is proposed by Erihov et al. 
(2015), a saliency-based method that exploits brain asymmetry in pathological cases. A common advantage of the above methods is that they do not require a training dataset with corresponding manual annotations. In general, these approaches are more suitable for detecting lesions rather than accurately segmenting them. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_4", "text": " Some of the most successful, supervised segmentation methods for brain lesions are based on voxel-wise classifiers, such as Random Forests. Representative work is that of Geremia et al. (2010) on MS lesions, employing intensity features to capture the appearance of the region around each voxel. Zikic et al. (2012) combine this with a generative Gaussian Mixture Model (GMM) to obtain tissue-specific probabilistic priors (Van Leemput et al. (1999)). This framework was adopted in multiple works, with representative pipelines for brain tumors by Tustison et al. (2013) and TBI by Rao et al. (2014). Both works incorporate morphological and contextual features to better capture the heterogeneity of lesions. Rao et al. (2014) also incorporate brain structure segmentation results obtained from a multi-atlas label propagation approach (Ledig et al. (2015)) to provide strong tissue-class priors to the Random Forests. Tustison et al. (2013) additionally use a Markov Random Field (MRF) to incorporate spatial regularization. MRFs are commonly used to encourage spatial continuity of the segmentation (Schmidt et al. (2012); Mitra et al. (2014)). Although those methods have been very successful, it appears that their modeling capabilities still have significant limitations. This is confirmed by the results of the most recent challenges 111links: http://braintumorsegmentation.org/, www.isles-challenge.org, and also by our own experience and experimentation with such approaches. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_5", "text": " At the same time, deep learning techniques have emerged as a powerful alternative for supervised learning with great model capacity and the ability to learn highly discriminative features for the task at hand. These features often outperform hand-crafted and pre-defined feature sets. In particular, Convolutional Neural Networks (CNNs) (LeCun et al. (1998); Krizhevsky et al. (2012)) have been applied with promising results on a variety of biomedical imaging problems. Ciresan et al. (2012) presented the first GPU implementation of a two-dimensional CNN for the segmentation of neural membranes. From the CNN based work that followed, related to our approach are the methods of Zikic et al. (2014); Havaei et al. (2015); Pereira et al. (2015), with the latter being the best performing automatic approach in the BRATS 2015 challenge (Menze et al. (2015)). These methods are based on 2D CNNs, which have been used extensively in computer vision applications on natural images. Here, the segmentation of a 3D brain scan is achieved by processing each 2D slice independently, which is arguably a non-optimal use of the volumetric medical image data. Despite the simplicity in the architecture, the promising results obtained by these methods indicate the potential of CNNs. 
", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_6", "text": " Fully 3D CNNs come with an increased number of parameters and significant memory and computational requirements. Previous work discusses problems and apparent limitations when employing a 3D CNN on medical imaging data (Prasoon et al. (2013); Li et al. (2014); Roth et al. (2014)). To incorporate 3D contextual information, multiple works used 2D CNNs on three orthogonal 2D patches (Prasoon et al. (2013); Roth et al. (2014); Lyksborg et al. (2015)). In their work for structural brain segmentation, Brebisson and Montana (2015) extracted large 2D patches from multiple scales of the image and combined them with small single-scale 3D patches, in order to avoid the memory requirements of fully 3D networks. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_7", "text": " One of the reasons that discouraged the use of 3D CNNs is the slow inference due to the computationally expensive 3D convolutions. In contrast to the 2D/3D hybrid variants (Roth et al. (2014); Brebisson and Montana (2015)), 3D CNNs can fully exploit dense-inference (LeCun et al. (1998); Sermanet et al. (2014)), a technique that greatly decreases inference times and which we will further discuss in section 2.1. By employing dense-inference with 3D CNNs, Brosch et al. (2015) and Urban et al. (2014) reported computation times of a few seconds and approximately a minute respectively for the processing of a single brain scan. Even though the size of their developed networks was limited, a factor that is directly related to a network’s representational power, their results on MS and brain tumor segmentation respectively were very promising. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_8", "text": " Performance of CNNs is significantly influenced by the strategy for extracting training samples. A commonly adopted approach is training on image patches that are equally sampled from each class. This, however, biases the classifier towards rare classes and may result in over-segmentation. To counter this, Cireşan et al. (2013) proposes to train a second CNN on samples with a class distribution close to the real one, but oversample pixels that were incorrectly classified in the first stage. A secondary training stage was also suggested by Havaei et al. (2015), who retrain the classification layer on patches extracted uniformly from the image. In practice, two stage training schemes can be prone to overfitting and sensitive to the state of the first classifier. Alternatively, dense training (Long et al. (2015)) has been used to train a network on multiple or all voxels of a single image per optimisation step (Urban et al. (2014); Brosch et al. (2015); Ronneberger et al. (2015)). This can introduce severe class imbalance, similarly to uniform sampling. Weighted cost functions have been proposed in the two latter works to alleviate this problem. Brosch et al. (2015) manually adjusted the sensitivity of the network, but the method can become difficult to calibrate for multi-class problems. Ronneberger et al. (2015) first balance the cost from each class, which has an effect similar to equal sampling, and further adjust it for the specific task by estimating the difficulty of segmenting each pixel. 
", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_9", "text": " We present a fully automatic approach for lesion segmentation in multi-modal brain MRI based on an 11-layers deep, multi-scale, 3D CNN with the following main contributions: ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_10", "text": " 1. We propose an efficient hybrid training scheme, utilizing dense training (Long et al. (2015)) on sampled image segments, and analyze its behaviour in adapting to class imbalance of the segmentation problem at hand. 2. We analyze in depth the development of deeper, thus more discriminative, yet computationally efficient 3D CNNs. We exploit the utilization of small kernels, a design approach previously found beneficial in 2D networks (Simonyan and Zisserman (2014)) that impacts 3D CNNs even more, and present adopted solutions that enable training deeper networks. 3. We employ parallel convolutional pathways for multi-scale processing, a solution to efficiently incorporate both local and contextual information which greatly improves segmentation results. 4. We demonstrate the generalization capabilities of our system, which without significant modifications outperforms the state-of-the-art on a variety of challenging segmentation tasks, with top ranking results in two MICCAI challenges, ISLES and BRATS. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_11", "text": " Furthermore, a detailed analysis of the network reveals valuable insights into the powerful black box of deep learning with CNNs. For example, we have found that our network is capable of learning very complex, high level features that separate gray matter (GM), cerebrospinal fluid (CSF) and other anatomical structures to identify the image regions corresponding to lesions. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_12", "text": " Additionally, we have extended the fully-connected Conditional Random Field (CRF) model by Krähenbühl and Koltun (2011) to 3D which we use for final post-processing of the CNN’s soft segmentation maps. This CRF overcomes limitations of previous models as it can handle arbitrarily large neighborhoods while preserving fast inference times. To the best of our knowledge, this is the first use of a fully connected CRF on medical data. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_13", "text": " To facilitate further research and encourage other researchers to build upon our results, the source code of our lesion segmentation method including the CNN and the 3D fully connected CRF is made publicly available on https://biomedia.doc.ic.ac.uk/software/deepmedic/. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_14", "text": " Our proposed lesion segmentation method consists of two main components, a 3D CNN that produces highly accurate, soft segmentation maps, and a fully connected 3D CRF that imposes regularization constraints on the CNN output and produces the final hard segmentation labels. The main contributions of our work are within the CNN component which we describe first in the following. 
", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_15", "text": " CNNs produce estimates for the voxel-wise segmentation labels by classifying each voxel in an image independently taking the neighborhood, i.e. local and contextual image information, into account. This is achieved by sequential convolutions of the input with multiple filters at the cascaded layers of the network. Each layer l∈(1,L)𝑙1𝐿l\\in(1,L) consists of Clsubscript𝐶𝑙C_{l} feature maps (FMs), also referred to as channels. Every FM is a group of neurons that detects a particular pattern, i.e. a feature, in the channels of the previous layer. The pattern is defined by the kernel weights associated with the FM. If the neurons of the m𝑚m-th FM in the l𝑙l-th layer are arranged in a 3D grid, their activations constitute the image 𝐲lm=f​(∑n=1Cl−1𝐤lm,n⋆𝐲l−1n+blm)subscriptsuperscript𝐲𝑚𝑙𝑓superscriptsubscript𝑛1subscript𝐶𝑙1⋆subscriptsuperscript𝐤𝑚𝑛𝑙subscriptsuperscript𝐲𝑛𝑙1subscriptsuperscript𝑏𝑚𝑙\\mathbf{y}^{m}_{l}=f(\\sum_{n=1}^{C_{l-1}}{\\mathbf{k}^{m,n}_{l}\\star\\mathbf{y}^{n}_{l-1}}+b^{m}_{l}). This is the result of convolving each of the previous layer’s channels with a 3-dimensional kernel 𝐤lm,nsubscriptsuperscript𝐤𝑚𝑛𝑙\\mathbf{k}^{m,n}_{l}, adding a learned bias blmsubscriptsuperscript𝑏𝑚𝑙b^{m}_{l} and applying a non-linearity f𝑓f. Each kernel is a matrix of learned hidden weights 𝐖lm,nsubscriptsuperscript𝐖𝑚𝑛𝑙\\mathbf{W}^{m,n}_{l}. The images 𝐲0nsubscriptsuperscript𝐲𝑛0\\mathbf{y}^{n}_{0}, input to the first layer, correspond to the channels of the original input image, for instance a multi-sequence 3D MRI scan of the brain. The concatenation of the kernels 𝐤l=(𝐤lm,1,…,𝐤lm,Cl−1)subscript𝐤𝑙subscriptsuperscript𝐤𝑚1𝑙…subscriptsuperscript𝐤𝑚subscript𝐶𝑙1𝑙\\mathbf{k}_{l}=(\\mathbf{k}^{m,1}_{l},...,\\mathbf{k}^{m,C_{l-1}}_{l}) can be viewed as a 4-dimensional kernel convolving the concatenated channels 𝐲l−1=(𝐲l−11,…,𝐲l−1Cl−1)subscript𝐲𝑙1subscriptsuperscript𝐲1𝑙1…subscriptsuperscript𝐲subscript𝐶𝑙1𝑙1\\mathbf{y}_{l-1}=(\\mathbf{y}^{1}_{l-1},...,\\mathbf{y}^{C_{l-1}}_{l-1}), which then intuitively expresses that the neurons of higher layers combine the patterns extracted in previous layers, which results in the detection of increasingly more complex patterns. The activations of the neurons in the last layer L𝐿L correspond to particular segmentation class labels, hence this layer is also referred to as the classification layer. The neurons are thus grouped in CLsubscript𝐶𝐿C_{L} FMs, one for each of the segmentation classes. Their activations are fed into a position-wise softmax function that produces the predicted posterior pc​(𝐱)=exp⁡(𝐲Lc​(𝐱))/∑c=1CLexp⁡(𝐲Lc​(𝐱))subscript𝑝𝑐𝐱superscriptsubscript𝐲𝐿𝑐𝐱superscriptsubscript𝑐1subscript𝐶𝐿superscriptsubscript𝐲𝐿𝑐𝐱p_{c}(\\mathbf{x})=\\exp(\\mathbf{y}_{L}^{c}(\\mathbf{x}))/\\sum_{c=1}^{C_{L}}\\exp(\\mathbf{y}_{L}^{c}(\\mathbf{x})) for each class c𝑐c, which form soft segmentation maps with (pseudo-)probabilities. 𝐲Lc​(𝐱)superscriptsubscript𝐲𝐿𝑐𝐱\\mathbf{y}_{L}^{c}(\\mathbf{x}) is the activation of the c𝑐c-th classification FM at position 𝐱∈ℕ3𝐱superscriptℕ3\\mathbf{x}\\in\\mathbb{N}^{3}. This baseline network is depicted in Fig. 2. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_16", "text": " The neighborhood of voxels in the input that influence the activation of a neuron is its receptive field. 
Its size, $\bm{\varphi}_{l}$, increases at each subsequent layer $l$ and is given by the 3-dimensional vector: $\bm{\varphi}_{l}^{\{x,y,z\}}=\bm{\varphi}_{l-1}^{\{x,y,z\}}+(\bm{\kappa}_{l}^{\{x,y,z\}}-1)\,\bm{\tau}_{l}^{\{x,y,z\}}$ (1), where $\bm{\kappa}_{l},\bm{\tau}_{l}\in\mathbb{N}^{3}$ are vectors expressing the size of the kernels and stride of the receptive field at layer $l$. $\bm{\tau}_{l}$ is given by the product of the strides of kernels in layers preceding $l$. In this work only unary strides are used, as larger strides downsample the FMs (Springenberg et al. (2014)), which is unwanted behaviour for accurate segmentation. Thus in our system $\bm{\tau}_{l}=(1,1,1)$. The receptive field of a neuron in the classification layer corresponds to the image patch that influences the prediction for its central voxel. This is called the CNN's receptive field, with $\bm{\varphi}_{CNN}=\bm{\varphi}_{L}$. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_17", "text": " If input of size $\bm{\delta}_{in}$ is provided, the dimensions of the FMs in layer $l$ are given by: $\bm{\delta}_{l}^{\{x,y,z\}}=\lfloor(\bm{\delta}_{in}^{\{x,y,z\}}-\bm{\varphi}_{l}^{\{x,y,z\}})/\bm{\tau}_{l}^{\{x,y,z\}}+1\rfloor$ (2) ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_18", "text": " In the common patch-wise classification setting, an input patch of size $\bm{\delta}_{in}=\bm{\varphi}_{CNN}$ is provided and the network outputs a single prediction for its central voxel. In this case the classification layer consists of FMs with size $1^{3}$. Networks implemented as fully-convolutional are capable of dense inference, which is performed when input of size greater than $\bm{\varphi}_{CNN}$ is provided (Sermanet et al. (2014)). In this case, the dimensions of FMs increase according to Eq. (2). This includes the classification FMs, which then output multiple predictions simultaneously, one for each stride of the CNN's receptive field on the input (Fig. 2). All predictions are equally trustworthy, as long as the receptive field is fully contained within the input and captures only original content, i.e. no padding is used. This strategy significantly reduces the computational costs and memory loads since the otherwise repeated computations of convolutions on the same voxels in overlapping patches are avoided. Optimal performance is achieved if the whole image is scanned in one forward pass. If GPU memory constraints do not allow it, such as in the case of large 3D networks where a large number of FMs need to be cached, the volume is tiled in multiple image segments, which are larger than individual patches but small enough to fit into memory.
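To make Eqs. (1) and (2) concrete, here is a minimal Python sketch (not taken from the released DeepMedic code) that computes the receptive field of a stack of unit-stride, non-padded 3D convolutions and the number of dense predictions obtained from an input segment. Isotropic kernels are assumed so scalars stand in for the per-axis vectors, and the layer configurations are illustrative choices consistent with the 17^3 receptive field quoted in the text.

```python
def receptive_field(kernel_sides):
    # Eq. (1) with unit strides (tau_l = 1): phi_l = phi_{l-1} + (kappa_l - 1).
    phi = 1
    for k in kernel_sides:
        phi += k - 1
    return phi


def dense_output_side(segment_side, kernel_sides):
    # Eq. (2) with unit strides: delta_L = segment_side - phi_CNN + 1 (per axis).
    return segment_side - receptive_field(kernel_sides) + 1


# Shallow baseline: four 5^3 conv layers before the classification layer (assumed layout).
print(receptive_field([5, 5, 5, 5]))   # 17 -> a 17^3 patch gives one prediction
# Deeper variant: every 5^3 layer replaced by two 3^3 layers, same receptive field.
print(receptive_field([3] * 8))        # 17
# Dense inference/training on a 25^3 segment:
side = dense_output_side(25, [3] * 8)
print(side, side ** 3)                 # 9 and 729 predictions in one forward pass
```

With a 25^3 segment the same forward pass yields 9^3 = 729 predictions instead of one, which is the saving that dense inference exploits.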
", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_19", "text": " Before analyzing how we exploit the above dense-inference technique for training, which is the first main contribution of our work, we present the commonly used setting in which CNNs are trained patch-by-patch. Random patches of size 𝝋C​N​Nsubscript𝝋𝐶𝑁𝑁\\bm{\\varphi}_{CNN} are extracted from the training images. A batch is formed out of B𝐵B of these samples, which is then processed by the network for one training iteration of Stochastic Gradient Descent (SGD). This step aims to alter the network’s parameters 𝚯𝚯\\mathbf{\\Theta}, such as weights and biases, in order to maximize the log likelihood of the data or, equally, minimize the Cross Entropy via the cost function: ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_20", "text": " J​(𝚯;𝐈i,ci)=−1B​∑i=1Blog⁡(P​(Y=ci|𝐈i,𝚯))=−1B​∑i=1Blog⁡(pci)​ ,𝐽𝚯superscript𝐈𝑖superscript𝑐𝑖1𝐵superscriptsubscript𝑖1𝐵𝑃𝑌conditionalsuperscript𝑐𝑖superscript𝐈𝑖𝚯1𝐵superscriptsubscript𝑖1𝐵subscript𝑝superscript𝑐𝑖 ,J(\\mathbf{\\Theta};\\mathbf{I}^{i},c^{i})=-\\frac{1}{B}\\sum_{i=1}^{B}\\log\\left(P(Y=c^{i}|\\mathbf{I}^{i},\\mathbf{\\Theta})\\right)=-\\frac{1}{B}\\sum_{i=1}^{B}\\log(p_{c^{i}})\\textrm{ ,} (3) where the pair (𝐈i,ci),∀i∈(1,B)superscript𝐈𝑖superscript𝑐𝑖for-all𝑖1𝐵(\\mathbf{I}^{i},c^{i}),\\forall{i}\\in{(1,B)} is the i𝑖i-th patch in the batch and the true label of its central voxel, while the scalar value pcisubscript𝑝superscript𝑐𝑖p_{c^{i}} is the predicted posterior for class cisuperscript𝑐𝑖c^{i}. Regularization terms were omitted for simplicity. Multiple sequential optimization steps over different batches gradually lead to convergence. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_21", "text": " Larger training batch sizes B𝐵B are preferred as they approximate the overall data more accurately and lead to better estimation of the true gradient by SGD. However, the memory requirement and computation time increase with the batch size. This limitation is especially relevant for 3D CNNs, where only a few dozens of patches can be processed within reasonable time on modern GPUs. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_22", "text": " To overcome this problem, we devise a training strategy that exploits the dense inference technique on image segments. Following from Eq. (2), if an image segment of size greater than 𝝋C​N​Nsubscript𝝋𝐶𝑁𝑁\\bm{\\varphi}_{CNN} is given as input to our network, the output is a posterior probability for multiple voxels V=∏i={x,y,z}𝜹L(i)𝑉subscriptproduct𝑖𝑥𝑦𝑧superscriptsubscript𝜹𝐿𝑖V=\\prod_{i=\\{x,y,z\\}}{\\bm{\\delta}_{L}^{(i)}}. 
If the training batches are formed of $B$ segments extracted from the training images, the cost function (3) in the case of dense training becomes: ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_23", "text": " $J_{D}(\mathbf{\Theta};\mathbf{I}_{s},\mathbf{c}_{s})=-\frac{1}{B\cdot V}\sum_{s=1}^{B}\sum_{v=1}^{V}\log(p_{c_{s}^{v}}(\mathbf{x}^{v}))$ (4) ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_24", "text": " where $\mathbf{I}_{s}$ and $\mathbf{c}_{s}$ are the $s$-th segment of the batch and the true labels of its $V$ predicted voxels respectively. $c_{s}^{v}$ is the true label of the $v$-th voxel, $\mathbf{x}^{v}$ the corresponding position in the classification FMs and $p_{c_{s}^{v}}$ the output of the softmax function. The effective batch size is increased by a factor of $V$ without a corresponding increase in computational and memory requirements, as earlier discussed in Sec. 2.1. Notice that this is a hybrid scheme between the commonly used training on individual patches and the dense training scheme on a whole image (Long et al. (2015)), with the latter being problematic to apply for training large 3D CNNs on volumes of high resolution due to memory limitations. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_25", "text": " An appealing consequence of this scheme is that the sampling of input segments provides a flexible and automatic way to balance the distribution of training samples from different segmentation classes, which is an important issue that directly impacts the segmentation accuracy. Specifically, we build the training batches by extracting segments from the training images with 50% probability of being centred on a foreground or background voxel, alleviating class imbalance. Note that the predicted voxels $V$ in a segment do not have to be of the same class, something that occurs when a segment is sampled from a region near class boundaries (Fig. 3). Hence, the sampling rate of the proposed hybrid method adjusts to the true distribution of the segmentation task's classes. Specifically, the smaller a labelled object, the more background voxels will be captured within segments centred on the foreground voxel. Implicitly, this yields a balance between sensitivity and specificity in the case of binary segmentation tasks. In multi-class problems, the rate at which different classes are captured within a segment centred on foreground reflects the real relative distribution of the foreground classes, while adjusting their frequency relative to the background. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_26", "text": " Deeper networks have greater discriminative power due to the additional non-linearities and better quality of local optima (Choromanska et al. (2015)). However, convolutions with 3D kernels are computationally expensive in comparison to the 2D variants, which hampers the addition of more layers.
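As an illustration of the hybrid dense-training scheme above, the following sketch shows one way the segment centres of a batch could be sampled with a 50% chance of lying on a lesion voxel, and how the effective number of training samples per batch becomes $B\cdot V$ as in Eq. (4). Function names, the batch size and the segment size are assumptions for the example, not the paper's implementation; boundary handling when cropping segments around a centre is omitted for brevity.

```python
import numpy as np


def sample_segment_centers(label_volume, n_segments, fg_prob=0.5, rng=None):
    """Pick segment centres so roughly half lie on a lesion (foreground) voxel and
    half on background, i.e. the class-balancing idea of the hybrid scheme."""
    rng = np.random.default_rng() if rng is None else rng
    fg = np.argwhere(label_volume > 0)
    bg = np.argwhere(label_volume == 0)
    centers = []
    for _ in range(n_segments):
        pool = fg if (len(fg) > 0 and rng.random() < fg_prob) else bg
        centers.append(pool[rng.integers(len(pool))])
    return np.stack(centers)


# Toy 3D label volume with a small "lesion" blob.
labels = np.zeros((64, 64, 64), dtype=np.int8)
labels[30:34, 30:34, 30:34] = 1
print(sample_segment_centers(labels, n_segments=10)[:3])

# Effective number of training samples per batch (Eq. (4)): B segments x V voxels each.
B, segment_side, phi_cnn = 10, 25, 17
V = (segment_side - phi_cnn + 1) ** 3
print("effective samples per batch:", B * V)   # 10 * 729 = 7290
```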
Additionally, 3D architectures have a larger number of trainable parameters, with each layer adding $C_{l}C_{l-1}\prod_{i=\{x,y,z\}}\bm{\kappa}_{l}^{(i)}$ weights to the model. $C_{l}$ is the number of FMs in layer $l$ and $\bm{\kappa}_{l}^{\{x,y,z\}}$ the size of its kernel in the respective spatial dimension. Overall this makes the network increasingly prone to over-fitting. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_27", "text": " In order to build a deeper 3D architecture, we adopt the sole use of small $3^{3}$ kernels that are faster to convolve with and contain fewer weights. This design approach was previously found beneficial for classification of natural images (Simonyan and Zisserman (2014)) but its effect is even more drastic on 3D networks. When compared to common kernel choices of $5^{3}$ (Zikic et al. (2014); Urban et al. (2014); Prasoon et al. (2013)) and in our baseline CNN, the smaller $3^{3}$ kernels reduce the element-wise multiplications by a factor of approximately $5^{3}/3^{3}\approx 4.6$ while reducing the number of trainable parameters by the same factor. Thus deeper network variants that are implicitly regularised and more efficient can be designed by simply replacing each layer of common architectures with more layers that use smaller kernels (Fig. 4). ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_28", "text": " However, deeper networks are more difficult to train. It has been shown that the forward (neuron activations) and backwards (gradients) propagated signal may explode or vanish if care is not given to retain its variance (Glorot and Bengio (2010)). This occurs because at every successive layer $l$, the variance of the signal is multiplied by $n^{in}_{l}\cdot var(\mathbf{W}_{l})$, where $n^{in}_{l}=C_{l-1}\prod_{i=\{x,y,z\}}\bm{\kappa}_{l}^{(i)}$ is the number of weights through which a neuron of layer $l$ is connected to its input and $var(\mathbf{W}_{l})$ is the variance of the layer's weights. To better preserve the signal in the initial training stage we adopt a scheme recently derived for ReLU-based networks by He et al. (2015) and initialize the kernel weights of our system by sampling from the normal distribution $\mathcal{N}(0,\sqrt{2/n^{in}_{l}})$. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_29", "text": " A phenomenon of similar nature that hinders the network's performance is the “internal covariate shift” (Ioffe and Szegedy (2015)). It occurs throughout training, because the weight updates to deeper layers result in a continuously changing distribution of signal at higher layers, which hinders the convergence of their weights. Specifically, at training iteration $t$ the weight updates may cause a deviation $\epsilon_{l,t}$ to the variance of the weights.
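A small numeric sketch of the two points above, under assumed layer widths: the per-layer weight count $C_{l}C_{l-1}\prod\bm{\kappa}_{l}$ and the roughly 4.6-fold reduction from $5^{3}$ to $3^{3}$ kernels, together with the He et al. (2015) initialization $\mathcal{N}(0,\sqrt{2/n^{in}_{l}})$. The choice of 30 feature maps is a placeholder, not the paper's configuration.

```python
import numpy as np


def layer_weights(c_prev, c_curr, kernel=(3, 3, 3)):
    # Weights added by one 3D conv layer: C_l * C_{l-1} * prod(kappa_l).
    return c_curr * c_prev * int(np.prod(kernel))


# Placeholder width of 30 FMs per layer, purely for illustration.
print(layer_weights(30, 30, (5, 5, 5)) / layer_weights(30, 30, (3, 3, 3)))  # ~4.63


def he_init(c_prev, kernel=(3, 3, 3), c_curr=30, rng=None):
    # He et al. (2015): sample from N(0, sqrt(2 / n_in)), n_in = C_{l-1} * prod(kappa_l).
    rng = np.random.default_rng() if rng is None else rng
    n_in = c_prev * int(np.prod(kernel))
    return rng.normal(0.0, np.sqrt(2.0 / n_in), size=(c_curr, c_prev) + kernel)


print(he_init(30).std())  # close to sqrt(2 / (30 * 27)) ~ 0.0497
```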
At the next iteration the signal will be amplified by $n^{in}_{l}\cdot var(\mathbf{W}_{l,t+1})=n^{in}_{l}\cdot(var(\mathbf{W}_{l,t})+\epsilon_{l,t})$. Thus before influencing the signal, any deviation $\epsilon_{l,t}$ is amplified by $n^{in}_{l}$, which is exponential in the number of dimensions. For this reason the problem affects training of 3D CNNs more severely than conventional 2D systems. To counter it, we apply the recently proposed Batch Normalisation (BN) technique to all hidden layers (Ioffe and Szegedy (2015)), which allows normalization of the FM activations at every optimization step in order to better preserve the signal. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_30", "text": " The segmentation of each voxel is performed by taking into account the contextual information that is captured by the receptive field of the CNN when it is centred on the voxel. The spatial context provides important information for discriminating voxels that otherwise appear very similar when considering only local appearance. From Eq. (1) it follows that an increase of the CNN's receptive field requires bigger kernels or more convolutional layers, which increases computation and memory requirements. An alternative would be the use of pooling (LeCun et al. (1998)), which however leads to loss of the exact position of the segmented voxel and thus can negatively impact accuracy. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_31", "text": " In order to incorporate both local and larger contextual information into our 3D CNN, we add a second pathway that operates on down-sampled images. Thus, our dual pathway 3D CNN simultaneously processes the input image at multiple scales (Fig. 5). Higher level features such as the location within the brain are learned in the second pathway, while the detailed local appearance of structures is captured in the first. As the two pathways are decoupled in this architecture, arbitrarily large context can be processed by the second pathway by simply adjusting the down-sampling factor $F_{D}$. The size of the pathways can be independently adjusted according to the computational capacity and the task at hand, which may require relatively more or fewer filters focused on the down-sampled context. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_32", "text": " To preserve the capability of dense inference, spatial correspondence of the activations in the FMs of the last convolutional layers of the two pathways, $L1$ and $L2$, should be ensured. In networks where only unary kernel strides are used, such as the proposed architecture, this requires that for every $F_{D}$ shifts of the receptive field $\bm{\varphi}_{L1}$ over the normal resolution input, only one shift is performed by $\bm{\varphi}_{L2}$ over the down-sampled input.
Hence it is required that the dimensions of the FMs in $L2$ are $\bm{\delta}_{L2}^{\{x,y,z\}}=\lceil\bm{\delta}_{L1}^{\{x,y,z\}}/F_{D}\rceil$. From Eq. (2), the size of the input to the second pathway is $\bm{\delta}_{in2}^{\{x,y,z\}}=\bm{\varphi}_{L2}^{\{x,y,z\}}+\bm{\delta}_{L2}^{\{x,y,z\}}-1$, and the relation between $\bm{\delta}_{in1}$ and $\bm{\delta}_{L1}$ is similar. These establish the relation between the required dimensions of the input segments from the two resolutions, which can then be extracted centered on the same image location. The FMs of $L2$ are up-sampled to match the dimensions of $L1$'s FMs and are then concatenated together. We add two more hidden layers for combining the multi-scale features before the final classification, as shown in Fig. 5. Integration of the multi-scale parallel pathways in architectures with non-unary strides is discussed in Appendix A. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_33", "text": " Combining multi-scale features has been found beneficial in other recent works (Long et al. (2015); Ronneberger et al. (2015)), in which whole 2D images are processed in the network by applying a small number of convolutions and then down-sampling the FMs for further processing at various scales. Our decoupled pathways allow arbitrarily large context to be provided while avoiding the need to load large parts of the 3D volume into memory. Additionally, our architecture extracts features completely independently from the multiple resolutions. This way, the features learned by the first pathway retain the finest details, as they are not involved in processing low resolution context. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_34", "text": " Because neighboring voxels share substantial spatial context, the soft segmentation maps produced by the CNN tend to be smooth, even though neighborhood dependencies are not modeled directly. However, local minima in training and noise in the input images can still result in some spurious outputs, with small isolated regions or holes in the predictions. We employ a fully connected CRF (Krähenbühl and Koltun (2011)) as a post-processing step to achieve more structured predictions. As we describe below, this CRF is capable of modeling arbitrarily large voxel-neighborhoods but is also computationally efficient, making it ideal for processing 3D multi-modal medical scans.
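To make the dual-pathway sizing relations given above concrete, here is a short sketch (an illustration, not the DeepMedic source) that, for a given normal-resolution segment, derives the required low-resolution feature-map and input sizes via $\bm{\delta}_{L2}=\lceil\bm{\delta}_{L1}/F_{D}\rceil$ and $\bm{\delta}_{in2}=\bm{\varphi}_{L2}+\bm{\delta}_{L2}-1$. Receptive fields of 17^3 for both pathways and $F_{D}=3$ are assumptions consistent with the architecture described in the text.

```python
import math


def pathway_sizes(delta_in1, phi_l1=17, phi_l2=17, f_d=3):
    """Derive the low-resolution pathway sizes that keep the two pathways aligned.
    Unit strides are assumed throughout, as in the text."""
    delta_l1 = delta_in1 - phi_l1 + 1              # Eq. (2) for the high-res pathway
    delta_l2 = math.ceil(delta_l1 / f_d)           # required size of the L2 feature maps
    delta_in2 = phi_l2 + delta_l2 - 1              # side of the low-res input segment
    return delta_l1, delta_l2, delta_in2


d_l1, d_l2, d_in2 = pathway_sizes(25)
print(d_l1, d_l2, d_in2)   # 9, 3, 19
# The 19^3 low-res segment spans 19 * 3 = 57 original-resolution voxels, so each of the
# 9^3 predictions sees 17 * 3 = 51 voxels of down-sampled context; the 3^3 L2 feature
# maps are up-sampled by f_d to 9^3 and concatenated with the L1 feature maps.
```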
", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_35", "text": " For an input image 𝐈𝐈\\mathbf{I} and the label configuration (segmentation) 𝐳𝐳\\mathbf{z}, the Gibbs energy in a CRF model is given by ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_36", "text": " E​(𝐳)=∑iψu​(zi)+∑i​j,i≠jψp​(zi,zj)​ .𝐸𝐳subscript𝑖subscript𝜓𝑢subscript𝑧𝑖subscript𝑖𝑗𝑖𝑗subscript𝜓𝑝subscript𝑧𝑖subscript𝑧𝑗 .E(\\mathbf{z})=\\sum_{i}{\\psi_{u}(z_{i})}+\\sum_{ij,i\\neq j}{\\psi_{p}(z_{i},z_{j})}\\textrm{ .} (5) ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_37", "text": " The unary potential is the negative log-likelihood ψu​(zi)=−l​o​g​P​(zi|𝐈)subscript𝜓𝑢subscript𝑧𝑖𝑙𝑜𝑔𝑃conditionalsubscript𝑧𝑖𝐈\\psi_{u}(z_{i})=-logP(z_{i}|\\mathbf{I}), where in our case P​(zi|𝐈)𝑃conditionalsubscript𝑧𝑖𝐈P(z_{i}|\\mathbf{I}) is the CNN’s output for voxel i𝑖i. In a fully connected CRF, the pairwise potential is of form ψp​(zi,zj)=μ​(zi,zj)​k​(𝐟𝐢,𝐟𝐣)subscript𝜓𝑝subscript𝑧𝑖subscript𝑧𝑗𝜇subscript𝑧𝑖subscript𝑧𝑗𝑘subscript𝐟𝐢subscript𝐟𝐣\\psi_{p}(z_{i},z_{j})=\\mu(z_{i},z_{j})k(\\mathbf{f_{i}},\\mathbf{f_{j}}) between any pair of voxels, regardless of their spatial distance. The Pott’s Model is commonly used as the label compatibility function, giving μ​(zi,zj)=(zi≠zj)𝜇subscript𝑧𝑖subscript𝑧𝑗delimited-()subscript𝑧𝑖subscript𝑧𝑗\\mu(z_{i},z_{j})=(z_{i}\\neq z_{j}). The corresponding energy penalty is given by the function k𝑘k, which is defined over an arbitrary feature space, with 𝐟𝐢,𝐟𝐣subscript𝐟𝐢subscript𝐟𝐣\\mathbf{f_{i}},\\mathbf{f_{j}} being the feature vectors of the pair of voxels. Krähenbühl and Koltun (2011) observed that if the penalty function is defined as a linear combination of Gaussian kernels, k​(𝐟𝐢,𝐟𝐣)=∑m=1Mw(m)​k(m)​(𝐟𝐢,𝐟𝐣)𝑘subscript𝐟𝐢subscript𝐟𝐣superscriptsubscript𝑚1𝑀superscript𝑤𝑚superscript𝑘𝑚subscript𝐟𝐢subscript𝐟𝐣k(\\mathbf{f_{i}},\\mathbf{f_{j}})=\\sum_{m=1}^{M}{w^{(m)}k^{(m)}(\\mathbf{f_{i}},\\mathbf{f_{j}})}, the model lends itself for very efficient inference with mean field approximation, after expressing message passing as convolutions with the Gaussian kernels in the space of the feature vectors 𝐟𝐢,𝐟𝐣subscript𝐟𝐢subscript𝐟𝐣\\mathbf{f_{i}},\\mathbf{f_{j}}. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_38", "text": " We extended the work of the original authors and implemented a 3D version of the CRF for processing multi-modal scans. We make use of two Gaussian kernels, which operate in the feature space defined by the voxel coordinates pi,dsubscript𝑝𝑖𝑑p_{i,d} and the intensities of the c𝑐c-th modality-channel Ii,csubscript𝐼𝑖𝑐I_{i,c} for voxel i𝑖i. The smoothness kernel, k(1)​(𝐟𝐢,𝐟𝐣)=e​x​p​(−∑d={x,y,z}|pi,d−pj,d|22​σα,d2)superscript𝑘1subscript𝐟𝐢subscript𝐟𝐣𝑒𝑥𝑝subscript𝑑𝑥𝑦𝑧superscriptsubscript𝑝𝑖𝑑subscript𝑝𝑗𝑑22superscriptsubscript𝜎𝛼𝑑2k^{(1)}(\\mathbf{f_{i}},\\mathbf{f_{j}})=exp\\Big{(}-\\sum_{d=\\{x,y,z\\}}{\\frac{|p_{i,d}-p_{j,d}|^{2}}{2\\sigma_{\\alpha,d}^{2}}}\\Big{)}, is defined by a diagonal covariance matrix with elements the configurable parameters σα,dsubscript𝜎𝛼𝑑\\sigma_{\\alpha,d}, one for each axis. These parameters express the size and shape of neighborhoods that homogeneous labels are encouraged. 
The appearance kernel $k^{(2)}(\mathbf{f_{i}},\mathbf{f_{j}})=\exp\big(-\sum_{d=\{x,y,z\}}\frac{|p_{i,d}-p_{j,d}|^{2}}{2\sigma_{\beta,d}^{2}}-\sum_{c=1}^{C}\frac{|I_{i,c}-I_{j,c}|^{2}}{2\sigma_{\gamma,c}^{2}}\big)$ is defined similarly. The additional parameters $\sigma_{\gamma,c}$ can be interpreted as how strongly to enforce homogeneous appearance in the $C$ input channels when voxels in an area spatially defined by $\sigma_{\beta,d}$ are identically labelled. Finally, the configurable weights $w^{(1)},w^{(2)}$ define the relative strength of the two factors. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_39", "text": " In this section we present a series of experiments in order to analyze the impact of each of the main contributions and to justify the choices made in the design of the proposed 11-layers, multi-scale 3D CNN architecture, referred to as DeepMedic. Starting from the CNN baseline as discussed in Sec. 2.1, we first explore the benefit of our proposed dense training scheme (cf. Sec. 2.2), then investigate the use of deeper models (cf. Sec. 2.3) and then evaluate the influence of the multi-scale dual pathway (cf. Sec. 2.4). Finally, we compare our method with corresponding 2D variants to assess the benefit of processing 3D context. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_40", "text": " The following experiments are conducted using the TBI dataset with 61 multi-channel MRIs, which is described in more detail later in Sec. 4.1. Here, the images are randomly split into a validation and training set, with 15 and 46 images each. The same sets are used in all analyses. To monitor the progress of segmentation accuracy during training, we extract 10k random patches at regular intervals, with equal numbers extracted from each of the validation images. The patches are uniformly sampled from the brain region in order to approximate the true distribution of lesions and healthy tissue. Full segmentation of the validation datasets is performed every five epochs and the mean Dice similarity coefficient (DSC) is determined. Details on the configuration of the networks are provided in Appendix B. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_41", "text": " We compare our proposed dense training method with two other commonly used training schemes on the 5-layers baseline CNN (see Fig. 2). The first common scheme trains on $17^{3}$ patches extracted uniformly from the brain region, and the second scheme samples patches equally from the lesion and background class. We refer to these schemes as $P_{\text{uni}}$ and $P_{\text{eq}}$. The results shown in Fig. 6 indicate a correlation of sensitivity and specificity with the percentage of training samples that come from the lesion class. $P_{\text{eq}}$ performs poorly because of over-segmentation (high sensitivity, low specificity).
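As a concrete reading of the two CRF kernels defined above, the sketch below evaluates the smoothness and appearance terms and the resulting Potts-weighted pairwise energy for a single pair of voxels. All parameter values are placeholders; actual inference does not loop over voxel pairs but relies on the efficient mean-field approximation of Krähenbühl and Koltun (2011).

```python
import numpy as np


def smoothness_kernel(p_i, p_j, sigma_alpha):
    """k^(1): penalises label changes between spatially close voxels."""
    d = (p_i - p_j) ** 2 / (2.0 * sigma_alpha ** 2)
    return np.exp(-d.sum())


def appearance_kernel(p_i, p_j, I_i, I_j, sigma_beta, sigma_gamma):
    """k^(2): penalises label changes between nearby voxels with similar intensities
    across the C input channels."""
    d_pos = (p_i - p_j) ** 2 / (2.0 * sigma_beta ** 2)
    d_int = (I_i - I_j) ** 2 / (2.0 * sigma_gamma ** 2)
    return np.exp(-(d_pos.sum() + d_int.sum()))


def pairwise_energy(z_i, z_j, k1, k2, w1=1.0, w2=1.0):
    """psi_p = mu(z_i, z_j) * (w1*k1 + w2*k2), with the Potts compatibility."""
    mu = float(z_i != z_j)
    return mu * (w1 * k1 + w2 * k2)


# Placeholder values for two voxels in a 4-channel scan.
p_i, p_j = np.array([10., 10., 10.]), np.array([12., 10., 11.])
I_i, I_j = np.array([0.3, -0.1, 0.8, 0.0]), np.array([0.35, -0.05, 0.7, 0.1])
k1 = smoothness_kernel(p_i, p_j, sigma_alpha=np.array([3., 3., 3.]))
k2 = appearance_kernel(p_i, p_j, I_i, I_j,
                       sigma_beta=np.array([10., 10., 10.]),
                       sigma_gamma=np.array([0.5, 0.5, 0.5, 0.5]))
print(pairwise_energy(0, 1, k1, k2))
```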
$P_{\text{uni}}$ has better classification on the background class (high specificity), which leads to high mean voxel-wise accuracy since the majority corresponds to background, but not particularly high DSC scores due to under-segmentation (low sensitivity). ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_42", "text": " To evaluate our dense training scheme, we train multiple models with varying sized image segments, equally sampled from lesions and background. The tested sizes of the segments go from $19^{3}$ upwards to $29^{3}$. The models are referred to as “S-$d$”, where $d$ is the side length of the cubic segments. For fair comparison, the batch sizes in all the experiments are adjusted to have a similar memory footprint and lead to similar training times as compared to training on $P_{\text{uni}}$ and $P_{\text{eq}}$ (dense training on a whole volume was inapplicable in these experimental settings due to memory limitations but was previously shown to give similar results as training on uniformly sampled patches (Long et al. (2015))). We observe a great performance increase for model S-19 over $P_{\text{eq}}$. We attribute this partly to the efficient increase of the effective batch size ($B\cdot V$ in Eq. (4)), but also to the altered distribution of training samples. As we increase the size of the training segments further, we quickly reach a balance between the sensitivity of $P_{\text{eq}}$ and the specificity of $P_{\text{uni}}$, which results in improved segmentation as expressed by the DSC. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_43", "text": " The segment size is a hyper-parameter in our model. We observe that the increase in performance with increasing segment size quickly levels off, and similar performance is obtained for a wide range of segment sizes, which allows for easy configuration. For the remaining experiments, all models were trained on segments of size $25^{3}$. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_44", "text": " The 5-layers baseline CNN (Fig. 2), here referred to as the “Shallow” model, is extended to 9 layers by replacing each convolutional layer that uses $5^{3}$ kernels with two layers that use $3^{3}$ kernels (Fig. 4). This model is referred to as “Deep”. Training the latter, however, utterly fails, with the model making only predictions corresponding to the background class. This problem is related to the challenge of preserving the signal as it propagates through deep networks, where its variance gets multiplied with the variance of the weights, as previously discussed in Sec. 2.3. One of the causes is that the weights of both models have been initialized with the commonly used scheme of sampling from the normal distribution $\mathcal{N}(0,0.01)$ (cf. Krizhevsky et al. (2012)). In comparison, the initialization scheme by He et al. (2015), derived for preserving the signal in the initial stage of training, results in higher values and overcomes this problem. Further preservation of the signal is obtained by employing Batch Normalization. This results in an enhanced 9-layers model which we refer to as “Deep+”, and using the same enhancements on the Shallow model yields “Shallow+”.
The significant performance improvement of Deep+ over Shallow+, as shown in Fig. 7, is the result of the greater representational power of the deeper network. The two models need similar computational times, which highlights the benefits of utilizing small kernels in the design of 3D CNNs. Although the deeper model requires more sequential (layer by layer) computations on the GPU, those are faster due to the smaller kernel size. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_45", "text": " The final version of the proposed network architecture, referred to as “DeepMedic”, is built by extending the Deep+ model with a second convolutional pathway that is identical to the first one. Two hidden layers are added for combining the multi-scale features before the classification layer, resulting in a deep network of 11-layers (cf. Fig. 5). The input segments to the second pathway are extracted from the images down-sampled by a factor of three. Thus, the network is capable of capturing context in a 513superscript51351^{3} area of the original image through the 173superscript17317^{3} receptive field of the lower-resolution pathway, while only doubling the computational and memory requirements over the single pathway CNN. In comparison, the most recent 2D CNN systems proposed for lesion segmentation (Havaei et al. (2015); Pereira et al. (2015)) have a receptive field limited to 332superscript33233^{2} voxels. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_46", "text": " Figure 8 shows the improvement DeepMedic achieves over the single pathway model Deep+. In Fig. 9 we show two representative visual examples of this improvement when using the multi-scale CNN. Finally, we confirm that the performance increase can be accounted to the additional context and not the additional capacity of DeepMedic. To this end, we build a big single-scale model by doubling the FMs at each of the 9-layers of Deep+ and adding two hidden layers. This 11-layers deep and wide model, referred to as “BigDeep+”, has the same number of parameters as DeepMedic. The performance of the model is not improved, while showing signs of over-fitting. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_47", "text": " Acquired brain MRI scans are often anisotropic. Such is the case for most sequences in our TBI dataset, which have been acquired with lower axial resolution, except for the isotropic MPRAGE. We perform a series of experiments to investigate the behaviour of 2D networks and assess the benefit of processing 3D context in this setting. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_48", "text": " DeepMedic can be converted to 2D by setting the third dimension of each kernel to one. This way only information from the surrounding context on the axial plane influences the classification of each voxel. If 2D segments are given as input, the dimensionality of the feature maps decreases and so does the memory required. This allows developing 2D variants with increased width, depth and size of training batch with similar requirements as the 3D version, which are valid candidates for model selection in practical scenarios. 
We assess various configurations and present some representatives in Table 1(b) along with their performance. Best segmentation among investigated 2D variants is achieved by a 19-layers, multi-scale network, reaching 61.5% average DSC on the validation fold. The decline from the 66.6% DSC achieved by the 3D version of DeepMedic indicates the importance of processing 3D context even in settings where most acquired sequences have low resolution along a certain axis. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_49", "text": " The proposed system consisting of the DeepMedic CNN architecture, optionally coupled with a fully connected CRF, is evaluated on three lesion segmentation tasks including challenging clinical data from patients with traumatic brain injuries, brain tumors, and ischemic stroke. Quantitative evaluation and comparisons with state-of-the-art are reported for each of the tasks. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_50", "text": " Sixty-six patients with moderate-to-severe TBI who required admission to the Neurosciences Critical Care Unit at Addenbrooke’s Hospital, Cambridge, UK, underwent imaging using a 3-Tesla Siemens Magnetom TIM Trio within the first week of injury. Ethical approval was obtained from the Local Research Ethics Committee (LREC 97/290) and written assent via consultee agreement was obtained for all patients. The structural MRI sequences that are used in this work are isotropic MPRAGE (1mm×mm\\times1mm×mm\\times1m​m𝑚𝑚mm), axial FLAIR, T2 and Proton Density (PD) (0.7mm×mm\\times0.7mm×mm\\times5m​m𝑚𝑚mm), and Gradient-Echo (GE) (0.86mm×mm\\times0.86mm×mm\\times5m​m𝑚𝑚mm). All visible lesions were manually annotated on the FLAIR and GE sequences with separate labeling for each lesion type. In nine patients the presence of hyperintense white matter lesions that were felt to be chronic in nature were also annotated. Artifacts, for example, signal loss secondary to intraparenchymal pressure probes, were also noted. For the purpose of this study we focus on binary segmentation of all abnormalities within the brain tissue. Thus, we merged all classes that correspond to intra-cerebral abnormalities into a single “lesion” label. Extra-cerebral pathologies such as epidural and subdural hematoma were treated as background. We excluded two datasets because of corrupted FLAIR images, two cases because no lesions were found and one case because of a major scanning artifact corrupting the images. This results in a total of 61 cases used for quantitative evaluation. Brain masks were obtained using the ROBEX tool (Iglesias et al. (2011)). All images were resampled to an isotropic 1​m​m31𝑚superscript𝑚31mm^{3} resolution, with dimensions 193×\\times229×\\times193 and affinely registered (Studholme et al. (1999)) to MNI space using the atlas by Grabner et al. (2006). No bias field correction was used as preliminary results showed that this can negatively affect lesion appearance. Image intensities were normalized to have zero-mean and unit variance, as it has been reported that this improves CNN results (Jarrett et al. (2009)). ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_51", "text": " Network configuration and training: The network architecture corresponds to the one described in Sec. 3.4, i.e. 
a dual-pathway, 11-layers deep CNN. The training data is augmented by adding images reflected along the sagittal axis. To make the network invariant to absolute intensities we also shift the intensities of each MR channel c𝑐c of every training segment by ic=rc​σcsubscript𝑖𝑐subscript𝑟𝑐subscript𝜎𝑐i_{c}=r_{c}\\sigma_{c}. rcsubscript𝑟𝑐r_{c} is sampled for every segment from 𝒩​(0,0.1)𝒩00.1\\mathcal{N}(0,0.1) and σcsubscript𝜎𝑐\\sigma_{c} is the standard deviation of intensities under the brain mask in the corresponding image. The network is regularized using dropout (Hinton et al. (2012)) with a rate of 2% on all convolutional layers, which is in addition to a 50% rate used on the last two layers. The network is evaluated with 5-fold cross-validation on the 61 subjects. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_52", "text": " CRF configuration: The parameters of the fully connected CRF are determined in a configuration experiment using random-search and 15 randomly selected subjects from the TBI database with predictions from a preliminary version of the corresponding model. The 15 subjects are reshuffled into the 5-folds used for subsequent evaluation. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_53", "text": " Random Forest baseline: We have done our best to set up a competitive baseline for comparison. We employ a context-sensitive Random Forest, similar to the model presented by Zikic et al. (2012) for brain tumors except that we apply the forest to the MR images without additional tissue specific priors. We train a forest with 50 trees and maximum depth of 30. Larger size did not improve results. Training data points are approximately equally sampled from lesion and background classes, with the optimal balance empirically chosen. Two hundred randomized cross-channel box features are evaluated at each split node with maximum offsets and box sizes of 20mm. The same folds of training and test sets are used as for our CNN approach. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_54", "text": " Table 1 summarizes the results on TBI. Our CNN significantly outperforms the Random Forest baseline, while the relatively overall low DSC values indicate the difficulty of the task. Due to randomness during training the local minima where a network converges are different between training sessions and some errors they produce differ (Choromanska et al. (2015)). To clear the unbiased errors of the network we form an ensemble of three similar networks, aggregating their output by averaging. This ensemble yields better performance in all metrics but also allows us to investigate the behaviour of our network focusing only on the biased errors. Fig. 10 shows the DSC obtained by the ensemble on each subject in relation to the manually segmented and predicted lesion volume. The network is capable of segmenting cases with very small lesions, although, performance is less robust in these cases as even small errors have large influence on the DSC metric. Investigation of the predicted lesion volume, which is an important biomarker for prognostication, shows that the network is neither biased towards the lesion nor background class, with promising results even on cases with very small lesions. 
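The intensity normalization and augmentation described above can be illustrated with the following sketch (an assumption-laden example, not the released code): segments are reflected along the sagittal axis and each channel $c$ is shifted by $i_{c}=r_{c}\sigma_{c}$ with $r_{c}\sim\mathcal{N}(0,0.1)$, where $\sigma_{c}$ is the channel's intensity standard deviation under the brain mask. The array layout (channels first) and the axis treated as sagittal are assumptions.

```python
import numpy as np


def augment_segment(segment, brain_sigma, rng=None, sagittal_axis=0):
    """segment:     (C, X, Y, Z) array, already normalized to zero mean / unit variance.
    brain_sigma: per-channel std of intensities under the brain mask (length C).
    sagittal_axis: which spatial axis is left-right; depends on orientation (assumed 0)."""
    rng = np.random.default_rng() if rng is None else rng
    out = segment.copy()
    if rng.random() < 0.5:                              # random sagittal reflection
        out = np.flip(out, axis=1 + sagittal_axis)
    r = rng.normal(0.0, 0.1, size=out.shape[0])         # r_c ~ N(0, 0.1), one per channel
    out = out + (r * np.asarray(brain_sigma))[:, None, None, None]  # i_c = r_c * sigma_c
    return out


seg = np.random.randn(4, 25, 25, 25).astype(np.float32)             # toy 4-channel segment
print(augment_segment(seg, brain_sigma=[1.0, 1.0, 1.0, 1.0]).shape)  # (4, 25, 25, 25)
```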
Furthermore, we separately evaluate the influence of the post-processing with the fully connected CRF. As shown in Table 1, the CRF yields improvements over all classifiers. Effects are more prominent when the performance of the primary segmenter degrades, which shows the robustness of this regulariser. Fig. 11 shows three representative cases. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_55", "text": " For brain tumors, we evaluate our system on the data from the 2015 Brain Tumor Segmentation Challenge (BRATS) (Menze et al. (2015)). The training set consists of 220 cases with high grade (HG) and 54 cases with low grade (LG) glioma for which corresponding reference segmentations are provided. The segmentations include the following tumor tissue classes: 1) necrotic core, 2) edema, 3) non-enhancing and 4) enhancing core. The test set consists of 110 cases of both HG and LG but the grade is not revealed. Reference segmentations for the test set are hidden and evaluation is carried out via an online system. For evaluation, the four predicted labels are merged into different sets of whole tumor (all four classes), the core (classes 1,3,4), and the enhancing tumor (class 4)333For interpretation of the results note that, to the best of our knowledge, cases where the “enhancing tumor” class is not present in the manual segmentation are considered as zeros for the calculation of average performance by the evaluation platform, lowering the upper bound for this class.. For each subject, four MRI sequences are available, FLAIR, T1, T1-contrast and T2. The datasets are pre-processed by the organizers and provided as skull-stripped, registered to a common space and resampled to isotropic 1​m​m31𝑚superscript𝑚31mm^{3} resolution. Dimensions of each volume are 240×\\times240×\\times155. We add minimal pre-processing of normalizing the brain-tissue intensities of each sequence to have zero-mean and unit variance. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_56", "text": " Network configuration and training: We modify the DeepMedic architecture to handle multi-class problems by extending the classification layer to five feature maps (four tumor classes plus background). The rest of the configuration remains unchanged. We enrich the dataset with sagittal reflections. Opposite to the experiments on TBI, we do not employ the intensity perturbation and dropout on convolutional layers, because the network should not require as much regularisation with this large database. The network is trained on image segments extracted with equal probability centred on the whole tumor and healthy tissue. The distribution of the classes captured by our training scheme is provided in C. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_57", "text": " To examine our network’s behaviour, we first evaluate it on the training data of the challenge. For this, we run a 5-fold cross validation where each fold contains both HG and LG images. We then retrain the network using all training images, before applying it on the test data. 
", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_58", "text": " CRF configuration: For the multi-class problem it is challenging to find a global set of parameters for the CRF which can consistently improve the segmentation of all classes. So instead we merge the four predicted probability maps into a single “whole tumor” map for CRF post-processing. The CRF then only refines the boundaries between tumor and background and additionally removes isolated false positives. Similarly to the experiments on TBI, the CRF is configured on a random subset of 44 HG and 18 LG training images, which are then reshuffled into the subsequent 5-fold cross validation. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_59", "text": " Quantitative results from the application of the DeepMedic, the CRF and an ensemble of three similar networks on the training data are presented in Table 2. The latter two offer an improvement, albeit fairly small since the performance of DeepMedic is already rather high in this task. Also shown are results from previous works, as reported on the online evaluation platform. Various settings may vary among submissions, such as the pre-processing pipeline or the number of folds used for cross-validation. Still it appears that our system performs favourably compared to previous state-of-the-art, including the semi-automatic system of Bakas et al. (2015) (bakas1) who won the latest challenge and the method of Pereira et al. (2015) (peres1), which is based on grade-specific 2D CNNs and requires visual inspection of the tumor and identification of the grade by the user prior to segmentation. Examples of segmentations obtained with our method are shown in Fig. 12. DeepMedic behaves very well in preserving the hierarchical structure of the tumor, which we account to the large context processed by our multi-scale network. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_60", "text": " Table 3 shows the results of our method on the BRATS test data. Results of other submissions are not accessible. The decrease in performance is possibly due to the the inclusion of test images that vary significantly from the training data, such as cases acquired in clinical centers that did not provide any of the training images, something that was confirmed by the organisers. Note that performance gains obtained with the CRF are larger in this case. This indicates not only that its configuration has not overfitted to the training database but also that the CRF is robust to factors of variation between acquisition sites, which complements nicely the more sensitive CNN. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_61", "text": " We participated in the 2015 Ischemic Stroke Lesion Segmentation (ISLES) challenge, where our system achieved the best results among all participants on sub-acute ischemic stroke lesions (Maier et al. (2017)). In the training phase of the challenge, 28 datasets have been made available, along with manual segmentations. Each dataset included T1, T1-contrast, FLAIR and DWI sequences. All images were provided as skull-stripped and resampled to isotropic 1​m​m31𝑚superscript𝑚31mm^{3} voxel resolution. Each volume is of size 230×\\times230×\\times154. 
In the testing stage, teams were provided with 36 datasets for evaluation. The test data were acquired in two clinical centers, with one of them being the same that provided all training images. Corresponding expert segmentations were hidden and results had to be submitted to an online evaluation platform. Similar to BRATS, the only pre-processing that we applied is the normalization of each image to the zero-mean and unit variance. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_62", "text": " Network Configuration and Training: The configuration of the network employed is described in Kamnitsas et al. (2015). The main difference with the configuration used for TBI and tumors as employed above is the relatively smaller number of FMs in the low-resolution pathway. This choice should not significantly influence accuracy on the generally small SISS lesions but it allowed us to lower the computational cost. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_63", "text": " Similar to the other experiments, we evaluate our network with a 5-fold cross validation on the training datasets. We use data augmentation with sagittal reflections. For the testing phase of the challenge, we trained an ensemble of three networks on all training cases and aggregate their predictions by averaging. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_64", "text": " CRF configuration: The parameters of the CRF were configured via a random search on the whole training dataset. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_65", "text": " The performance of our system on the training data is shown in Table 4. Significant improvement is achieved by the structural regularisation offered by the CRF, although it could be partially accounted for by overfitting the training data during the CRF’s configuration. Examples for visual inspection are shown in Fig. 13. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_66", "text": " For the testing phase of the challenge we formed an ensemble of three networks, coupled with the fully connected CRF. Our submission ranked first, indicating superior performance on this challenging task among 14 submissions. Table 5 shows our results, along with the other two top entries (Feng et al. (2015); Halme et al. (2015)). Among the other participating methods was the CNN of Havaei et al. (2015) with 3 layers of 2D convolutions. That method perfomed less well on this challenging task (Maier et al. (2017)). This points out the advantage offered by 3D context, the large field of view of DeepMedic thanks to multi-scale processing and the representational power of deeper networks. It is important to note the decrease of performance in comparison to the training set. All methods performed worse on the data coming from the second clinical center, including the method of Feng et al. (2015) that is not machine-learning based. This highlights a general difficulty with current approaches when applied on multi-center data. 
", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_67", "text": " Our CNN is implemented using the Theano library (Bastien et al. (2012)). Each training session requires approximately one day on an NVIDIA GTX Titan X GPU using cuDNN v5.0. The efficient architecture of DeepMedic also allows models to be trained on GPUs with only 3GB of memory. Note that although dimensions of the volumes in the processed databases do not allow dense training on whole volumes for this size of network, dense inference on a whole volume is still possible, as it requires only a forward-pass and thus less memory. In this fashion segmentation of a volume takes less than 30 seconds but requires 12 GB of GPU memory. Tiling the volume into multiple segments of size 353superscript35335^{3} allows inference on 3 GB GPUs in less than three minutes. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_68", "text": " Our 3D fully connected CRF is implemented by extending the original source code by Krähenbühl and Koltun (2011). A CPU implementation is fast, capable of processing a five-channel brain scan in under three minutes. Further speed-up could be achieved with a GPU implementation, but was not found necessary in the scope of this work. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_69", "text": " We have presented DeepMedic, a 3D CNN architecture for automatic lesion segmentation that surpasses state-of-the-art on challenging data. The proposed novel training scheme is not only computationally efficient but also offers an adaptive way of partially alleviating the inherent class-imbalance of segmentation problems. We analyzed the benefits of using small convolutional kernels in 3D CNNs, which allowed us to develop a deeper and thus more discriminative network, without increasing the computational cost and number of trainable parameters. We discussed the challenges of training deep neural networks and the adopted solutions from the latest advances in deep learning. Furthermore, we proposed an efficient solution for processing large image context by the use of parallel convolutional pathways for multi-scale processing, alleviating one of the main computational limitations of previous 3D CNNs. Finally, we presented the first application of a 3D fully connected CRF on medical data, employed as a post-processing step to refine the network’s output, a method that has also been shown promising for processing 2D natural images (Chen et al. (2014)). The design of the proposed system is well suited for processing medical volumes thanks to its generic 3D nature. The capabilities of DeepMedic and the employed CRF for capturing 3D patterns exceed those of 2D networks and locally connected random fields, models that have been commonly used in previous work. At the same time, our system is very efficient at inference time, which allows its adoption in a variety of research and clinical settings. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_70", "text": " The generic nature of our system allows its straightforward application for different lesion segmentation tasks without major adaptations. 
To the best of our knowledge, our system achieved the highest reported accuracy on a cohort of patients with severe TBI. As a comparison, we improved over the reported performance of the pipeline in Rao et al. (2014). Important to note is that the latter work focused only on segmentation of contusions, while our system has been shown capable of segmenting even small and diffused pathologies. Additionally, our pipeline achieved state-of-the-art performance on both public benchmarks of brain tumors (BRATS 2015) and stroke lesions (SISS ISLES 2015). We believe performance can be further improved with task- and data-specific adjustments, for instance in the pre-processing, but our results show the potential of this generically designed segmentation system. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_71", "text": " When applying our pipeline to new tasks, a laborious process is the reconfiguration of the CRF. The model improved our system’s performance with statistical significance in all investigated tasks, most profoundly when the performance of the underlying classifier degrades, proving its flexibility and robustness. Finding optimal parameters for each task, however, can be challenging. This became most obvious on the task of multi-class tumor segmentation. Because the tumor’s substructures vary significantly in appearance, finding a global set of parameters that yields improvements on all classes proved difficult. Instead, we applied the CRF in a binary fashion. This CRF model can be configured with a separate set of parameters for each class. However the larger parameter space would complicate its configuration further. Recent work from Zheng et al. (2015) showed that this particular CRF can be casted as a neural network and its parameters can be learned with regular gradient descent. Training it in an end-to-end fashion on top of a neural network would alleviate the discussed problems. This will be explored as part of future work. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_72", "text": " The discriminative power of the learned features is indicated by the success of recent CNN-based systems in matching human performance in domains where it was previously considered too ambitious (He et al. (2015); Silver et al. (2016)). Analysis of the automatically extracted information could potentially provide novel insights and facilitate research on pathologies for which little prior knowledge is currently available. In an attempt to illustrate this, we explore what patterns have been learned automatically for the lesion segmentation tasks. We visualize the activations of DeepMedic’s FMs when processing a subject from our TBI database. Many appearing patterns are difficult to interpret, especially in deeper layers. In Fig. 14 we provide some examples that have an intuitive explanation. One of the most interesting findings is that the network learns to identify the ventricles, CSF, white and gray matter. This reveals that differentiation of tissue type is beneficial for lesion segmentation. This is in line with findings in the literature, where segmentation performance of traditional classifiers was significantly improved by incorporation of tissue priors (Van Leemput et al. (1999); Zikic et al. (2012)). 
It is intuitive that different types of lesions affect different parts of the brain depending on the underlying mechanisms of the pathology. A rigorous analysis of spatial cues extracted by the network may reveal correlations that are not well defined yet. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_73", "text": " Similarly intriguing is the information extracted in the low-resolution pathway. As they process greater context, these neurons gain additional localization capabilities. The activations of certain FMs form fields in the surrounding areas of the brain. These patterns are preserved in the deepest hidden layers, which indicates they are beneficial for the final segmentation (see two last rows of Fig. 14). We believe these cues provide a spatial bias to the system, for instance that large TBI contusions tend to occur towards the front and sides of the brain (see Fig. 1(c)). Furthermore, the interaction of the multi-resolution features can be observed in FMs of the hidden layer that follows the concatenation of the pathways. The network learns to weight the output of the two pathways, preserving low resolution in certain parts and show fine details in others (bottom row of Fig. 14, first three FMs). Our assumption is that the low-resolution pathway provides a rough localization of large pathologies and brain areas that are challenging to segment, which reserves the rest of the network’s capacity for learning detailed patterns associated with the detection of smaller lesions, fine structures and ambiguous areas. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_74", "text": " The findings of the above exploration lead us to believe that great potential lies into fusing the discriminative power of the “deep black box” with the knowledge acquired over years of targeted biomedical research. Clinical knowledge is available for certain pathologies, such as spatial priors for white matter lesions. Previously engineered models have been proven effective in tackling fundamental imaging problems, such as brain extraction, tissue segmentation and bias field correction. We show that a network is capable of automatically extracting some of this information. It would be interesting, however, to investigate structured ways for incorporating such existing information as priors into the network’s feature space, which should simplify the optimization problem while letting a specialist guide the network towards an optimal solution. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_75", "text": " Although neural networks seem promising for medical image analysis, making the inference process more interpretable is required. This would allow understanding when the network fails, an important aspect in biomedical applications. Although the output is bounded in the (0,1)01(0,1) range and commonly referred to as probability for convenience, it is not a true probability in a Bayesian sense. Research towards Bayesian networks aims to alleviate this limitation. An example is the recent work of Gal and Ghahramani (2015) who show that model confidence can be estimated via sampling the dropout mask. 
", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_76", "text": " A general point should be made about the performance drop observed when our system is applied on test datasets of BRATS and ISLES in comparison to its cross-validated performance on the training data. In both cases, subsets of the test images were acquired in clinical centers different from the ones of training datasets. Differences in scanner type and acquisition protocols have significant impact on the appearance of the images. The issue of multi-center data heterogeneity is considered a major bottleneck for enabling large-scale imaging studies. This is not specific to our approach, but a general problem in medical image analysis. One possible way of making the CNN invariant to the data heterogeneity is to learn a generative model for the data acquisition process, and use this model in the data augmentation step. This is a direction we explore as part of future work. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_77", "text": " In order to facilitate further research in this area and to provide a baseline for future evaluations, we make the source code of the entire system publicly available. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" } ]
What are the outputs of Mask R-CNN?
The outputs of Mask R-CNN are a class label, a bounding box, and an object mask [12].
[ 12 ]
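To make the answer above concrete, here is a hedged sketch using torchvision's off-the-shelf Mask R-CNN, an independent re-implementation rather than the authors' code; its prediction dictionary exposes exactly these outputs, plus a confidence score per instance.

```python
import torch
import torchvision

# Pretrained Mask R-CNN with a ResNet-50-FPN backbone (torchvision >= 0.13).
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 480, 640)        # stand-in for a real RGB image scaled to [0, 1]
with torch.no_grad():
    pred = model([image])[0]           # one prediction dict per input image

print(pred["labels"].shape)            # class label for each detected instance
print(pred["boxes"].shape)             # [N, 4] bounding boxes (x1, y1, x2, y2)
print(pred["masks"].shape)             # [N, 1, H, W] per-instance soft masks
print(pred["scores"].shape)            # detection confidence per instance
```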
[ { "id": "1703.06870_all_0", "text": " The vision community has rapidly improved object detection and semantic segmentation results over a short period of time. In large part, these advances have been driven by powerful baseline systems, such as the Fast/Faster R-CNN (12, 36) and Fully Convolutional Network (FCN) frameworks for object detection and semantic segmentation, respectively. These methods are conceptually intuitive and offer flexibility and robustness, together with fast training and inference time. Our goal in this work is to develop a comparably enabling framework for instance segmentation. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_1", "text": " Instance segmentation is challenging because it requires the correct detection of all objects in an image while also precisely segmenting each instance. It therefore combines elements from the classical computer vision tasks of object detection, where the goal is to classify individual objects and localize each using a bounding box, and semantic segmentation, where the goal is to classify each pixel into a fixed set of categories without differentiating object instances.111Following common terminology, we use object detection to denote detection via bounding boxes, not masks, and semantic segmentation to denote per-pixel classification without differentiating instances. Yet we note that instance segmentation is both semantic and a form of detection. Given this, one might expect a complex method is required to achieve good results. However, we show that a surprisingly simple, flexible, and fast system can surpass prior state-of-the-art instance segmentation results. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_2", "text": " Our method, called Mask R-CNN, extends Faster R-CNN by adding a branch for predicting segmentation masks on each Region of Interest (RoI), in parallel with the existing branch for classification and bounding box regression (Figure 1). The mask branch is a small FCN applied to each RoI, predicting a segmentation mask in a pixel-to-pixel manner. Mask R-CNN is simple to implement and train given the Faster R-CNN framework, which facilitates a wide range of flexible architecture designs. Additionally, the mask branch only adds a small computational overhead, enabling a fast system and rapid experimentation. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_3", "text": " In principle Mask R-CNN is an intuitive extension of Faster R-CNN, yet constructing the mask branch properly is critical for good results. Most importantly, Faster R-CNN was not designed for pixel-to-pixel alignment between network inputs and outputs. This is most evident in how RoIPool (18, 12), the de facto core operation for attending to instances, performs coarse spatial quantization for feature extraction. To fix the misalignment, we propose a simple, quantization-free layer, called RoIAlign, that faithfully preserves exact spatial locations. Despite being a seemingly minor change, RoIAlign has a large impact: it improves mask accuracy by relative 10% to 50%, showing bigger gains under stricter localization metrics. Second, we found it essential to decouple mask and class prediction: we predict a binary mask for each class independently, without competition among classes, and rely on the network’s RoI classification branch to predict the category. In contrast, FCNs usually perform per-pixel multi-class categorization, which couples segmentation and classification, and based on our experiments works poorly for instance segmentation. 
", "title": "Mask R-CNN" }, { "id": "1703.06870_all_4", "text": " Without bells and whistles, Mask R-CNN surpasses all previous state-of-the-art single-model results on the COCO instance segmentation task , including the heavily-engineered entries from the 2016 competition winner. As a by-product, our method also excels on the COCO object detection task. In ablation experiments, we evaluate multiple basic instantiations, which allows us to demonstrate its robustness and analyze the effects of core factors. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_5", "text": " Our models can run at about 200ms per frame on a GPU, and training on COCO takes one to two days on a single 8-GPU machine. We believe the fast train and test speeds, together with the framework’s flexibility and accuracy, will benefit and ease future research on instance segmentation. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_6", "text": " Finally, we showcase the generality of our framework via the task of human pose estimation on the COCO keypoint dataset . By viewing each keypoint as a one-hot binary mask, with minimal modification Mask R-CNN can be applied to detect instance-specific poses. Mask R-CNN surpasses the winner of the 2016 COCO keypoint competition, and at the same time runs at 5 fps. Mask R-CNN, therefore, can be seen more broadly as a flexible framework for instance-level recognition and can be readily extended to more complex tasks. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_7", "text": " We have released code to facilitate future research. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_8", "text": " The Region-based CNN (R-CNN) approach to bounding-box object detection is to attend to a manageable number of candidate object regions (42, 20) and evaluate convolutional networks (25, 24) independently on each RoI. R-CNN was extended (18, 12) to allow attending to RoIs on feature maps using RoIPool, leading to fast speed and better accuracy. Faster R-CNN advanced this stream by learning the attention mechanism with a Region Proposal Network (RPN). Faster R-CNN is flexible and robust to many follow-up improvements (e.g., (38, 27, 21)), and is the current leading framework in several benchmarks. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_9", "text": " Driven by the effectiveness of R-CNN, many approaches to instance segmentation are based on segment proposals. Earlier methods (13, 15, 16, 9) resorted to bottom-up segments (42, 2). DeepMask and following works (34, 8) learn to propose segment candidates, which are then classified by Fast R-CNN. In these methods, segmentation precedes recognition, which is slow and less accurate. Likewise, Dai et al. proposed a complex multiple-stage cascade that predicts segment proposals from bounding-box proposals, followed by classification. Instead, our method is based on parallel prediction of masks and class labels, which is simpler and more flexible. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_10", "text": " Most recently, Li et al. combined the segment proposal system in and object detection system in for “fully convolutional instance segmentation” (FCIS). The common idea in (8, 11, 26) is to predict a set of position-sensitive output channels fully convolutionally. These channels simultaneously address object classes, boxes, and masks, making the system fast. 
But FCIS exhibits systematic errors on overlapping instances and creates spurious edges (Figure 6), showing that it is challenged by the fundamental difficulties of segmenting instances. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_11", "text": " Another family of solutions (23, 4, 3, 29) to instance segmentation are driven by the success of semantic segmentation. Starting from per-pixel classification results (e.g., FCN outputs), these methods attempt to cut the pixels of the same category into different instances. In contrast to the segmentation-first strategy of these methods, Mask R-CNN is based on an instance-first strategy. We expect a deeper incorporation of both strategies will be studied in the future. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_12", "text": " Mask R-CNN is conceptually simple: Faster R-CNN has two outputs for each candidate object, a class label and a bounding-box offset; to this we add a third branch that outputs the object mask. Mask R-CNN is thus a natural and intuitive idea. But the additional mask output is distinct from the class and box outputs, requiring extraction of much finer spatial layout of an object. Next, we introduce the key elements of Mask R-CNN, including pixel-to-pixel alignment, which is the main missing piece of Fast/Faster R-CNN. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_13", "text": " We begin by briefly reviewing the Faster R-CNN detector . Faster R-CNN consists of two stages. The first stage, called a Region Proposal Network (RPN), proposes candidate object bounding boxes. The second stage, which is in essence Fast R-CNN , extracts features using RoIPool from each candidate box and performs classification and bounding-box regression. The features used by both stages can be shared for faster inference. We refer readers to for latest, comprehensive comparisons between Faster R-CNN and other frameworks. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_14", "text": " Mask R-CNN adopts the same two-stage procedure, with an identical first stage (which is RPN). In the second stage, in parallel to predicting the class and box offset, Mask R-CNN also outputs a binary mask for each RoI. This is in contrast to most recent systems, where classification depends on mask predictions (e.g. (33, 10, 26)). Our approach follows the spirit of Fast R-CNN that applies bounding-box classification and regression in parallel (which turned out to largely simplify the multi-stage pipeline of original R-CNN ). ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_15", "text": " Formally, during training, we define a multi-task loss on each sampled RoI as L=Lc​l​s+Lb​o​x+Lm​a​s​k𝐿subscript𝐿𝑐𝑙𝑠subscript𝐿𝑏𝑜𝑥subscript𝐿𝑚𝑎𝑠𝑘L=L_{cls}+L_{box}+L_{mask}. The classification loss Lc​l​ssubscript𝐿𝑐𝑙𝑠L_{cls} and bounding-box loss Lb​o​xsubscript𝐿𝑏𝑜𝑥L_{box} are identical as those defined in . The mask branch has a K​m2𝐾superscript𝑚2Km^{2}-dimensional output for each RoI, which encodes K𝐾K binary masks of resolution m×m𝑚𝑚m\\times m, one for each of the K𝐾K classes. To this we apply a per-pixel sigmoid, and define Lm​a​s​ksubscript𝐿𝑚𝑎𝑠𝑘L_{mask} as the average binary cross-entropy loss. For an RoI associated with ground-truth class k𝑘k, Lm​a​s​ksubscript𝐿𝑚𝑎𝑠𝑘L_{mask} is only defined on the k𝑘k-th mask (other mask outputs do not contribute to the loss). 
", "title": "Mask R-CNN" }, { "id": "1703.06870_all_16", "text": " Our definition of Lm​a​s​ksubscript𝐿𝑚𝑎𝑠𝑘L_{mask} allows the network to generate masks for every class without competition among classes; we rely on the dedicated classification branch to predict the class label used to select the output mask. This decouples mask and class prediction. This is different from common practice when applying FCNs to semantic segmentation, which typically uses a per-pixel softmax and a multinomial cross-entropy loss. In that case, masks across classes compete; in our case, with a per-pixel sigmoid and a binary loss, they do not. We show by experiments that this formulation is key for good instance segmentation results. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_17", "text": " A mask encodes an input object’s spatial layout. Thus, unlike class labels or box offsets that are inevitably collapsed into short output vectors by fully-connected (fc) layers, extracting the spatial structure of masks can be addressed naturally by the pixel-to-pixel correspondence provided by convolutions. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_18", "text": " Specifically, we predict an m×m𝑚𝑚m\\times m mask from each RoI using an FCN . This allows each layer in the mask branch to maintain the explicit m×m𝑚𝑚m\\times m object spatial layout without collapsing it into a vector representation that lacks spatial dimensions. Unlike previous methods that resort to fc layers for mask prediction (33, 34, 10), our fully convolutional representation requires fewer parameters, and is more accurate as demonstrated by experiments. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_19", "text": " This pixel-to-pixel behavior requires our RoI features, which themselves are small feature maps, to be well aligned to faithfully preserve the explicit per-pixel spatial correspondence. This motivated us to develop the following RoIAlign layer that plays a key role in mask prediction. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_20", "text": " RoIPool is a standard operation for extracting a small feature map (e.g., 7×\\times7) from each RoI. RoIPool first quantizes a floating-number RoI to the discrete granularity of the feature map, this quantized RoI is then subdivided into spatial bins which are themselves quantized, and finally feature values covered by each bin are aggregated (usually by max pooling). Quantization is performed, e.g., on a continuous coordinate x𝑥x by computing (x/16)delimited-()𝑥16(x/16), where 16 is a feature map stride and (⋅)delimited-()⋅(\\cdot) is rounding; likewise, quantization is performed when dividing into bins (e.g., 7×\\times7). These quantizations introduce misalignments between the RoI and the extracted features. While this may not impact classification, which is robust to small translations, it has a large negative effect on predicting pixel-accurate masks. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_21", "text": " To address this, we propose an RoIAlign layer that removes the harsh quantization of RoIPool, properly aligning the extracted features with the input. Our proposed change is simple: we avoid any quantization of the RoI boundaries or bins (i.e., we use x/16𝑥16x/16 instead of (x/16)delimited-()𝑥16(x/16)). We use bilinear interpolation to compute the exact values of the input features at four regularly sampled locations in each RoI bin, and aggregate the result (using max or average), see Figure 3 for details. 
We note that the results are not sensitive to the exact sampling locations, or how many points are sampled, as long as no quantization is performed. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_22", "text": " RoIAlign leads to large improvements as we show in §4.2. We also compare to the RoIWarp operation proposed in . Unlike RoIAlign, RoIWarp overlooked the alignment issue and was implemented in as quantizing RoI just like RoIPool. So even though RoIWarp also adopts bilinear resampling motivated by , it performs on par with RoIPool as shown by experiments (more details in Table 2c), demonstrating the crucial role of alignment. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_23", "text": " To demonstrate the generality of our approach, we instantiate Mask R-CNN with multiple architectures. For clarity, we differentiate between: (i) the convolutional backbone architecture used for feature extraction over an entire image, and (ii) the network head for bounding-box recognition (classification and regression) and mask prediction that is applied separately to each RoI. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_24", "text": " We denote the backbone architecture using the nomenclature network-depth-features. We evaluate ResNet and ResNeXt networks of depth 50 or 101 layers. The original implementation of Faster R-CNN with ResNets extracted features from the final convolutional layer of the 4-th stage, which we call C4. This backbone with ResNet-50, for example, is denoted by ResNet-50-C4. This is a common choice used in (19, 10, 21, 39). ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_25", "text": " We also explore another more effective backbone recently proposed by Lin et al. , called a Feature Pyramid Network (FPN). FPN uses a top-down architecture with lateral connections to build an in-network feature pyramid from a single-scale input. Faster R-CNN with an FPN backbone extracts RoI features from different levels of the feature pyramid according to their scale, but otherwise the rest of the approach is similar to vanilla ResNet. Using a ResNet-FPN backbone for feature extraction with Mask R-CNN gives excellent gains in both accuracy and speed. For further details on FPN, we refer readers to . ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_26", "text": " For the network head we closely follow architectures presented in previous work to which we add a fully convolutional mask prediction branch. Specifically, we extend the Faster R-CNN box heads from the ResNet and FPN papers. Details are shown in Figure 4. The head on the ResNet-C4 backbone includes the 5-th stage of ResNet (namely, the 9-layer ‘res5’ ), which is compute-intensive. For FPN, the backbone already includes res5 and thus allows for a more efficient head that uses fewer filters. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_27", "text": " We note that our mask branches have a straightforward structure. More complex designs have the potential to improve performance but are not the focus of this work. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_28", "text": " We set hyper-parameters following existing Fast/Faster R-CNN work (12, 36, 27). Although these decisions were made for object detection in original papers (12, 36, 27), we found our instance segmentation system is robust to them. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_29", "text": " As in Fast R-CNN, an RoI is considered positive if it has IoU with a ground-truth box of at least 0.5 and negative otherwise. 
The mask loss Lm​a​s​ksubscript𝐿𝑚𝑎𝑠𝑘L_{mask} is defined only on positive RoIs. The mask target is the intersection between an RoI and its associated ground-truth mask. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_30", "text": " We adopt image-centric training . Images are resized such that their scale (shorter edge) is 800 pixels . Each mini-batch has 2 images per GPU and each image has N𝑁N sampled RoIs, with a ratio of 1:3 of positive to negatives . N𝑁N is 64 for the C4 backbone (as in (12, 36)) and 512 for FPN (as in ). We train on 8 GPUs (so effective mini-batch size is 16) for 160k iterations, with a learning rate of 0.02 which is decreased by 10 at the 120k iteration. We use a weight decay of 0.0001 and momentum of 0.9. With ResNeXt , we train with 1 image per GPU and the same number of iterations, with a starting learning rate of 0.01. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_31", "text": " The RPN anchors span 5 scales and 3 aspect ratios, following . For convenient ablation, RPN is trained separately and does not share features with Mask R-CNN, unless specified. For every entry in this paper, RPN and Mask R-CNN have the same backbones and so they are shareable. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_32", "text": " At test time, the proposal number is 300 for the C4 backbone (as in ) and 1000 for FPN (as in ). We run the box prediction branch on these proposals, followed by non-maximum suppression . The mask branch is then applied to the highest scoring 100 detection boxes. Although this differs from the parallel computation used in training, it speeds up inference and improves accuracy (due to the use of fewer, more accurate RoIs). The mask branch can predict K𝐾K masks per RoI, but we only use the k𝑘k-th mask, where k𝑘k is the predicted class by the classification branch. The m𝑚m×\\timesm𝑚m floating-number mask output is then resized to the RoI size, and binarized at a threshold of 0.5. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_33", "text": " Note that since we only compute masks on the top 100 detection boxes, Mask R-CNN adds a small overhead to its Faster R-CNN counterpart (e.g., ∼similar-to\\scriptstyle\\sim20% on typical models). ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_34", "text": " We perform a thorough comparison of Mask R-CNN to the state of the art along with comprehensive ablations on the COCO dataset . We report the standard COCO metrics including AP (averaged over IoU thresholds), AP50, AP75, and APS, APM, APL (AP at different scales). Unless noted, AP is evaluating using mask IoU. As in previous work (5, 27), we train using the union of 80k train images and a 35k subset of val images (trainval35k), and report ablations on the remaining 5k val images (minival). We also report results on test-dev . ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_35", "text": " We compare Mask R-CNN to the state-of-the-art methods in instance segmentation in Table 1. All instantiations of our model outperform baseline variants of previous state-of-the-art models. This includes MNC and FCIS , the winners of the COCO 2015 and 2016 segmentation challenges, respectively. Without bells and whistles, Mask R-CNN with ResNet-101-FPN backbone outperforms FCIS+++ , which includes multi-scale train/test, horizontal flip test, and online hard example mining (OHEM) . While outside the scope of this work, we expect many such improvements to be applicable to ours. 
", "title": "Mask R-CNN" }, { "id": "1703.06870_all_36", "text": " Mask R-CNN outputs are visualized in Figures 2 and 5. Mask R-CNN achieves good results even under challenging conditions. In Figure 6 we compare our Mask R-CNN baseline and FCIS+++ . FCIS+++ exhibits systematic artifacts on overlapping instances, suggesting that it is challenged by the fundamental difficulty of instance segmentation. Mask R-CNN shows no such artifacts. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_37", "text": " We run a number of ablations to analyze Mask R-CNN. Results are shown in Table 2 and discussed in detail next. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_38", "text": " Table 2a shows Mask R-CNN with various backbones. It benefits from deeper networks (50 vs. 101) and advanced designs including FPN and ResNeXt. We note that not all frameworks automatically benefit from deeper or advanced networks (see benchmarking in ). ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_39", "text": " Mask R-CNN decouples mask and class prediction: as the existing box branch predicts the class label, we generate a mask for each class without competition among classes (by a per-pixel sigmoid and a binary loss). In Table 2b, we compare this to using a per-pixel softmax and a multinomial loss (as commonly used in FCN ). This alternative couples the tasks of mask and class prediction, and results in a severe loss in mask AP (5.5 points). This suggests that once the instance has been classified as a whole (by the box branch), it is sufficient to predict a binary mask without concern for the categories, which makes the model easier to train. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_40", "text": " Our default instantiation predicts class-specific masks, i.e., one m𝑚m×\\timesm𝑚m mask per class. Interestingly, Mask R-CNN with class-agnostic masks (i.e., predicting a single m𝑚m×\\timesm𝑚m output regardless of class) is nearly as effective: it has 29.7 mask AP vs. 30.3 for the class-specific counterpart on ResNet-50-C4. This further highlights the division of labor in our approach which largely decouples classification and segmentation. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_41", "text": " An evaluation of our proposed RoIAlign layer is shown in Table 2c. For this experiment we use the ResNet-50-C4 backbone, which has stride 16. RoIAlign improves AP by about 3 points over RoIPool, with much of the gain coming at high IoU (AP75). RoIAlign is insensitive to max/average pool; we use average in the rest of the paper. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_42", "text": " Additionally, we compare with RoIWarp proposed in MNC that also adopt bilinear sampling. As discussed in §3, RoIWarp still quantizes the RoI, losing alignment with the input. As can be seen in Table 2c, RoIWarp performs on par with RoIPool and much worse than RoIAlign. This highlights that proper alignment is key. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_43", "text": " We also evaluate RoIAlign with a ResNet-50-C5 backbone, which has an even larger stride of 32 pixels. We use the same head as in Figure 4 (right), as the res5 head is not applicable. Table 2d shows that RoIAlign improves mask AP by a massive 7.3 points, and mask AP75 by 10.5 points (50% relative improvement). Moreover, we note that with RoIAlign, using stride-32 C5 features (30.9 AP) is more accurate than using stride-16 C4 features (30.3 AP, Table 2c). 
RoIAlign largely resolves the long-standing challenge of using large-stride features for detection and segmentation. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_44", "text": " Finally, RoIAlign shows a gain of 1.5 mask AP and 0.5 box AP when used with FPN, which has finer multi-level strides. For keypoint detection that requires finer alignment, RoIAlign shows large gains even with FPN (Table 6). ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_45", "text": " Segmentation is a pixel-to-pixel task and we exploit the spatial layout of masks by using an FCN. In Table 2e, we compare multi-layer perceptrons (MLP) and FCNs, using a ResNet-50-FPN backbone. Using FCNs gives a 2.1 mask AP gain over MLPs. We note that we choose this backbone so that the conv layers of the FCN head are not pre-trained, for a fair comparison with MLP. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_46", "text": " We compare Mask R-CNN to the state-of-the-art COCO bounding-box object detection in Table 3. For this result, even though the full Mask R-CNN model is trained, only the classification and box outputs are used at inference (the mask output is ignored). Mask R-CNN using ResNet-101-FPN outperforms the base variants of all previous state-of-the-art models, including the single-model variant of G-RMI , the winner of the COCO 2016 Detection Challenge. Using ResNeXt-101-FPN, Mask R-CNN further improves results, with a margin of 3.0 points box AP over the best previous single model entry from (which used Inception-ResNet-v2-TDM). ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_47", "text": " As a further comparison, we trained a version of Mask R-CNN but without the mask branch, denoted by “Faster R-CNN, RoIAlign” in Table 3. This model performs better than the model presented in due to RoIAlign. On the other hand, it is 0.9 points box AP lower than Mask R-CNN. This gap of Mask R-CNN on box detection is therefore due solely to the benefits of multi-task training. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_48", "text": " Lastly, we note that Mask R-CNN attains a small gap between its mask and box AP: e.g., 2.7 points between 37.1 (mask, Table 1) and 39.8 (box, Table 3). This indicates that our approach largely closes the gap between object detection and the more challenging instance segmentation task. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_49", "text": " We train a ResNet-101-FPN model that shares features between the RPN and Mask R-CNN stages, following the 4-step training of Faster R-CNN . This model runs at 195ms per image on an Nvidia Tesla M40 GPU (plus 15ms CPU time resizing the outputs to the original resolution), and achieves statistically the same mask AP as the unshared one. We also report that the ResNet-101-C4 variant takes ∼similar-to\\scriptstyle\\sim400ms as it has a heavier box head (Figure 4), so we do not recommend using the C4 variant in practice. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_50", "text": " Although Mask R-CNN is fast, we note that our design is not optimized for speed, and better speed/accuracy trade-offs could be achieved , e.g., by varying image sizes and proposal numbers, which is beyond the scope of this paper. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_51", "text": " Mask R-CNN is also fast to train. Training with ResNet-50-FPN on COCO trainval35k takes 32 hours in our synchronized 8-GPU implementation (0.72s per 16-image mini-batch), and 44 hours with ResNet-101-FPN. 
In fact, fast prototyping can be completed in less than one day when training on the train set. We hope such rapid training will remove a major hurdle in this area and encourage more people to perform research on this challenging topic. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_52", "text": " Our framework can easily be extended to human pose estimation. We model a keypoint’s location as a one-hot mask, and adopt Mask R-CNN to predict K𝐾K masks, one for each of K𝐾K keypoint types (e.g., left shoulder, right elbow). This task helps demonstrate the flexibility of Mask R-CNN. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_53", "text": " We note that minimal domain knowledge for human pose is exploited by our system, as the experiments are mainly to demonstrate the generality of the Mask R-CNN framework. We expect that domain knowledge (e.g., modeling structures ) will be complementary to our simple approach. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_54", "text": " We make minor modifications to the segmentation system when adapting it for keypoints. For each of the K𝐾K keypoints of an instance, the training target is a one-hot m×m𝑚𝑚m\\times m binary mask where only a single pixel is labeled as foreground. During training, for each visible ground-truth keypoint, we minimize the cross-entropy loss over an m2superscript𝑚2m^{2}-way softmax output (which encourages a single point to be detected). We note that as in instance segmentation, the K𝐾K keypoints are still treated independently. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_55", "text": " We adopt the ResNet-FPN variant, and the keypoint head architecture is similar to that in Figure 4 (right). The keypoint head consists of a stack of eight 3×\\times3 512-d conv layers, followed by a deconv layer and 2×\\times bilinear upscaling, producing an output resolution of 56×\\times56. We found that a relatively high resolution output (compared to masks) is required for keypoint-level localization accuracy. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_56", "text": " Models are trained on all COCO trainval35k images that contain annotated keypoints. To reduce overfitting, as this training set is smaller, we train using image scales randomly sampled from (640, 800) pixels; inference is on a single scale of 800 pixels. We train for 90k iterations, starting from a learning rate of 0.02 and reducing it by 10 at 60k and 80k iterations. We use bounding-box NMS with a threshold of 0.5. Other details are identical as in §3.1. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_57", "text": " We evaluate the person keypoint AP (APkpkp{}^{\\text{kp}}) and experiment with a ResNet-50-FPN backbone; more backbones will be studied in the appendix. Table 4 shows that our result (62.7 APkpkp{}^{\\text{kp}}) is 0.9 points higher than the COCO 2016 keypoint detection winner that uses a multi-stage processing pipeline (see caption of Table 4). Our method is considerably simpler and faster. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_58", "text": " More importantly, we have a unified model that can simultaneously predict boxes, segments, and keypoints while running at 5 fps. Adding a segment branch (for the person category) improves the APkpkp{}^{\\text{kp}} to 63.1 (Table 4) on test-dev. More ablations of multi-task learning on minival are in Table 5. Adding the mask branch to the box-only (i.e., Faster R-CNN) or keypoint-only versions consistently improves these tasks. 
However, adding the keypoint branch reduces the box/mask AP slightly, suggesting that while keypoint detection benefits from multitask training, it does not in turn help the other tasks. Nevertheless, learning all three tasks jointly enables a unified system to efficiently predict all outputs simultaneously (Figure 7). ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_59", "text": " We also investigate the effect of RoIAlign on keypoint detection (Table 6). Though this ResNet-50-FPN backbone has finer strides (e.g., 4 pixels on the finest level), RoIAlign still shows significant improvement over RoIPool and increases APkpkp{}^{\\text{kp}} by 4.4 points. This is because keypoint detections are more sensitive to localization accuracy. This again indicates that alignment is essential for pixel-level localization, including masks and keypoints. ", "title": "Mask R-CNN" }, { "id": "1703.06870_all_60", "text": " Given the effectiveness of Mask R-CNN for extracting object bounding boxes, masks, and keypoints, we expect it be an effective framework for other instance-level tasks. ", "title": "Mask R-CNN" } ]
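The Mask R-CNN passages above define the multi-task loss L = L_cls + L_box + L_mask, where L_mask applies a per-pixel sigmoid and binary cross-entropy only to the mask channel of the ground-truth class, so classes do not compete. A minimal sketch of that decoupled mask loss follows; tensor names and shapes are illustrative, not taken from the authors' code.

```python
import torch
import torch.nn.functional as F

def mask_rcnn_mask_loss(mask_logits: torch.Tensor,
                        gt_classes: torch.Tensor,
                        gt_masks: torch.Tensor) -> torch.Tensor:
    """mask_logits: [R, K, m, m] raw outputs, one m x m mask per class per RoI
    gt_classes:    [R]          ground-truth class index of each positive RoI
    gt_masks:      [R, m, m]    binary target masks resampled to m x m
    Only the mask predicted for the ground-truth class contributes to the
    loss; the other K-1 masks of each RoI are ignored."""
    rois = torch.arange(mask_logits.size(0), device=mask_logits.device)
    selected = mask_logits[rois, gt_classes]        # [R, m, m]
    return F.binary_cross_entropy_with_logits(selected, gt_masks.float())
```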
How does the training time of the proposed network compare to that of SRCNN?
The proposed network trains much faster than SRCNN: its 20-layer model converges within about 4 hours, whereas the 3-layer SRCNN takes several days to train, an order of magnitude or more of speedup [35].
[ 35 ]
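The VDSR contexts below attribute this training speedup to residual learning and an adjustable gradient clipping that bounds every gradient to (-theta/gamma, theta/gamma), where gamma is the current learning rate. A hedged sketch of one training step combining the two ideas; `net`, `optimizer`, the tensors, and the default `theta` are placeholders rather than the authors' MatConvNet setup.

```python
import torch

def vdsr_style_step(net, optimizer, x_ilr, y_hr, lr, theta=0.01):
    """One mini-batch step: the network predicts the residual y_hr - x_ilr
    (the image details), and each gradient is clipped to (-theta/lr, theta/lr)
    before the parameter update."""
    target = y_hr - x_ilr                       # residual-learning target
    residual = net(x_ilr)
    loss = 0.5 * (residual - target).pow(2).mean()

    optimizer.zero_grad()
    loss.backward()
    bound = theta / lr                          # adjustable clipping range
    for p in net.parameters():
        if p.grad is not None:
            p.grad.clamp_(-bound, bound)
    optimizer.step()
    return loss.item(), x_ilr + residual        # reconstructed HR estimate
```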
[ { "id": "1511.04587_all_0", "text": " We address the problem of generating a high-resolution (HR) image given a low-resolution (LR) image, commonly referred as single image super-resolution (SISR) , , . SISR is widely used in computer vision applications ranging from security and surveillance imaging to medical imaging where more image details are required on demand. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_1", "text": " Many SISR methods have been studied in the computer vision community. Early methods include interpolation such as bicubic interpolation and Lanczos resampling more powerful methods utilizing statistical image priors (20, 13) or internal patch recurrence . ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_2", "text": " Currently, learning methods are widely used to model a mapping from LR to HR patches. Neighbor embedding (4, 15) methods interpolate the patch subspace. Sparse coding (25, 26, 21, 22) methods use a learned compact dictionary based on sparse signal representation. Lately, random forest and convolutional neural network (CNN) have also been used with large improvements in accuracy. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_3", "text": " Among them, Dong et al. has demonstrated that a CNN can be used to learn a mapping from LR to HR in an end-to-end manner. Their method, termed SRCNN, does not require any engineered features that are typically necessary in other methods (25, 26, 21, 22) and shows the state-of-the-art performance. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_4", "text": " While SRCNN successfully introduced a deep learning technique into the super-resolution (SR) problem, we find its limitations in three aspects: first, it relies on the context of small image regions; second, training converges too slowly; third, the network only works for a single scale. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_5", "text": " In this work, we propose a new method to practically resolve the issues. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_6", "text": " Context We utilize contextual information spread over very large image regions. For a large scale factor, it is often the case that information contained in a small patch is not sufficient for detail recovery (ill-posed). Our very deep network using large receptive field takes a large image context into account. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_7", "text": " Convergence We suggest a way to speed-up the training: residual-learning CNN and extremely high learning rates. As LR image and HR image share the same information to a large extent, explicitly modelling the residual image, which is the difference between HR and LR images, is advantageous. We propose a network structure for efficient learning when input and output are highly correlated. Moreover, our initial learning rate is 104superscript10410^{4} times higher than that of SRCNN . This is enabled by residual-learning and gradient clipping. 
", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_8", "text": " Scale Factor We propose a single-model SR approach. Scales are typically user-specified and can be arbitrary including fractions. For example, one might need smooth zoom-in in an image viewer or resizing to a specific dimension. Training and storing many scale-dependent models in preparation for all possible scenarios is impractical. We find a single convolutional network is sufficient for multi-scale-factor super-resolution. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_9", "text": " Contribution In summary, in this work, we propose a highly accurate SR method based on a very deep convolutional network. Very deep networks converge too slowly if small learning rates are used. Boosting convergence rate with high learning rates lead to exploding gradients and we resolve the issue with residual-learning and gradient clipping. In addition, we extend our work to cope with multi-scale SR problem in a single network. Our method is relatively accurate and fast in comparison to state-of-the-art methods as illustrated in Figure 1. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_10", "text": " SRCNN is a representative state-of-art method for deep learning-based SR approach. So, let us analyze and compare it with our proposed method. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_11", "text": " Model SRCNN consists of three layers: patch extraction/representation, non-linear mapping and reconstruction. Filters of spatial sizes 9×9999\\times 9, 1×1111\\times 1, and 5×5555\\times 5 were used respectively. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_12", "text": " In , Dong et al. attempted to prepare deeper models, but failed to observe superior performance after a week of training. In some cases, deeper models gave inferior performance. They conclude that deeper networks do not result in better performance (Figure 9). ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_13", "text": " However, we argue that increasing depth significantly boosts performance. We successfully use 20 weight layers (3×3333\\times 3 for each layer). Our network is very deep (20 vs. 3 ) and information used for reconstruction (receptive field) is much larger (41×41414141\\times 41 vs. 13×13131313\\times 13). ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_14", "text": " Training For training, SRCNN directly models high-resolution images. A high-resolution image can be decomposed into a low frequency information (corresponding to low-resolution image) and high frequency information (residual image or image details). Input and output images share the same low-frequency information. This indicates that SRCNN serves two purposes: carrying the input to the end layer and reconstructing residuals. Carrying the input to the end is conceptually similar to what an auto-encoder does. Training time might be spent on learning this auto-encoder so that the convergence rate of learning the other part (image details) is significantly decreased. 
In contrast, since our network models the residual images directly, we can have much faster convergence with even better accuracy. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_15", "text": " Scale As in most existing SR methods, SRCNN is trained for a single scale factor and is supposed to work only with the specified scale. Thus, if a new scale is on demand, a new model has to be trained. To cope with multiple scale SR (possibly including fractional factors), we need to construct individual single scale SR system for each scale of interest. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_16", "text": " However, preparing many individual machines for all possible scenarios to cope with multiple scales is inefficient and impractical. In this work, we design and train a single network to handle multiple scale SR problem efficiently. This turns out to work very well. Our single machine is compared favorably to a single-scale expert for the given sub-task. For three scales factors (×2,3,4\\times 2,3,4), we can reduce the number of parameters by three-fold. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_17", "text": " In addition to the aforementioned issues, there are some minor differences. Our output image has the same size as the input image by padding zeros every layer during training whereas output from SRCNN is smaller than the input. Finally, we simply use the same learning rates for all layers while SRCNN uses different learning rates for different layers in order to achieve stable convergence. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_18", "text": " For SR image reconstruction, we use a very deep convolutional network inspired by Simonyan and Zisserman . The configuration is outlined in Figure 2. We use d𝑑d layers where layers except the first and the last are of the same type: 64 filter of the size 3×3×6433643\\times 3\\times 64, where a filter operates on 3×3333\\times 3 spatial region across 64 channels (feature maps). The first layer operates on the input image. The last layer, used for image reconstruction, consists of a single filter of size 3×3×6433643\\times 3\\times 64. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_19", "text": " The network takes an interpolated low-resolution image (to the desired size) as input and predicts image details. Modelling image details is often used in super-resolution methods (21, 22, 15, 3) and we find that CNN-based methods can benefit from this domain-specific knowledge. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_20", "text": " In this work, we demonstrate that explicitly modelling image details (residuals) has several advantages. These are further discussed later in Section 4.2. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_21", "text": " One problem with using a very deep network to predict dense outputs is that the size of the feature map gets reduced every time convolution operations are applied. For example, when an input of size (n+1)×(n+1)𝑛1𝑛1(n+1)\\times(n+1) is applied to a network with receptive field size n×n𝑛𝑛n\\times n, the output image is 1×1111\\times 1. 
", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_22", "text": " This is in accordance with other super-resolution methods since many require surrounding pixels to infer center pixels correctly. This center-surround relation is useful since the surrounding region provides more constraints to this ill-posed problem (SR). For pixels near the image boundary, this relation cannot be exploited to the full extent and many SR methods crop the result image. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_23", "text": " This methodology, however, is not valid if the required surround region is very big. After cropping, the final image is too small to be visually pleasing. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_24", "text": " To resolve this issue, we pad zeros before convolutions to keep the sizes of all feature maps (including the output image) the same. It turns out that zero-padding works surprisingly well. For this reason, our method differs from most other methods in the sense that pixels near the image boundary are also correctly predicted. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_25", "text": " Once image details are predicted, they are added back to the input ILR image to give the final image (HR). We use this structure for all experiments in our work. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_26", "text": " We now describe the objective to minimize in order to find optimal parameters of our model. Let 𝐱𝐱{\\bf x} denote an interpolated low-resolution image and 𝐲𝐲{\\bf y} a high-resolution image. Given a training dataset {𝐱(i),𝐲(i)}i=1N\\{{\\bf x}^{(i)},{\\bf y}^{(i)}\\}{}_{i=1}^{N}, our goal is to learn a model f𝑓f that predicts values 𝐲^=f​(𝐱)^𝐲𝑓𝐱\\mathbf{\\hat{y}}=f(\\mathbf{x}), where 𝐲^^𝐲\\mathbf{\\hat{y}} is an estimate of the target HR image. We minimize the mean squared error 12​‖𝐲−f​(𝐱)‖212superscriptnorm𝐲𝑓𝐱2\\frac{1}{2}||\\mathbf{y}-f(\\mathbf{x})||^{2} averaged over the training set is minimized. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_27", "text": " Residual-Learning In SRCNN, the exact copy of the input has to go through all layers until it reaches the output layer. With many weight layers, this becomes an end-to-end relation requiring very long-term memory. For this reason, the vanishing/exploding gradients problem can be critical. We can solve this problem simply with residual-learning. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_28", "text": " As the input and output images are largely similar, we define a residual image 𝐫=𝐲−𝐱𝐫𝐲𝐱{\\bf r}={\\bf y}-{\\bf x}, where most values are likely to be zero or small. We want to predict this residual image. The loss function now becomes 12​‖𝐫−f​(𝐱)‖212superscriptnorm𝐫𝑓𝐱2\\frac{1}{2}||\\mathbf{r}-f(\\mathbf{x})||^{2}, where f​(𝐱)𝑓𝐱f(\\bf{x}) is the network prediction. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_29", "text": " In networks, this is reflected in the loss layer as follows. Our loss layer takes three inputs: residual estimate, network input (ILR image) and ground truth HR image. 
The loss is computed as the Euclidean distance between the reconstructed image (the sum of network input and output) and ground truth. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_30", "text": " Training is carried out by optimizing the regression objective using mini-batch gradient descent based on back-propagation (LeCun et al. ). We set the momentum parameter to 0.9. The training is regularized by weight decay (L2subscript𝐿2L_{2} penalty multiplied by 0.0001). ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_31", "text": " High Learning Rates for Very Deep Networks Training deep models can fail to converge in realistic limit of time. SRCNN fails to show superior performance with more than three weight layers. While there can be various reasons, one possibility is that they stopped their training procedure before networks converged. Their learning rate 10−5superscript10510^{-5} is too small for a network to converge within a week on a common GPU. Looking at Fig. 9 of , it is not easy to say their deeper networks have converged and their performances were saturated. While more training will eventually resolve the issue, but increasing depth to 20 does not seems practical with SRCNN. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_32", "text": " It is a basic rule of thumb to make learning rate high to boost training. But simply setting learning rate high can also lead to vanishing/exploding gradients . For the reason, we suggest an adjustable gradient clipping for maximal boost in speed while suppressing exploding gradients. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_33", "text": " Adjustable Gradient Clipping Gradient clipping is a technique that is often used in training recurrent neural networks . But, to our knowledge, its usage is limited in training CNNs. While there exist many ways to limit gradients, one of the common strategies is to clip individual gradients to the predefined range (−θ,θ)𝜃𝜃(-\\theta,\\theta). ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_34", "text": " With clipping, gradients are in a certain range. With stochastic gradient descent commonly used for training, learning rate is multiplied to adjust the step size. If high learning rate is used, it is likely that θ𝜃\\theta is tuned to be small to avoid exploding gradients in a high learning rate regime. But as learning rate is annealed to get smaller, the effective gradient (gradient multiplied by learning rate) approaches zero and training can take exponentially many iterations to converge if learning rate is decreased geometrically. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_35", "text": " For maximal speed of convergence, we clip the gradients to (−θγ,θγ)𝜃𝛾𝜃𝛾(-\\frac{\\theta}{\\gamma},\\frac{\\theta}{\\gamma}), where γ𝛾\\gamma denotes the current learning rate. We find the adjustable gradient clipping makes our convergence procedure extremely fast. Our 20-layer network training is done within 4 hours whereas 3-layer SRCNN takes several days to train. 
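A sketch of one training step combining the residual objective and the adjustable gradient clipping described above, assuming a PyTorch model and SGD optimizer like the earlier sketch. The base clipping value `theta` is a placeholder, since the passages do not state the value used; the clip range (-theta/gamma, theta/gamma) follows the text, with gamma read from the optimizer's current learning rate.

```python
import torch
import torch.nn.functional as F

def train_step(model, optimizer, ilr, hr, theta=0.4):
    """One mini-batch step: residual objective plus adjustable gradient
    clipping to (-theta/gamma, theta/gamma), where gamma is the current
    learning rate. theta=0.4 is a placeholder value, not from the paper."""
    optimizer.zero_grad()
    sr = model(ilr)                          # network input + predicted residual
    loss = 0.5 * F.mse_loss(sr, hr)          # Euclidean loss against the ground truth
    loss.backward()
    gamma = optimizer.param_groups[0]['lr']
    torch.nn.utils.clip_grad_value_(model.parameters(), theta / gamma)
    optimizer.step()
    return loss.item()
```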
", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_36", "text": " Multi-Scale While very deep models can boost performance, more parameters are now needed to define a network. Typically, one network is created for each scale factor. Considering that fractional scale factors are often used, we need an economical way to store and retrieve networks. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_37", "text": " For this reason, we also train a multi-scale model. With this approach, parameters are shared across all predefined scale factors. Training a multi-scale model is straightforward. Training datasets for several specified scales are combined into one big dataset. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_38", "text": " Data preparation is similar to SRCNN with some differences. Input patch size is now equal to the size of the receptive field and images are divided into sub-images with no overlap. A mini-batch consists of 64 sub-images, where sub-images from different scales can be in the same batch. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_39", "text": " We implement our model using the MatConvNet111http://www.vlfeat.org/matconvnet/ package . ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_40", "text": " In this section, we study three properties of our proposed method. First, we show that large depth is necessary for the task of SR. A very deep network utilizes more contextual information in an image and models complex functions with many nonlinear layers. We experimentally verify that deeper networks give better performances than shallow ones. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_41", "text": " Second, we show that our residual-learning network converges much faster than the standard CNN. Moreover, our network gives a significant boost in performance. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_42", "text": " Third, we show that our method with a single network performs as well as a method using multiple networks trained for each scale. We can effectively reduce model capacity (the number of parameters) of multi-network approaches. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_43", "text": " Convolutional neural networks exploit spatially-local correlation by enforcing a local connectivity pattern between neurons of adjacent layers . In other words, hidden units in layer m𝑚m take as input a subset of units in layer m−1𝑚1m-1. They form spatially contiguous receptive fields. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_44", "text": " Each hidden unit is unresponsive to variations outside of the receptive field with respect to the input. The architecture thus ensures that the learned filters produce the strongest response to a spatially local input pattern. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_45", "text": " However, stacking many such layers leads to filters that become increasingly “global” (i.e. 
responsive to a larger region of pixel space). In other words, a filter of very large support can be effectively decomposed into a series of small filters. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_46", "text": " In this work, we use filters of the same size, 3×\\times3, for all layers. For the first layer, the receptive field is of size 3×\\times3. For the next layers, the size of the receptive field increases by 2 in both height and width. For depth D𝐷D network, the receptive field has size (2​D+1)×(2​D+1)2𝐷12𝐷1(2D+1)\\times(2D+1). Its size is proportional to the depth. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_47", "text": " In the task of SR, this corresponds to the amount of contextual information that can be exploited to infer high-frequency components. A large receptive field means the network can use more context to predict image details. As SR is an ill-posed inverse problem, collecting and analyzing more neighbor pixels give more clues. For example, if there are some image patterns entirely contained in a receptive field, it is plausible that this pattern is recognized and used to super-resolve the image. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_48", "text": " In addition, very deep networks can exploit high nonlinearities. We use 19 rectified linear units and our networks can model very complex functions with moderate number of channels (neurons). The advantages of making a thin deep network is well explained in Simonyan and Zisserman . ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_49", "text": " We now experimentally show that very deep networks significantly improve SR performance. We train and test networks of depth ranging from 5 to 20 (only counting weight layers excluding nonlinearity layers). In Figure 3, we show the results. In most cases, performance increases as depth increases. As depth increases, performance improves rapidly. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_50", "text": " As we already have a low-resolution image as the input, predicting high-frequency components is enough for the purpose of SR. Although the concept of predicting residuals has been used in previous methods (21, 22, 26), it has not been studied in the context of deep-learning-based SR framework. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_51", "text": " In this work, we have proposed a network structure that learns residual images. We now study the effect of this modification to a standard CNN structure in detail. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_52", "text": " First, we find that this residual network converges much faster. Two networks are compared experimentally: the residual network and the standard non-residual network. We use depth 10 (weight layers) and scale factor 2. Performance curves for various learning rates are shown in Figure 4. All use the same learning rate scheduling mechanism that has been mentioned above. 
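The receptive-field growth described above is easy to verify in a couple of lines: with 3x3 filters and stride 1, each extra layer adds 2 pixels in each direction, giving (2D+1) x (2D+1) for depth D.

```python
def receptive_field(depth, kernel=3):
    """Receptive field of `depth` stacked stride-1 convolutions: each layer
    adds (kernel - 1) pixels, i.e. (2*depth + 1) for 3x3 filters."""
    return depth * (kernel - 1) + 1

print(receptive_field(20))   # 41 -> a 41x41 region for the depth-20 network
```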
", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_53", "text": " Second, at convergence, the residual network shows superior performance. In Figure 4, residual networks give higher PSNR when training is done. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_54", "text": " Another remark is that if small learning rates are used, networks do not converge in the given number of epochs. If initial learning rate 0.1 is used, PSNR of a residual-learning network reaches 36.90 within 10 epochs. But if 0.001 is used instead, the network never reaches the same level of performance (its performance is 36.52 after 80 epochs). In a similar manner, residual and non-residual networks show dramatic performance gaps after 10 epochs (36.90 vs. 27.42 for rate 0.1). ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_55", "text": " In short, this simple modification to a standard non-residual network structure is very powerful and one can explore the validity of the idea in other image restoration problems where input and output images are highly correlated. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_56", "text": " Scale augmentation during training is a key technique to equip a network with super-resolution machines of multiple scales. Many SR processes for different scales can be executed with our multi-scale machine with much smaller capacity than that of single-scale machines combined. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_57", "text": " We start with an interesting experiment as follows: we train our network with a single scale factor strainsubscript𝑠trains_{\\text{train}} and it is tested under another scale factor stestsubscript𝑠tests_{\\text{test}}. Here, factors 2,3 and 4 that are widely used in SR comparisons are considered. Possible pairs (strainsubscript𝑠trains_{\\text{train}},stestsubscript𝑠tests_{\\text{test}}) are tried for the dataset ‘Set5’ . Experimental results are summarized in Table 2. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_58", "text": " Performance is degraded if strain≠stestsubscript𝑠trainsubscript𝑠tests_{\\text{train}}\\neq s_{\\text{test}}. For scale factor 2, the model trained with factor 2 gives PSNR of 37.10 (in dB), whereas models trained with factor 3 and 4 give 30.05 and 28.13, respectively. A network trained over single-scale data is not capable of handling other scales. In many tests, it is even worse than bicubic interpolation, the method used for generating the input image. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_59", "text": " We now test if a model trained with scale augmentation is capable of performing SR at multiple scale factors. The same network used above is trained with multiple scale factors strain={2,3,4}subscript𝑠train234s_{\\text{train}}=\\{2,3,4\\}. In addition, we experiment with the cases strain={2,3},{2,4},{3,4}subscript𝑠train232434s_{\\text{train}}=\\{2,3\\},\\{2,4\\},\\{3,4\\} for more comparisons. 
", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_60", "text": " We observe that the network copes with any scale used during training. When strain={2,3,4}subscript𝑠train234s_{\\text{train}}=\\{2,3,4\\} (×2,3,4\\times 2,3,4 in Table 2), its PSNR for each scale is comparable to those achieved from the corresponding result of single-scale network: 37.06 vs. 37.10 (×2absent2\\times 2), 33.27 vs. 32.89 (×3absent3\\times 3), 30.95 vs. 30.86 (×4absent4\\times 4). ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_61", "text": " Another pattern is that for large scales (×3,4\\times 3,4), our multi-scale network outperforms single-scale network: our model (×2,3\\times 2,3), (×3,4\\times 3,4) and (×2,3,4\\times 2,3,4) give PSNRs 33.22, 33.24 and 33.27 for test scale 3, respectively, whereas (×3absent3\\times 3) gives 32.89. Similarly, (×2,4\\times 2,4), (×3,4\\times 3,4) and (×2,3,4\\times 2,3,4) give 30.86, 30.94 and 30.95 (vs. 30.84 by ×4absent4\\times 4 model), respectively. From this, we observe that training multiple scales boosts the performance for large scales. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_62", "text": " In this section, we evaluate the performance of our method on several datasets. We first describe datasets used for training and testing our method. Next, parameters necessary for training are given. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_63", "text": " After outlining our experimental setup, we compare our method with several state-of-the-art SISR methods. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_64", "text": " Training dataset Different learning-based methods use different training images. For example, RFL has two methods, where the first one uses 91 images from Yang et al. and the second one uses 291 images with the addition of 200 images from Berkeley Segmentation Dataset . SRCNN uses a very large ImageNet dataset. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_65", "text": " We use 291 images as in for benchmark with other methods in this section. In addition, data augmentation (rotation or flip) is used. For results in previous sections, we used 91 images to train network fast, so performances can be slightly different. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_66", "text": " Test dataset For benchmark, we use four datasets. Datasets ‘Set5’ and ‘Set14’ are often used for benchmark in other works (22, 21, 5). Dataset ‘Urban100’, a dataset of urban images recently provided by Huang et al. , is very interesting as it contains many challenging images failed by many of the existing methods. Finally, dataset ‘B100’, natural images in the Berkeley Segmentation Dataset used in Timofte et al. and Yang and Yang for benchmark, is also employed. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_67", "text": " We provide parameters used to train our final model. We use a network of depth 20. Training uses batches of size 64. Momentum and weight decay parameters are set to 0.9 and 0.00010.00010.0001, respectively. 
", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_68", "text": " For weight initialization, we use the method described in He et al. . This is a theoretically sound procedure for networks utilizing rectified linear units (ReLu). ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_69", "text": " We train all experiments over 80 epochs (9960 iterations with batch size 64). Learning rate was initially set to 0.1 and then decreased by a factor of 10 every 20 epochs. In total, the learning rate was decreased 3 times, and the learning is stopped after 80 epochs. Training takes roughly 4 hours on GPU Titan Z. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_70", "text": " For benchmark, we follow the publicly available framework of Huang et al. . It enables the comparison of many state-of-the-art results with the same evaluation procedure. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_71", "text": " The framework applies bicubic interpolation to color components of an image and sophisticated models to luminance components as in other methods , , . This is because human vision is more sensitive to details in intensity than in color. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_72", "text": " This framework crops pixels near image boundary. For our method, this procedure is unnecessary as our network outputs the full-sized image. For fair comparison, however, we also crop pixels to the same amount. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_73", "text": " We provide quantitative and qualitative comparisons. Compared methods are A+ , RFL, SelfEx and SRCNN . In Table 3, we provide a summary of quantitative evaluation on several datasets. Our methods outperform all previous methods in these datasets. Moreover, our methods are relatively fast. The public code of SRCNN based on a CPU implementation is slower than the code used by Dong et. al in their paper based on a GPU implementation. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_74", "text": " In Figures 6 and 7, we compare our method with top-performing methods. In Figure 6, only our method perfectly reconstructs the line in the middle. Similarly, in Figure 7, contours are clean and vivid in our method whereas they are severely blurred or distorted in other methods. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" }, { "id": "1511.04587_all_75", "text": " In this work, we have presented a super-resolution method using very deep networks. Training a very deep network is hard due to a slow convergence rate. We use residual-learning and extremely high learning rates to optimize a very deep network fast. Convergence speed is maximized and we use gradient clipping to ensure the training stability. We have demonstrated that our method outperforms the existing method by a large margin on benchmarked images. We believe our approach is readily applicable to other image restoration problems such as denoising and compression artifact removal. ", "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" } ]
Which datasets are used by the paper for supervised learning?
The UCF-101 and HMDB-51 datasets are used for supervised learning [22].
[ 22 ]
[ { "id": "1502.04681_all_0", "text": " Understanding temporal sequences is important for solving many problems in the AI-set. Recently, recurrent neural networks using the Long Short Term Memory (LSTM) architecture (Hochreiter & Schmidhuber, 1997) have been used successfully to perform various supervised sequence learning tasks, such as speech recognition (Graves & Jaitly, 2014), machine translation (Sutskever et al., 2014; Cho et al., 2014), and caption generation for images (Vinyals et al., 2014). They have also been applied on videos for recognizing actions and generating natural language descriptions (Donahue et al., 2014). A general sequence to sequence learning framework was described by Sutskever et al. (2014) in which a recurrent network is used to encode a sequence into a fixed length representation, and then another recurrent network is used to decode a sequence out of that representation. In this work, we apply and extend this framework to learn representations of sequences of images. We choose to work in the unsupervised setting where we only have access to a dataset of unlabelled videos. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_1", "text": " Videos are an abundant and rich source of visual information and can be seen as a window into the physics of the world we live in, showing us examples of what constitutes objects, how objects move against backgrounds, what happens when cameras move and how things get occluded. Being able to learn a representation that disentangles these factors would help in making intelligent machines that can understand and act in their environment. Additionally, learning good video representations is essential for a number of useful tasks, such as recognizing actions and gestures. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_2", "text": " Supervised learning has been extremely successful in learning good visual representations that not only produce good results at the task they are trained for, but also transfer well to other tasks and datasets. Therefore, it is natural to extend the same approach to learning video representations. This has led to research in 3D convolutional nets (Ji et al., 2013; Tran et al., 2014), different temporal fusion strategies (Karpathy et al., 2014) and exploring different ways of presenting visual information to convolutional nets (Simonyan & Zisserman, 2014a). However, videos are much higher dimensional entities compared to single images. Therefore, it becomes increasingly difficult to do credit assignment and learn long range structure, unless we collect much more labelled data or do a lot of feature engineering (for example computing the right kinds of flow features) to keep the dimensionality low. The costly work of collecting more labelled data and the tedious work of doing more clever engineering can go a long way in solving particular problems, but this is ultimately unsatisfying as a machine learning solution. This highlights the need for using unsupervised learning to find and represent structure in videos. Moreover, videos have a lot of structure in them (spatial and temporal regularities) which makes them particularly well suited as a domain for building unsupervised learning models. 
", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_3", "text": " When designing any unsupervised learning model, it is crucial to have the right inductive biases and choose the right objective function so that the learning signal points the model towards learning useful features. In this paper, we use the LSTM Encoder-Decoder framework to learn video representations. The key inductive bias here is that the same operation must be applied at each time step to propagate information to the next step. This enforces the fact that the physics of the world remains the same, irrespective of input. The same physics acting on any state, at any time, must produce the next state. Our model works as follows. The Encoder LSTM runs through a sequence of frames to come up with a representation. This representation is then decoded through another LSTM to produce a target sequence. We consider different choices of the target sequence. One choice is to predict the same sequence as the input. The motivation is similar to that of autoencoders – we wish to capture all that is needed to reproduce the input but at the same time go through the inductive biases imposed by the model. Another option is to predict the future frames. Here the motivation is to learn a representation that extracts all that is needed to extrapolate the motion and appearance beyond what has been observed. These two natural choices can also be combined. In this case, there are two decoder LSTMs – one that decodes the representation into the input sequence and another that decodes the same representation to predict the future. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_4", "text": " The inputs to the model can, in principle, be any representation of individual video frames. However, for the purposes of this work, we limit our attention to two kinds of inputs. The first is image patches. For this we use natural image patches as well as a dataset of moving MNIST digits. The second is high-level “percepts” extracted by applying a convolutional net trained on ImageNet. These percepts are the states of last (and/or second-to-last) layers of rectified linear hidden states from a convolutional neural net model. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_5", "text": " In order to evaluate the learned representations we qualitatively analyze the reconstructions and predictions made by the model. For a more quantitative evaluation, we use these LSTMs as initializations for the supervised task of action recognition. If the unsupervised learning model comes up with useful representations then the classifier should be able to perform better, especially when there are only a few labelled examples. We find that this is indeed the case. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_6", "text": " The first approaches to learning representations of videos in an unsupervised way were based on ICA (van Hateren & Ruderman, 1998; Hurri & Hyvärinen, 2003). Le et al. (2011) approached this problem using multiple layers of Independent Subspace Analysis modules. Generative models for understanding transformations between pairs of consecutive images are also well studied (Memisevic, 2013; Memisevic & Hinton, 2010; Susskind et al., 2011). This work was extended recently by Michalski et al. (2014) to model longer sequences. 
", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_7", "text": " Recently, Ranzato et al. (2014) proposed a generative model for videos. The model uses a recurrent neural network to predict the next frame or interpolate between frames. In this work, the authors highlight the importance of choosing the right loss function. It is argued that squared loss in input space is not the right objective because it does not respond well to small distortions in input space. The proposed solution is to quantize image patches into a large dictionary and train the model to predict the identity of the target patch. This does solve some of the problems of squared loss but it introduces an arbitrary dictionary size into the picture and altogether removes the idea of patches being similar or dissimilar to one other. Designing an appropriate loss function that respects our notion of visual similarity is a very hard problem (in a sense, almost as hard as the modeling problem we want to solve in the first place). Therefore, in this paper, we use the simple squared loss objective function as a starting point and focus on designing an encoder-decoder RNN architecture that can be used with any loss function. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_8", "text": " In this section, we describe several variants of our LSTM Encoder-Decoder model. The basic unit of our network is the LSTM cell block. Our implementation of LSTMs follows closely the one discussed by Graves (2013). ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_9", "text": " In this section we briefly describe the LSTM unit which is the basic building block of our model. The unit is shown in Fig. 1 (reproduced from Graves (2013)). ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_10", "text": " Each LSTM unit has a cell which has a state ctsubscript𝑐𝑡c_{t} at time t𝑡t. This cell can be thought of as a memory unit. Access to this memory unit for reading or modifying it is controlled through sigmoidal gates – input gate itsubscript𝑖𝑡i_{t}, forget gate ftsubscript𝑓𝑡f_{t} and output gate otsubscript𝑜𝑡o_{t}. The LSTM unit operates as follows. At each time step it receives inputs from two external sources at each of the four terminals (the three gates and the input). The first source is the current frame 𝐱tsubscript𝐱𝑡{{\\bf x}_{t}}. The second source is the previous hidden states of all LSTM units in the same layer 𝐡t−1subscript𝐡𝑡1{\\bf h}_{t-1}. Additionally, each gate has an internal source, the cell state ct−1subscript𝑐𝑡1c_{t-1} of its cell block. The links between a cell and its own gates are called peephole connections. The inputs coming from different sources get added up, along with a bias. The gates are activated by passing their total input through the logistic function. The total input at the input terminal is passed through the tanh non-linearity. The resulting activation is multiplied by the activation of the input gate. This is then added to the cell state after multiplying the cell state by the forget gate’s activation ftsubscript𝑓𝑡f_{t}. The final output from the LSTM unit htsubscriptℎ𝑡h_{t} is computed by multiplying the output gate’s activation otsubscript𝑜𝑡o_{t} with the updated cell state passed through a tanh non-linearity. 
These updates are summarized for a layer of LSTM units as follows: {\\bf i}_{t} = \\sigma\\left(W_{xi}{\\bf x}_{t}+W_{hi}{\\bf h}_{t-1}+W_{ci}{\\bf c}_{t-1}+{\\bf b}_{i}\\right), {\\bf f}_{t} = \\sigma\\left(W_{xf}{\\bf x}_{t}+W_{hf}{\\bf h}_{t-1}+W_{cf}{\\bf c}_{t-1}+{\\bf b}_{f}\\right), {\\bf c}_{t} = {\\bf f}_{t}{\\bf c}_{t-1}+{\\bf i}_{t}\\tanh\\left(W_{xc}{\\bf x}_{t}+W_{hc}{\\bf h}_{t-1}+{\\bf b}_{c}\\right), {\\bf o}_{t} = \\sigma\\left(W_{xo}{\\bf x}_{t}+W_{ho}{\\bf h}_{t-1}+W_{co}{\\bf c}_{t}+{\\bf b}_{o}\\right), {\\bf h}_{t} = {\\bf o}_{t}\\tanh({\\bf c}_{t}). ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_11", "text": " Note that all W_{c\\bullet} matrices are diagonal, whereas the rest are dense. The key advantage of using an LSTM unit over a traditional neuron in an RNN is that the cell state in an LSTM unit sums activities over time. Since derivatives distribute over sums, the error derivatives don’t vanish quickly as they get sent back into time. This makes it easy to do credit assignment over long sequences and discover long-range features. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_12", "text": " In this section, we describe a model that uses Recurrent Neural Nets (RNNs) made of LSTM units to do unsupervised learning. The model consists of two RNNs – the encoder LSTM and the decoder LSTM as shown in Fig. 2. The input to the model is a sequence of vectors (image patches or features). The encoder LSTM reads in this sequence. After the last input has been read, the decoder LSTM takes over and outputs a prediction for the target sequence. The target sequence is the same as the input sequence, but in reverse order. Reversing the target sequence makes the optimization easier because the model can get off the ground by looking at low range correlations. This is also inspired by how lists are represented in LISP. The encoder can be seen as creating a list by applying the cons function on the previously constructed list and the new input. The decoder essentially unrolls this list, with the hidden to output weights extracting the element at the top of the list (car function) and the hidden to hidden weights extracting the rest of the list (cdr function). Therefore, the first element out is the last element in. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_13", "text": " The decoder can be of two kinds – conditional or unconditioned. A conditional decoder receives the last generated output frame as input, i.e., the dotted input in Fig. 2 is present. An unconditioned decoder does not receive that input. 
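The LSTM updates summarized above translate directly into a few lines of NumPy; the diagonal peephole matrices W_{c·} are implemented as element-wise vectors, and the parameter names in the dictionary are ours.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, p):
    """One step of the LSTM unit above. `p` holds the parameters:
    Wx*, Wh* are dense matrices, wc* are peephole vectors, b* are biases."""
    i = sigmoid(p['Wxi'] @ x_t + p['Whi'] @ h_prev + p['wci'] * c_prev + p['bi'])
    f = sigmoid(p['Wxf'] @ x_t + p['Whf'] @ h_prev + p['wcf'] * c_prev + p['bf'])
    c = f * c_prev + i * np.tanh(p['Wxc'] @ x_t + p['Whc'] @ h_prev + p['bc'])
    o = sigmoid(p['Wxo'] @ x_t + p['Who'] @ h_prev + p['wco'] * c + p['bo'])
    h = o * np.tanh(c)
    return h, c
```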
This is discussed in more detail in Sec. 2.4. Fig. 2 shows a single layer LSTM Autoencoder. The architecture can be extend to multiple layers by stacking LSTMs on top of each other. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_14", "text": " Why should this learn good features? The state of the encoder LSTM after the last input has been read is the representation of the input video. The decoder LSTM is being asked to reconstruct back the input sequence from this representation. In order to do so, the representation must retain information about the appearance of the objects and the background as well as the motion contained in the video. However, an important question for any autoencoder-style model is what prevents it from learning an identity mapping and effectively copying the input to the output. In that case all the information about the input would still be present but the representation will be no better than the input. There are two factors that control this behaviour. First, the fact that there are only a fixed number of hidden units makes it unlikely that the model can learn trivial mappings for arbitrary length input sequences. Second, the same LSTM operation is used to decode the representation recursively. This means that the same dynamics must be applied on the representation at any stage of decoding. This further prevents the model from learning an identity mapping. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_15", "text": " Another natural unsupervised learning task for sequences is predicting the future. This is the approach used in language models for modeling sequences of words. The design of the Future Predictor Model is same as that of the Autoencoder Model, except that the decoder LSTM in this case predicts frames of the video that come after the input sequence (Fig. 3). Ranzato et al. (2014) use a similar model but predict only the next frame at each time step. This model, on the other hand, predicts a long sequence into the future. Here again we can consider two variants of the decoder – conditional and unconditioned. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_16", "text": " Why should this learn good features? In order to predict the next few frames correctly, the model needs information about which objects and background are present and how they are moving so that the motion can be extrapolated. The hidden state coming out from the encoder will try to capture this information. Therefore, this state can be seen as a representation of the input sequence. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_17", "text": " For each of these two models, we can consider two possibilities - one in which the decoder LSTM is conditioned on the last generated frame and the other in which it is not. In the experimental section, we explore these choices quantitatively. Here we briefly discuss arguments for and against a conditional decoder. A strong argument in favour of using a conditional decoder is that it allows the decoder to model multiple modes in the target sequence distribution. Without that, we would end up averaging the multiple modes in the low-level input space. However, this is an issue only if we expect multiple modes in the target sequence distribution. 
For the LSTM Autoencoder, there is only one correct target and hence a unimodal target distribution. But for the LSTM Future Predictor there is a possibility of multiple targets given an input because even if we assume a deterministic universe, everything needed to predict the future will not necessarily be observed in the input. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_18", "text": " There is also an argument against using a conditional decoder from the optimization point-of-view. There are strong short-range correlations in video data, for example, most of the content of a frame is same as the previous one. If the decoder was given access to the last few frames while generating a particular frame at training time, it would find it easy to pick up on these correlations. There would only be a very small gradient that tries to fix up the extremely subtle errors that require long term knowledge about the input sequence. In an unconditioned decoder, this input is removed and the model is forced to look for information deep inside the encoder. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_19", "text": " The two tasks – reconstructing the input and predicting the future can be combined to create a composite model as shown in Fig. 4. Here the encoder LSTM is asked to come up with a state from which we can both predict the next few frames as well as reconstruct the input. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_20", "text": " This composite model tries to overcome the shortcomings that each model suffers on its own. A high-capacity autoencoder would suffer from the tendency to learn trivial representations that just memorize the inputs. However, this memorization is not useful at all for predicting the future. Therefore, the composite model cannot just memorize information. On the other hand, the future predictor suffers form the tendency to store information only about the last few frames since those are most important for predicting the future, i.e., in order to predict vtsubscript𝑣𝑡v_{t}, the frames {vt−1,…,vt−k}subscript𝑣𝑡1…subscript𝑣𝑡𝑘\\{v_{t-1},\\ldots,v_{t-k}\\} are much more important than v0subscript𝑣0v_{0}, for some small value of k𝑘k. Therefore the representation at the end of the encoder will have forgotten about a large part of the input. But if we ask the model to also predict all of the input sequence, then it cannot just pay attention to the last few frames. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_21", "text": " We design experiments to accomplish the following objectives: • Get a qualitative understanding of what the LSTM learns to do. • Measure the benefit of initializing networks for supervised learning tasks with the weights found by unsupervised learning, especially with very few training examples. • Compare the different proposed models - Autoencoder, Future Predictor and Composite models and their conditional variants. • Compare with state-of-the-art action recognition benchmarks. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_22", "text": " We use the UCF-101 and HMDB-51 datasets for supervised tasks. The UCF-101 dataset (Soomro et al., 2012) contains 13,320 videos with an average length of 6.2 seconds belonging to 101 different action categories. 
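A hedged PyTorch sketch of the Composite Model described above: a single encoder LSTM whose final state is handed to two unconditioned decoders, one reconstructing the input (whose target would be the reversed input sequence) and one predicting future frames. This is one plausible wiring of the state passing; the authors' implementation, linked later in these passages, is the reference.

```python
import torch
import torch.nn as nn

class CompositeLSTM(nn.Module):
    """One encoder LSTM plus an input-reconstruction decoder and a
    future-prediction decoder, both started from the encoder's final state.
    Decoders are unconditioned: they receive a zero vector at every step."""
    def __init__(self, dim, hidden=2048):
        super().__init__()
        self.encoder = nn.LSTMCell(dim, hidden)
        self.dec_recon = nn.LSTMCell(dim, hidden)
        self.dec_future = nn.LSTMCell(dim, hidden)
        self.readout = nn.Linear(hidden, dim)
        self.dim = dim

    def forward(self, frames, n_future):            # frames: (T, B, dim)
        h = c = torch.zeros(frames.size(1), self.encoder.hidden_size)
        for x in frames:                             # encoder reads the sequence
            h, c = self.encoder(x, (h, c))
        zero = torch.zeros(frames.size(1), self.dim)
        recon, future = [], []
        hr, cr = h, c
        for _ in range(frames.size(0)):              # reconstruct (reversed-order target)
            hr, cr = self.dec_recon(zero, (hr, cr))
            recon.append(self.readout(hr))
        hf, cf = h, c
        for _ in range(n_future):                    # predict future frames
            hf, cf = self.dec_future(zero, (hf, cf))
            future.append(self.readout(hf))
        return torch.stack(recon), torch.stack(future)
```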
The dataset has 3 standard train/test splits with the training set containing around 9,500 videos in each split (the rest are test). The HMDB-51 dataset (Kuehne et al., 2011) contains 5100 videos belonging to 51 different action categories. Mean length of the videos is 3.2 seconds. This also has 3 train/test splits with 3570 videos in the training set and rest in test. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_23", "text": " To train the unsupervised models, we used a subset of the Sports-1M dataset (Karpathy et al., 2014), that contains 1 million YouTube clips. Even though this dataset is labelled for actions, we did not do any supervised experiments on it because of logistical constraints with working with such a huge dataset. We instead collected 300 hours of video by randomly sampling 10 second clips from the dataset. It is possible to collect better samples if instead of choosing randomly, we extracted videos where a lot of motion is happening and where there are no shot boundaries. However, we did not do so in the spirit of unsupervised learning, and because we did not want to introduce any unnatural bias in the samples. We also used the supervised datasets (UCF-101 and HMDB-51) for unsupervised training. However, we found that using them did not give any significant advantage over just using the YouTube videos. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_24", "text": " We extracted percepts using the convolutional neural net model of Simonyan & Zisserman (2014b). The videos have a resolution of 240 ×\\times 320 and were sampled at almost 30 frames per second. We took the central 224 ×\\times 224 patch from each frame and ran it through the convnet. This gave us the RGB percepts. Additionally, for UCF-101, we computed flow percepts by extracting flows using the Brox method and training the temporal stream convolutional network as described by Simonyan & Zisserman (2014a). We found that the fc6 features worked better than fc7 for single frame classification using both RGB and flow percepts. Therefore, we used the 4096-dimensional fc6 layer as the input representation of our data. Besides these percepts, we also trained the proposed models on 32 ×\\times 32 patches of pixels. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_25", "text": " All models were trained using backprop on a single NVIDIA Titan GPU. A two layer 2048 unit Composite model that predicts 13 frames and reconstructs 16 frames took 18-20 hours to converge on 300 hours of percepts. We initialized weights by sampling from a uniform distribution whose scale was set to 1/sqrt(fan-in). Biases at all the gates were initialized to zero. Peep-hole connections were initialized to zero. The supervised classifiers trained on 16 frames took 5-15 minutes to converge. The code can be found at https://github.com/emansim/unsupervised-videos. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_26", "text": " The aim of this set of experiments to visualize the properties of the proposed models. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_27", "text": " Experiments on MNIST We first trained our models on a dataset of moving MNIST digits. In this dataset, each video was 20 frames long and consisted of two digits moving inside a 64 ×\\times 64 patch. 
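The initialization described just above (weights sampled from a uniform distribution with scale 1/sqrt(fan-in), biases and peephole connections set to zero) can be written down in a few lines; the symmetric interval is an assumption, since the passage only states the scale. The keys match the `lstm_step` sketch earlier.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_dense(fan_out, fan_in):
    """Uniform init with scale 1/sqrt(fan-in); the symmetric range is assumed."""
    s = 1.0 / np.sqrt(fan_in)
    return rng.uniform(-s, s, size=(fan_out, fan_in))

def init_lstm_params(input_dim, hidden):
    p = {}
    for gate in ('i', 'f', 'c', 'o'):
        p['Wx' + gate] = init_dense(hidden, input_dim)
        p['Wh' + gate] = init_dense(hidden, hidden)
        p['b' + gate] = np.zeros(hidden)          # biases start at zero
        if gate != 'c':
            p['wc' + gate] = np.zeros(hidden)     # peephole connections start at zero
    return p
```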
The digits were chosen randomly from the training set and placed initially at random locations inside the patch. Each digit was assigned a velocity whose direction was chosen uniformly randomly on a unit circle and whose magnitude was also chosen uniformly at random over a fixed range. The digits bounced-off the edges of the 64 ×\\times 64 frame and overlapped if they were at the same location. The reason for working with this dataset is that it is infinite in size and can be generated quickly on the fly. This makes it possible to explore the model without expensive disk accesses or overfitting issues. It also has interesting behaviours due to occlusions and the dynamics of bouncing off the walls. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_28", "text": " We first trained a single layer Composite Model. Each LSTM had 2048 units. The encoder took 10 frames as input. The decoder tried to reconstruct these 10 frames and the future predictor attempted to predict the next 10 frames. We used logistic output units with a cross entropy loss function. Fig. 5 shows two examples of running this model. The true sequences are shown in the first two rows. The next two rows show the reconstruction and future prediction from the one layer Composite Model. It is interesting to note that the model figures out how to separate superimposed digits and can model them even as they pass through each other. This shows some evidence of disentangling the two independent factors of variation in this sequence. The model can also correctly predict the motion after bouncing off the walls. In order to see if adding depth helps, we trained a two layer Composite Model, with each layer having 2048 units. We can see that adding depth helps the model make better predictions. Next, we changed the future predictor by making it conditional. We can see that this model makes sharper predictions. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_29", "text": " Experiments on Natural Image Patches Next, we tried to see if our models can also work with natural image patches. For this, we trained the models on sequences of 32 ×\\times 32 natural image patches extracted from the UCF-101 dataset. In this case, we used linear output units and the squared error loss function. The input was 16 frames and the model was asked to reconstruct the 16 frames and predict the future 13 frames. Fig. 6 shows the results obtained from a two layer Composite model with 2048 units. We found that the reconstructions and the predictions are both very blurry. We then trained a bigger model with 4096 units. The outputs from this model are also shown in Fig. 6. We can see that the reconstructions get much sharper. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_30", "text": " Generalization over time scales In the next experiment, we test if the model can work at time scales that are different than what it was trained on. We take a one hidden layer unconditioned Composite Model trained on moving MNIST digits. The model has 2048 LSTM units and looks at a 64 ×\\times 64 input. It was trained on input sequences of 10 frames to reconstruct those 10 frames as well as predict 10 frames into the future. In order to test if the future predictor is able to generalize beyond 10 frames, we let the model run for 100 steps into the future. Fig. 
7(a) shows the pattern of activity in the LSTM units of the future predictor pathway for a randomly chosen test input. It shows the activity at each of the three sigmoidal gates (input, forget, output), the input (after the tanh non-linearity, before being multiplied by the input gate), the cell state and the final output (after being multiplied by the output gate). Even though the units are ordered randomly along the vertical axis, we can see that the dynamics has a periodic quality to it. The model is able to generate persistent motion for long periods of time. In terms of reconstruction, the model only outputs blobs after the first 15 frames, but the motion is relatively well preserved. More results, including long range future predictions over hundreds of time steps can see been at http://www.cs.toronto.edu/~nitish/unsupervised_video. To show that setting up a periodic behaviour is not trivial, Fig. 7(b) shows the activity from a randomly initialized future predictor. Here, the LSTM state quickly converges and the outputs blur completely. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_31", "text": " Out-of-domain Inputs Next, we test this model’s ability to deal with out-of-domain inputs. For this, we test the model on sequences of one and three moving digits. The model was trained on sequences of two moving digits, so it has never seen inputs with just one digit or three digits. Fig. 8 shows the reconstruction and future prediction results. For one moving digit, we can see that the model can do a good job but it really tries to hallucinate a second digit overlapping with the first one. The second digit shows up towards the end of the future reconstruction. For three digits, the model merges digits into blobs. However, it does well at getting the overall motion right. This highlights a key drawback of modeling entire frames of input in a single pass. In order to model videos with variable number of objects, we perhaps need models that not only have an attention mechanism in place, but can also learn to execute themselves a variable number of times and do variable amounts of computation. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_32", "text": " Visualizing Features Next, we visualize the features learned by this model. Fig. 9 shows the weights that connect each input frame to the encoder LSTM. There are four sets of weights. One set of weights connects the frame to the input units. There are three other sets, one corresponding to each of the three gates (input, forget and output). Each weight has a size of 64 ×\\times 64. A lot of features look like thin strips. Others look like higher frequency strips. It is conceivable that the high frequency features help in encoding the direction and velocity of motion. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_33", "text": " Fig. 10 shows the output features from the two LSTM decoders of a Composite Model. These correspond to the weights connecting the LSTM output units to the output layer. They appear to be somewhat qualitatively different from the input features shown in Fig. 9. There are many more output features that are local blobs, whereas those are rare in the input features. In the output features, the ones that do look like strips are much shorter than those in the input features. One way to interpret this is the following. 
The model needs to know about motion (which direction and how fast things are moving) from the input. This requires precise information about location (thin strips) and velocity (high frequency strips). But when it is generating the output, the model wants to hedge its bets so that it does not suffer a huge loss for predicting things sharply at the wrong place. This could explain why the output features have somewhat bigger blobs. The relative shortness of the strips in the output features can be explained by the fact that in the inputs, it does not hurt to have a longer feature than what is needed to detect a location because information is coarse-coded through multiple features. But in the output, the model may not want to put down a feature that is bigger than any digit because other units will have to conspire to correct for it. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_34", "text": " The aim of this set of experiments is to see if the features learned by unsupervised learning can help improve performance on supervised tasks. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_35", "text": " We trained a two layer Composite Model with 2048 hidden units with no conditioning on either decoders. The model was trained on percepts extracted from 300 hours of YouTube data. The model was trained to autoencode 16 frames and predict the next 13 frames. We initialize an LSTM classifier with the weights learned by the encoder LSTM from this model. The classifier is shown in Fig. 11. The output from each LSTM in the second layer goes into a softmax classifier that makes a prediction about the action being performed at each time step. Since only one action is being performed in each video in the datasets we consider, the target is the same at each time step. At test time, the predictions made at each time step are averaged. To get a prediction for the entire video, we average the predictions from all 16 frame blocks in the video with a stride of 8 frames. Using a smaller stride did not improve results. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_36", "text": " The baseline for comparing these models is an identical LSTM classifier but with randomly initialized weights. All classifiers used dropout regularization, where we dropped activations as they were communicated across layers but not through time within the same LSTM as proposed in Zaremba et al. (2014). We emphasize that this is a very strong baseline and does significantly better than just using single frames. Using dropout was crucial in order to train good baseline models especially with very few training examples. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_37", "text": " Fig. 12 compares three models - single frame classifier (logistic regression), baseline LSTM classifier and the LSTM classifier initialized with weights from the Composite Model as the number of labelled videos per class is varied. Note that having one labelled video means having many labelled 16 frame blocks. We can see that for the case of very few training examples, unsupervised learning gives a substantial improvement. For example, for UCF-101, the performance improves from 29.6% to 34.3% when training on only one labelled video. As the size of the labelled dataset grows, the improvement becomes smaller. 
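A sketch of the fine-tuning classifier described above: a two-layer LSTM over fc6 percepts with a class prediction at every time step, averaged over the 16-frame block. The dropout rate is an assumed value; dropout is applied between layers rather than through time, which is what the `nn.LSTM` dropout argument does, and "initializing from the Composite Model" amounts to copying the pretrained encoder weights into `lstm` before training.

```python
import torch
import torch.nn as nn

class LSTMActionClassifier(nn.Module):
    """Two-layer LSTM over frame percepts with a per-time-step softmax;
    predictions are averaged over time. Weights of `lstm` can be copied
    from a pretrained encoder LSTM before fine-tuning."""
    def __init__(self, dim=4096, hidden=2048, n_classes=101, p_drop=0.5):
        super().__init__()
        # dropout acts between layers, not through time
        self.lstm = nn.LSTM(dim, hidden, num_layers=2, dropout=p_drop)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, percepts):                 # percepts: (T, B, dim)
        out, _ = self.lstm(percepts)
        probs = torch.softmax(self.fc(out), dim=-1)
        return probs.mean(dim=0)                 # average per-step predictions
```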
Even for the full UCF-101 dataset we still get a considerable improvement from 74.5% to 75.8%. On HMDB-51, the improvement is from 42.8% to 44.0% for the full dataset (70 videos per class) and 14.4% to 19.1% for one video per class. Although, the improvement in classification by using unsupervised learning was not as big as we expected, we still managed to yield an additional improvement over a strong baseline. We discuss some avenues for improvements later. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_38", "text": " We further ran similar experiments on the optical flow percepts extracted from the UCF-101 dataset. A temporal stream convolutional net, similar to the one proposed by Simonyan & Zisserman (2014b), was trained on single frame optical flows as well as on stacks of 10 optical flows. This gave an accuracy of 72.2% and 77.5% respectively. Here again, our models took 16 frames as input, reconstructed them and predicted 13 frames into the future. LSTMs with 128 hidden units improved the accuracy by 2.1% to 74.3% for the single frame case. Bigger LSTMs did not improve results. By pretraining the LSTM, we were able to further improve the classification to 74.9% (±0.1plus-or-minus0.1\\pm 0.1). For stacks of 10 frames we improved very slightly to 77.7%. These results are summarized in Table 1. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_39", "text": " The aim of this set of experiments is to compare the different variants of the model proposed in this paper. Since it is always possible to get lower reconstruction error by copying the inputs, we cannot use input reconstruction error as a measure of how good a model is doing. However, we can use the error in predicting the future as a reasonable measure of how good the model is doing. Besides, we can use the performance on supervised tasks as a proxy for how good the unsupervised model is doing. In this section, we present results from these two analyses. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_40", "text": " Future prediction results are summarized in Table 2. For MNIST we compute the cross entropy of the predictions with respect to the ground truth, both of which are 64 ×\\times 64 patches. For natural image patches, we compute the squared loss. We see that the Composite Model always does a better job of predicting the future compared to the Future Predictor. This indicates that having the autoencoder along with the future predictor to force the model to remember more about the inputs actually helps predict the future better. Next, we can compare each model with its conditional variant. Here, we find that the conditional models perform better, as was also noted in Fig. 5. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_41", "text": " Next, we compare the models using performance on a supervised task. Table 3 shows the performance on action recognition achieved by finetuning different unsupervised learning models. Besides running the experiments on the full UCF-101 and HMDB-51 datasets, we also ran the experiments on small subsets of these to better highlight the case where we have very few training examples. We find that all unsupervised models improve over the baseline LSTM which is itself well-regularized by using dropout. The Autoencoder model seems to perform consistently better than the Future Predictor. 
The Composite model which combines the two does better than either one alone. Conditioning on the generated inputs does not seem to give a clear advantage over not doing so. The Composite Model with a conditional future predictor works the best, although its performance is almost same as that of the Composite Model. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_42", "text": " Finally, we compare our models to the state-of-the-art action recognition results. The performance is summarized in Table 4. The table is divided into three sets. The first set compares models that use only RGB data (single or multiple frames). The second set compares models that use explicitly computed flow features only. Models in the third set use both. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_43", "text": " On RGB data, our model performs at par with the best deep models. It performs 3% better than the LRCN model that also used LSTMs on top of convnet features111However, the improvement is only partially from unsupervised learning, since we used a better convnet model.. Our model performs better than C3D features that use a 3D convolutional net. However, when the C3D features are concatenated with fc6 percepts, they do slightly better than our model. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_44", "text": " The improvement for flow features over using a randomly initialized LSTM network is quite small. We believe this is atleast partly due to the fact that the flow percepts already capture a lot of the motion information that the LSTM would otherwise discover. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_45", "text": " When we combine predictions from the RGB and flow models, we obtain 84.3 accuracy on UCF-101. We believe further improvements can be made by running the model over different patch locations and mirroring the patches. Also, our model can be applied deeper inside the convnet instead of just at the top-level. That can potentially lead to further improvements. In this paper, we focus on showing that unsupervised training helps consistently across both datasets and across different sized training sets. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_46", "text": " We proposed models based on LSTMs that can learn good video representations. We compared them and analyzed their properties through visualizations. Moreover, we managed to get an improvement on supervised tasks. The best performing model was the Composite Model that combined an autoencoder and a future predictor. Conditioning on generated outputs did not have a significant impact on the performance for supervised tasks, however it made the future predictions look slightly better. The model was able to persistently generate motion well beyond the time scales it was trained for. However, it lost the precise object features rapidly after the training time scale. The features at the input and output layers were found to have some interesting properties. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_47", "text": " To further get improvements for supervised tasks, we believe that the model can be extended by applying it convolutionally across patches of the video and stacking multiple layers of such models. 
Applying this model in the lower layers of a convolutional net could help extract motion information that would otherwise be lost across max-pooling layers. In our future work, we plan to build models based on these autoencoders from the bottom up instead of applying them only to percepts. ", "title": "Unsupervised Learning of Video Representations using LSTMs" } ]
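Editor's note: the contexts above describe initializing an LSTM action classifier with the weights of the encoder LSTM from the unsupervised Composite Model, then averaging its per-timestep predictions over a clip. The Python sketch below illustrates only that weight-transfer step; the percept dimensionality (4096), hidden size (2048) and class count (101 for UCF-101) are taken from the contexts, but the module and variable names are hypothetical and this is not the authors' implementation.

import torch.nn as nn

class LSTMActionClassifier(nn.Module):
    # An LSTM whose output at every time step feeds a classifier head;
    # the per-timestep predictions are averaged to score the whole clip.
    def __init__(self, in_dim, hidden, n_classes, num_layers=2):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, num_layers=num_layers, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, percepts):               # percepts: (batch, time, in_dim)
        h, _ = self.lstm(percepts)             # (batch, time, hidden)
        logits = self.head(h)                  # (batch, time, n_classes)
        return logits.mean(dim=1)              # average predictions over time

# Hypothetical encoder LSTM pretrained by the unsupervised Composite Model.
pretrained_encoder = nn.LSTM(input_size=4096, hidden_size=2048, num_layers=2, batch_first=True)

clf = LSTMActionClassifier(in_dim=4096, hidden=2048, n_classes=101)
# Initialize the classifier's LSTM with the pretrained encoder weights, then fine-tune on labels.
clf.lstm.load_state_dict(pretrained_encoder.state_dict())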
What does "interaction between the pixels to the text embedding through the diffusion process" mean?
To answer this question we need to recall the diffusion process: to predict the noise of an image, the model takes two inputs, (1) the noisy image and (2) the text embedding. The interaction between these two inputs is realized by cross-attention layers, which fuse the visual and textual features and produce a spatial attention map for each textual token [12]. This fusion is what is meant by the interaction between the pixels and the text embedding through the diffusion process [2].
[ 12, 2 ]
[ { "id": "2208.01626_all_0", "text": " Recently, large-scale language-image (LLI) models, such as Imagen , DALL·E 2  and Parti , have shown phenomenal generative semantic and compositional power, and gained unprecedented attention from the research community and the public eye. These LLI models are trained on extremely large language-image datasets and use state-of-the-art image generative models including auto-regressive and diffusion models. However, these models do not provide simple editing means, and generally lack control over specific semantic regions of a given image. In particular, even the slightest change in the textual prompt may lead to a completely different output image. ", "title": "Prompt-to-Prompt Image Editing with Cross Attention Control" }, { "id": "2208.01626_all_1", "text": " To circumvent this, LLI-based methods (28, 4, 33) require the user to explicitly mask a part of the image to be inpainted, and drive the edited image to change in the masked area only, while matching the background of the original image. This approach has provided appealing results, however, the masking procedure is cumbersome, hampering quick and intuitive text-driven editing. Moreover, masking the image content removes important structural information, which is completely ignored in the inpainting process. Therefore, some editing capabilities are out of the inpainting scope, such as modifying the texture of a specific object. ", "title": "Prompt-to-Prompt Image Editing with Cross Attention Control" }, { "id": "2208.01626_all_2", "text": " In this paper, we introduce an intuitive and powerful textual editing method to semantically edit images in pre-trained text-conditioned diffusion models via Prompt-to-Prompt manipulations. To do so, we dive deep into the cross-attention layers and explore their semantic strength as a handle to control the generated image. Specifically, we consider the internal cross-attention maps, which are high-dimensional tensors that bind pixels and tokens extracted from the prompt text. We find that these maps contain rich semantic relations which critically affect the generated image. ", "title": "Prompt-to-Prompt Image Editing with Cross Attention Control" }, { "id": "2208.01626_all_3", "text": " Our key idea is that we can edit images by injecting the cross-attention maps during the diffusion process, controlling which pixels attend to which tokens of the prompt text during which diffusion steps. To apply our method to various creative editing applications, we show several methods to control the cross-attention maps through a simple and semantic interface (see fig. 1). The first is to change a single token’s value in the prompt (e.g., “dog” to “cat”), while fixing the cross-attention maps, to preserve the scene composition. The second is to globally edit an image, e.g., change the style, by adding new words to the prompt and freezing the attention on previous tokens, while allowing new attention to flow to the new tokens. The third is to amplify or attenuate the semantic effect of a word in the generated image. ", "title": "Prompt-to-Prompt Image Editing with Cross Attention Control" }, { "id": "2208.01626_all_4", "text": " Our approach constitutes an intuitive image editing interface through editing only the textual prompt, therefore called Prompt-to-Prompt. This method enables various editing tasks, which are challenging otherwise, and does not requires model training, fine-tuning, extra data, or optimization. 
Throughout our analysis, we discover even more control over the generation process, recognizing a trade-off between the fidelity to the edited prompt and the source image. We even demonstrate that our method can be applied to real images by using an existing inversion process. Our experiments and numerous results show that our method enables seamless editing in an intuitive text-based manner over extremely diverse images. ", "title": "Prompt-to-Prompt Image Editing with Cross Attention Control" }, { "id": "2208.01626_all_5", "text": " Image editing is one of the most fundamental tasks in computer graphics, encompassing the process of modifying an input image through the use of an auxiliary input, such as a label, scribble, mask, or reference image. A specifically intuitive way to edit an image is through textual prompts provided by the user. Recently, text-driven image manipulation has achieved significant progress using GANs  (15, 8, 19, 20, 21), which are known for their high-quality generation, in tandem with CLIP , which consists of a semantically rich joint image-text representation, trained over millions of text-image pairs. Seminal works (29, 14, 46, 2) which combined these components were revolutionary, since they did not require extra manual labor, and produced highly realistic manipulations using text only. Bau et al. further demonstrated how to use masks provided by the user, to localize the text-based editing and restrict the change to a specific spatial region. However, while GAN-based image editing approaches succeed on highly-curated datasets , e.g., human faces, they struggle over large and diverse datasets. ", "title": "Prompt-to-Prompt Image Editing with Cross Attention Control" }, { "id": "2208.01626_all_6", "text": " To obtain more expressive generation capabilities, Crowson et al. use VQ-GAN , trained over diverse data, as a backbone. Other works (5, 22) exploit the recent Diffusion models (17, 39, 41, 17, 40, 36), which achieve state-of-the-art generation quality over highly diverse datasets, often surpassing GANs . Kim et al.  show how to perform global changes, whereas Avrahami et al.  successfully perform local manipulations using user-provided masks for guidance. ", "title": "Prompt-to-Prompt Image Editing with Cross Attention Control" }, { "id": "2208.01626_all_7", "text": " While most works that require only text (i.e., no masks) are limited to global editing (9, 23), Bar-Tal et al.  proposed a text-based localized editing technique without using any mask, showing impressive results. Yet, their techniques mainly allow changing textures, but not modifying complex structures, such as changing a bicycle to a car. Moreover, unlike our method, their approach requires training a network for each input. ", "title": "Prompt-to-Prompt Image Editing with Cross Attention Control" }, { "id": "2208.01626_all_8", "text": " Numerous works (11, 16, 42, 25, 26, 30, 31, 34, 49, 9, 13, 36) significantly advanced the generation of images conditioned on plain text, known as text-to-image synthesis. Several large-scale text-image models have recently emerged, such as Imagen , DALL-E2 , and Parti , demonstrating unprecedented semantic generation. However, these models do not provide control over a generated image, specifically using text guidance only. Changing a single word in the original prompt associated with the image often leads to a completely different outcome. For instance, adding the adjective “white” to “dog” often changes the dog’s shape. 
To overcome this, several works (28, 4) assume that the user provides a mask to restrict the area in which the changes are applied. ", "title": "Prompt-to-Prompt Image Editing with Cross Attention Control" }, { "id": "2208.01626_all_9", "text": " Unlike previous works, our method requires textual input only, by using the spatial information from the internal layers of the generative model itself. This offers the user a much more intuitive editing experience of modifying local or global details by merely modifying the text prompt. ", "title": "Prompt-to-Prompt Image Editing with Cross Attention Control" }, { "id": "2208.01626_all_10", "text": " Let ℐℐ\\mathcal{I} be an image which was generated by a text-guided diffusion model using the text prompt 𝒫𝒫\\mathcal{P} and a random seed s𝑠s. Our goal is editing the input image guided only by the edited prompt 𝒫∗superscript𝒫\\mathcal{P}^{*}, resulting in an edited image ℐ∗superscriptℐ\\mathcal{I}^{*}. For example, consider an image generated from the prompt “my new bicycle”, and assume that the user wants to edit the color of the bicycle, its material, or even replace it with a scooter while preserving the appearance and structure of the original image. An intuitive interface for the user is to directly change the text prompt by further describing the appearance of the bikes, or replacing it with another word. As opposed to previous works, we wish to avoid relying on any user-defined mask to assist or signify where the edit should occur. A simple, but an unsuccessful attempt is to fix the internal randomness and regenerate using the edited text prompt. Unfortunately, as fig. 2 shows, this results in a completely different image with a different structure and composition. ", "title": "Prompt-to-Prompt Image Editing with Cross Attention Control" }, { "id": "2208.01626_all_11", "text": " Our key observation is that the structure and appearances of the generated image depend not only on the random seed, but also on the interaction between the pixels to the text embedding through the diffusion process. By modifying the pixel-to-text interaction that occurs in cross-attention layers, we provide Prompt-to-Prompt image editing capabilities. More specifically, injecting the cross-attention maps of the input image ℐℐ\\mathcal{I} enables us to preserve the original composition and structure. In section 3.1, we review how cross-attention is used, and in section 3.2 we describe how to exploit the cross-attention for editing. For additional background on diffusion models, please refer to appendix A. ", "title": "Prompt-to-Prompt Image Editing with Cross Attention Control" }, { "id": "2208.01626_all_12", "text": " We use the Imagen  text-guided synthesis model as a backbone. Since the composition and geometry are mostly determined at the 64×64646464\\times 64 resolution, we only adapt the text-to-image diffusion model, using the super-resolution process as is. Recall that each diffusion step t𝑡t consists of predicting the noise ϵitalic-ϵ\\epsilon from a noisy image ztsubscript𝑧𝑡z_{t} and text embedding ψ​(𝒫)𝜓𝒫\\psi(\\mathcal{P}) using a U-shaped network . At the final step, this process yields the generated image ℐ=z0ℐsubscript𝑧0\\mathcal{I}=z_{0}. Most importantly, the interaction between the two modalities occurs during the noise prediction, where the embeddings of the visual and textual features are fused using Cross-attention layers that produce spatial attention maps for each textual token. 
", "title": "Prompt-to-Prompt Image Editing with Cross Attention Control" }, { "id": "2208.01626_all_13", "text": " More formally, as illustrated in fig. 3(Top), the deep spatial features of the noisy image ϕ​(zt)italic-ϕsubscript𝑧𝑡\\phi(z_{t}) are projected to a query matrix Q=ℓQ​(ϕ​(zt))𝑄subscriptℓ𝑄italic-ϕsubscript𝑧𝑡Q=\\ell_{Q}(\\phi(z_{t})), and the textual embedding is projected to a key matrix K=ℓK​(ψ​(𝒫))𝐾subscriptℓ𝐾𝜓𝒫K=\\ell_{K}(\\psi(\\mathcal{P})) and a value matrix V=ℓV​(ψ​(𝒫))𝑉subscriptℓ𝑉𝜓𝒫V=\\ell_{V}(\\psi(\\mathcal{P})), via learned linear projections ℓQ,ℓK,ℓVsubscriptℓ𝑄subscriptℓ𝐾subscriptℓ𝑉\\ell_{Q},\\ell_{K},\\ell_{V}. The attention maps are then M=Softmax​(Q​KTd),𝑀Softmax𝑄superscript𝐾𝑇𝑑M=\\text{Softmax}\\left(\\frac{QK^{T}}{\\sqrt{d}}\\right), (1) where the cell Mi​jsubscript𝑀𝑖𝑗M_{ij} defines the weight of the value of the j𝑗j-th token on the pixel i𝑖i, and where d𝑑d is the latent projection dimension of the keys and queries. Finally, the cross-attention output is defined to be ϕ^​(zt)=M​V^italic-ϕsubscript𝑧𝑡𝑀𝑉\\widehat{\\phi}\\left(z_{t}\\right)=MV, which is then used to update the spatial features ϕ​(zt)italic-ϕsubscript𝑧𝑡\\phi(z_{t}). ", "title": "Prompt-to-Prompt Image Editing with Cross Attention Control" }, { "id": "2208.01626_all_14", "text": " Intuitively, the cross-attention output M​V𝑀𝑉MV is a weighted average of the values V𝑉V where the weights are the attention maps M𝑀M, which are correlated to the similarity between Q𝑄Q and K𝐾K. In practice, to increase their expressiveness, multi-head attention is used in parallel, and then the results are concatenated and passed through a learned linear layer to get the final output. ", "title": "Prompt-to-Prompt Image Editing with Cross Attention Control" }, { "id": "2208.01626_all_15", "text": " Imagen , similar to GLIDE  , conditions on the text prompt in the noise prediction of each diffusion step (see section A.2) through two types of attention layers: i) cross-attention layers. ii) hybrid attention that acts both as self-attention and cross-attention by simply concatenating the text embedding sequence to the key-value pairs of each self-attention layer. Throughout the rest of the paper, we refer to both of them as cross-attention since our method only intervenes in the cross-attention part of the hybrid attention. That is, only the last channels, which refer to text tokens, are modified in the hybrid attention modules. ", "title": "Prompt-to-Prompt Image Editing with Cross Attention Control" }, { "id": "2208.01626_all_16", "text": " We return to our key observation — the spatial layout and geometry of the generated image depend on the cross-attention maps. This interaction between pixels and text is illustrated in fig. 4, where the average attention maps are plotted. As can be seen, pixels are more attracted to the words that describe them, e.g., pixels of the bear are correlated with the word “bear”. Note that averaging is done for visualization purposes, and attention maps are kept separate for each head in our method. Interestingly, we can see that the structure of the image is already determined in the early steps of the diffusion process. ", "title": "Prompt-to-Prompt Image Editing with Cross Attention Control" }, { "id": "2208.01626_all_17", "text": " Since the attention reflects the overall composition, we can inject the attention maps M𝑀M that were obtained from the generation with the original prompt 𝒫𝒫\\mathcal{P}, into a second generation with the modified prompt 𝒫∗superscript𝒫\\mathcal{P}^{*}. 
This allows the synthesis of an edited image ℐ∗superscriptℐ\\mathcal{I}^{*} that is not only manipulated according to the edited prompt, but also preserves the structure of the input image ℐℐ\\mathcal{I}. This example is a specific instance of a broader set of attention-based manipulations leading to different types of intuitive editing. We, therefore, start by proposing a general framework, followed by the details of the specific editing operations. ", "title": "Prompt-to-Prompt Image Editing with Cross Attention Control" }, { "id": "2208.01626_all_18", "text": " Let D​M​(zt,𝒫,t,s)𝐷𝑀subscript𝑧𝑡𝒫𝑡𝑠DM(z_{t},\\mathcal{P},t,s) be the computation of a single step t𝑡t of the diffusion process, which outputs the noisy image zt−1subscript𝑧𝑡1z_{t-1}, and the attention map Mtsubscript𝑀𝑡M_{t} (omitted if not used). We denote by D​M​(zt,𝒫,t,s)​{M←M^}𝐷𝑀subscript𝑧𝑡𝒫𝑡𝑠←𝑀^𝑀DM(z_{t},\\mathcal{P},t,s)\\{M\\leftarrow\\widehat{M}\\} the diffusion step where we override the attention map M𝑀M with an additional given map M^^𝑀\\widehat{M}, but keep the values V𝑉V from the supplied prompt. We also denote by Mt∗superscriptsubscript𝑀𝑡M_{t}^{*} the produced attention map using the edited prompt 𝒫∗superscript𝒫\\mathcal{P}^{*}. Lastly, we define E​d​i​t​(Mt,Mt∗,t)𝐸𝑑𝑖𝑡subscript𝑀𝑡superscriptsubscript𝑀𝑡𝑡Edit(M_{t},M_{t}^{*},t) to be a general edit function, receiving as input the t𝑡t’th attention maps of the original and edited images during their generation. ", "title": "Prompt-to-Prompt Image Editing with Cross Attention Control" }, { "id": "2208.01626_all_19", "text": " Our general algorithm for controlled image generation consists of performing the iterative diffusion process for both prompts simultaneously, where an attention-based manipulation is applied in each step according to the desired editing task. We note that for the method above to work, we must fix the internal randomness. This is due to the nature of diffusion models, where even for the same prompt, two random seeds produce drastically different outputs. Formally, our general algorithm is: ", "title": "Prompt-to-Prompt Image Editing with Cross Attention Control" }, { "id": "2208.01626_all_20", "text": " Notice that we can also define image ℐℐ\\mathcal{I}, which is generated by prompt 𝒫𝒫\\mathcal{P} and random seed s𝑠s, as an additional input. Yet, the algorithm would remain the same. For editing real images, see section 4. Also, note that we can skip the forward call in line 777 by applying the edit function inside the diffusion forward function. Moreover, a diffusion step can be applied on both zt−1subscript𝑧𝑡1z_{t-1} and zt∗superscriptsubscript𝑧𝑡z_{t}^{*} in the same batch (i.e., in parallel), and so there is only one step overhead with respect to the original inference of the diffusion model. ", "title": "Prompt-to-Prompt Image Editing with Cross Attention Control" }, { "id": "2208.01626_all_21", "text": " We now turn to address specific editing operations, filling the missing definition of the E​d​i​t​(Mt,Mt∗,t)𝐸𝑑𝑖𝑡subscript𝑀𝑡superscriptsubscript𝑀𝑡𝑡Edit(M_{t},M_{t}^{*},t) function. An overview is presented in fig. 3(Bottom). ", "title": "Prompt-to-Prompt Image Editing with Cross Attention Control" }, { "id": "2208.01626_all_22", "text": " In this case, the user swaps tokens of the original prompt with others, e.g., 𝒫=𝒫absent\\mathcal{P}=“a big red bicycle” to 𝒫∗=superscript𝒫absent\\mathcal{P}^{*}=“a big red car”. The main challenge is to preserve the original composition while also addressing the content of the new prompt. 
To this end, we inject the attention maps of the source image into the generation with the modified prompt. However, the proposed attention injection may over constrain the geometry, especially when a large structural modification, such as “car” to “bicycle”, is involved. We address this by suggesting a softer attention constrain: ", "title": "Prompt-to-Prompt Image Editing with Cross Attention Control" }, { "id": "2208.01626_all_23", "text": " E​d​i​t​(Mt,Mt∗,t):={Mt∗if​t<τMtotherwise.assign𝐸𝑑𝑖𝑡subscript𝑀𝑡superscriptsubscript𝑀𝑡𝑡casessuperscriptsubscript𝑀𝑡if𝑡𝜏subscript𝑀𝑡otherwise.Edit(M_{t},M_{t}^{*},t):=\\begin{cases}M_{t}^{*}&\\quad\\text{if}\\;t<\\tau\\\\ M_{t}&\\quad\\text{otherwise.}\\\\ \\end{cases} where τ𝜏\\tau is a timestamp parameter that determines until which step the injection is applied. Note that the composition is determined in the early steps of the diffusion process. Therefore, by limiting the number of injection steps, we can guide the composition of the newly generated image while allowing the necessary geometry freedom for adapting to the new prompt. An illustration is provided in section 4. Another natural relaxation for our algorithm is to assign a different number of injection timestamps for the different tokens in the prompt. In case the two words are represented using a different number of tokens, the maps can be duplicated/averaged as necessary using an alignment function as described in the next paragraph. ", "title": "Prompt-to-Prompt Image Editing with Cross Attention Control" }, { "id": "2208.01626_all_24", "text": " In another setting, the user adds new tokens to the prompt, e.g., 𝒫=𝒫absent\\mathcal{P}=“a castle next to a river” to 𝒫∗=superscript𝒫absent\\mathcal{P}^{*}=“children drawing of a castle next to a river”. To preserve the common details, we apply the attention injection only over the common tokens from both prompts. Formally, we use an alignment function A𝐴A that receives a token index from target prompt 𝒫∗superscript𝒫\\mathcal{P}^{*} and outputs the corresponding token index in 𝒫𝒫\\mathcal{P} or None if there isn’t a match. Then, the editing function is given by: ", "title": "Prompt-to-Prompt Image Editing with Cross Attention Control" }, { "id": "2208.01626_all_25", "text": " (E​d​i​t​(Mt,Mt∗,t))i,j:={(Mt∗)i,jif​A​(j)=N​o​n​e(Mt)i,A​(j)otherwise.assignsubscript𝐸𝑑𝑖𝑡subscript𝑀𝑡superscriptsubscript𝑀𝑡𝑡𝑖𝑗casessubscriptsuperscriptsubscript𝑀𝑡𝑖𝑗if𝐴𝑗𝑁𝑜𝑛𝑒subscriptsubscript𝑀𝑡𝑖𝐴𝑗otherwise.\\left(Edit\\left(M_{t},M_{t}^{*},t\\right)\\right)_{i,j}:=\\begin{cases}(M_{t}^{*})_{i,j}&\\quad\\text{if}\\;A(j)=None\\\\ (M_{t})_{i,A(j)}&\\quad\\text{otherwise.}\\\\ \\end{cases} Recall that index i𝑖i corresponds to a pixel value, where j𝑗j corresponds to a text token. Again, we may set a timestamp τ𝜏\\tau to control the number of diffusion steps in which the injection is applied. This kind of editing enables diverse Prompt-to-Prompt capabilities such as stylization, specification of object attributes, or global manipulations as demonstrated in section 4. ", "title": "Prompt-to-Prompt Image Editing with Cross Attention Control" }, { "id": "2208.01626_all_26", "text": " Lastly, the user may wish to strengthen or weakens the extent to which each token is affecting the resulting image. For example, consider the prompt 𝒫=𝒫absent\\mathcal{P}= “a fluffy red ball”, and assume we want to make the ball more or less fluffy. 
To achieve such manipulation, we scale the attention map of the assigned token j∗superscript𝑗j^{*} with parameter c∈(−2,2)𝑐22c\\in(-2,2), resulting in a stronger/weaker effect. The rest of the attention maps remain unchanged. That is: (E​d​i​t​(Mt,Mt∗,t))i,j:={c⋅(Mt)i,jif ​j=j∗(Mt)i,jotherwise.assignsubscript𝐸𝑑𝑖𝑡subscript𝑀𝑡superscriptsubscript𝑀𝑡𝑡𝑖𝑗cases⋅𝑐subscriptsubscript𝑀𝑡𝑖𝑗if 𝑗superscript𝑗subscriptsubscript𝑀𝑡𝑖𝑗otherwise.\\left(Edit\\left(M_{t},M_{t}^{*},t\\right)\\right)_{i,j}:=\\begin{cases}c\\cdot(M_{t})_{i,j}&\\quad\\text{if }j=j^{*}\\\\ (M_{t})_{i,j}&\\quad\\text{otherwise.}\\\\ \\end{cases} As described in section 4, the parameter c𝑐c allows fine and intuitive control over the induced effect. ", "title": "Prompt-to-Prompt Image Editing with Cross Attention Control" }, { "id": "2208.01626_all_27", "text": " Our method, described in section 3, enables intuitive text-only editing by controlling the spatial layout corresponding to each word in the user-provided prompt. In this section, we show several applications using this technique. ", "title": "Prompt-to-Prompt Image Editing with Cross Attention Control" }, { "id": "2208.01626_all_28", "text": " Text-Only Localized Editing. We first demonstrate localized editing by modifying the user-provided prompt without requiring any user-provided mask. In fig. 2, we depict an example where we generate an image using the prompt “lemon cake”. Our method allows us to retain the spatial layout, geometry, and semantics when replacing the word “lemon” with “pumpkin” (top row). Observe that the background is well-preserved, including the top-left lemons transforming into pumpkins. On the other hand, naively feeding the synthesis model with the prompt “pumpkin cake” results in a completely different geometry (333rd row), even when using the same random seed in a deterministic setting (i.e., DDIM ). Our method succeeds even for a challenging prompt such as “pasta cake.” (222nd row) — the generated cake consists of pasta layers with tomato sauce on top. Another example is provided in fig. 5 where we do not inject the attention of the entire prompt but only the attention of a specific word – “butterfly”. This enables the preservation of the original butterfly while changing the rest of the content. Additional results are provided in the appendix (fig. 13). ", "title": "Prompt-to-Prompt Image Editing with Cross Attention Control" }, { "id": "2208.01626_all_29", "text": " As can be seen in fig. 6, our method is not confined to modifying only textures, and it can perform structural modifications, e.g., change a “bicycle” to a “car”. To analyze our attention injection, in the left column we show the results without cross-attention injection, where changing a single word leads to an entirely different outcome. From left to right, we then show the resulting generated image by injecting attention to an increasing number of diffusion steps. Note that the more diffusion steps in which we apply cross-attention injection, the higher the fidelity to the original image. However, the optimal result is not necessarily achieved by applying the injection throughout all diffusion steps. Therefore, we can provide the user with even better control over the fidelity to the original image by changing the number of injection steps. ", "title": "Prompt-to-Prompt Image Editing with Cross Attention Control" }, { "id": "2208.01626_all_30", "text": " Instead of replacing one word with another, the user may wish to add a new specification to the generated image. 
In this case, we keep the attention maps of the original prompt, while allowing the generator to address the newly added words. For example, see fig. 7 (top), where we add “crushed” to the “car”, resulting in the generation of additional details over the original image while the background is still preserved. See the appendix (fig. 14) for more examples. ", "title": "Prompt-to-Prompt Image Editing with Cross Attention Control" }, { "id": "2208.01626_all_31", "text": " Global editing. Preserving the image composition is not only valuable for localized editing, but also an important aspect of global editing. In this setting, the editing should affect all parts of the image, but still retain the original composition, such as the location and identity of the objects. As shown in fig. 7 (bottom), we retain the image content while adding “snow” or changing the lightning. Additional examples appear in fig. 8, including translating a sketch into a photo-realistic image and inducing an artistic style. ", "title": "Prompt-to-Prompt Image Editing with Cross Attention Control" }, { "id": "2208.01626_all_32", "text": " Fader Control using Attention Re-weighting. While controlling the image by editing the prompt is very effective, we find that it still does not allow full control over the generated image. Consider the prompt “snowy mountain”. A user may want to control the amount of snow on the mountain. However, it is quite difficult to describe the desired amount of snow through text. Instead, we suggest a fader control , where the user controls the magnitude of the effect induced by a specific word, as depicted in fig. 9. As described in section 3, we achieve such control by re-scaling the attention of the specified word. Additional results are in the appendix (fig. 15). ", "title": "Prompt-to-Prompt Image Editing with Cross Attention Control" }, { "id": "2208.01626_all_33", "text": " Real Image Editing. Editing a real image requires finding an initial noise vector that produces the given input image when fed into the diffusion process. This process, known as inversion, has recently drawn considerable attention for GANs, e.g., (51, 1, 3, 35, 50, 43, 45, 47), but has not yet been fully addressed for text-guided diffusion models. ", "title": "Prompt-to-Prompt Image Editing with Cross Attention Control" }, { "id": "2208.01626_all_34", "text": " In the following, we show preliminary editing results on real images, based on common inversion techniques for diffusion models. First, a rather naïve approach is to add Gaussian noise to the input image, and then perform a predefined number of diffusion steps. Since this approach results in significant distortions, we adopt an improved inversion approach (10, 40), which is based on the deterministic DDIM model rather than the DDPM model. We perform the diffusion process in the reverse direction, that is x0⟶xT⟶subscript𝑥0subscript𝑥𝑇x_{0}\\longrightarrow x_{T} instead of xT⟶x0⟶subscript𝑥𝑇subscript𝑥0x_{T}\\longrightarrow x_{0}, where x0subscript𝑥0x_{0} is set to be the given real image. ", "title": "Prompt-to-Prompt Image Editing with Cross Attention Control" }, { "id": "2208.01626_all_35", "text": " This inversion process often produces satisfying results, as presented in fig. 10. However, the inversion is not sufficiently accurate in many other cases, as in fig. 11. 
This is partially due to a distortion-editability tradeoff , where we recognize that reducing the classifier-free guidance parameter (i.e., reducing the prompt influence) improves reconstruction but constrains our ability to perform significant manipulations. ", "title": "Prompt-to-Prompt Image Editing with Cross Attention Control" }, { "id": "2208.01626_all_36", "text": " To alleviate this limitation, we propose to restore the unedited regions of the original image using a mask, directly extracted from the attention maps. Note that here the mask is generated with no guidance from the user. As presented in fig. 12, this approach works well even using the naïve DDPM inversion scheme (adding noise followed by denoising). Note that the cat’s identity is well-preserved under various editing operations, while the mask is produced only from the prompt itself. ", "title": "Prompt-to-Prompt Image Editing with Cross Attention Control" }, { "id": "2208.01626_all_37", "text": " In this work, we uncovered the powerful capabilities of the cross-attention layers within text-to-image diffusion models. We showed that these high-dimensional layers have an interpretable representation of spatial maps that play a key role in tying the words in the text prompt to the spatial layout of the synthesized image. With this observation, we showed how various manipulations of the prompt can directly control attributes in the synthesized image, paving the way to various applications including local and global editing. This work is a first step towards providing users with simple and intuitive means to edit images, leveraging textual semantic power. It enables users to navigate through a semantic, textual, space, which exhibits incremental changes after each step, rather than producing the desired image from scratch after each text manipulation. ", "title": "Prompt-to-Prompt Image Editing with Cross Attention Control" }, { "id": "2208.01626_all_38", "text": " While we have demonstrated semantic control by changing only textual prompts, our technique is still subject to a few limitations to be addressed in follow-up work. First, the current inversion process results in a visible distortion over some of the test images. In addition, the inversion requires the user to come up with a suitable prompt. This could be challenging for complicated compositions. Note that the challenge of inversion for text-guided diffusion models is an orthogonal endeavor to our work, which will be thoroughly studied in the future. Second, the current attention maps are of low resolution, as the cross-attention is placed in the network’s bottleneck. This bounds our ability to perform even more precise localized editing. To alleviate this, we suggest incorporating cross-attention also in higher-resolution layers. We leave this for future works since it requires analyzing the training procedure which is out of our current scope. Finally, we recognize that our current method cannot be used to spatially move existing objects across the image and also leave this kind of control for future work. ", "title": "Prompt-to-Prompt Image Editing with Cross Attention Control" }, { "id": "2208.01626_all_39", "text": " We thank Noa Glaser, Adi Zicher, Yaron Brodsky and Shlomi Fruchter for their valuable inputs that helped improve this work, and to Mohammad Norouzi, Chitwan Saharia and William Chan for providing us with their support and the pretrained models of Imagen . 
Special thanks to Yossi Matias for early inspiring discussion on the problem and for motivating and encouraging us to develop technologies along the avenue of intuitive interaction. ", "title": "Prompt-to-Prompt Image Editing with Cross Attention Control" } ]
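Editor's note: to make the answer above concrete, here is a minimal NumPy sketch of the cross-attention fusion described in these contexts, M = Softmax(QK^T / sqrt(d)) followed by MV, where the queries come from the noisy-image features and the keys/values from the text embedding. It is an illustrative single-head reconstruction, not the Imagen implementation; all dimensions and weight matrices below are made-up assumptions.

import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)    # for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(image_feats, text_emb, W_q, W_k, W_v):
    # image_feats: (H*W, d_model) spatial features phi(z_t) of the noisy image
    # text_emb:    (T, d_text)    token embeddings psi(P) of the prompt
    Q = image_feats @ W_q                      # queries from pixels
    K = text_emb @ W_k                         # keys from text tokens
    V = text_emb @ W_v                         # values from text tokens
    M = softmax(Q @ K.T / np.sqrt(K.shape[-1]), axis=-1)   # (H*W, T) attention maps
    return M @ V, M                            # fused features and per-token spatial maps

rng = np.random.default_rng(0)
H = W = 8
fused, M = cross_attention(
    rng.normal(size=(H * W, 32)), rng.normal(size=(5, 24)),   # 5 textual tokens
    rng.normal(size=(32, 16)), rng.normal(size=(24, 16)), rng.normal(size=(24, 32)))
print(M.shape)   # (64, 5): one 8x8 spatial attention map per textual token

Column j of M, reshaped to 8x8, is the kind of per-token spatial map that Prompt-to-Prompt injects or re-weights to control the edit.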
What baseline is used for creating feature maps in the proposed SSD framework?
SSD uses VGG-16 as its base network to produce the feature maps; the network is truncated before its classification layers and auxiliary layers are added on top [4].
[ 4 ]
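Editor's note: the contexts below explain how SSD attaches default boxes of several scales and aspect ratios to the multi-scale feature maps built on top of the VGG-16 base. The following Python sketch reproduces only that scale/aspect-ratio arithmetic (Eq. (4) in the contexts, with s_min = 0.2, s_max = 0.9 and aspect ratios {1, 2, 3, 1/2, 1/3}); it is an illustrative reconstruction, not the released Caffe code, and the function names are made up.

import math

def default_box_scales(m, s_min=0.2, s_max=0.9):
    # Eq. (4): evenly spaced scales for the m feature maps used for prediction.
    return [s_min + (s_max - s_min) * (k - 1) / (m - 1) for k in range(1, m + 1)]

def default_boxes_at_cell(s_k, s_k_next, aspect_ratios=(1.0, 2.0, 3.0, 1 / 2, 1 / 3)):
    # (width, height) of each default box at one feature-map location:
    # w = s_k * sqrt(a_r), h = s_k / sqrt(a_r), plus one extra box of scale
    # sqrt(s_k * s_{k+1}) for aspect ratio 1, giving 6 boxes per location.
    boxes = [(s_k * math.sqrt(a), s_k / math.sqrt(a)) for a in aspect_ratios]
    extra = math.sqrt(s_k * s_k_next)
    boxes.append((extra, extra))
    return boxes

scales = default_box_scales(m=6)
print([round(s, 2) for s in scales])                     # [0.2, 0.34, 0.48, 0.62, 0.76, 0.9]
print(len(default_boxes_at_cell(scales[0], scales[1])))  # 6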
[ { "id": "1512.02325_all_0", "text": " Current state-of-the-art object detection systems are variants of the following approach: hypothesize bounding boxes, resample pixels or features for each box, and apply a high-quality classifier. This pipeline has prevailed on detection benchmarks since the Selective Search work  through the current leading results on PASCAL VOC, COCO, and ILSVRC detection all based on Faster R-CNN albeit with deeper features such as . While accurate, these approaches have been too computationally intensive for embedded systems and, even with high-end hardware, too slow for real-time applications. Often detection speed for these approaches is measured in seconds per frame (SPF), and even the fastest high-accuracy detector, Faster R-CNN, operates at only 7 frames per second (FPS). There have been many attempts to build faster detectors by attacking each stage of the detection pipeline (see related work in Sec. 4), but so far, significantly increased speed comes only at the cost of significantly decreased detection accuracy. ", "title": "SSD: Single Shot MultiBox Detector" }, { "id": "1512.02325_all_1", "text": " This paper presents the first deep network based object detector that does not resample pixels or features for bounding box hypotheses and and is as accurate as approaches that do. This results in a significant improvement in speed for high-accuracy detection (59 FPS with mAP 74.3% on VOC2007 test, vs. Faster R-CNN 7 FPS with mAP 73.2% or YOLO 45 FPS with mAP 63.4%). The fundamental improvement in speed comes from eliminating bounding box proposals and the subsequent pixel or feature resampling stage. We are not the first to do this (cf (4, 5)), but by adding a series of improvements, we manage to increase the accuracy significantly over previous attempts. Our improvements include using a small convolutional filter to predict object categories and offsets in bounding box locations, using separate predictors (filters) for different aspect ratio detections, and applying these filters to multiple feature maps from the later stages of a network in order to perform detection at multiple scales. With these modifications—especially using multiple layers for prediction at different scales—we can achieve high-accuracy using relatively low resolution input, further increasing detection speed. While these contributions may seem small independently, we note that the resulting system improves accuracy on real-time detection for PASCAL VOC from 63.4% mAP for YOLO to 74.3% mAP for our SSD. This is a larger relative improvement in detection accuracy than that from the recent, very high-profile work on residual networks . Furthermore, significantly improving the speed of high-quality detection can broaden the range of settings where computer vision is useful. ", "title": "SSD: Single Shot MultiBox Detector" }, { "id": "1512.02325_all_2", "text": " We summarize our contributions as follows: • We introduce SSD, a single-shot detector for multiple categories that is faster than the previous state-of-the-art for single shot detectors (YOLO), and significantly more accurate, in fact as accurate as slower techniques that perform explicit region proposals and pooling (including Faster R-CNN). • The core of SSD is predicting category scores and box offsets for a fixed set of default bounding boxes using small convolutional filters applied to feature maps. 
• To achieve high detection accuracy we produce predictions of different scales from feature maps of different scales, and explicitly separate predictions by aspect ratio. • These design features lead to simple end-to-end training and high accuracy, even on low resolution input images, further improving the speed vs accuracy trade-off. • Experiments include timing and accuracy analysis on models with varying input size evaluated on PASCAL VOC, COCO, and ILSVRC and are compared to a range of recent state-of-the-art approaches. ", "title": "SSD: Single Shot MultiBox Detector" }, { "id": "1512.02325_all_3", "text": " This section describes our proposed SSD framework for detection (Sec. 2.1) and the associated training methodology (Sec. 2.2). Afterwards, Sec. 3 presents dataset-specific model details and experimental results. ", "title": "SSD: Single Shot MultiBox Detector" }, { "id": "1512.02325_all_4", "text": " The SSD approach is based on a feed-forward convolutional network that produces a fixed-size collection of bounding boxes and scores for the presence of object class instances in those boxes, followed by a non-maximum suppression step to produce the final detections. The early network layers are based on a standard architecture used for high quality image classification (truncated before any classification layers), which we will call the base network222We use the VGG-16 network as a base, but other networks should also produce good results.. We then add auxiliary structure to the network to produce detections with the following key features: ", "title": "SSD: Single Shot MultiBox Detector" }, { "id": "1512.02325_all_5", "text": " Multi-scale feature maps for detection We add convolutional feature layers to the end of the truncated base network. These layers decrease in size progressively and allow predictions of detections at multiple scales. The convolutional model for predicting detections is different for each feature layer (cf Overfeat and YOLO that operate on a single scale feature map). ", "title": "SSD: Single Shot MultiBox Detector" }, { "id": "1512.02325_all_6", "text": " Convolutional predictors for detection Each added feature layer (or optionally an existing feature layer from the base network) can produce a fixed set of detection predictions using a set of convolutional filters. These are indicated on top of the SSD network architecture in Fig. 2. For a feature layer of size m×n𝑚𝑛m\\times n with p𝑝p channels, the basic element for predicting parameters of a potential detection is a 3×3×p33𝑝3\\times 3\\times p small kernel that produces either a score for a category, or a shape offset relative to the default box coordinates. At each of the m×n𝑚𝑛m\\times n locations where the kernel is applied, it produces an output value. The bounding box offset output values are measured relative to a default box position relative to each feature map location (cf the architecture of YOLO that uses an intermediate fully connected layer instead of a convolutional filter for this step). ", "title": "SSD: Single Shot MultiBox Detector" }, { "id": "1512.02325_all_7", "text": " Default boxes and aspect ratios We associate a set of default bounding boxes with each feature map cell, for multiple feature maps at the top of the network. The default boxes tile the feature map in a convolutional manner, so that the position of each box relative to its corresponding cell is fixed. 
At each feature map cell, we predict the offsets relative to the default box shapes in the cell, as well as the per-class scores that indicate the presence of a class instance in each of those boxes. Specifically, for each box out of k𝑘k at a given location, we compute c𝑐c class scores and the 444 offsets relative to the original default box shape. This results in a total of (c+4)​k𝑐4𝑘(c+4)k filters that are applied around each location in the feature map, yielding (c+4)​k​m​n𝑐4𝑘𝑚𝑛(c+4)kmn outputs for a m×n𝑚𝑛m\\times n feature map. For an illustration of default boxes, please refer to Fig. 1. Our default boxes are similar to the anchor boxes used in Faster R-CNN , however we apply them to several feature maps of different resolutions. Allowing different default box shapes in several feature maps let us efficiently discretize the space of possible output box shapes. ", "title": "SSD: Single Shot MultiBox Detector" }, { "id": "1512.02325_all_8", "text": " The key difference between training SSD and training a typical detector that uses region proposals, is that ground truth information needs to be assigned to specific outputs in the fixed set of detector outputs. Some version of this is also required for training in YOLO and for the region proposal stage of Faster R-CNN and MultiBox. Once this assignment is determined, the loss function and back propagation are applied end-to-end. Training also involves choosing the set of default boxes and scales for detection as well as the hard negative mining and data augmentation strategies. ", "title": "SSD: Single Shot MultiBox Detector" }, { "id": "1512.02325_all_9", "text": " During training we need to determine which default boxes correspond to a ground truth detection and train the network accordingly. For each ground truth box we are selecting from default boxes that vary over location, aspect ratio, and scale. We begin by matching each ground truth box to the default box with the best jaccard overlap (as in MultiBox ). Unlike MultiBox, we then match default boxes to any ground truth with jaccard overlap higher than a threshold (0.5). This simplifies the learning problem, allowing the network to predict high scores for multiple overlapping default boxes rather than requiring it to pick only the one with maximum overlap. ", "title": "SSD: Single Shot MultiBox Detector" }, { "id": "1512.02325_all_10", "text": " The SSD training objective is derived from the MultiBox objective (7, 8) but is extended to handle multiple object categories. Let xi​jp={1,0}superscriptsubscript𝑥𝑖𝑗𝑝10x_{ij}^{p}=\\{1,0\\} be an indicator for matching the i𝑖i-th default box to the j𝑗j-th ground truth box of category p𝑝p. In the matching strategy above, we can have ∑ixi​jp≥1subscript𝑖superscriptsubscript𝑥𝑖𝑗𝑝1\\sum_{i}x_{ij}^{p}\\geq 1. The overall objective loss function is a weighted sum of the localization loss (loc) and the confidence loss (conf): L​(x,c,l,g)=1N​(Lc​o​n​f​(x,c)+α​Ll​o​c​(x,l,g))𝐿𝑥𝑐𝑙𝑔1𝑁subscript𝐿𝑐𝑜𝑛𝑓𝑥𝑐𝛼subscript𝐿𝑙𝑜𝑐𝑥𝑙𝑔L(x,c,l,g)=\\frac{1}{N}(L_{conf}(x,c)+\\alpha L_{loc}(x,l,g)) (1) where N is the number of matched default boxes. If N=0𝑁0N=0, wet set the loss to 0. The localization loss is a Smooth L1 loss  between the predicted box (l𝑙l) and the ground truth box (g𝑔g) parameters. Similar to Faster R-CNN , we regress to offsets for the center (c​x,c​y𝑐𝑥𝑐𝑦cx,cy) of the default bounding box (d𝑑d) and for its width (w𝑤w) and height (hℎh). 
Ll​o​c​(x,l,g)=∑i∈P​o​sN∑m∈{c​x,c​y,w,h}xi​jk​smoothL1​(lim−g^jm)g^jc​x=(gjc​x−dic​x)/diwg^jc​y=(gjc​y−dic​y)/dihg^jw=log⁡(gjwdiw)g^jh=log⁡(gjhdih)formulae-sequencesubscript𝐿𝑙𝑜𝑐𝑥𝑙𝑔superscriptsubscript𝑖𝑃𝑜𝑠𝑁subscript𝑚𝑐𝑥𝑐𝑦𝑤ℎsuperscriptsubscript𝑥𝑖𝑗𝑘subscriptsmoothL1superscriptsubscript𝑙𝑖𝑚superscriptsubscript^𝑔𝑗𝑚superscriptsubscript^𝑔𝑗𝑐𝑥superscriptsubscript𝑔𝑗𝑐𝑥superscriptsubscript𝑑𝑖𝑐𝑥superscriptsubscript𝑑𝑖𝑤superscriptsubscript^𝑔𝑗𝑐𝑦superscriptsubscript𝑔𝑗𝑐𝑦superscriptsubscript𝑑𝑖𝑐𝑦superscriptsubscript𝑑𝑖ℎsuperscriptsubscript^𝑔𝑗𝑤superscriptsubscript𝑔𝑗𝑤superscriptsubscript𝑑𝑖𝑤superscriptsubscript^𝑔𝑗ℎsuperscriptsubscript𝑔𝑗ℎsuperscriptsubscript𝑑𝑖ℎ\\begin{split}L_{loc}(x,l,g)=\\sum_{i\\in Pos}^{N}\\sum_{m\\in\\{cx,cy,w,h\\}}&x_{ij}^{k}\\text{smooth}_{\\text{L1}}(l_{i}^{m}-\\hat{g}_{j}^{m})\\\\ \\hat{g}_{j}^{cx}=(g_{j}^{cx}-d_{i}^{cx})/d_{i}^{w}\\quad\\quad&\\hat{g}_{j}^{cy}=(g_{j}^{cy}-d_{i}^{cy})/d_{i}^{h}\\\\ \\hat{g}_{j}^{w}=\\log\\Big{(}\\frac{g_{j}^{w}}{d_{i}^{w}}\\Big{)}\\quad\\quad&\\hat{g}_{j}^{h}=\\log\\Big{(}\\frac{g_{j}^{h}}{d_{i}^{h}}\\Big{)}\\end{split} (2) The confidence loss is the softmax loss over multiple classes confidences (c𝑐c). Lc​o​n​f​(x,c)=−∑i∈P​o​sNxi​jp​l​o​g​(c^ip)−∑i∈N​e​gl​o​g​(c^i0)wherec^ip=exp⁡(cip)∑pexp⁡(cip)formulae-sequencesubscript𝐿𝑐𝑜𝑛𝑓𝑥𝑐superscriptsubscript𝑖𝑃𝑜𝑠𝑁superscriptsubscript𝑥𝑖𝑗𝑝𝑙𝑜𝑔superscriptsubscript^𝑐𝑖𝑝subscript𝑖𝑁𝑒𝑔𝑙𝑜𝑔superscriptsubscript^𝑐𝑖0wheresuperscriptsubscript^𝑐𝑖𝑝superscriptsubscript𝑐𝑖𝑝subscript𝑝superscriptsubscript𝑐𝑖𝑝L_{conf}(x,c)=-\\sum_{i\\in Pos}^{N}x_{ij}^{p}log(\\hat{c}_{i}^{p})-\\sum_{i\\in Neg}log(\\hat{c}_{i}^{0})\\quad\\text{where}\\quad\\hat{c}_{i}^{p}=\\frac{\\exp(c_{i}^{p})}{\\sum_{p}\\exp(c_{i}^{p})} (3) and the weight term α𝛼\\alpha is set to 1 by cross validation. ", "title": "SSD: Single Shot MultiBox Detector" }, { "id": "1512.02325_all_11", "text": " To handle different object scales, some methods (4, 9) suggest processing the image at different sizes and combining the results afterwards. However, by utilizing feature maps from several different layers in a single network for prediction we can mimic the same effect, while also sharing parameters across all object scales. Previous works (10, 11) have shown that using feature maps from the lower layers can improve semantic segmentation quality because the lower layers capture more fine details of the input objects. Similarly,   showed that adding global context pooled from a feature map can help smooth the segmentation results. Motivated by these methods, we use both the lower and upper feature maps for detection. Figure 1 shows two exemplar feature maps (8×8888\\times 8 and 4×4444\\times 4) which are used in the framework. In practice, we can use many more with small computational overhead. ", "title": "SSD: Single Shot MultiBox Detector" }, { "id": "1512.02325_all_12", "text": " Feature maps from different levels within a network are known to have different (empirical) receptive field sizes . Fortunately, within the SSD framework, the default boxes do not necessary need to correspond to the actual receptive fields of each layer. We design the tiling of default boxes so that specific feature maps learn to be responsive to particular scales of the objects. Suppose we want to use m𝑚m feature maps for prediction. 
The scale of the default boxes for each feature map is computed as: sk=smin+smax−sminm−1​(k−1),k∈(1,m)formulae-sequencesubscript𝑠𝑘subscript𝑠minsubscript𝑠maxsubscript𝑠min𝑚1𝑘1𝑘1𝑚s_{k}=s_{\\text{min}}+\\frac{s_{\\text{max}}-s_{\\text{min}}}{m-1}(k-1),\\quad k\\in(1,m) (4) where sminsubscript𝑠mins_{\\text{min}} is 0.2 and smaxsubscript𝑠maxs_{\\text{max}} is 0.9, meaning the lowest layer has a scale of 0.2 and the highest layer has a scale of 0.9, and all layers in between are regularly spaced. We impose different aspect ratios for the default boxes, and denote them as ar∈{1,2,3,12,13}subscript𝑎𝑟1231213a_{r}\\in\\{1,2,3,\\frac{1}{2},\\frac{1}{3}\\}. We can compute the width (wka=sk​arsuperscriptsubscript𝑤𝑘𝑎subscript𝑠𝑘subscript𝑎𝑟w_{k}^{a}=s_{k}\\sqrt{a_{r}}) and height (hka=sk/arsuperscriptsubscriptℎ𝑘𝑎subscript𝑠𝑘subscript𝑎𝑟h_{k}^{a}=s_{k}/\\sqrt{a_{r}}) for each default box. For the aspect ratio of 1, we also add a default box whose scale is sk′=sk​sk+1subscriptsuperscript𝑠′𝑘subscript𝑠𝑘subscript𝑠𝑘1s^{\\prime}_{k}=\\sqrt{s_{k}s_{k+1}}, resulting in 6 default boxes per feature map location. We set the center of each default box to (i+0.5|fk|,j+0.5|fk|)𝑖0.5subscript𝑓𝑘𝑗0.5subscript𝑓𝑘(\\frac{i+0.5}{|f_{k}|},\\frac{j+0.5}{|f_{k}|}), where |fk|subscript𝑓𝑘|f_{k}| is the size of the k𝑘k-th square feature map, i,j∈(0,|fk|)𝑖𝑗0subscript𝑓𝑘i,j\\in(0,|f_{k}|). In practice, one can also design a distribution of default boxes to best fit a specific dataset. How to design the optimal tiling is an open question as well. ", "title": "SSD: Single Shot MultiBox Detector" }, { "id": "1512.02325_all_13", "text": " By combining predictions for all default boxes with different scales and aspect ratios from all locations of many feature maps, we have a diverse set of predictions, covering various input object sizes and shapes. For example, in Fig. 1, the dog is matched to a default box in the 4×4444\\times 4 feature map, but not to any default boxes in the 8×8888\\times 8 feature map. This is because those boxes have different scales and do not match the dog box, and therefore are considered as negatives during training. ", "title": "SSD: Single Shot MultiBox Detector" }, { "id": "1512.02325_all_14", "text": " After the matching step, most of the default boxes are negatives, especially when the number of possible default boxes is large. This introduces a significant imbalance between the positive and negative training examples. Instead of using all the negative examples, we sort them using the highest confidence loss for each default box and pick the top ones so that the ratio between the negatives and positives is at most 3:1. We found that this leads to faster optimization and a more stable training. ", "title": "SSD: Single Shot MultiBox Detector" }, { "id": "1512.02325_all_15", "text": " To make the model more robust to various input object sizes and shapes, each training image is randomly sampled by one of the following options: • Use the entire original input image. • Sample a patch so that the minimum jaccard overlap with the objects is 0.1, 0.3, 0.5, 0.7, or 0.9. • Randomly sample a patch. The size of each sampled patch is (0.1, 1) of the original image size, and the aspect ratio is between 1212\\frac{1}{2} and 2. We keep the overlapped part of the ground truth box if the center of it is in the sampled patch. 
After the aforementioned sampling step, each sampled patch is resized to fixed size and is horizontally flipped with probability of 0.5, in addition to applying some photo-metric distortions similar to those described in . ", "title": "SSD: Single Shot MultiBox Detector" }, { "id": "1512.02325_all_16", "text": " Our experiments are all based on VGG16 , which is pre-trained on the ILSVRC CLS-LOC dataset . Similar to DeepLab-LargeFOV , we convert fc6 and fc7 to convolutional layers, subsample parameters from fc6 and fc7, change pool5 from 2×2−s​222𝑠22\\times 2-s2 to 3×3−s​133𝑠13\\times 3-s1, and use the à trous algorithm  to fill the ”holes”. We remove all the dropout layers and the fc8 layer. We fine-tune the resulting model using SGD with initial learning rate 10−3superscript10310^{-3}, 0.9 momentum, 0.0005 weight decay, and batch size 32. The learning rate decay policy is slightly different for each dataset, and we will describe details later. The full training and testing code is built on Caffe  and is open source at: https://github.com/weiliu89/caffe/tree/ssd . ", "title": "SSD: Single Shot MultiBox Detector" }, { "id": "1512.02325_all_17", "text": " On this dataset, we compare against Fast R-CNN  and Faster R-CNN  on VOC2007 test (4952 images). All methods fine-tune on the same pre-trained VGG16 network. ", "title": "SSD: Single Shot MultiBox Detector" }, { "id": "1512.02325_all_18", "text": " Figure 2 shows the architecture details of the SSD300 model. We use conv4_3, conv7 (fc7), conv8_2, conv9_2, conv10_2, and conv11_2 to predict both location and confidences. We set default box with scale 0.1 on conv4_3333For SSD512 model, we add extra conv12_2 for prediction, set sminsubscript𝑠mins_{\\text{min}} to 0.15, and 0.07 on conv4_3.. We initialize the parameters for all the newly added convolutional layers with the ”xavier” method . For conv4_3, conv10_2 and conv11_2, we only associate 4 default boxes at each feature map location – omitting aspect ratios of 1313\\frac{1}{3} and 3. For all other layers, we put 6 default boxes as described in Sec. 2.2.3. Since, as pointed out in , conv4_3 has a different feature scale compared to the other layers, we use the L2 normalization technique introduced in  to scale the feature norm at each location in the feature map to 20 and learn the scale during back propagation. We use the 10−3superscript10310^{-3} learning rate for 40k iterations, then continue training for 10k iterations with 10−4superscript10410^{-4} and 10−5superscript10510^{-5}. When training on VOC2007 trainval, Table 1 shows that our low resolution SSD300 model is already more accurate than Fast R-CNN. When we train SSD on a larger 512×512512512512\\times 512 input image, it is even more accurate, surpassing Faster R-CNN by 1.7% mAP. If we train SSD with more (i.e. 07+12) data, we see that SSD300 is already better than Faster R-CNN by 1.1% and that SSD512 is 3.6% better. If we take models trained on COCO trainval35k as described in Sec. 3.4 and fine-tuning them on the 07+12 dataset with SSD512, we achieve the best results: 81.6% mAP. ", "title": "SSD: Single Shot MultiBox Detector" }, { "id": "1512.02325_all_19", "text": " To understand the performance of our two SSD models in more details, we used the detection analysis tool from . Figure 3 shows that SSD can detect various object categories with high quality (large white area). The majority of its confident detections are correct. The recall is around 85-90%, and is much higher with “weak” (0.1 jaccard overlap) criteria. 
Compared to R-CNN , SSD has less localization error, indicating that SSD can localize objects better because it directly learns to regress the object shape and classify object categories instead of using two decoupled steps. However, SSD has more confusions with similar object categories (especially for animals), partly because we share locations for multiple categories. Figure 4 shows that SSD is very sensitive to the bounding box size. In other words, it has much worse performance on smaller objects than bigger objects. This is not surprising because those small objects may not even have any information at the very top layers. Increasing the input size (e.g. from 300×300300300300\\times 300 to 512×512512512512\\times 512) can help improve detecting small objects, but there is still a lot of room to improve. On the positive side, we can clearly see that SSD performs really well on large objects. And it is very robust to different object aspect ratios because we use default boxes of various aspect ratios per feature map location. ", "title": "SSD: Single Shot MultiBox Detector" }, { "id": "1512.02325_all_20", "text": " To understand SSD better, we carried out controlled experiments to examine how each component affects performance. For all the experiments, we use the same settings and input size (300×300300300300\\times 300), except for specified changes to the settings or component(s). ", "title": "SSD: Single Shot MultiBox Detector" }, { "id": "1512.02325_all_21", "text": " Data augmentation is crucial. Fast and Faster R-CNN use the original image and the horizontal flip to train. We use a more extensive sampling strategy, similar to YOLO . Table 2 shows that we can improve 8.8% mAP with this sampling strategy. We do not know how much our sampling strategy will benefit Fast and Faster R-CNN, but they are likely to benefit less because they use a feature pooling step during classification that is relatively robust to object translation by design. ", "title": "SSD: Single Shot MultiBox Detector" }, { "id": "1512.02325_all_22", "text": " More default box shapes is better. As described in Sec. 2.2.3, by default we use 6 default boxes per location. If we remove the boxes with 1313\\frac{1}{3} and 3 aspect ratios, the performance drops by 0.6%. By further removing the boxes with 1212\\frac{1}{2} and 2 aspect ratios, the performance drops another 2.1%. Using a variety of default box shapes seems to make the task of predicting boxes easier for the network. ", "title": "SSD: Single Shot MultiBox Detector" }, { "id": "1512.02325_all_23", "text": " Atrous is faster. As described in Sec. 3, we used the atrous version of a subsampled VGG16, following DeepLab-LargeFOV . If we use the full VGG16, keeping pool5 with 2×2−s​222𝑠22\\times 2-s2 and not subsampling parameters from fc6 and fc7, and add conv5_3 for prediction, the result is about the same while the speed is about 20% slower. ", "title": "SSD: Single Shot MultiBox Detector" }, { "id": "1512.02325_all_24", "text": " We use the same settings as those used for our basic VOC2007 experiments above, except that we use VOC2012 trainval and VOC2007 trainval and test (21503 images) for training, and test on VOC2012 test (10991 images). We train the models with 10−3superscript10310^{-3} learning rate for 60k iterations, then 10−4superscript10410^{-4} for 20k iterations. Table 4 shows the results of our SSD300 and SSD512444\\ssmallhttp://host.robots.ox.ac.uk:8080/leaderboard/displaylb.php?cls=mean&challengeid=11&compid=4 model. 
We see the same performance trend as we observed on VOC2007 test. Our SSD300 improves accuracy over Fast/Faster R-CNN. By increasing the training and testing image size to 512×512512512512\\times 512, we are 4.5% more accurate than Faster R-CNN. Compared to YOLO, SSD is significantly more accurate, likely due to the use of convolutional default boxes from multiple feature maps and our matching strategy during training. When fine-tuned from models trained on COCO, our SSD512 achieves 80.0% mAP, which is 4.1% higher than Faster R-CNN. ", "title": "SSD: Single Shot MultiBox Detector" }, { "id": "1512.02325_all_25", "text": " To further validate the SSD framework, we trained our SSD300 and SSD512 architectures on the COCO dataset. Since objects in COCO tend to be smaller than PASCAL VOC, we use smaller default boxes for all layers. We follow the strategy mentioned in Sec. 2.2.3, but now our smallest default box has a scale of 0.15 instead of 0.2, and the scale of the default box on conv4_3 is 0.07 (e.g. 21 pixels for a 300×300300300300\\times 300 image)555For SSD512 model, we add extra conv12_2 for prediction, set sminsubscript𝑠mins_{\\text{min}} to 0.1, and 0.04 on conv4_3.. ", "title": "SSD: Single Shot MultiBox Detector" }, { "id": "1512.02325_all_26", "text": " We use the trainval35k  for training. We first train the model with 10−3superscript10310^{-3} learning rate for 160k iterations, and then continue training for 40k iterations with 10−4superscript10410^{-4} and 40k iterations with 10−5superscript10510^{-5}. Table 5 shows the results on test-dev2015. Similar to what we observed on the PASCAL VOC dataset, SSD300 is better than Fast R-CNN in both mAP@0.5 and mAP@(0.5:0.95). SSD300 has a similar mAP@0.75 as ION  and Faster R-CNN , but is worse in mAP@0.5. By increasing the image size to 512×512512512512\\times 512, our SSD512 is better than Faster R-CNN  in both criteria. Interestingly, we observe that SSD512 is 5.3% better in mAP@0.75, but is only 1.2% better in mAP@0.5. We also observe that it has much better AP (4.8%) and AR (4.6%) for large objects, but has relatively less improvement in AP (1.3%) and AR (2.0%) for small objects. Compared to ION, the improvement in AR for large and small objects is more similar (5.4% vs. 3.9%). We conjecture that Faster R-CNN is more competitive on smaller objects with SSD because it performs two box refinement steps, in both the RPN part and in the Fast R-CNN part. In Fig. 3.2, we show some detection examples on COCO test-dev with the SSD512 model. ", "title": "SSD: Single Shot MultiBox Detector" }, { "id": "1512.02325_all_27", "text": " We applied the same network architecture we used for COCO to the ILSVRC DET dataset . We train a SSD300 model using the ILSVRC2014 DET train and val1 as used in . We first train the model with 10−3superscript10310^{-3} learning rate for 320k iterations, and then continue training for 80k iterations with 10−4superscript10410^{-4} and 40k iterations with 10−5superscript10510^{-5}. We can achieve 43.4 mAP on the val2 set . Again, it validates that SSD is a general framework for high quality real-time detection. ", "title": "SSD: Single Shot MultiBox Detector" }, { "id": "1512.02325_all_28", "text": " ", "title": "SSD: Single Shot MultiBox Detector" }, { "id": "1512.02325_all_29", "text": " Without a follow-up feature resampling step as in Faster R-CNN, the classification task for small objects is relatively hard for SSD, as demonstrated in our analysis (see Fig. 4). The data augmentation strategy described in Sec. 
2.2 helps to improve the performance dramatically, especially on small datasets such as PASCAL VOC. The random crops generated by the strategy can be thought of as a ”zoom in” operation and can generate many larger training examples. To implement a ”zoom out” operation that creates more small training examples, we first randomly place an image on a canvas of 16×16\\times of the original image size filled with mean values before we do any random crop operation. Because we have more training images by introducing this new ”expansion” data augmentation trick, we have to double the training iterations. We have seen a consistent increase of 2%-3% mAP across multiple datasets, as shown in Table 6. In specific, Figure 3.2 shows that the new augmentation trick significantly improves the performance on small objects. This result underscores the importance of the data augmentation strategy for the final model accuracy. ", "title": "SSD: Single Shot MultiBox Detector" }, { "id": "1512.02325_all_30", "text": " An alternative way of improving SSD is to design a better tiling of default boxes so that its position and scale are better aligned with the receptive field of each position on a feature map. We leave this for future work. ", "title": "SSD: Single Shot MultiBox Detector" }, { "id": "1512.02325_all_31", "text": " ", "title": "SSD: Single Shot MultiBox Detector" }, { "id": "1512.02325_all_32", "text": " Considering the large number of boxes generated from our method, it is essential to perform non-maximum suppression (nms) efficiently during inference. By using a confidence threshold of 0.01, we can filter out most boxes. We then apply nms with jaccard overlap of 0.45 per class and keep the top 200 detections per image. This step costs about 1.7 msec per image for SSD300 and 20 VOC classes, which is close to the total time (2.4 msec) spent on all newly added layers. We measure the speed with batch size 8 using Titan X and cuDNN v4 with Intel Xeon E5-2667v3@3.20GHz. ", "title": "SSD: Single Shot MultiBox Detector" }, { "id": "1512.02325_all_33", "text": " Table 7 shows the comparison between SSD, Faster R-CNN, and YOLO. Both our SSD300 and SSD512 method outperforms Faster R-CNN in both speed and accuracy. Although Fast YOLO can run at 155 FPS, it has lower accuracy by almost 22% mAP. To the best of our knowledge, SSD300 is the first real-time method to achieve above 70% mAP. Note that about 80% of the forward time is spent on the base network (VGG16 in our case). Therefore, using a faster base network could even further improve the speed, which can possibly make the SSD512 model real-time as well. ", "title": "SSD: Single Shot MultiBox Detector" }, { "id": "1512.02325_all_34", "text": " There are two established classes of methods for object detection in images, one based on sliding windows and the other based on region proposal classification. Before the advent of convolutional neural networks, the state of the art for those two approaches – Deformable Part Model (DPM)  and Selective Search  – had comparable performance. However, after the dramatic improvement brought on by R-CNN , which combines selective search region proposals and convolutional network based post-classification, region proposal object detection methods became prevalent. ", "title": "SSD: Single Shot MultiBox Detector" }, { "id": "1512.02325_all_35", "text": " The original R-CNN approach has been improved in a variety of ways. 
The first set of approaches improve the quality and speed of post-classification, since it requires the classification of thousands of image crops, which is expensive and time-consuming. SPPnet  speeds up the original R-CNN approach significantly. It introduces a spatial pyramid pooling layer that is more robust to region size and scale and allows the classification layers to reuse features computed over feature maps generated at several image resolutions. Fast R-CNN  extends SPPnet so that it can fine-tune all layers end-to-end by minimizing a loss for both confidences and bounding box regression, which was first introduced in MultiBox  for learning objectness. ", "title": "SSD: Single Shot MultiBox Detector" }, { "id": "1512.02325_all_36", "text": " The second set of approaches improve the quality of proposal generation using deep neural networks. In the most recent works like MultiBox (7, 8), the Selective Search region proposals, which are based on low-level image features, are replaced by proposals generated directly from a separate deep neural network. This further improves the detection accuracy but results in a somewhat complex setup, requiring the training of two neural networks with a dependency between them. Faster R-CNN  replaces selective search proposals by ones learned from a region proposal network (RPN), and introduces a method to integrate the RPN with Fast R-CNN by alternating between fine-tuning shared convolutional layers and prediction layers for these two networks. This way region proposals are used to pool mid-level features and the final classification step is less expensive. Our SSD is very similar to the region proposal network (RPN) in Faster R-CNN in that we also use a fixed set of (default) boxes for prediction, similar to the anchor boxes in the RPN. But instead of using these to pool features and evaluate another classifier, we simultaneously produce a score for each object category in each box. Thus, our approach avoids the complication of merging RPN with Fast R-CNN and is easier to train, faster, and straightforward to integrate in other tasks. ", "title": "SSD: Single Shot MultiBox Detector" }, { "id": "1512.02325_all_37", "text": " Another set of methods, which are directly related to our approach, skip the proposal step altogether and predict bounding boxes and confidences for multiple categories directly. OverFeat , a deep version of the sliding window method, predicts a bounding box directly from each location of the topmost feature map after knowing the confidences of the underlying object categories. YOLO  uses the whole topmost feature map to predict both confidences for multiple categories and bounding boxes (which are shared for these categories). Our SSD method falls in this category because we do not have the proposal step but use the default boxes. However, our approach is more flexible than the existing methods because we can use default boxes of different aspect ratios on each feature location from multiple feature maps at different scales. If we only use one default box per location from the topmost feature map, our SSD would have similar architecture to OverFeat ; if we use the whole topmost feature map and add a fully connected layer for predictions instead of our convolutional predictors, and do not explicitly consider multiple aspect ratios, we can approximately reproduce YOLO . 
", "title": "SSD: Single Shot MultiBox Detector" }, { "id": "1512.02325_all_38", "text": " This paper introduces SSD, a fast single-shot object detector for multiple categories. A key feature of our model is the use of multi-scale convolutional bounding box outputs attached to multiple feature maps at the top of the network. This representation allows us to efficiently model the space of possible box shapes. We experimentally validate that given appropriate training strategies, a larger number of carefully chosen default bounding boxes results in improved performance. We build SSD models with at least an order of magnitude more box predictions sampling location, scale, and aspect ratio, than existing methods (5, 7). We demonstrate that given the same VGG-16 base architecture, SSD compares favorably to its state-of-the-art object detector counterparts in terms of both accuracy and speed. Our SSD512 model significantly outperforms the state-of-the-art Faster R-CNN  in terms of accuracy on PASCAL VOC and COCO, while being 3×3\\times faster. Our real time SSD300 model runs at 59 FPS, which is faster than the current real time YOLO  alternative, while producing markedly superior detection accuracy. ", "title": "SSD: Single Shot MultiBox Detector" }, { "id": "1512.02325_all_39", "text": " Apart from its standalone utility, we believe that our monolithic and relatively simple SSD model provides a useful building block for larger systems that employ an object detection component. A promising future direction is to explore its use as part of a system using recurrent neural networks to detect and track objects in video simultaneously. ", "title": "SSD: Single Shot MultiBox Detector" }, { "id": "1512.02325_all_40", "text": " This work was started as an internship project at Google and continued at UNC. We would like to thank Alex Toshev for helpful discussions and are indebted to the Image Understanding and DistBelief teams at Google. We also thank Philip Ammirato and Patrick Poirson for helpful comments. We thank NVIDIA for providing GPUs and acknowledge support from NSF 1452851, 1446631, 1526367, 1533771. ", "title": "SSD: Single Shot MultiBox Detector" } ]
What is the main weak point of conventional multi-task learning (MTL) for zero-shot learning with multiple types of commonsense knowledge?
Conventional multi-task learning (MTL) is prone to interference among the different tasks (here, the different KGs) and to catastrophic forgetting, in which the model loses previously learned knowledge when it is trained on a new KG containing a different kind of knowledge [2].
[ 2 ]
[ { "id": "2206.03715_all_0", "text": " The ability to understand natural language through commonsense reasoning is one of the core focuses in the field of natural language processing. To measure and study the different aspects of commonsense reasoning, several datasets are developed, such as SocialIQA (Sap et al., 2019b), CommonsenseQA (Talmor et al., 2018), and PhysicalIQA (Bisk et al., 2020), each requiring different type of commonsense knowledge (e.g., social, taxonomic, causal, declarative, etc) to select the correct answer. While large-scale neural systems (Devlin et al., 2018; Yang et al., 2019; Liu et al., 2019b) have shown human-level accuracy on these benchmarks, recent studies (Mitra et al., 2019) also criticize that these models solve individual datasets, rather than learning how to perform general semantic reasoning. To this end, Ma et al. (2021) suggested zero-shot evaluation as a genuine measure for the reasoning capability of the machine. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_1", "text": " Inspired by this new metric, in this work, we focus on building unsupervised zero-shot multiple-choice QA systems. That is, we target an arbitrary commonsense reasoning task where conventional approaches (that rely heavily on task-specific supervision) are not applicable to such zero-shot learning scenarios. To learn QA models without expensive annotation efforts, recent works (Ma et al., 2021; Banerjee and Baral, 2020; Malaviya et al., 2020) propose to generate a synthetic QA dataset using a commonsense KG such as ATOMIC (Sap et al., 2019a) and ConceptNet (Speer et al., 2017). Such an approach mostly focuses only on one specific type of reasoning relations (e.g., if-then relation, or declarative relation), neglecting the fact that real-world QA systems require simultaneously considering different types of reasoning abilities (e.g., declarative and social, or causal and physical reasoning; Ilievski et al., 2021; Chang et al., 2021). ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_2", "text": " To consider different types of reasoning, this paper extends ideas from the aforementioned zero-shot learning to the multi-source case such that it benefits from different types of commonsense knowledge on individual KGs. For example, ATOMIC (Sap et al., 2019a) focuses on social commonsense while ConceptNet (Speer et al., 2017) contains conceptual knowledge. A practical approach is multi-task learning (MTL; Caruana, 1997; Liu et al., 2019a), which learns a shared encoder for different synthetic QA datasets from multiple KGs. Despite its effectiveness, MTL scheme suffers from interference among different KGs, which results in forgetting previously learned knowledge when trained on new KG which has different kinds of knowledge (Pilault et al., 2021; Pfeiffer et al., 2021; Wang et al., 2021a; Wu et al., 2020). ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_3", "text": " To address these limitations, we propose a novel, modularized framework that aims to learn multiple expert models for KGs, then conduct zero-shot fusion to allow collaboration among KGs. 
For this purpose, we leverage AdapterFusion (Pfeiffer et al., 2021) where multiple tiny modules between Transformer blocks called adapters (Houlsby et al., 2019) can be combined after independent training, thus allowing a continual integration of the adapters without retraining the entire framework. Specifically, we treat the adapters as different KG-specific experts, and combine them using an attention-like fusion module. To improve the fusion of adapters, we suggest a KG-alignment adapter that guides to the apt expert adapters. Here, we use KGs in three different synthetic supervision training: (1) KG-specific QA datasets to train the KG-specific expert adapters, (2) a KG classification datasets to train the KG-alignment adapter, and (3) a balanced mixture of KG-specific QA datasets to train the fusion module. Our modularized method alleviates the interference between different KGs, which is the pitfall of MTL from our empirical observation, and thus combines multiple KGs into a synergetic zero-shot framework. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_4", "text": " Our contributions are: (1) We suggest a simple, yet effective KG modularization strategy for the use of multiple KGs in commonsense reasoning. (2) We then explore the use of AdapterFusion (Pfeiffer et al., 2021) for better knowledge aggregation based on the KG modularization in zero-shot setting. We believe that such modularized transfer learning is critical to using different knowledge sources synergetically against interference between them. (3) In extensive experiments on various commonsense reasoning benchmarks, our framework achieves significant improvements over baselines using a single KG, even using multiple KGs, which implies the robustness in commonsense reasoning. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_5", "text": " Many researchers have recently focused on building unsupervised models without any benchmark supervisions (i.e., zero-shot learning). In such zero-shot setting, KGs are often used as an external resource for improving model prior (e.g., continually learned from pre-trained language models) (Banerjee and Baral, 2020; Bosselut and Choi, 2019; Ma et al., 2021), especially for commonsense reasoning, as much existing work couples language models with neural/symbolic commonsense KGs. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_6", "text": " However, most of existing work are either assuming the existence of the alignment information between tasks and KGs (Banerjee and Baral, 2020) or an integrated KG (Ma et al., 2021). For example, ATOMIC2020subscriptsuperscriptATOMIC2020\\texttt{ATOMIC}^{20}_{20} (Hwang et al., 2021), a commonsense KG which incorporates tuples from ConceptNet and ATOMIC with new relations and further crowdsourcing, combines multiple KGs into a new integrated KG, but as widely known (Ilievski et al., 2020; Hwang et al., 2021), heterogeneous schema between different KGs may limit triplets that can be integrated.111Only 172K tuples of the 3.4M tuples and 5 relations of 36 relations in ConceptNet are integrated into ATOMIC2020subscriptsuperscriptATOMIC2020\\texttt{ATOMIC}^{20}_{20}. 
Rather than such symbolic KG integration with the inevitable loss of knowledge, in this work, we explore the neural KG integration leveraging the multiple KGs without additional processing and alignment information between KG and task. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_7", "text": " The idea of having specialized parameters, or so-called experts, has been widely studied to integrate multiple sources of knowledge via transfer learning. The adapter module (Rebuffi et al., 2017; Houlsby et al., 2019) has been explored as one of such approaches, introducing a small number of task-specific parameters at every layer of pre-trained language model (PLM) while sharing the parameters of underlying PLM which is fixed. To address the limitations of transfer learning due to high re-training cost, many works utilize the multiple adapter modules for individual tasks with different domains (Puigcerver et al., 2020; Bapna et al., 2019; Rücklé et al., 2020; Madotto et al., 2021) considering each adapter to be an expert of each domain. Similar to our work, K-Adapter (Wang et al., 2021a) encodes factual and linguistic knowledge to each adapter, but in this paper, we further explore how to mitigate catastrophic forgetting or interference among multiple adapters for better knowledge transfer in zero-shot setting. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_8", "text": " MTL (Liu et al., 2019a; Zhang and Yang, 2017; Caruana, 1997) learns a shared representation while aggregating knowledge across multiple learning tasks, often leading to better generalization ability of a model. However, parametric aggregation of knowledge with MTL has following limitations: (1) retraining the full model when adding new tasks (Houlsby et al., 2019; Pfeiffer et al., 2021, 2020b) (2) catastrophic forgetting and interference between tasks leading to difficulties of solving each task equally well (Pilault et al., 2021; Wu et al., 2020; Yu et al., 2020) and (3) inconsistent effect (Lourie et al., 2021). To deal with these challenges, Mixture-of-Experts (MoE) is a parameterized generalization of ensembling techniques, which has been adapted for MTL with gating network trained to optimize each task (Ma et al., 2018). However, simple linear gating networks are too shallow and thus may destruct task knowledge for commonsense reasoning. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_9", "text": " To address this problem, AdapterFusion (Pfeiffer et al., 2021) has been proposed to fuse task specific parameters called adapters for the given target task leveraging attention-like mechanism. AdapterFusion aggregates adapters, which is trained independently for each task, in a non-destructive manner mitigating aforementioned MTL problems such as forgetting and interference between tasks. Recently, it has been used for zero-shot cross-lingual transfer framework (Pfeiffer et al., 2020c; Wang et al., 2021b), which motivates our work to transfer multi-source knowledge with less interference for zero-shot commonsense reasoning. 
", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_10", "text": " In our setup, we repurpose synthetic QA generation (Ma et al., 2021) for the task of knowledge-driven zero-shot learning for commonsense reasoning, i.e., we transform a KG into multiple (Qi,Ai)subscript𝑄𝑖subscript𝐴𝑖(Q_{i},A_{i}) pairs where Qisubscript𝑄𝑖Q_{i} is a natural language question and Ai={Ai,1,…,Ai,m}subscript𝐴𝑖subscript𝐴𝑖1…subscript𝐴𝑖𝑚A_{i}=\\{A_{i,1},...,A_{i,m}\\} is the set of options with m𝑚m answer candidates. Specifically, given a triple (eh​e​a​d,r,et​a​i​l)superscript𝑒ℎ𝑒𝑎𝑑𝑟superscript𝑒𝑡𝑎𝑖𝑙(e^{head},r,e^{tail}) in a KG, where eh​e​a​dsuperscript𝑒ℎ𝑒𝑎𝑑e^{head}, et​a​i​lsuperscript𝑒𝑡𝑎𝑖𝑙e^{tail} and r𝑟r denote head/tail entity and relation respectively, we transform eh​e​a​dsuperscript𝑒ℎ𝑒𝑎𝑑e^{head} and r𝑟r into a natural language question Qisubscript𝑄𝑖Q_{i} using templates. For the option set Aisubscript𝐴𝑖A_{i}, we use the combination of the correct answer et​a​i​lsuperscript𝑒𝑡𝑎𝑖𝑙e^{tail} and m−1𝑚1m-1 distractors which are tail entities from other triples sampled randomly (Ma et al., 2021). Details are described in Appendix B. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_11", "text": " First, we modularize the KGs to preserve their intrinsic knowledge. Considering the importance of using a suitable and well-aligned KG (Ma et al., 2019, 2021) on a downstream task, the subtle difference between each KG should be learned by the model without any interference from each other. Accordingly, we adopt the adapter module (Houlsby et al., 2019) which repurposes a pre-trained language model (PLM) to incorporate each KG as tiny modules in between Transformer blocks. Specifically, as illustrated in Figure 2 (except for green area), the adapter training strategy involves injecting new layers (parameterized by ΦΦ\\Phi) into the original PLM (parameterized by θ𝜃\\theta). The weights of the original PLM are untouched, while the new adapter layers are initialized at random. Formally, we call each adapter trained with 𝒟Q​Aksubscriptsuperscript𝒟𝑘𝑄𝐴\\mbox{${\\cal D}$}^{k}_{QA} as an expert adapter for KG k𝑘k, parameterized by ΦQ​AksuperscriptsubscriptΦ𝑄𝐴𝑘\\Phi_{QA}^{k}. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_12", "text": " When a QA sample (Qi,Ai)subscript𝑄𝑖subscript𝐴𝑖(Q_{i},A_{i}) is given for dataset 𝒟Q​Aksuperscriptsubscript𝒟𝑄𝐴𝑘\\mbox{${\\cal D}$}_{QA}^{k}, we first concatenate question Qisubscript𝑄𝑖Q_{i} and each answer option Ai={Ai,1,…,Ai,m}subscript𝐴𝑖subscript𝐴𝑖1…subscript𝐴𝑖𝑚A_{i}=\\{A_{i,1},...,A_{i,m}\\} to generate input sequences Ti={Ti,1,…,Ti,m}subscript𝑇𝑖subscript𝑇𝑖1…subscript𝑇𝑖𝑚T_{i}=\\{T_{i,1},...,T_{i,m}\\}. Then, we compute a score Si,jsubscript𝑆𝑖𝑗S_{i,j} (Ma et al., 2021) for the answer candidate Ai,jsubscript𝐴𝑖𝑗A_{i,j} is computed as follows: Si,j=−1|Ti,j|​∑t=1|Ti,j|l​o​g​P​(wt|…​wt−1,wt+1​…;θ,Φ)subscript𝑆𝑖𝑗1subscript𝑇𝑖𝑗superscriptsubscript𝑡1subscript𝑇𝑖𝑗𝑙𝑜𝑔𝑃conditionalsubscript𝑤𝑡…subscript𝑤𝑡1subscript𝑤𝑡1…𝜃ΦS_{i,j}=-\\frac{1}{|T_{i,j}|}\\sum_{t=1}^{|T_{i,j}|}logP(w_{t}|...w_{t-1},w_{t+1}...;\\theta,\\Phi) (2) where wtsubscript𝑤𝑡w_{t} is a word token in the sequence Ti,jsubscript𝑇𝑖𝑗T_{i,j} and P𝑃P is the conditional probability from Transformer blocks parameterized by θ𝜃\\theta and ΦΦ\\Phi. 
To train the adapter ΦQ​AksuperscriptsubscriptΦ𝑄𝐴𝑘\\Phi_{QA}^{k}, we use the marginal ranking loss (Ma et al., 2021) as follows: ℒQ​A=1m​∑i=1Nk∑j=1j≠l​a​b​e​lmm​a​x​(0,η−Si,l​a​b​e​l+Si,j)subscriptℒ𝑄𝐴1𝑚superscriptsubscript𝑖1subscript𝑁𝑘superscriptsubscript𝑗1𝑗𝑙𝑎𝑏𝑒𝑙𝑚𝑚𝑎𝑥0𝜂subscript𝑆𝑖𝑙𝑎𝑏𝑒𝑙subscript𝑆𝑖𝑗\\mbox{${\\cal L}$}_{QA}=\\frac{1}{m}\\sum_{i=1}^{N_{k}}\\sum_{\\begin{subarray}{c}j=1\\\\ j\\neq label\\end{subarray}}^{m}max(0,\\eta-S_{i,label}+S_{i,j}) (3) where η𝜂\\eta represents the margin. ΦQ​Ak←argminΦℒQ​A​(𝒟Q​Ak;θ,Φ)←superscriptsubscriptΦ𝑄𝐴𝑘subscriptargminΦsubscriptℒ𝑄𝐴subscriptsuperscript𝒟𝑘𝑄𝐴𝜃Φ\\Phi_{QA}^{k}\\leftarrow\\operatorname*{argmin}_{\\Phi}\\mbox{${\\cal L}$}_{QA}(\\mathcal{D}^{k}_{QA};\\theta,\\Phi) (4) where KG-invariant parameters θ𝜃\\theta are fixed and only KG-dependent parameters ΦQ​AksuperscriptsubscriptΦ𝑄𝐴𝑘\\Phi_{QA}^{k} are learned, which enables to store the corresponding knowledge separately without any interference. Further, we can parallelize the training of adapter for all KGs. The efficiency of adapter training allows our modularization to be more scalable. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_13", "text": " Once the expert adapters are learned, we combine the knowledge from each expert adapter using an attention-like mechanism. We present a novel fusion strategy as shown in Figure 2, which is referred to as the zero-shot fusion. In contrast to AdapterFusion (Pfeiffer et al., 2021) where the focus is learning to transfer knowledge to a specific target task, our zero-shot fusion aims to generalize this transfer to any arbitrary target task. Specifically, the zero-shot fusion parameters ΨΨ\\Psi learn to combine fixed expert adapters which are parameterized by ΦQ​A1,…,ΦQ​AKsuperscriptsubscriptΦ𝑄𝐴1…superscriptsubscriptΦ𝑄𝐴𝐾\\Phi_{QA}^{1},...,\\Phi_{QA}^{K}. In each Transformer layer l𝑙l of PLM with the injected fusion layer, the zero-shot fusion parameters ΨQ​AsubscriptΨ𝑄𝐴\\Psi_{QA} consist of query, key, and value matrices, denoted by WlQsuperscriptsubscriptW𝑙𝑄\\textbf{W}_{l}^{Q}, WlKsuperscriptsubscriptW𝑙𝐾\\textbf{W}_{l}^{K}, and WlVsuperscriptsubscriptW𝑙𝑉\\textbf{W}_{l}^{V} respectively. These parameters are used to learn the balancing between the representation of each expert adapters through attention-like mechanism. While fixing both the parameters θ𝜃\\theta and all expert adapters ΦQ​A1,…,ΦQ​AKsuperscriptsubscriptΦ𝑄𝐴1…superscriptsubscriptΦ𝑄𝐴𝐾\\Phi_{QA}^{1},...,\\Phi_{QA}^{K}, the only trainable weights ΨQ​AsubscriptΨ𝑄𝐴\\Psi_{QA} on the fusion layer learns to combine the knowledge from different K𝐾K expert adapters by using the subset of {𝒟Q​Ak}k=1Ksuperscriptsubscriptsuperscriptsubscript𝒟𝑄𝐴𝑘𝑘1𝐾\\{\\mbox{${\\cal D}$}_{QA}^{k}\\}_{k=1}^{K} by random sampling. Here, we balance the ratio between the K𝐾K knowledge-driven datasets as N𝑁N samples (details are in Appendix D). Formally, ΨQ​A←argminΨ​∑k=1KℒQ​A​(𝒟Q​Ak;θ,{ΦQ​Ak}k=1K,Ψ)←subscriptΨ𝑄𝐴subscriptargminΨsuperscriptsubscript𝑘1𝐾subscriptℒ𝑄𝐴subscriptsuperscript𝒟𝑘𝑄𝐴𝜃superscriptsubscriptsuperscriptsubscriptΦ𝑄𝐴𝑘𝑘1𝐾Ψ\\Psi_{QA}\\leftarrow\\operatorname*{argmin}_{\\Psi}\\sum_{k=1}^{K}\\mbox{${\\cal L}$}_{QA}(\\mathcal{D}^{k}_{QA};\\theta,\\{\\Phi_{QA}^{k}\\}_{k=1}^{K},\\Psi) (5) where ΨΨ\\Psi refers to the initialized zero-shot fusion parameters. 
", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_14", "text": " More specifically, in the l𝑙l-th Transformer layer, let hP​L​Mlsuperscriptsubscriptℎ𝑃𝐿𝑀𝑙h_{PLM}^{l} and hEk,lsuperscriptsubscriptℎ𝐸𝑘𝑙h_{E}^{k,l} be the representations of underlying PLM parameterized by θ𝜃\\theta and an expert adapter parameterized by ΦQ​AksuperscriptsubscriptΦ𝑄𝐴𝑘\\Phi_{QA}^{k}, respectively. Then, using the hidden representation hP​L​Mlsuperscriptsubscriptℎ𝑃𝐿𝑀𝑙h_{PLM}^{l} of PLM as a query, the fusion layer performs the attention-like function as follows: Kl,VlsubscriptK𝑙subscriptV𝑙\\displaystyle\\textbf{K}_{l},\\textbf{V}_{l} =(hE1,l,…,hEK,l)absentsuperscriptsubscriptℎ𝐸1𝑙…superscriptsubscriptℎ𝐸𝐾𝑙\\displaystyle=(h_{E}^{1,l},...,h_{E}^{K,l}) (6) QlsubscriptQ𝑙\\displaystyle\\textbf{Q}_{l} =hP​L​Mlabsentsuperscriptsubscriptℎ𝑃𝐿𝑀𝑙\\displaystyle=h_{PLM}^{l} (7) zlsubscriptz𝑙\\displaystyle\\textbf{z}_{l} =Attention​(Ql​WlQ,Kl​WlK,Vl​WlV)absentAttentionsubscriptQ𝑙superscriptsubscriptW𝑙𝑄subscriptK𝑙superscriptsubscriptW𝑙𝐾subscriptV𝑙superscriptsubscriptW𝑙𝑉\\displaystyle=\\text{Attention}(\\textbf{Q}_{l}\\textbf{W}_{l}^{Q},\\textbf{K}_{l}\\textbf{W}_{l}^{K},\\textbf{V}_{l}\\textbf{W}_{l}^{V}) (8) where zlsubscriptz𝑙\\textbf{z}_{l} is passed to the next Transformer layer. Given a sample, the zero-shot fusion learns the suitable balancing parameters between the expert adapters for zero-shot reasoning. Eventually, it learns to identify generalizability across commonsense reasoning tasks. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_15", "text": " AdapterFusion uses the PLM hidden representation hP​L​Mlsuperscriptsubscriptℎ𝑃𝐿𝑀𝑙h_{PLM}^{l} as a query which is learned when training on a specific downstream task. In our zero-shot setting, however, we use a mixture of synthetic QA for fusion training, which is not exactly a training dataset for a downstream task. To compensate for this issue, we present KG-Classifier adapter, which is a KG alignment-aware adapter, which is motivated from the fact that the ability to find which KG has an alignment with the given sample can be helpful as a role of providing a guidance for better performance (Ma et al., 2019, 2021). ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_16", "text": " Specifically, we propose a novel training task for KG-Classifier adapter, which requires predicting the KG for the given sample of the task. For that, given {𝒟Q​Ak}k=1Ksuperscriptsubscriptsuperscriptsubscript𝒟𝑄𝐴𝑘𝑘1𝐾\\{\\mbox{${\\cal D}$}_{QA}^{k}\\}_{k=1}^{K}, we first transform a QA sample (Qi,Ai)subscript𝑄𝑖subscript𝐴𝑖(Q_{i},A_{i}) into a new KG classification sample (Qi;Ai,l​a​b​e​l)subscript𝑄𝑖subscript𝐴𝑖𝑙𝑎𝑏𝑒𝑙(Q_{i};A_{i,label}) where (;)(;) is the concatenation. Then, we obtain a new label yi∈{0,1}Ksubscript𝑦𝑖superscript01𝐾y_{i}\\in\\{0,1\\}^{K} indicating the corresponding KG source. The samples are in Appendix E. Formally, KG classification dataset 𝒟K​G​Csubscript𝒟𝐾𝐺𝐶\\mbox{${\\cal D}$}_{KGC} is defined as: 𝒟K​G​C={((Qi;Ai,l​a​b​e​l),yi)}i=1Msubscript𝒟𝐾𝐺𝐶superscriptsubscriptsubscript𝑄𝑖subscript𝐴𝑖𝑙𝑎𝑏𝑒𝑙subscript𝑦𝑖𝑖1𝑀\\mbox{${\\cal D}$}_{KGC}=\\{((Q_{i};A_{i,label}),y_{i})\\}_{i=1}^{M} (9) where M𝑀M is the total size of {𝒟Q​Ak}k=1Ksuperscriptsubscriptsuperscriptsubscript𝒟𝑄𝐴𝑘𝑘1𝐾\\{\\mbox{${\\cal D}$}_{QA}^{k}\\}_{k=1}^{K}. 
", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_17", "text": " Based on 𝒟K​G​Csubscript𝒟𝐾𝐺𝐶\\mbox{${\\cal D}$}_{KGC}, we learn the KG-Classifier adapter parameterized by θ𝜃\\theta and ΦK​G​CsubscriptΦ𝐾𝐺𝐶\\Phi_{KGC}. First, a classification sample i𝑖i is encoded into hC​L​S∈ℝHsubscriptℎ𝐶𝐿𝑆superscriptℝ𝐻h_{CLS}\\in\\mathbb{R}^{H} then scored as y^i∈ℝKsubscript^𝑦𝑖superscriptℝ𝐾\\hat{y}_{i}\\in\\mathbb{R}^{K} with a linear layer WK​G​C∈ℝK×Hsubscript𝑊𝐾𝐺𝐶superscriptℝ𝐾𝐻W_{KGC}\\in\\mathbb{R}^{K\\times H}, i.e., y^i=WK​G​C​hC​L​Ssubscript^𝑦𝑖subscript𝑊𝐾𝐺𝐶subscriptℎ𝐶𝐿𝑆\\hat{y}_{i}=W_{KGC}h_{CLS}. Once y^isubscript^𝑦𝑖\\hat{y}_{i} is normalized by a softmax layer, the network is trained to minimize the cross-entropy loss ℒK​G​Csubscriptℒ𝐾𝐺𝐶\\mbox{${\\cal L}$}_{KGC} between the prediction y^isubscript^𝑦𝑖\\hat{y}_{i} and its ground truth yisubscript𝑦𝑖y_{i}: ΦK​G​C←argminΦ​∑i=1MℒK​G​C​(yi,y^i;θ,Φ)←subscriptΦ𝐾𝐺𝐶subscriptargminΦsuperscriptsubscript𝑖1𝑀subscriptℒ𝐾𝐺𝐶subscript𝑦𝑖subscript^𝑦𝑖𝜃Φ\\Phi_{KGC}\\leftarrow\\operatorname*{argmin}_{\\Phi}\\sum_{i=1}^{M}\\mbox{${\\cal L}$}_{KGC}(y_{i},\\hat{y}_{i};\\theta,\\Phi) (10) ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_18", "text": " We propose to use the representation of KG-Classifier adapter as a query in attention-like mechanism, referred to as the zero-shot fusion with KG-Classifier adapter. That is, using the hidden representation hK​G​Clsuperscriptsubscriptℎ𝐾𝐺𝐶𝑙h_{KGC}^{l} of a KG-Classifier adapter parameterized by ΦK​G​CsubscriptΦ𝐾𝐺𝐶\\Phi_{KGC} as a query, we substitute QlsubscriptQ𝑙\\textbf{Q}_{l} in Eq. (11) as follows: Ql=hK​G​ClsubscriptQ𝑙superscriptsubscriptℎ𝐾𝐺𝐶𝑙\\textbf{Q}_{l}=h_{KGC}^{l} (11) The overall zero-shot fusion architecture including KG-Classifier is illustrated in Figure 2. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_19", "text": " In this section we evaluate the efficacy of our framework on five commonsense reasoning tasks. We denote KG-Classifier adapter by KG-C adapter. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_20", "text": " All our experiments are conducted in a zero-shot setting, in which the models do not have access to the official training data or labels of the benchmark. For the evaluation, we use the validation set of each benchmark222Since the official test sets are not publicly available, however, the validation set of each benchmark can be role as an test set since it is not used for hyperparameter tuning or model selection. We use accuracy as a metric. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_21", "text": " We evaluate our proposed framework on five question-answering benchmarks for commonsense reasoning: SocialIQA (SIQA) (Sap et al., 2019b), CommonsenseQA (CSQA) (Talmor et al., 2018), Abductive NLI (a-NLI) (Bhagavatula et al., 2020), PhysicalIQA (PIQA) (Bisk et al., 2020), and WinoGrande (WG) (Sakaguchi et al., 2020). 
Each commonsense benchmark evaluates a specific kind of knowledge: social commonsense for SIQA, concept-level commonsense for CSQA, abductive reasoning for a-NLI, physical commonsense for PIQA, and pronoun resolution ability for WG.333Some benchmarks have a strong alignment with a certain KG due to its construction strategy: SIQA-ATOMIC, and CSQA-ConceptNet. To make a direct comparison with Ma et al. (2021), we use the same KGs to generate data samples. The details are presented in Appendix G. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_22", "text": " We compare our framework with the following baselines. First, to show the characteristics of each benchmark, we use the random or the most frequent label as Random and Majority baseline, respectively. RoBERTa-L and GPT2-L is the performance of each PLM without any fine-tuning. Also, as the baseline for the unsupervised learning model using KGs, we report the performance of Self-talk (Shwartz et al., 2020), COMET-DynaGen (Bosselut and Choi, 2019), SMLM (Banerjee and Baral, 2020) as presented in original papers. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_23", "text": " For further analysis in §§\\S4.4 and §§\\S4.5, we set the following models that are pre-trained on the synthetic QA datasets from KGs as baselines: • Single-Task Learning (STL): The model is pre-trained on a synthetic QA dataset generated from a single KG. Specifically, we experiment two architectural choices: PLM (STL-PLM) and PLM with adapters (STL-Adapter). For each architecture, there are four STL models for each of synthetic QA datasets derived from ATOMIC, ConceptNet, WikiData, and WordNet. We note that the trained STL-Adapter is an expert adapter from a specific KG in our framework. The performance of each STL baseline is shown in Appendix I Table 9 and Table 10. • Multi-Task Learning (MTL): The model is pre-trained on multiple synthetic QA datasets, each of which is generated from a KG. We experiment with a PLM trained on all four aforementioned synthetic QA datasets. We note that the difference between STL-PLM and MTL is whether to use one synthetic QA dataset or multiple synthetic QA datasets for its training. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_24", "text": " We employ RoBERTa-L (Liu et al., 2019b) from Hugging Face’s transformers toolkit for all experiments. We follow the default settings from  Ma et al. (2021). Our implementation uses Adapter (Houlsby et al., 2019) and AdapterFusion (Pfeiffer et al., 2021) as a base model architecture from AdpaterHub (Pfeiffer et al., 2020a). We run our experiments with three different random seeds. The implementation details are described in Appendix H. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_25", "text": " Table 2 shows the zero-shot evaluation results on five benchmark datasets. Generally, zero-shot fusion scores higher than the baselines across all benchmarks, and further, zero-shot fusion shows the best performance in all benchmarks except WG. We note that although Ma et al. (2021) uses the synthetic QA dataset after sample filtering, our method achieves comparable performance with the best performance in WG, even with the raw dataset. 
Also, the average score of all evaluation benchmarks (the last column of Table 2) shows that zero-shot fusion has generalisability in commonsense reasoning. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_26", "text": " In addition, zero-shot fusion achieves consistent improvements over MTL. These results indicate that our proposed zero-shot fusion method attributes to fusing the knowledge of multiple KGs more synergetically regardless of the task. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_27", "text": " Moreover, as an ablation, we compare the zero-shot fusion with and without KG-C adapter to explore the efficacy of the KG-C adapter. We can observe that zero-shot fusion with KG-C adapter improves the average accuracy by 0.4%, which implies that the use of KG-C adapter improves the overall performance and makes our method generalize better on most of the evaluation benchmarks. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_28", "text": " To assess the effects of the KG-C adapter itself, we visualize and compare the final layer (CLS) token representation between PLM and KG-C adapter. Figure 3 shows t-SNE (Van der Maaten and Hinton, 2008) plots of all representation of five benchmark datasets. In this figure, every sample is mapped into a 1024-dimensional feature space through RoBERTa-L model and projected back into a two-dimensional plane by t-SNE. We can observe that KG-C adapter can separate the samples of different benchmarks well despite being unseen data. It verifies that KG-awareness acquired with the KG classification task is beneficial to categorize the given sample. The KG-C adapter can thus generate a relevant KG-aware query for a given sample and help to fuse representations from suitable expert adapters in our proposed framework. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_29", "text": " Further, we explore how the KG-C adapter affects zero-shot fusion which is based on an attention-like mechanism (Pfeiffer et al., 2021) compared to zero-shot fusion without KG-C adapter. Here, while zero-shot fusion without KG-C adapter simply uses the representation of PLM as a query, zero-shot fusion with KG-C adapter leverages the representation of KG-C adapter. To illustrate this strength, we visualize the attention probability of (CLS) token from each fusion layer as a representative in Figure 4. The column of the darker cell indicates the adapter that has the bigger influence on the fused representation. We can observe that zero-shot fusion with KG-C adapter fuses the knowledge from different experts with a subtle difference rather than focusing on a single expert severely. This implies that KG-C adapter enables the delicate balancing between multiple knowledge sources based on the KG-alignment awareness, which leads to performance improvements in commonsense reasoning tasks. Interestingly, both cases have the ability not to focus on the expert adapter based on WikiData, which can be seen as a redundant expert.444The zero-shot fusion with KG-C adapter using AT, CN, and WN shows the best average performance in Table 10. 
This observation would benefit from the further study that explores the optimal combination of KGs by expert selection or rejection. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_30", "text": " In this experiment, we compare the amount of interference in the MTL and zero-shot fusion with KG-C adapter. We propose a novel evaluation metric, the interference ratio, which is the percentage of the incorrectly predicted samples by the multi-KG models among the correctly predicted samples from the STL models in common. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_31", "text": " Using the interference ratio, we can precisely compare the negative effects of multi-KG models on knowledge aggregation since the only reason to get the correct samples wrong is the interference caused by learning with additional KGs. We present the interference ratio of the models on five benchmark datasets in Figure 5. This figure shows that MTL has the higher interference ratio than the competing models across all benchmarks. Our method achieves a substantially better ratio, especially when KG-C adapter is used. This demonstrates the efficacy of our framework in mitigating interference between knowledge, which is one of the major problems of MTL. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_32", "text": " To verify the ability of our model to aggregate different types of KGs, we compare the relative performance gains of MTL and zero-shot fusion with KG-C adapter when increasing the number of KGs. The performance of all KG-combinations for each framework is presented in Table 9 and Table 10. We visualize the improvement of performance for five benchmark development sets, leveraging heatmaps in Figure 6. Here, for the sake of brevity, we denote our framework with KG-C adapter as our method. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_33", "text": " For MTL in Figure 6 (a), the color of the cell denotes the relative improvement of MTL with the combination of KGs over the best performance among the STL-PLM of KGs. Also, for our method in Figure 6 (b), the relative improvement is measured based on the best performance among the STL-Adapter of KGs, considering the difference of the base architecture for MTL (i.e. PLM) and zero-shot fusion (i.e. PLM with adapter). The green and red colors denote the increase and decrease of performance, respectively, when using multiple KGs together. The greener color on the cells indicates that the approach benefits from an increasing number of KGs, which implies aggregating knowledge successfully. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_34", "text": " In Figure 6, while the MTL tends to show the decrease of the performance when more KGs are utilized for training, our method obtains relative performance improvement across most of benchmarks. In both framework, the slightly degraded performance of the combination of KGs without ATOMIC could be due to the strong alignment between ATOMIC and SIQA. 
Except for the above case, we can observe that as more KGs are leveraged, the color of the cell gets greener, which implies that our method gains more advantages for better performance. This demonstrates that our method enables knowledge aggregation for multiple KGs synergetically. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_35", "text": " Despite the existence of various types of commonsense KGs, utilizing multiple KGs has not been explored enough in the commonsense reasoning field. Motivated by this, this paper proposes a modularized transfer learning framework to fuse the knowledge from multiple KGs efficiently for zero-shot commonsense reasoning. Our framework consists of KG modularization for expert adapter, zero-shot fusion and KG-Classifier adapter. Extensive experiments show that our framework obtains strong improvements over MTL on five commonsense reasoning benchmarks. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_36", "text": " In the future, our work can be extended to adapt our methods to further various multiple KGs with studies of appropriate scale for KG modularization. In addition, based on our hypothesis that the existence of an optimal combination, we can explore the study for the optional use of modularized KG experts for the best transfer learning. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" } ]
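The zero-shot fusion layer quoted in the contexts above (Eqs. 6-8: the PLM or KG-Classifier hidden state serves as the query, and the K expert-adapter outputs serve as keys and values) can be sketched as follows. This is a minimal PyTorch illustration under assumed shapes and module names, not the authors' released AdapterFusion code.

```python
# Minimal sketch of the attention-like fusion over K frozen expert adapters,
# following Eqs. 6-8 in the contexts above. Shapes, module names, and the
# plain dot-product softmax are assumptions, not the released implementation.
import torch
import torch.nn as nn

class ZeroShotFusion(nn.Module):
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.W_q = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.W_k = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.W_v = nn.Linear(hidden_dim, hidden_dim, bias=False)

    def forward(self, h_query: torch.Tensor, h_experts: torch.Tensor) -> torch.Tensor:
        """
        h_query:   (batch, seq, hidden)    -- PLM or KG-Classifier hidden state
        h_experts: (batch, seq, K, hidden) -- outputs of the K expert adapters
        returns:   (batch, seq, hidden)    -- fused representation z_l
        """
        q = self.W_q(h_query).unsqueeze(2)              # (B, S, 1, H)
        k = self.W_k(h_experts)                         # (B, S, K, H)
        v = self.W_v(h_experts)                         # (B, S, K, H)
        attn = torch.softmax((q * k).sum(-1), dim=-1)   # (B, S, K): one weight per expert
        return (attn.unsqueeze(-1) * v).sum(2)          # weighted sum over experts

# Usage sketch: fuse K = 4 frozen expert adapters (e.g. ATOMIC, ConceptNet,
# WikiData, WordNet); only the fusion parameters would be trained.
B, S, K, H = 2, 16, 4, 1024
fusion = ZeroShotFusion(H)
z = fusion(torch.randn(B, S, H), torch.randn(B, S, K, H))  # -> (2, 16, 1024)
```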
Is GLUE a benchmark for BERT or a corpus for BERT?
GLUE is a benchmark for evaluating BERT, not a pretraining corpus; it is a collection of nine datasets for evaluating natural language understanding systems [22].
[ 22 ]
[ { "id": "1907.11692_all_0", "text": " Self-training methods such as ELMo Peters et al. (2018), GPT Radford et al. (2018), BERT Devlin et al. (2019), XLM Lample and Conneau (2019), and XLNet Yang et al. (2019) have brought significant performance gains, but it can be challenging to determine which aspects of the methods contribute the most. Training is computationally expensive, limiting the amount of tuning that can be done, and is often done with private training data of varying sizes, limiting our ability to measure the effects of the modeling advances. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_1", "text": " We present a replication study of BERT pretraining Devlin et al. (2019), which includes a careful evaluation of the effects of hyperparmeter tuning and training set size. We find that BERT was significantly undertrained and propose an improved recipe for training BERT models, which we call RoBERTa, that can match or exceed the performance of all of the post-BERT methods. Our modifications are simple, they include: (1) training the model longer, with bigger batches, over more data; (2) removing the next sentence prediction objective; (3) training on longer sequences; and (4) dynamically changing the masking pattern applied to the training data. We also collect a large new dataset (CC-News) of comparable size to other privately used datasets, to better control for training set size effects. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_2", "text": " When controlling for training data, our improved training procedure improves upon the published BERT results on both GLUE and SQuAD. When trained for longer over additional data, our model achieves a score of 88.5 on the public GLUE leaderboard, matching the 88.4 reported by Yang et al. (2019). Our model establishes a new state-of-the-art on 4/9 of the GLUE tasks: MNLI, QNLI, RTE and STS-B. We also match state-of-the-art results on SQuAD and RACE. Overall, we re-establish that BERT’s masked language model training objective is competitive with other recently proposed training objectives such as perturbed autoregressive language modeling Yang et al. (2019).222It is possible that these other methods could also improve with more tuning. We leave this exploration to future work. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_3", "text": " In summary, the contributions of this paper are: (1) We present a set of important BERT design choices and training strategies and introduce alternatives that lead to better downstream task performance; (2) We use a novel dataset, CC-News, and confirm that using more data for pretraining further improves performance on downstream tasks; (3) Our training improvements show that masked language model pretraining, under the right design choices, is competitive with all other recently published methods. We release our model, pretraining and fine-tuning code implemented in PyTorch Paszke et al. (2017). ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_4", "text": " In this section, we give a brief overview of the BERT Devlin et al. (2019) pretraining approach and some of the training choices that we will examine experimentally in the following section. 
", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_5", "text": " BERT takes as input a concatenation of two segments (sequences of tokens), x1,…,xNsubscript𝑥1…subscript𝑥𝑁x_{1},\\ldots,x_{N} and y1,…,yMsubscript𝑦1…subscript𝑦𝑀y_{1},\\ldots,y_{M}. Segments usually consist of more than one natural sentence. The two segments are presented as a single input sequence to BERT with special tokens delimiting them: (𝐶𝐿𝑆),x1,…,xN,(𝑆𝐸𝑃),y1,…,yM,(𝐸𝑂𝑆)delimited-()𝐶𝐿𝑆subscript𝑥1…subscript𝑥𝑁delimited-()𝑆𝐸𝑃subscript𝑦1…subscript𝑦𝑀delimited-()𝐸𝑂𝑆(\\mathit{CLS}),x_{1},\\ldots,x_{N},(\\mathit{SEP}),y_{1},\\ldots,y_{M},(\\mathit{EOS}). M𝑀M and N𝑁N are constrained such that M+N<T𝑀𝑁𝑇M+N<T, where T𝑇T is a parameter that controls the maximum sequence length during training. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_6", "text": " The model is first pretrained on a large unlabeled text corpus and subsequently finetuned using end-task labeled data. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_7", "text": " BERT uses the now ubiquitous transformer architecture Vaswani et al. (2017), which we will not review in detail. We use a transformer architecture with L𝐿L layers. Each block uses A𝐴A self-attention heads and hidden dimension H𝐻H. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_8", "text": " During pretraining, BERT uses two objectives: masked language modeling and next sentence prediction. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_9", "text": " A random sample of the tokens in the input sequence is selected and replaced with the special token (𝑀𝐴𝑆𝐾)delimited-()𝑀𝐴𝑆𝐾(\\mathit{MASK}). The MLM objective is a cross-entropy loss on predicting the masked tokens. BERT uniformly selects 15% of the input tokens for possible replacement. Of the selected tokens, 80% are replaced with (𝑀𝐴𝑆𝐾)delimited-()𝑀𝐴𝑆𝐾(\\mathit{MASK}), 10% are left unchanged, and 10% are replaced by a randomly selected vocabulary token. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_10", "text": " In the original implementation, random masking and replacement is performed once in the beginning and saved for the duration of training, although in practice, data is duplicated so the mask is not always the same for every training sentence (see Section 4.1). ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_11", "text": " NSP is a binary classification loss for predicting whether two segments follow each other in the original text. Positive examples are created by taking consecutive sentences from the text corpus. Negative examples are created by pairing segments from different documents. Positive and negative examples are sampled with equal probability. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_12", "text": " The NSP objective was designed to improve performance on downstream tasks, such as Natural Language Inference Bowman et al. (2015), which require reasoning about the relationships between pairs of sentences. 
", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_13", "text": " BERT is optimized with Adam Kingma and Ba (2015) using the following parameters: β1=0.9subscript𝛽10.9\\beta_{1}=0.9, β2=0.999subscript𝛽20.999\\beta_{2}=0.999, ϵ=1e-6italic-ϵ1e-6\\epsilon=\\text{1e-6} and L2subscript𝐿2L_{2} weight decay of 0.010.010.01. The learning rate is warmed up over the first 10,000 steps to a peak value of 1e-4, and then linearly decayed. BERT trains with a dropout of 0.1 on all layers and attention weights, and a GELU activation function Hendrycks and Gimpel (2016). Models are pretrained for S=1,000,000𝑆1,000,000S=\\text{1,000,000} updates, with minibatches containing B=256𝐵256B=\\text{256} sequences of maximum length T=512𝑇512T=\\text{512} tokens. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_14", "text": " BERT is trained on a combination of BookCorpus Zhu et al. (2015) plus English Wikipedia, which totals 16GB of uncompressed text.333Yang et al. (2019) use the same dataset but report having only 13GB of text after data cleaning. This is most likely due to subtle differences in cleaning of the Wikipedia data. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_15", "text": " In this section, we describe the experimental setup for our replication study of BERT. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_16", "text": " We reimplement BERT in fairseq Ott et al. (2019). We primarily follow the original BERT optimization hyperparameters, given in Section 2, except for the peak learning rate and number of warmup steps, which are tuned separately for each setting. We additionally found training to be very sensitive to the Adam epsilon term, and in some cases we obtained better performance or improved stability after tuning it. Similarly, we found setting β2=0.98subscript𝛽20.98\\beta_{2}=0.98 to improve stability when training with large batch sizes. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_17", "text": " We pretrain with sequences of at most T=512𝑇512T=512 tokens. Unlike Devlin et al. (2019), we do not randomly inject short sequences, and we do not train with a reduced sequence length for the first 90% of updates. We train only with full-length sequences. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_18", "text": " We train with mixed precision floating point arithmetic on DGX-1 machines, each with 8 ×\\times 32GB Nvidia V100 GPUs interconnected by Infiniband Micikevicius et al. (2018). ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_19", "text": " BERT-style pretraining crucially relies on large quantities of text. Baevski et al. (2019) demonstrate that increasing data size can result in improved end-task performance. Several efforts have trained on datasets larger and more diverse than the original BERT Radford et al. (2019); Yang et al. (2019); Zellers et al. (2019). Unfortunately, not all of the additional datasets can be publicly released. For our study, we focus on gathering as much data as possible for experimentation, allowing us to match the overall quality and quantity of data as appropriate for each comparison. 
", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_20", "text": " We consider five English-language corpora of varying sizes and domains, totaling over 160GB of uncompressed text. We use the following text corpora: • BookCorpus Zhu et al. (2015) plus English Wikipedia. This is the original data used to train BERT. (16GB). • CC-News, which we collected from the English portion of the CommonCrawl News dataset Nagel (2016). The data contains 63 million English news articles crawled between September 2016 and February 2019. (76GB after filtering).444We use news-please Hamborg et al. (2017) to collect and extract CC-News. CC-News is similar to the RealNews dataset described in Zellers et al. (2019). • OpenWebText Gokaslan and Cohen (2019), an open-source recreation of the WebText corpus described in Radford et al. (2019). The text is web content extracted from URLs shared on Reddit with at least three upvotes. (38GB).555The authors and their affiliated institutions are not in any way affiliated with the creation of the OpenWebText dataset. • Stories, a dataset introduced in Trinh and Le (2018) containing a subset of CommonCrawl data filtered to match the story-like style of Winograd schemas. (31GB). ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_21", "text": " Following previous work, we evaluate our pretrained models on downstream tasks using the following three benchmarks. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_22", "text": " The General Language Understanding Evaluation (GLUE) benchmark Wang et al. (2019b) is a collection of 9 datasets for evaluating natural language understanding systems.666The datasets are: CoLA Warstadt et al. (2018), Stanford Sentiment Treebank (SST) Socher et al. (2013), Microsoft Research Paragraph Corpus (MRPC) Dolan and Brockett (2005), Semantic Textual Similarity Benchmark (STS) Agirre et al. (2007), Quora Question Pairs (QQP) Iyer et al. (2016), Multi-Genre NLI (MNLI) Williams et al. (2018), Question NLI (QNLI) Rajpurkar et al. (2016), Recognizing Textual Entailment (RTE) Dagan et al. (2006); Bar-Haim et al. (2006); Giampiccolo et al. (2007); Bentivogli et al. (2009) and Winograd NLI (WNLI) Levesque et al. (2011). Tasks are framed as either single-sentence classification or sentence-pair classification tasks. The GLUE organizers provide training and development data splits as well as a submission server and leaderboard that allows participants to evaluate and compare their systems on private held-out test data. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_23", "text": " For the replication study in Section 4, we report results on the development sets after finetuning the pretrained models on the corresponding single-task training data (i.e., without multi-task training or ensembling). Our finetuning procedure follows the original BERT paper Devlin et al. (2019). ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_24", "text": " In Section 5 we additionally report test set results obtained from the public leaderboard. These results depend on a several task-specific modifications, which we describe in Section 5.1. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_25", "text": " The Stanford Question Answering Dataset (SQuAD) provides a paragraph of context and a question. 
The task is to answer the question by extracting the relevant span from the context. We evaluate on two versions of SQuAD: V1.1 and V2.0 Rajpurkar et al. (2016, 2018). In V1.1 the context always contains an answer, whereas in V2.0 some questions are not answered in the provided context, making the task more challenging. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_26", "text": " For SQuAD V1.1 we adopt the same span prediction method as BERT Devlin et al. (2019). For SQuAD V2.0, we add an additional binary classifier to predict whether the question is answerable, which we train jointly by summing the classification and span loss terms. During evaluation, we only predict span indices on pairs that are classified as answerable. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_27", "text": " The ReAding Comprehension from Examinations (RACE) Lai et al. (2017) task is a large-scale reading comprehension dataset with more than 28,000 passages and nearly 100,000 questions. The dataset is collected from English examinations in China, which are designed for middle and high school students. In RACE, each passage is associated with multiple questions. For every question, the task is to select one correct answer from four options. RACE has significantly longer context than other popular reading comprehension datasets and the proportion of questions that requires reasoning is very large. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_28", "text": " This section explores and quantifies which choices are important for successfully pretraining BERT models. We keep the model architecture fixed.777Studying architectural changes, including larger architectures, is an important area for future work. Specifically, we begin by training BERT models with the same configuration as BERTbasebase{}_{\\textsc{base}} (L=12𝐿12L=12, H=768𝐻768H=768, A=12𝐴12A=12, 110M params). ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_29", "text": " As discussed in Section 2, BERT relies on randomly masking and predicting tokens. The original BERT implementation performed masking once during data preprocessing, resulting in a single static mask. To avoid using the same mask for each training instance in every epoch, training data was duplicated 10 times so that each sequence is masked in 10 different ways over the 40 epochs of training. Thus, each training sequence was seen with the same mask four times during training. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_30", "text": " We compare this strategy with dynamic masking where we generate the masking pattern every time we feed a sequence to the model. This becomes crucial when pretraining for more steps or with larger datasets. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_31", "text": " Table 1 compares the published BERTbasebase{}_{\\textsc{base}} results from Devlin et al. (2019) to our reimplementation with either static or dynamic masking. We find that our reimplementation with static masking performs similar to the original BERT model, and dynamic masking is comparable or slightly better than static masking. 
", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_32", "text": " Given these results and the additional efficiency benefits of dynamic masking, we use dynamic masking in the remainder of the experiments. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_33", "text": " In the original BERT pretraining procedure, the model observes two concatenated document segments, which are either sampled contiguously from the same document (with p=0.5𝑝0.5p=0.5) or from distinct documents. In addition to the masked language modeling objective, the model is trained to predict whether the observed document segments come from the same or distinct documents via an auxiliary Next Sentence Prediction (NSP) loss. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_34", "text": " The NSP loss was hypothesized to be an important factor in training the original BERT model. Devlin et al. (2019) observe that removing NSP hurts performance, with significant performance degradation on QNLI, MNLI, and SQuAD 1.1. However, some recent work has questioned the necessity of the NSP loss Lample and Conneau (2019); Yang et al. (2019); Joshi et al. (2019). ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_35", "text": " To better understand this discrepancy, we compare several alternative training formats: • segment-pair+nsp: This follows the original input format used in BERT Devlin et al. (2019), with the NSP loss. Each input has a pair of segments, which can each contain multiple natural sentences, but the total combined length must be less than 512 tokens. • sentence-pair+nsp: Each input contains a pair of natural sentences, either sampled from a contiguous portion of one document or from separate documents. Since these inputs are significantly shorter than 512 tokens, we increase the batch size so that the total number of tokens remains similar to segment-pair+nsp. We retain the NSP loss. • full-sentences: Each input is packed with full sentences sampled contiguously from one or more documents, such that the total length is at most 512 tokens. Inputs may cross document boundaries. When we reach the end of one document, we begin sampling sentences from the next document and add an extra separator token between documents. We remove the NSP loss. • doc-sentences: Inputs are constructed similarly to full-sentences, except that they may not cross document boundaries. Inputs sampled near the end of a document may be shorter than 512 tokens, so we dynamically increase the batch size in these cases to achieve a similar number of total tokens as full-sentences. We remove the NSP loss. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_36", "text": " Table 2 shows results for the four different settings. We first compare the original segment-pair input format from Devlin et al. (2019) to the sentence-pair format; both formats retain the NSP loss, but the latter uses single sentences. We find that using individual sentences hurts performance on downstream tasks, which we hypothesize is because the model is not able to learn long-range dependencies. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_37", "text": " We next compare training without the NSP loss and training with blocks of text from a single document (doc-sentences). 
We find that this setting outperforms the originally published BERTbasebase{}_{\\textsc{base}} results and that removing the NSP loss matches or slightly improves downstream task performance, in contrast to Devlin et al. (2019). It is possible that the original BERT implementation may only have removed the loss term while still retaining the segment-pair input format. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_38", "text": " Finally we find that restricting sequences to come from a single document (doc-sentences) performs slightly better than packing sequences from multiple documents (full-sentences). However, because the doc-sentences format results in variable batch sizes, we use full-sentences in the remainder of our experiments for easier comparison with related work. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_39", "text": " Past work in Neural Machine Translation has shown that training with very large mini-batches can both improve optimization speed and end-task performance when the learning rate is increased appropriately Ott et al. (2018). Recent work has shown that BERT is also amenable to large batch training You et al. (2019). ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_40", "text": " Devlin et al. (2019) originally trained BERTbasebase{}_{\\textsc{base}} for 1M steps with a batch size of 256 sequences. This is equivalent in computational cost, via gradient accumulation, to training for 125K steps with a batch size of 2K sequences, or for 31K steps with a batch size of 8K. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_41", "text": " In Table 3 we compare perplexity and end-task performance of BERTbasebase{}_{\\textsc{base}} as we increase the batch size, controlling for the number of passes through the training data. We observe that training with large batches improves perplexity for the masked language modeling objective, as well as end-task accuracy. Large batches are also easier to parallelize via distributed data parallel training,888Large batch training can improve training efficiency even without large scale parallel hardware through gradient accumulation, whereby gradients from multiple mini-batches are accumulated locally before each optimization step. This functionality is supported natively in fairseq Ott et al. (2019). and in later experiments we train with batches of 8K sequences. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_42", "text": " Notably You et al. (2019) train BERT with even larger batche sizes, up to 32K sequences. We leave further exploration of the limits of large batch training to future work. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_43", "text": " Byte-Pair Encoding (BPE) Sennrich et al. (2016) is a hybrid between character- and word-level representations that allows handling the large vocabularies common in natural language corpora. Instead of full words, BPE relies on subwords units, which are extracted by performing statistical analysis of the training corpus. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_44", "text": " BPE vocabulary sizes typically range from 10K-100K subword units. 
However, unicode characters can account for a sizeable portion of this vocabulary when modeling large and diverse corpora, such as the ones considered in this work. Radford et al. (2019) introduce a clever implementation of BPE that uses bytes instead of unicode characters as the base subword units. Using bytes makes it possible to learn a subword vocabulary of a modest size (50K units) that can still encode any input text without introducing any “unknown” tokens. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_45", "text": " The original BERT implementation Devlin et al. (2019) uses a character-level BPE vocabulary of size 30K, which is learned after preprocessing the input with heuristic tokenization rules. Following Radford et al. (2019), we instead consider training BERT with a larger byte-level BPE vocabulary containing 50K subword units, without any additional preprocessing or tokenization of the input. This adds approximately 15M and 20M additional parameters for BERTbasebase{}_{\\textsc{base}} and BERTlargelarge{}_{\\textsc{large}}, respectively. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_46", "text": " Early experiments revealed only slight differences between these encodings, with the Radford et al. (2019) BPE achieving slightly worse end-task performance on some tasks. Nevertheless, we believe the advantages of a universal encoding scheme outweighs the minor degredation in performance and use this encoding in the remainder of our experiments. A more detailed comparison of these encodings is left to future work. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_47", "text": " In the previous section we propose modifications to the BERT pretraining procedure that improve end-task performance. We now aggregate these improvements and evaluate their combined impact. We call this configuration RoBERTa for Robustly optimized BERT approach. Specifically, RoBERTa is trained with dynamic masking (Section 4.1), full-sentences without NSP loss (Section 4.2), large mini-batches (Section 4.3) and a larger byte-level BPE (Section 4.4). ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_48", "text": " Additionally, we investigate two other important factors that have been under-emphasized in previous work: (1) the data used for pretraining, and (2) the number of training passes through the data. For example, the recently proposed XLNet architecture Yang et al. (2019) is pretrained using nearly 10 times more data than the original BERT Devlin et al. (2019). It is also trained with a batch size eight times larger for half as many optimization steps, thus seeing four times as many sequences in pretraining compared to BERT. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_49", "text": " To help disentangle the importance of these factors from other modeling choices (e.g., the pretraining objective), we begin by training RoBERTa following the BERTlargelarge{}_{\\textsc{large}} architecture (L=24𝐿24L=24, H=1024𝐻1024H=1024, A=16𝐴16A=16, 355M parameters). We pretrain for 100K steps over a comparable BookCorpus plus Wikipedia dataset as was used in Devlin et al. (2019). We pretrain our model using 1024 V100 GPUs for approximately one day. 
", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_50", "text": " We present our results in Table 4. When controlling for training data, we observe that RoBERTa provides a large improvement over the originally reported BERTlargelarge{}_{\\textsc{large}} results, reaffirming the importance of the design choices we explored in Section 4. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_51", "text": " Next, we combine this data with the three additional datasets described in Section 3.2. We train RoBERTa over the combined data with the same number of training steps as before (100K). In total, we pretrain over 160GB of text. We observe further improvements in performance across all downstream tasks, validating the importance of data size and diversity in pretraining.999Our experiments conflate increases in data size and diversity. We leave a more careful analysis of these two dimensions to future work. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_52", "text": " Finally, we pretrain RoBERTa for significantly longer, increasing the number of pretraining steps from 100K to 300K, and then further to 500K. We again observe significant gains in downstream task performance, and the 300K and 500K step models outperform XLNetlargelarge{}_{\\textsc{large}} across most tasks. We note that even our longest-trained model does not appear to overfit our data and would likely benefit from additional training. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_53", "text": " In the rest of the paper, we evaluate our best RoBERTa model on the three different benchmarks: GLUE, SQuaD and RACE. Specifically we consider RoBERTa trained for 500K steps over all five of the datasets introduced in Section 3.2. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_54", "text": " For GLUE we consider two finetuning settings. In the first setting (single-task, dev) we finetune RoBERTa separately for each of the GLUE tasks, using only the training data for the corresponding task. We consider a limited hyperparameter sweep for each task, with batch sizes ∈{16,32}absent1632\\in\\{16,32\\} and learning rates ∈{1​e−5,2​e−5,3​e−5}absent1𝑒52𝑒53𝑒5\\in\\{1e-5,2e-5,3e-5\\}, with a linear warmup for the first 6% of steps followed by a linear decay to 0. We finetune for 10 epochs and perform early stopping based on each task’s evaluation metric on the dev set. The rest of the hyperparameters remain the same as during pretraining. In this setting, we report the median development set results for each task over five random initializations, without model ensembling. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_55", "text": " In the second setting (ensembles, test), we compare RoBERTa to other approaches on the test set via the GLUE leaderboard. While many submissions to the GLUE leaderboard depend on multi-task finetuning, our submission depends only on single-task finetuning. For RTE, STS and MRPC we found it helpful to finetune starting from the MNLI single-task model, rather than the baseline pretrained RoBERTa. We explore a slightly wider hyperparameter space, described in the Appendix, and ensemble between 5 and 7 models per task. 
", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_56", "text": " Two of the GLUE tasks require task-specific finetuning approaches to achieve competitive leaderboard results. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_57", "text": " QNLI: Recent submissions on the GLUE leaderboard adopt a pairwise ranking formulation for the QNLI task, in which candidate answers are mined from the training set and compared to one another, and a single (question, candidate) pair is classified as positive Liu et al. (2019b, a); Yang et al. (2019). This formulation significantly simplifies the task, but is not directly comparable to BERT Devlin et al. (2019). Following recent work, we adopt the ranking approach for our test submission, but for direct comparison with BERT we report development set results based on a pure classification approach. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_58", "text": " WNLI: We found the provided NLI-format data to be challenging to work with. Instead we use the reformatted WNLI data from SuperGLUE Wang et al. (2019a), which indicates the span of the query pronoun and referent. We finetune RoBERTa using the margin ranking loss from Kocijan et al. (2019). For a given input sentence, we use spaCy Honnibal and Montani (2017) to extract additional candidate noun phrases from the sentence and finetune our model so that it assigns higher scores to positive referent phrases than for any of the generated negative candidate phrases. One unfortunate consequence of this formulation is that we can only make use of the positive training examples, which excludes over half of the provided training examples.101010While we only use the provided WNLI training data, our results could potentially be improved by augmenting this with additional pronoun disambiguation datasets. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_59", "text": " We present our results in Table 5. In the first setting (single-task, dev), RoBERTa achieves state-of-the-art results on all 9 of the GLUE task development sets. Crucially, RoBERTa uses the same masked language modeling pretraining objective and architecture as BERTlargelarge{}_{\\textsc{large}}, yet consistently outperforms both BERTlargelarge{}_{\\textsc{large}} and XLNetlargelarge{}_{\\textsc{large}}. This raises questions about the relative importance of model architecture and pretraining objective, compared to more mundane details like dataset size and training time that we explore in this work. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_60", "text": " In the second setting (ensembles, test), we submit RoBERTa to the GLUE leaderboard and achieve state-of-the-art results on 4 out of 9 tasks and the highest average score to date. This is especially exciting because RoBERTa does not depend on multi-task finetuning, unlike most of the other top submissions. We expect future work may further improve these results by incorporating more sophisticated multi-task finetuning procedures. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_61", "text": " We adopt a much simpler approach for SQuAD compared to past work. In particular, while both BERT Devlin et al. (2019) and XLNet Yang et al. 
(2019) augment their training data with additional QA datasets, we only finetune RoBERTa using the provided SQuAD training data. Yang et al. (2019) also employed a custom layer-wise learning rate schedule to finetune XLNet, while we use the same learning rate for all layers. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_62", "text": " For SQuAD v1.1 we follow the same finetuning procedure as Devlin et al. (2019). For SQuAD v2.0, we additionally classify whether a given question is answerable; we train this classifier jointly with the span predictor by summing the classification and span loss terms. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_63", "text": " We present our results in Table 6. On the SQuAD v1.1 development set, RoBERTa matches the state-of-the-art set by XLNet. On the SQuAD v2.0 development set, RoBERTa sets a new state-of-the-art, improving over XLNet by 0.4 points (EM) and 0.6 points (F1). ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_64", "text": " We also submit RoBERTa to the public SQuAD 2.0 leaderboard and evaluate its performance relative to other systems. Most of the top systems build upon either BERT Devlin et al. (2019) or XLNet Yang et al. (2019), both of which rely on additional external training data. In contrast, our submission does not use any additional data. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_65", "text": " Our single RoBERTa model outperforms all but one of the single model submissions, and is the top scoring system among those that do not rely on data augmentation. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_66", "text": " In RACE, systems are provided with a passage of text, an associated question, and four candidate answers. Systems are required to classify which of the four candidate answers is correct. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_67", "text": " We modify RoBERTa for this task by concatenating each candidate answer with the corresponding question and passage. We then encode each of these four sequences and pass the resulting (CLS) representations through a fully-connected layer, which is used to predict the correct answer. We truncate question-answer pairs that are longer than 128 tokens and, if needed, the passage so that the total length is at most 512 tokens. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_68", "text": " Results on the RACE test sets are presented in Table 7. RoBERTa achieves state-of-the-art results on both middle-school and high-school settings. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_69", "text": " Pretraining methods have been designed with different training objectives, including language modeling Dai and Le (2015); Peters et al. (2018); Howard and Ruder (2018), machine translation McCann et al. (2017), and masked language modeling Devlin et al. (2019); Lample and Conneau (2019). Many recent papers have used a basic recipe of finetuning models for each end task Howard and Ruder (2018); Radford et al. (2018), and pretraining with some variant of a masked language model objective. However, newer methods have improved performance by multi-task fine tuning Dong et al. 
(2019), incorporating entity embeddings Sun et al. (2019), span prediction Joshi et al. (2019), and multiple variants of autoregressive pretraining Song et al. (2019); Chan et al. (2019); Yang et al. (2019). Performance is also typically improved by training bigger models on more data Devlin et al. (2019); Baevski et al. (2019); Yang et al. (2019); Radford et al. (2019). Our goal was to replicate, simplify, and better tune the training of BERT, as a reference point for better understanding the relative performance of all of these methods. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_70", "text": " We carefully evaluate a number of design decisions when pretraining BERT models. We find that performance can be substantially improved by training the model longer, with bigger batches over more data; removing the next sentence prediction objective; training on longer sequences; and dynamically changing the masking pattern applied to the training data. Our improved pretraining procedure, which we call RoBERTa, achieves state-of-the-art results on GLUE, RACE and SQuAD, without multi-task finetuning for GLUE or additional data for SQuAD. These results illustrate the importance of these previously overlooked design decisions and suggest that BERT’s pretraining objective remains competitive with recently proposed alternatives. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "id": "1907.11692_all_71", "text": " We additionally use a novel dataset, CC-News, and release our models and code for pretraining and finetuning at: https://github.com/pytorch/fairseq. ", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" } ]
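The RoBERTa passages quoted above describe the masked-language-modeling scheme in detail: 15% of input tokens are selected, and of those 80% become (MASK), 10% are left unchanged, and 10% are replaced by a random vocabulary token; dynamic masking simply redraws this pattern each time a sequence is fed to the model. As an illustrative sketch only (the function name, the `mask_id`/`vocab_size` parameters, and the -100 ignore-index convention are assumptions, not part of the quoted paper or of this dataset), the scheme might be implemented as:

```python
import random

def dynamic_mask(token_ids, vocab_size, mask_id, select_prob=0.15):
    """Apply BERT-style masking to a fresh copy of a token sequence.

    Re-running this every time a sequence is batched corresponds to the
    "dynamic masking" setting described above; running it once during
    preprocessing and reusing the result corresponds to "static masking".
    """
    inputs = list(token_ids)
    labels = [-100] * len(inputs)           # -100 = position ignored by the MLM loss
    for i, tok in enumerate(token_ids):
        if random.random() >= select_prob:  # leave ~85% of tokens untouched
            continue
        labels[i] = tok                     # the original token is the prediction target
        r = random.random()
        if r < 0.8:                         # 80% of selected tokens -> (MASK)
            inputs[i] = mask_id
        elif r < 0.9:                       # 10% -> random vocabulary token
            inputs[i] = random.randrange(vocab_size)
        # remaining 10%: keep the original token unchanged
    return inputs, labels
```

Calling a function like this inside the data loader, rather than once during preprocessing, is what distinguishes the dynamic from the static masking setting compared in the passages above.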
In which characteristics did MobileNet show better performance compared to other models?
Compared with the other models, MobileNets showed better performance in reducing model size, computational complexity, and latency while maintaining comparable accuracy [46].
[ 46 ]
[ { "id": "1704.04861_all_0", "text": " Convolutional neural networks have become ubiquitous in computer vision ever since AlexNet popularized deep convolutional neural networks by winning the ImageNet Challenge: ILSVRC 2012 . The general trend has been to make deeper and more complicated networks in order to achieve higher accuracy (27, 31, 29, 8). However, these advances to improve accuracy are not necessarily making networks more efficient with respect to size and speed. In many real world applications such as robotics, self-driving car and augmented reality, the recognition tasks need to be carried out in a timely fashion on a computationally limited platform. ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_1", "text": " This paper describes an efficient network architecture and a set of two hyper-parameters in order to build very small, low latency models that can be easily matched to the design requirements for mobile and embedded vision applications. Section 2 reviews prior work in building small models. Section 3 describes the MobileNet architecture and two hyper-parameters width multiplier and resolution multiplier to define smaller and more efficient MobileNets. Section 4 describes experiments on ImageNet as well a variety of different applications and use cases. Section 5 closes with a summary and conclusion. ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_2", "text": " There has been rising interest in building small and efficient neural networks in the recent literature, e.g. (16, 34, 12, 36, 22). Many different approaches can be generally categorized into either compressing pretrained networks or training small networks directly. This paper proposes a class of network architectures that allows a model developer to specifically choose a small network that matches the resource restrictions (latency, size) for their application. MobileNets primarily focus on optimizing for latency but also yield small networks. Many papers on small networks focus only on size but do not consider speed. ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_3", "text": " MobileNets are built primarily from depthwise separable convolutions initially introduced in and subsequently used in Inception models to reduce the computation in the first few layers. Flattened networks build a network out of fully factorized convolutions and showed the potential of extremely factorized networks. Independent of this current paper, Factorized Networks introduces a similar factorized convolution as well as the use of topological connections. Subsequently, the Xception network demonstrated how to scale up depthwise separable filters to out perform Inception V3 networks. Another small network is Squeezenet which uses a bottleneck approach to design a very small network. Other reduced computation networks include structured transform networks and deep fried convnets . ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_4", "text": " A different approach for obtaining small networks is shrinking, factorizing or compressing pretrained networks. Compression based on product quantization , hashing , and pruning, vector quantization and Huffman coding have been proposed in the literature. 
Additionally various factorizations have been proposed to speed up pretrained networks (14, 20). Another method for training small networks is distillation which uses a larger network to teach a smaller network. It is complementary to our approach and is covered in some of our use cases in section 4. Another emerging approach is low bit networks (4, 22, 11). ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_5", "text": " In this section we first describe the core layers that MobileNet is built on which are depthwise separable filters. We then describe the MobileNet network structure and conclude with descriptions of the two model shrinking hyper-parameters width multiplier and resolution multiplier. ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_6", "text": " The MobileNet model is based on depthwise separable convolutions which is a form of factorized convolutions which factorize a standard convolution into a depthwise convolution and a 1×1111\\times 1 convolution called a pointwise convolution. For MobileNets the depthwise convolution applies a single filter to each input channel. The pointwise convolution then applies a 1×1111\\times 1 convolution to combine the outputs the depthwise convolution. A standard convolution both filters and combines inputs into a new set of outputs in one step. The depthwise separable convolution splits this into two layers, a separate layer for filtering and a separate layer for combining. This factorization has the effect of drastically reducing computation and model size. Figure 2 shows how a standard convolution 2(a) is factorized into a depthwise convolution 2(b) and a 1×1111\\times 1 pointwise convolution 2(c). ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_7", "text": " A standard convolutional layer takes as input a DF×DF×Msubscript𝐷𝐹subscript𝐷𝐹𝑀D_{F}\\times D_{F}\\times M feature map 𝐅𝐅\\mathbf{F} and produces a DF×DF×Nsubscript𝐷𝐹subscript𝐷𝐹𝑁D_{F}\\times D_{F}\\times N feature map 𝐆𝐆\\mathbf{G} where DFsubscript𝐷𝐹D_{F} is the spatial width and height of a square input feature map111We assume that the output feature map has the same spatial dimensions as the input and both feature maps are square. Our model shrinking results generalize to feature maps with arbitrary sizes and aspect ratios., M𝑀M is the number of input channels (input depth), DGsubscript𝐷𝐺D_{G} is the spatial width and height of a square output feature map and N𝑁N is the number of output channel (output depth). ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_8", "text": " The standard convolutional layer is parameterized by convolution kernel 𝐊𝐊\\mathbf{K} of size DK×DK×M×Nsubscript𝐷𝐾subscript𝐷𝐾𝑀𝑁D_{K}\\times D_{K}\\times M\\times N where DKsubscript𝐷𝐾D_{K} is the spatial dimension of the kernel assumed to be square and M𝑀M is number of input channels and N𝑁N is the number of output channels as defined previously. 
", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_9", "text": " The output feature map for standard convolution assuming stride one and padding is computed as: ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_10", "text": " 𝐆k,l,n=∑i,j,m𝐊i,j,m,n⋅𝐅k+i−1,l+j−1,msubscript𝐆𝑘𝑙𝑛subscript𝑖𝑗𝑚⋅subscript𝐊𝑖𝑗𝑚𝑛subscript𝐅𝑘𝑖1𝑙𝑗1𝑚\\mathbf{G}_{k,l,n}=\\sum_{i,j,m}\\mathbf{K}_{i,j,m,n}\\cdot\\mathbf{F}_{k+i-1,l+j-1,m} (1) ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_11", "text": " Standard convolutions have the computational cost of: ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_12", "text": " DK⋅DK⋅M⋅N⋅DF⋅DF⋅subscript𝐷𝐾subscript𝐷𝐾𝑀𝑁subscript𝐷𝐹subscript𝐷𝐹D_{K}\\cdot D_{K}\\cdot M\\cdot N\\cdot D_{F}\\cdot D_{F} (2) where the computational cost depends multiplicatively on the number of input channels M𝑀M, the number of output channels N𝑁N the kernel size Dk×Dksubscript𝐷𝑘subscript𝐷𝑘D_{k}\\times D_{k} and the feature map size DF×DFsubscript𝐷𝐹subscript𝐷𝐹D_{F}\\times D_{F}. MobileNet models address each of these terms and their interactions. First it uses depthwise separable convolutions to break the interaction between the number of output channels and the size of the kernel. ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_13", "text": " The standard convolution operation has the effect of filtering features based on the convolutional kernels and combining features in order to produce a new representation. The filtering and combination steps can be split into two steps via the use of factorized convolutions called depthwise separable convolutions for substantial reduction in computational cost. ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_14", "text": " Depthwise separable convolution are made up of two layers: depthwise convolutions and pointwise convolutions. We use depthwise convolutions to apply a single filter per each input channel (input depth). Pointwise convolution, a simple 1×1111\\times 1 convolution, is then used to create a linear combination of the output of the depthwise layer. MobileNets use both batchnorm and ReLU nonlinearities for both layers. ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_15", "text": " Depthwise convolution with one filter per input channel (input depth) can be written as: ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_16", "text": " 𝐆^k,l,m=∑i,j𝐊^i,j,m⋅𝐅k+i−1,l+j−1,msubscript^𝐆𝑘𝑙𝑚subscript𝑖𝑗⋅subscript^𝐊𝑖𝑗𝑚subscript𝐅𝑘𝑖1𝑙𝑗1𝑚\\hat{\\mathbf{G}}_{k,l,m}=\\sum_{i,j}\\hat{\\mathbf{K}}_{i,j,m}\\cdot\\mathbf{F}_{k+i-1,l+j-1,m} (3) where 𝐊^^𝐊\\hat{\\mathbf{K}} is the depthwise convolutional kernel of size DK×DK×Msubscript𝐷𝐾subscript𝐷𝐾𝑀D_{K}\\times D_{K}\\times M where the mt​hsubscript𝑚𝑡ℎm_{th} filter in 𝐊^^𝐊\\hat{\\mathbf{K}} is applied to the mt​hsubscript𝑚𝑡ℎm_{th} channel in 𝐅𝐅\\mathbf{F} to produce the mt​hsubscript𝑚𝑡ℎm_{th} channel of the filtered output feature map 𝐆^^𝐆\\hat{\\mathbf{G}}. 
", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_17", "text": " Depthwise convolution has a computational cost of: ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_18", "text": " DK⋅DK⋅M⋅DF⋅DF⋅subscript𝐷𝐾subscript𝐷𝐾𝑀subscript𝐷𝐹subscript𝐷𝐹D_{K}\\cdot D_{K}\\cdot M\\cdot D_{F}\\cdot D_{F} (4) ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_19", "text": " Depthwise convolution is extremely efficient relative to standard convolution. However it only filters input channels, it does not combine them to create new features. So an additional layer that computes a linear combination of the output of depthwise convolution via 1×1111\\times 1 convolution is needed in order to generate these new features. ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_20", "text": " The combination of depthwise convolution and 1×1111\\times 1 (pointwise) convolution is called depthwise separable convolution which was originally introduced in . ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_21", "text": " Depthwise separable convolutions cost: ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_22", "text": " DK⋅DK⋅M⋅DF⋅DF+M⋅N⋅DF⋅DF⋅subscript𝐷𝐾subscript𝐷𝐾𝑀subscript𝐷𝐹subscript𝐷𝐹⋅𝑀𝑁subscript𝐷𝐹subscript𝐷𝐹D_{K}\\cdot D_{K}\\cdot M\\cdot D_{F}\\cdot D_{F}+M\\cdot N\\cdot D_{F}\\cdot D_{F} (5) which is the sum of the depthwise and 1×1111\\times 1 pointwise convolutions. ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_23", "text": " By expressing convolution as a two step process of filtering and combining we get a reduction in computation of: ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_24", "text": " DK⋅DK⋅M⋅DF⋅DF+M⋅N⋅DF⋅DFDK⋅DK⋅M⋅N⋅DF⋅DF⋅subscript𝐷𝐾subscript𝐷𝐾𝑀subscript𝐷𝐹subscript𝐷𝐹⋅𝑀𝑁subscript𝐷𝐹subscript𝐷𝐹⋅subscript𝐷𝐾subscript𝐷𝐾𝑀𝑁subscript𝐷𝐹subscript𝐷𝐹\\displaystyle\\frac{D_{K}\\cdot D_{K}\\cdot M\\cdot D_{F}\\cdot D_{F}+M\\cdot N\\cdot D_{F}\\cdot D_{F}}{D_{K}\\cdot D_{K}\\cdot M\\cdot N\\cdot D_{F}\\cdot D_{F}} =\\displaystyle= 1N+1DK21𝑁1superscriptsubscript𝐷𝐾2\\displaystyle\\frac{1}{N}+\\frac{1}{D_{K}^{2}} ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_25", "text": " MobileNet uses 3×3333\\times 3 depthwise separable convolutions which uses between 8 to 9 times less computation than standard convolutions at only a small reduction in accuracy as seen in Section 4. ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_26", "text": " Additional factorization in spatial dimension such as in (16, 31) does not save much additional computation as very little computation is spent in depthwise convolutions. 
", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_27", "text": " The MobileNet structure is built on depthwise separable convolutions as mentioned in the previous section except for the first layer which is a full convolution. By defining the network in such simple terms we are able to easily explore network topologies to find a good network. The MobileNet architecture is defined in Table 1. All layers are followed by a batchnorm and ReLU nonlinearity with the exception of the final fully connected layer which has no nonlinearity and feeds into a softmax layer for classification. Figure 3 contrasts a layer with regular convolutions, batchnorm and ReLU nonlinearity to the factorized layer with depthwise convolution, 1×1111\\times 1 pointwise convolution as well as batchnorm and ReLU after each convolutional layer. Down sampling is handled with strided convolution in the depthwise convolutions as well as in the first layer. A final average pooling reduces the spatial resolution to 1 before the fully connected layer. Counting depthwise and pointwise convolutions as separate layers, MobileNet has 28 layers. ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_28", "text": " It is not enough to simply define networks in terms of a small number of Mult-Adds. It is also important to make sure these operations can be efficiently implementable. For instance unstructured sparse matrix operations are not typically faster than dense matrix operations until a very high level of sparsity. Our model structure puts nearly all of the computation into dense 1×1111\\times 1 convolutions. This can be implemented with highly optimized general matrix multiply (GEMM) functions. Often convolutions are implemented by a GEMM but require an initial reordering in memory called im2col in order to map it to a GEMM. For instance, this approach is used in the popular Caffe package . 1×1111\\times 1 convolutions do not require this reordering in memory and can be implemented directly with GEMM which is one of the most optimized numerical linear algebra algorithms. MobileNet spends 95%percent9595\\% of it’s computation time in 1×1111\\times 1 convolutions which also has 75%percent7575\\% of the parameters as can be seen in Table 2. Nearly all of the additional parameters are in the fully connected layer. ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_29", "text": " MobileNet models were trained in TensorFlow using RMSprop with asynchronous gradient descent similar to Inception V3 . However, contrary to training large models we use less regularization and data augmentation techniques because small models have less trouble with overfitting. When training MobileNets we do not use side heads or label smoothing and additionally reduce the amount image of distortions by limiting the size of small crops that are used in large Inception training . Additionally, we found that it was important to put very little or no weight decay (l2 regularization) on the depthwise filters since their are so few parameters in them. For the ImageNet benchmarks in the next section all models were trained with same training parameters regardless of the size of the model. 
", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_30", "text": " Although the base MobileNet architecture is already small and low latency, many times a specific use case or application may require the model to be smaller and faster. In order to construct these smaller and less computationally expensive models we introduce a very simple parameter α𝛼\\alpha called width multiplier. The role of the width multiplier α𝛼\\alpha is to thin a network uniformly at each layer. For a given layer and width multiplier α𝛼\\alpha, the number of input channels M𝑀M becomes α​M𝛼𝑀\\alpha M and the number of output channels N𝑁N becomes α​N𝛼𝑁\\alpha N. ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_31", "text": " The computational cost of a depthwise separable convolution with width multiplier α𝛼\\alpha is: DK⋅DK⋅α​M⋅DF⋅DF+α​M⋅α​N⋅DF⋅DF⋅⋅subscript𝐷𝐾subscript𝐷𝐾𝛼𝑀subscript𝐷𝐹subscript𝐷𝐹⋅⋅𝛼𝑀𝛼𝑁subscript𝐷𝐹subscript𝐷𝐹D_{K}\\cdot D_{K}\\cdot\\alpha M\\cdot D_{F}\\cdot D_{F}+\\alpha M\\cdot\\alpha N\\cdot D_{F}\\cdot D_{F} (6) where α∈(0,1)𝛼01\\alpha\\in(0,1) with typical settings of 1, 0.75, 0.5 and 0.25. α=1𝛼1\\alpha=1 is the baseline MobileNet and α<1𝛼1\\alpha<1 are reduced MobileNets. Width multiplier has the effect of reducing computational cost and the number of parameters quadratically by roughly α2superscript𝛼2\\alpha^{2}. Width multiplier can be applied to any model structure to define a new smaller model with a reasonable accuracy, latency and size trade off. It is used to define a new reduced structure that needs to be trained from scratch. ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_32", "text": " The second hyper-parameter to reduce the computational cost of a neural network is a resolution multiplier ρ𝜌\\rho. We apply this to the input image and the internal representation of every layer is subsequently reduced by the same multiplier. In practice we implicitly set ρ𝜌\\rho by setting the input resolution. ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_33", "text": " We can now express the computational cost for the core layers of our network as depthwise separable convolutions with width multiplier α𝛼\\alpha and resolution multiplier ρ𝜌\\rho: DK⋅DK⋅α​M⋅ρ​DF⋅ρ​DF+α​M⋅α​N⋅ρ​DF⋅ρ​DF⋅⋅⋅subscript𝐷𝐾subscript𝐷𝐾𝛼𝑀𝜌subscript𝐷𝐹𝜌subscript𝐷𝐹⋅⋅⋅𝛼𝑀𝛼𝑁𝜌subscript𝐷𝐹𝜌subscript𝐷𝐹D_{K}\\cdot D_{K}\\cdot\\alpha M\\cdot\\rho D_{F}\\cdot\\rho D_{F}+\\alpha M\\cdot\\alpha N\\cdot\\rho D_{F}\\cdot\\rho D_{F} (7) where ρ∈(0,1)𝜌01\\rho\\in(0,1) which is typically set implicitly so that the input resolution of the network is 224, 192, 160 or 128. ρ=1𝜌1\\rho=1 is the baseline MobileNet and ρ<1𝜌1\\rho<1 are reduced computation MobileNets. Resolution multiplier has the effect of reducing computational cost by ρ2superscript𝜌2\\rho^{2}. ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_34", "text": " As an example we can look at a typical layer in MobileNet and see how depthwise separable convolutions, width multiplier and resolution multiplier reduce the cost and parameters. Table 3 shows the computation and number of parameters for a layer as architecture shrinking methods are sequentially applied to the layer. 
The first row shows the Mult-Adds and parameters for a full convolutional layer with an input feature map of size 14×14×512141451214\\times 14\\times 512 with a kernel K𝐾K of size 3×3×512×512335125123\\times 3\\times 512\\times 512. We will look in detail in the next section at the trade offs between resources and accuracy. ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_35", "text": " In this section we first investigate the effects of depthwise convolutions as well as the choice of shrinking by reducing the width of the network rather than the number of layers. We then show the trade offs of reducing the network based on the two hyper-parameters: width multiplier and resolution multiplier and compare results to a number of popular models. We then investigate MobileNets applied to a number of different applications. ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_36", "text": " First we show results for MobileNet with depthwise separable convolutions compared to a model built with full convolutions. In Table 4 we see that using depthwise separable convolutions compared to full convolutions only reduces accuracy by 1%percent11\\% on ImageNet was saving tremendously on mult-adds and parameters. ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_37", "text": " We next show results comparing thinner models with width multiplier to shallower models using less layers. To make MobileNet shallower, the 555 layers of separable filters with feature size 14×14×512141451214\\times 14\\times 512 in Table 1 are removed. Table 5 shows that at similar computation and number of parameters, that making MobileNets thinner is 3%percent33\\% better than making them shallower. ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_38", "text": " Table 6 shows the accuracy, computation and size trade offs of shrinking the MobileNet architecture with the width multiplier α𝛼\\alpha. Accuracy drops off smoothly until the architecture is made too small at α=0.25𝛼0.25\\alpha=0.25. ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_39", "text": " Table 7 shows the accuracy, computation and size trade offs for different resolution multipliers by training MobileNets with reduced input resolutions. Accuracy drops off smoothly across resolution. ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_40", "text": " Figure 4 shows the trade off between ImageNet Accuracy and computation for the 16 models made from the cross product of width multiplier α∈{1,0.75,0.5,0.25}𝛼10.750.50.25\\alpha\\in\\{1,0.75,0.5,0.25\\} and resolutions {224,192,160,128}224192160128\\{224,192,160,128\\}. Results are log linear with a jump when models get very small at α=0.25𝛼0.25\\alpha=0.25. ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_41", "text": " Figure 5 shows the trade off between ImageNet Accuracy and number of parameters for the 16 models made from the cross product of width multiplier α∈{1,0.75,0.5,0.25}𝛼10.750.50.25\\alpha\\in\\{1,0.75,0.5,0.25\\} and resolutions {224,192,160,128}224192160128\\{224,192,160,128\\}. 
", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_42", "text": " Table 8 compares full MobileNet to the original GoogleNet and VGG16 . MobileNet is nearly as accurate as VGG16 while being 32 times smaller and 27 times less compute intensive. It is more accurate than GoogleNet while being smaller and more than 2.5 times less computation. ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_43", "text": " Table 9 compares a reduced MobileNet with width multiplier α=0.5𝛼0.5\\alpha=0.5 and reduced resolution 160×160160160160\\times 160. Reduced MobileNet is 4%percent44\\% better than AlexNet while being 45×45\\times smaller and 9.4×9.4\\times less compute than AlexNet. It is also 4%percent44\\% better than Squeezenet at about the same size and 22×22\\times less computation. ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_44", "text": " We train MobileNet for fine grained recognition on the Stanford Dogs dataset . We extend the approach of and collect an even larger but noisy training set than from the web. We use the noisy web data to pretrain a fine grained dog recognition model and then fine tune the model on the Stanford Dogs training set. Results on Stanford Dogs test set are in Table 10. MobileNet can almost achieve the state of the art results from at greatly reduced computation and size. ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_45", "text": " PlaNet casts the task of determining where on earth a photo was taken as a classification problem. The approach divides the earth into a grid of geographic cells that serve as the target classes and trains a convolutional neural network on millions of geo-tagged photos. PlaNet has been shown to successfully localize a large variety of photos and to outperform Im2GPS (6, 7) that addresses the same task. ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_46", "text": " We re-train PlaNet using the MobileNet architecture on the same data. While the full PlaNet model based on the Inception V3 architecture has 52 million parameters and 5.74 billion mult-adds. The MobileNet model has only 13 million parameters with the usual 3 million for the body and 10 million for the final layer and 0.58 Million mult-adds. As shown in Tab. 11, the MobileNet version delivers only slightly decreased performance compared to PlaNet despite being much more compact. Moreover, it still outperforms Im2GPS by a large margin. ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_47", "text": " Another use-case for MobileNet is compressing large systems with unknown or esoteric training procedures. In a face attribute classification task, we demonstrate a synergistic relationship between MobileNet and distillation , a knowledge transfer technique for deep networks. We seek to reduce a large face attribute classifier with 757575 million parameters and 160016001600 million Mult-Adds. The classifier is trained on a multi-attribute dataset similar to YFCC100M . 
", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_48", "text": " We distill a face attribute classifier using the MobileNet architecture. Distillation  works by training the classifier to emulate the outputs of a larger model222The emulation quality is measured by averaging the per-attribute cross-entropy over all attributes. instead of the ground-truth labels, hence enabling training from large (and potentially infinite) unlabeled datasets. Marrying the scalability of distillation training and the parsimonious parameterization of MobileNet, the end system not only requires no regularization (e.g. weight-decay and early-stopping), but also demonstrates enhanced performances. It is evident from Tab. 12 that the MobileNet-based classifier is resilient to aggressive model shrinking: it achieves a similar mean average precision across attributes (mean AP) as the in-house while consuming only 1%percent11\\% the Multi-Adds. ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_49", "text": " MobileNet can also be deployed as an effective base network in modern object detection systems. We report results for MobileNet trained for object detection on COCO data based on the recent work that won the 2016 COCO challenge . In table 13, MobileNet is compared to VGG and Inception V2 under both Faster-RCNN and SSD framework. In our experiments, SSD is evaluated with 300 input resolution (SSD 300) and Faster-RCNN is compared with both 300 and 600 input resolution (Faster-RCNN 300, Faster-RCNN 600). The Faster-RCNN model evaluates 300 RPN proposal boxes per image. The models are trained on COCO train+val excluding 8k minival images and evaluated on minival. For both frameworks, MobileNet achieves comparable results to other networks with only a fraction of computational complexity and model size. ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_50", "text": " The FaceNet model is a state of the art face recognition model . It builds face embeddings based on the triplet loss. To build a mobile FaceNet model we use distillation to train by minimizing the squared differences of the output of FaceNet and MobileNet on the training data. Results for very small MobileNet models can be found in table 14. ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" }, { "id": "1704.04861_all_51", "text": " We proposed a new model architecture called MobileNets based on depthwise separable convolutions. We investigated some of the important design decisions leading to an efficient model. We then demonstrated how to build smaller and faster MobileNets using width multiplier and resolution multiplier by trading off a reasonable amount of accuracy to reduce size and latency. We then compared different MobileNets to popular models demonstrating superior size, speed and accuracy characteristics. We concluded by demonstrating MobileNet’s effectiveness when applied to a wide variety of tasks. As a next step to help adoption and exploration of MobileNets, we plan on releasing models in Tensor Flow. ", "title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" } ]
What are the limitations of the parametric aggregation of knowledge with MTL?
The limitations of the parametric aggregation of knowledge with MTL are: (1) retraining the full model when adding new tasks; (2) catastrophic forgetting and interference between tasks, which make it difficult to solve each task equally well; and (3) inconsistent effects [2].
[ 2 ]
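The contexts below describe how each KG triple (e_head, r, e_tail) is turned into a synthetic multiple-choice QA sample: the head entity and relation become a templated natural-language question, the true tail entity is the correct option, and the distractors are tail entities sampled from other triples. The following is a minimal Python sketch of that construction; the templates, example triples, and the sampling details are made up for illustration and are not the paper's actual ones.

```python
import random

# Hypothetical question templates per relation; the paper uses its own
# natural-language templates, which are not reproduced here.
TEMPLATES = {
    "xWant": "{head}. As a result, PersonX wants",
    "AtLocation": "You are likely to find {head} in",
}

def triple_to_qa(triple, all_triples, num_distractors=2, seed=0):
    """Turn one KG triple into a (question, options, label) multiple-choice sample."""
    head, rel, tail = triple
    question = TEMPLATES[rel].format(head=head)
    rng = random.Random(seed)
    # Distractors: tail entities taken from other triples in the same KG.
    pool = [t for (_, _, t) in all_triples if t != tail]
    options = rng.sample(pool, num_distractors) + [tail]
    rng.shuffle(options)
    return {"question": question, "options": options, "label": options.index(tail)}

triples = [
    ("PersonX buys groceries", "xWant", "to cook dinner"),
    ("PersonX fails the exam", "xWant", "to study harder"),
    ("a book", "AtLocation", "a library"),
]
print(triple_to_qa(triples[0], triples))
```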
[ { "id": "2206.03715_all_0", "text": " The ability to understand natural language through commonsense reasoning is one of the core focuses in the field of natural language processing. To measure and study the different aspects of commonsense reasoning, several datasets are developed, such as SocialIQA (Sap et al., 2019b), CommonsenseQA (Talmor et al., 2018), and PhysicalIQA (Bisk et al., 2020), each requiring different type of commonsense knowledge (e.g., social, taxonomic, causal, declarative, etc) to select the correct answer. While large-scale neural systems (Devlin et al., 2018; Yang et al., 2019; Liu et al., 2019b) have shown human-level accuracy on these benchmarks, recent studies (Mitra et al., 2019) also criticize that these models solve individual datasets, rather than learning how to perform general semantic reasoning. To this end, Ma et al. (2021) suggested zero-shot evaluation as a genuine measure for the reasoning capability of the machine. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_1", "text": " Inspired by this new metric, in this work, we focus on building unsupervised zero-shot multiple-choice QA systems. That is, we target an arbitrary commonsense reasoning task where conventional approaches (that rely heavily on task-specific supervision) are not applicable to such zero-shot learning scenarios. To learn QA models without expensive annotation efforts, recent works (Ma et al., 2021; Banerjee and Baral, 2020; Malaviya et al., 2020) propose to generate a synthetic QA dataset using a commonsense KG such as ATOMIC (Sap et al., 2019a) and ConceptNet (Speer et al., 2017). Such an approach mostly focuses only on one specific type of reasoning relations (e.g., if-then relation, or declarative relation), neglecting the fact that real-world QA systems require simultaneously considering different types of reasoning abilities (e.g., declarative and social, or causal and physical reasoning; Ilievski et al., 2021; Chang et al., 2021). ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_2", "text": " To consider different types of reasoning, this paper extends ideas from the aforementioned zero-shot learning to the multi-source case such that it benefits from different types of commonsense knowledge on individual KGs. For example, ATOMIC (Sap et al., 2019a) focuses on social commonsense while ConceptNet (Speer et al., 2017) contains conceptual knowledge. A practical approach is multi-task learning (MTL; Caruana, 1997; Liu et al., 2019a), which learns a shared encoder for different synthetic QA datasets from multiple KGs. Despite its effectiveness, MTL scheme suffers from interference among different KGs, which results in forgetting previously learned knowledge when trained on new KG which has different kinds of knowledge (Pilault et al., 2021; Pfeiffer et al., 2021; Wang et al., 2021a; Wu et al., 2020). ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_3", "text": " To address these limitations, we propose a novel, modularized framework that aims to learn multiple expert models for KGs, then conduct zero-shot fusion to allow collaboration among KGs. 
For this purpose, we leverage AdapterFusion (Pfeiffer et al., 2021) where multiple tiny modules between Transformer blocks called adapters (Houlsby et al., 2019) can be combined after independent training, thus allowing a continual integration of the adapters without retraining the entire framework. Specifically, we treat the adapters as different KG-specific experts, and combine them using an attention-like fusion module. To improve the fusion of adapters, we suggest a KG-alignment adapter that guides to the apt expert adapters. Here, we use KGs in three different synthetic supervision training: (1) KG-specific QA datasets to train the KG-specific expert adapters, (2) a KG classification datasets to train the KG-alignment adapter, and (3) a balanced mixture of KG-specific QA datasets to train the fusion module. Our modularized method alleviates the interference between different KGs, which is the pitfall of MTL from our empirical observation, and thus combines multiple KGs into a synergetic zero-shot framework. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_4", "text": " Our contributions are: (1) We suggest a simple, yet effective KG modularization strategy for the use of multiple KGs in commonsense reasoning. (2) We then explore the use of AdapterFusion (Pfeiffer et al., 2021) for better knowledge aggregation based on the KG modularization in zero-shot setting. We believe that such modularized transfer learning is critical to using different knowledge sources synergetically against interference between them. (3) In extensive experiments on various commonsense reasoning benchmarks, our framework achieves significant improvements over baselines using a single KG, even using multiple KGs, which implies the robustness in commonsense reasoning. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_5", "text": " Many researchers have recently focused on building unsupervised models without any benchmark supervisions (i.e., zero-shot learning). In such zero-shot setting, KGs are often used as an external resource for improving model prior (e.g., continually learned from pre-trained language models) (Banerjee and Baral, 2020; Bosselut and Choi, 2019; Ma et al., 2021), especially for commonsense reasoning, as much existing work couples language models with neural/symbolic commonsense KGs. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_6", "text": " However, most of existing work are either assuming the existence of the alignment information between tasks and KGs (Banerjee and Baral, 2020) or an integrated KG (Ma et al., 2021). For example, ATOMIC2020subscriptsuperscriptATOMIC2020\\texttt{ATOMIC}^{20}_{20} (Hwang et al., 2021), a commonsense KG which incorporates tuples from ConceptNet and ATOMIC with new relations and further crowdsourcing, combines multiple KGs into a new integrated KG, but as widely known (Ilievski et al., 2020; Hwang et al., 2021), heterogeneous schema between different KGs may limit triplets that can be integrated.111Only 172K tuples of the 3.4M tuples and 5 relations of 36 relations in ConceptNet are integrated into ATOMIC2020subscriptsuperscriptATOMIC2020\\texttt{ATOMIC}^{20}_{20}. 
Rather than such symbolic KG integration with the inevitable loss of knowledge, in this work, we explore the neural KG integration leveraging the multiple KGs without additional processing and alignment information between KG and task. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_7", "text": " The idea of having specialized parameters, or so-called experts, has been widely studied to integrate multiple sources of knowledge via transfer learning. The adapter module (Rebuffi et al., 2017; Houlsby et al., 2019) has been explored as one of such approaches, introducing a small number of task-specific parameters at every layer of pre-trained language model (PLM) while sharing the parameters of underlying PLM which is fixed. To address the limitations of transfer learning due to high re-training cost, many works utilize the multiple adapter modules for individual tasks with different domains (Puigcerver et al., 2020; Bapna et al., 2019; Rücklé et al., 2020; Madotto et al., 2021) considering each adapter to be an expert of each domain. Similar to our work, K-Adapter (Wang et al., 2021a) encodes factual and linguistic knowledge to each adapter, but in this paper, we further explore how to mitigate catastrophic forgetting or interference among multiple adapters for better knowledge transfer in zero-shot setting. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_8", "text": " MTL (Liu et al., 2019a; Zhang and Yang, 2017; Caruana, 1997) learns a shared representation while aggregating knowledge across multiple learning tasks, often leading to better generalization ability of a model. However, parametric aggregation of knowledge with MTL has following limitations: (1) retraining the full model when adding new tasks (Houlsby et al., 2019; Pfeiffer et al., 2021, 2020b) (2) catastrophic forgetting and interference between tasks leading to difficulties of solving each task equally well (Pilault et al., 2021; Wu et al., 2020; Yu et al., 2020) and (3) inconsistent effect (Lourie et al., 2021). To deal with these challenges, Mixture-of-Experts (MoE) is a parameterized generalization of ensembling techniques, which has been adapted for MTL with gating network trained to optimize each task (Ma et al., 2018). However, simple linear gating networks are too shallow and thus may destruct task knowledge for commonsense reasoning. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_9", "text": " To address this problem, AdapterFusion (Pfeiffer et al., 2021) has been proposed to fuse task specific parameters called adapters for the given target task leveraging attention-like mechanism. AdapterFusion aggregates adapters, which is trained independently for each task, in a non-destructive manner mitigating aforementioned MTL problems such as forgetting and interference between tasks. Recently, it has been used for zero-shot cross-lingual transfer framework (Pfeiffer et al., 2020c; Wang et al., 2021b), which motivates our work to transfer multi-source knowledge with less interference for zero-shot commonsense reasoning. 
", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_10", "text": " In our setup, we repurpose synthetic QA generation (Ma et al., 2021) for the task of knowledge-driven zero-shot learning for commonsense reasoning, i.e., we transform a KG into multiple (Qi,Ai)subscript𝑄𝑖subscript𝐴𝑖(Q_{i},A_{i}) pairs where Qisubscript𝑄𝑖Q_{i} is a natural language question and Ai={Ai,1,…,Ai,m}subscript𝐴𝑖subscript𝐴𝑖1…subscript𝐴𝑖𝑚A_{i}=\\{A_{i,1},...,A_{i,m}\\} is the set of options with m𝑚m answer candidates. Specifically, given a triple (eh​e​a​d,r,et​a​i​l)superscript𝑒ℎ𝑒𝑎𝑑𝑟superscript𝑒𝑡𝑎𝑖𝑙(e^{head},r,e^{tail}) in a KG, where eh​e​a​dsuperscript𝑒ℎ𝑒𝑎𝑑e^{head}, et​a​i​lsuperscript𝑒𝑡𝑎𝑖𝑙e^{tail} and r𝑟r denote head/tail entity and relation respectively, we transform eh​e​a​dsuperscript𝑒ℎ𝑒𝑎𝑑e^{head} and r𝑟r into a natural language question Qisubscript𝑄𝑖Q_{i} using templates. For the option set Aisubscript𝐴𝑖A_{i}, we use the combination of the correct answer et​a​i​lsuperscript𝑒𝑡𝑎𝑖𝑙e^{tail} and m−1𝑚1m-1 distractors which are tail entities from other triples sampled randomly (Ma et al., 2021). Details are described in Appendix B. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_11", "text": " First, we modularize the KGs to preserve their intrinsic knowledge. Considering the importance of using a suitable and well-aligned KG (Ma et al., 2019, 2021) on a downstream task, the subtle difference between each KG should be learned by the model without any interference from each other. Accordingly, we adopt the adapter module (Houlsby et al., 2019) which repurposes a pre-trained language model (PLM) to incorporate each KG as tiny modules in between Transformer blocks. Specifically, as illustrated in Figure 2 (except for green area), the adapter training strategy involves injecting new layers (parameterized by ΦΦ\\Phi) into the original PLM (parameterized by θ𝜃\\theta). The weights of the original PLM are untouched, while the new adapter layers are initialized at random. Formally, we call each adapter trained with 𝒟Q​Aksubscriptsuperscript𝒟𝑘𝑄𝐴\\mbox{${\\cal D}$}^{k}_{QA} as an expert adapter for KG k𝑘k, parameterized by ΦQ​AksuperscriptsubscriptΦ𝑄𝐴𝑘\\Phi_{QA}^{k}. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_12", "text": " When a QA sample (Qi,Ai)subscript𝑄𝑖subscript𝐴𝑖(Q_{i},A_{i}) is given for dataset 𝒟Q​Aksuperscriptsubscript𝒟𝑄𝐴𝑘\\mbox{${\\cal D}$}_{QA}^{k}, we first concatenate question Qisubscript𝑄𝑖Q_{i} and each answer option Ai={Ai,1,…,Ai,m}subscript𝐴𝑖subscript𝐴𝑖1…subscript𝐴𝑖𝑚A_{i}=\\{A_{i,1},...,A_{i,m}\\} to generate input sequences Ti={Ti,1,…,Ti,m}subscript𝑇𝑖subscript𝑇𝑖1…subscript𝑇𝑖𝑚T_{i}=\\{T_{i,1},...,T_{i,m}\\}. Then, we compute a score Si,jsubscript𝑆𝑖𝑗S_{i,j} (Ma et al., 2021) for the answer candidate Ai,jsubscript𝐴𝑖𝑗A_{i,j} is computed as follows: Si,j=−1|Ti,j|​∑t=1|Ti,j|l​o​g​P​(wt|…​wt−1,wt+1​…;θ,Φ)subscript𝑆𝑖𝑗1subscript𝑇𝑖𝑗superscriptsubscript𝑡1subscript𝑇𝑖𝑗𝑙𝑜𝑔𝑃conditionalsubscript𝑤𝑡…subscript𝑤𝑡1subscript𝑤𝑡1…𝜃ΦS_{i,j}=-\\frac{1}{|T_{i,j}|}\\sum_{t=1}^{|T_{i,j}|}logP(w_{t}|...w_{t-1},w_{t+1}...;\\theta,\\Phi) (2) where wtsubscript𝑤𝑡w_{t} is a word token in the sequence Ti,jsubscript𝑇𝑖𝑗T_{i,j} and P𝑃P is the conditional probability from Transformer blocks parameterized by θ𝜃\\theta and ΦΦ\\Phi. 
To train the adapter ΦQ​AksuperscriptsubscriptΦ𝑄𝐴𝑘\\Phi_{QA}^{k}, we use the marginal ranking loss (Ma et al., 2021) as follows: ℒQ​A=1m​∑i=1Nk∑j=1j≠l​a​b​e​lmm​a​x​(0,η−Si,l​a​b​e​l+Si,j)subscriptℒ𝑄𝐴1𝑚superscriptsubscript𝑖1subscript𝑁𝑘superscriptsubscript𝑗1𝑗𝑙𝑎𝑏𝑒𝑙𝑚𝑚𝑎𝑥0𝜂subscript𝑆𝑖𝑙𝑎𝑏𝑒𝑙subscript𝑆𝑖𝑗\\mbox{${\\cal L}$}_{QA}=\\frac{1}{m}\\sum_{i=1}^{N_{k}}\\sum_{\\begin{subarray}{c}j=1\\\\ j\\neq label\\end{subarray}}^{m}max(0,\\eta-S_{i,label}+S_{i,j}) (3) where η𝜂\\eta represents the margin. ΦQ​Ak←argminΦℒQ​A​(𝒟Q​Ak;θ,Φ)←superscriptsubscriptΦ𝑄𝐴𝑘subscriptargminΦsubscriptℒ𝑄𝐴subscriptsuperscript𝒟𝑘𝑄𝐴𝜃Φ\\Phi_{QA}^{k}\\leftarrow\\operatorname*{argmin}_{\\Phi}\\mbox{${\\cal L}$}_{QA}(\\mathcal{D}^{k}_{QA};\\theta,\\Phi) (4) where KG-invariant parameters θ𝜃\\theta are fixed and only KG-dependent parameters ΦQ​AksuperscriptsubscriptΦ𝑄𝐴𝑘\\Phi_{QA}^{k} are learned, which enables to store the corresponding knowledge separately without any interference. Further, we can parallelize the training of adapter for all KGs. The efficiency of adapter training allows our modularization to be more scalable. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_13", "text": " Once the expert adapters are learned, we combine the knowledge from each expert adapter using an attention-like mechanism. We present a novel fusion strategy as shown in Figure 2, which is referred to as the zero-shot fusion. In contrast to AdapterFusion (Pfeiffer et al., 2021) where the focus is learning to transfer knowledge to a specific target task, our zero-shot fusion aims to generalize this transfer to any arbitrary target task. Specifically, the zero-shot fusion parameters ΨΨ\\Psi learn to combine fixed expert adapters which are parameterized by ΦQ​A1,…,ΦQ​AKsuperscriptsubscriptΦ𝑄𝐴1…superscriptsubscriptΦ𝑄𝐴𝐾\\Phi_{QA}^{1},...,\\Phi_{QA}^{K}. In each Transformer layer l𝑙l of PLM with the injected fusion layer, the zero-shot fusion parameters ΨQ​AsubscriptΨ𝑄𝐴\\Psi_{QA} consist of query, key, and value matrices, denoted by WlQsuperscriptsubscriptW𝑙𝑄\\textbf{W}_{l}^{Q}, WlKsuperscriptsubscriptW𝑙𝐾\\textbf{W}_{l}^{K}, and WlVsuperscriptsubscriptW𝑙𝑉\\textbf{W}_{l}^{V} respectively. These parameters are used to learn the balancing between the representation of each expert adapters through attention-like mechanism. While fixing both the parameters θ𝜃\\theta and all expert adapters ΦQ​A1,…,ΦQ​AKsuperscriptsubscriptΦ𝑄𝐴1…superscriptsubscriptΦ𝑄𝐴𝐾\\Phi_{QA}^{1},...,\\Phi_{QA}^{K}, the only trainable weights ΨQ​AsubscriptΨ𝑄𝐴\\Psi_{QA} on the fusion layer learns to combine the knowledge from different K𝐾K expert adapters by using the subset of {𝒟Q​Ak}k=1Ksuperscriptsubscriptsuperscriptsubscript𝒟𝑄𝐴𝑘𝑘1𝐾\\{\\mbox{${\\cal D}$}_{QA}^{k}\\}_{k=1}^{K} by random sampling. Here, we balance the ratio between the K𝐾K knowledge-driven datasets as N𝑁N samples (details are in Appendix D). Formally, ΨQ​A←argminΨ​∑k=1KℒQ​A​(𝒟Q​Ak;θ,{ΦQ​Ak}k=1K,Ψ)←subscriptΨ𝑄𝐴subscriptargminΨsuperscriptsubscript𝑘1𝐾subscriptℒ𝑄𝐴subscriptsuperscript𝒟𝑘𝑄𝐴𝜃superscriptsubscriptsuperscriptsubscriptΦ𝑄𝐴𝑘𝑘1𝐾Ψ\\Psi_{QA}\\leftarrow\\operatorname*{argmin}_{\\Psi}\\sum_{k=1}^{K}\\mbox{${\\cal L}$}_{QA}(\\mathcal{D}^{k}_{QA};\\theta,\\{\\Phi_{QA}^{k}\\}_{k=1}^{K},\\Psi) (5) where ΨΨ\\Psi refers to the initialized zero-shot fusion parameters. 
", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_14", "text": " More specifically, in the l𝑙l-th Transformer layer, let hP​L​Mlsuperscriptsubscriptℎ𝑃𝐿𝑀𝑙h_{PLM}^{l} and hEk,lsuperscriptsubscriptℎ𝐸𝑘𝑙h_{E}^{k,l} be the representations of underlying PLM parameterized by θ𝜃\\theta and an expert adapter parameterized by ΦQ​AksuperscriptsubscriptΦ𝑄𝐴𝑘\\Phi_{QA}^{k}, respectively. Then, using the hidden representation hP​L​Mlsuperscriptsubscriptℎ𝑃𝐿𝑀𝑙h_{PLM}^{l} of PLM as a query, the fusion layer performs the attention-like function as follows: Kl,VlsubscriptK𝑙subscriptV𝑙\\displaystyle\\textbf{K}_{l},\\textbf{V}_{l} =(hE1,l,…,hEK,l)absentsuperscriptsubscriptℎ𝐸1𝑙…superscriptsubscriptℎ𝐸𝐾𝑙\\displaystyle=(h_{E}^{1,l},...,h_{E}^{K,l}) (6) QlsubscriptQ𝑙\\displaystyle\\textbf{Q}_{l} =hP​L​Mlabsentsuperscriptsubscriptℎ𝑃𝐿𝑀𝑙\\displaystyle=h_{PLM}^{l} (7) zlsubscriptz𝑙\\displaystyle\\textbf{z}_{l} =Attention​(Ql​WlQ,Kl​WlK,Vl​WlV)absentAttentionsubscriptQ𝑙superscriptsubscriptW𝑙𝑄subscriptK𝑙superscriptsubscriptW𝑙𝐾subscriptV𝑙superscriptsubscriptW𝑙𝑉\\displaystyle=\\text{Attention}(\\textbf{Q}_{l}\\textbf{W}_{l}^{Q},\\textbf{K}_{l}\\textbf{W}_{l}^{K},\\textbf{V}_{l}\\textbf{W}_{l}^{V}) (8) where zlsubscriptz𝑙\\textbf{z}_{l} is passed to the next Transformer layer. Given a sample, the zero-shot fusion learns the suitable balancing parameters between the expert adapters for zero-shot reasoning. Eventually, it learns to identify generalizability across commonsense reasoning tasks. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_15", "text": " AdapterFusion uses the PLM hidden representation hP​L​Mlsuperscriptsubscriptℎ𝑃𝐿𝑀𝑙h_{PLM}^{l} as a query which is learned when training on a specific downstream task. In our zero-shot setting, however, we use a mixture of synthetic QA for fusion training, which is not exactly a training dataset for a downstream task. To compensate for this issue, we present KG-Classifier adapter, which is a KG alignment-aware adapter, which is motivated from the fact that the ability to find which KG has an alignment with the given sample can be helpful as a role of providing a guidance for better performance (Ma et al., 2019, 2021). ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_16", "text": " Specifically, we propose a novel training task for KG-Classifier adapter, which requires predicting the KG for the given sample of the task. For that, given {𝒟Q​Ak}k=1Ksuperscriptsubscriptsuperscriptsubscript𝒟𝑄𝐴𝑘𝑘1𝐾\\{\\mbox{${\\cal D}$}_{QA}^{k}\\}_{k=1}^{K}, we first transform a QA sample (Qi,Ai)subscript𝑄𝑖subscript𝐴𝑖(Q_{i},A_{i}) into a new KG classification sample (Qi;Ai,l​a​b​e​l)subscript𝑄𝑖subscript𝐴𝑖𝑙𝑎𝑏𝑒𝑙(Q_{i};A_{i,label}) where (;)(;) is the concatenation. Then, we obtain a new label yi∈{0,1}Ksubscript𝑦𝑖superscript01𝐾y_{i}\\in\\{0,1\\}^{K} indicating the corresponding KG source. The samples are in Appendix E. Formally, KG classification dataset 𝒟K​G​Csubscript𝒟𝐾𝐺𝐶\\mbox{${\\cal D}$}_{KGC} is defined as: 𝒟K​G​C={((Qi;Ai,l​a​b​e​l),yi)}i=1Msubscript𝒟𝐾𝐺𝐶superscriptsubscriptsubscript𝑄𝑖subscript𝐴𝑖𝑙𝑎𝑏𝑒𝑙subscript𝑦𝑖𝑖1𝑀\\mbox{${\\cal D}$}_{KGC}=\\{((Q_{i};A_{i,label}),y_{i})\\}_{i=1}^{M} (9) where M𝑀M is the total size of {𝒟Q​Ak}k=1Ksuperscriptsubscriptsuperscriptsubscript𝒟𝑄𝐴𝑘𝑘1𝐾\\{\\mbox{${\\cal D}$}_{QA}^{k}\\}_{k=1}^{K}. 
", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_17", "text": " Based on 𝒟K​G​Csubscript𝒟𝐾𝐺𝐶\\mbox{${\\cal D}$}_{KGC}, we learn the KG-Classifier adapter parameterized by θ𝜃\\theta and ΦK​G​CsubscriptΦ𝐾𝐺𝐶\\Phi_{KGC}. First, a classification sample i𝑖i is encoded into hC​L​S∈ℝHsubscriptℎ𝐶𝐿𝑆superscriptℝ𝐻h_{CLS}\\in\\mathbb{R}^{H} then scored as y^i∈ℝKsubscript^𝑦𝑖superscriptℝ𝐾\\hat{y}_{i}\\in\\mathbb{R}^{K} with a linear layer WK​G​C∈ℝK×Hsubscript𝑊𝐾𝐺𝐶superscriptℝ𝐾𝐻W_{KGC}\\in\\mathbb{R}^{K\\times H}, i.e., y^i=WK​G​C​hC​L​Ssubscript^𝑦𝑖subscript𝑊𝐾𝐺𝐶subscriptℎ𝐶𝐿𝑆\\hat{y}_{i}=W_{KGC}h_{CLS}. Once y^isubscript^𝑦𝑖\\hat{y}_{i} is normalized by a softmax layer, the network is trained to minimize the cross-entropy loss ℒK​G​Csubscriptℒ𝐾𝐺𝐶\\mbox{${\\cal L}$}_{KGC} between the prediction y^isubscript^𝑦𝑖\\hat{y}_{i} and its ground truth yisubscript𝑦𝑖y_{i}: ΦK​G​C←argminΦ​∑i=1MℒK​G​C​(yi,y^i;θ,Φ)←subscriptΦ𝐾𝐺𝐶subscriptargminΦsuperscriptsubscript𝑖1𝑀subscriptℒ𝐾𝐺𝐶subscript𝑦𝑖subscript^𝑦𝑖𝜃Φ\\Phi_{KGC}\\leftarrow\\operatorname*{argmin}_{\\Phi}\\sum_{i=1}^{M}\\mbox{${\\cal L}$}_{KGC}(y_{i},\\hat{y}_{i};\\theta,\\Phi) (10) ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_18", "text": " We propose to use the representation of KG-Classifier adapter as a query in attention-like mechanism, referred to as the zero-shot fusion with KG-Classifier adapter. That is, using the hidden representation hK​G​Clsuperscriptsubscriptℎ𝐾𝐺𝐶𝑙h_{KGC}^{l} of a KG-Classifier adapter parameterized by ΦK​G​CsubscriptΦ𝐾𝐺𝐶\\Phi_{KGC} as a query, we substitute QlsubscriptQ𝑙\\textbf{Q}_{l} in Eq. (11) as follows: Ql=hK​G​ClsubscriptQ𝑙superscriptsubscriptℎ𝐾𝐺𝐶𝑙\\textbf{Q}_{l}=h_{KGC}^{l} (11) The overall zero-shot fusion architecture including KG-Classifier is illustrated in Figure 2. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_19", "text": " In this section we evaluate the efficacy of our framework on five commonsense reasoning tasks. We denote KG-Classifier adapter by KG-C adapter. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_20", "text": " All our experiments are conducted in a zero-shot setting, in which the models do not have access to the official training data or labels of the benchmark. For the evaluation, we use the validation set of each benchmark222Since the official test sets are not publicly available, however, the validation set of each benchmark can be role as an test set since it is not used for hyperparameter tuning or model selection. We use accuracy as a metric. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_21", "text": " We evaluate our proposed framework on five question-answering benchmarks for commonsense reasoning: SocialIQA (SIQA) (Sap et al., 2019b), CommonsenseQA (CSQA) (Talmor et al., 2018), Abductive NLI (a-NLI) (Bhagavatula et al., 2020), PhysicalIQA (PIQA) (Bisk et al., 2020), and WinoGrande (WG) (Sakaguchi et al., 2020). 
Each commonsense benchmark evaluates a specific kind of knowledge: social commonsense for SIQA, concept-level commonsense for CSQA, abductive reasoning for a-NLI, physical commonsense for PIQA, and pronoun resolution ability for WG.333Some benchmarks have a strong alignment with a certain KG due to its construction strategy: SIQA-ATOMIC, and CSQA-ConceptNet. To make a direct comparison with Ma et al. (2021), we use the same KGs to generate data samples. The details are presented in Appendix G. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_22", "text": " We compare our framework with the following baselines. First, to show the characteristics of each benchmark, we use the random or the most frequent label as Random and Majority baseline, respectively. RoBERTa-L and GPT2-L is the performance of each PLM without any fine-tuning. Also, as the baseline for the unsupervised learning model using KGs, we report the performance of Self-talk (Shwartz et al., 2020), COMET-DynaGen (Bosselut and Choi, 2019), SMLM (Banerjee and Baral, 2020) as presented in original papers. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_23", "text": " For further analysis in §§\\S4.4 and §§\\S4.5, we set the following models that are pre-trained on the synthetic QA datasets from KGs as baselines: • Single-Task Learning (STL): The model is pre-trained on a synthetic QA dataset generated from a single KG. Specifically, we experiment two architectural choices: PLM (STL-PLM) and PLM with adapters (STL-Adapter). For each architecture, there are four STL models for each of synthetic QA datasets derived from ATOMIC, ConceptNet, WikiData, and WordNet. We note that the trained STL-Adapter is an expert adapter from a specific KG in our framework. The performance of each STL baseline is shown in Appendix I Table 9 and Table 10. • Multi-Task Learning (MTL): The model is pre-trained on multiple synthetic QA datasets, each of which is generated from a KG. We experiment with a PLM trained on all four aforementioned synthetic QA datasets. We note that the difference between STL-PLM and MTL is whether to use one synthetic QA dataset or multiple synthetic QA datasets for its training. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_24", "text": " We employ RoBERTa-L (Liu et al., 2019b) from Hugging Face’s transformers toolkit for all experiments. We follow the default settings from  Ma et al. (2021). Our implementation uses Adapter (Houlsby et al., 2019) and AdapterFusion (Pfeiffer et al., 2021) as a base model architecture from AdpaterHub (Pfeiffer et al., 2020a). We run our experiments with three different random seeds. The implementation details are described in Appendix H. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_25", "text": " Table 2 shows the zero-shot evaluation results on five benchmark datasets. Generally, zero-shot fusion scores higher than the baselines across all benchmarks, and further, zero-shot fusion shows the best performance in all benchmarks except WG. We note that although Ma et al. (2021) uses the synthetic QA dataset after sample filtering, our method achieves comparable performance with the best performance in WG, even with the raw dataset. 
Also, the average score of all evaluation benchmarks (the last column of Table 2) shows that zero-shot fusion has generalisability in commonsense reasoning. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_26", "text": " In addition, zero-shot fusion achieves consistent improvements over MTL. These results indicate that our proposed zero-shot fusion method attributes to fusing the knowledge of multiple KGs more synergetically regardless of the task. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_27", "text": " Moreover, as an ablation, we compare the zero-shot fusion with and without KG-C adapter to explore the efficacy of the KG-C adapter. We can observe that zero-shot fusion with KG-C adapter improves the average accuracy by 0.4%, which implies that the use of KG-C adapter improves the overall performance and makes our method generalize better on most of the evaluation benchmarks. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_28", "text": " To assess the effects of the KG-C adapter itself, we visualize and compare the final layer (CLS) token representation between PLM and KG-C adapter. Figure 3 shows t-SNE (Van der Maaten and Hinton, 2008) plots of all representation of five benchmark datasets. In this figure, every sample is mapped into a 1024-dimensional feature space through RoBERTa-L model and projected back into a two-dimensional plane by t-SNE. We can observe that KG-C adapter can separate the samples of different benchmarks well despite being unseen data. It verifies that KG-awareness acquired with the KG classification task is beneficial to categorize the given sample. The KG-C adapter can thus generate a relevant KG-aware query for a given sample and help to fuse representations from suitable expert adapters in our proposed framework. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_29", "text": " Further, we explore how the KG-C adapter affects zero-shot fusion which is based on an attention-like mechanism (Pfeiffer et al., 2021) compared to zero-shot fusion without KG-C adapter. Here, while zero-shot fusion without KG-C adapter simply uses the representation of PLM as a query, zero-shot fusion with KG-C adapter leverages the representation of KG-C adapter. To illustrate this strength, we visualize the attention probability of (CLS) token from each fusion layer as a representative in Figure 4. The column of the darker cell indicates the adapter that has the bigger influence on the fused representation. We can observe that zero-shot fusion with KG-C adapter fuses the knowledge from different experts with a subtle difference rather than focusing on a single expert severely. This implies that KG-C adapter enables the delicate balancing between multiple knowledge sources based on the KG-alignment awareness, which leads to performance improvements in commonsense reasoning tasks. Interestingly, both cases have the ability not to focus on the expert adapter based on WikiData, which can be seen as a redundant expert.444The zero-shot fusion with KG-C adapter using AT, CN, and WN shows the best average performance in Table 10. 
This observation would benefit from the further study that explores the optimal combination of KGs by expert selection or rejection. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_30", "text": " In this experiment, we compare the amount of interference in the MTL and zero-shot fusion with KG-C adapter. We propose a novel evaluation metric, the interference ratio, which is the percentage of the incorrectly predicted samples by the multi-KG models among the correctly predicted samples from the STL models in common. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_31", "text": " Using the interference ratio, we can precisely compare the negative effects of multi-KG models on knowledge aggregation since the only reason to get the correct samples wrong is the interference caused by learning with additional KGs. We present the interference ratio of the models on five benchmark datasets in Figure 5. This figure shows that MTL has the higher interference ratio than the competing models across all benchmarks. Our method achieves a substantially better ratio, especially when KG-C adapter is used. This demonstrates the efficacy of our framework in mitigating interference between knowledge, which is one of the major problems of MTL. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_32", "text": " To verify the ability of our model to aggregate different types of KGs, we compare the relative performance gains of MTL and zero-shot fusion with KG-C adapter when increasing the number of KGs. The performance of all KG-combinations for each framework is presented in Table 9 and Table 10. We visualize the improvement of performance for five benchmark development sets, leveraging heatmaps in Figure 6. Here, for the sake of brevity, we denote our framework with KG-C adapter as our method. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_33", "text": " For MTL in Figure 6 (a), the color of the cell denotes the relative improvement of MTL with the combination of KGs over the best performance among the STL-PLM of KGs. Also, for our method in Figure 6 (b), the relative improvement is measured based on the best performance among the STL-Adapter of KGs, considering the difference of the base architecture for MTL (i.e. PLM) and zero-shot fusion (i.e. PLM with adapter). The green and red colors denote the increase and decrease of performance, respectively, when using multiple KGs together. The greener color on the cells indicates that the approach benefits from an increasing number of KGs, which implies aggregating knowledge successfully. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_34", "text": " In Figure 6, while the MTL tends to show the decrease of the performance when more KGs are utilized for training, our method obtains relative performance improvement across most of benchmarks. In both framework, the slightly degraded performance of the combination of KGs without ATOMIC could be due to the strong alignment between ATOMIC and SIQA. 
Except for the above case, we can observe that as more KGs are leveraged, the color of the cell gets greener, which implies that our method gains more advantages for better performance. This demonstrates that our method enables knowledge aggregation for multiple KGs synergetically. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_35", "text": " Despite the existence of various types of commonsense KGs, utilizing multiple KGs has not been explored enough in the commonsense reasoning field. Motivated by this, this paper proposes a modularized transfer learning framework to fuse the knowledge from multiple KGs efficiently for zero-shot commonsense reasoning. Our framework consists of KG modularization for expert adapter, zero-shot fusion and KG-Classifier adapter. Extensive experiments show that our framework obtains strong improvements over MTL on five commonsense reasoning benchmarks. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" }, { "id": "2206.03715_all_36", "text": " In the future, our work can be extended to adapt our methods to further various multiple KGs with studies of appropriate scale for KG modularization. In addition, based on our hypothesis that the existence of an optimal combination, we can explore the study for the optional use of modularized KG experts for the best transfer learning. ", "title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning" } ]
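Eq. (6)-(8) in the contexts above fuse the hidden states of the frozen expert adapters with an attention-like layer whose query is the PLM representation (or, in the proposed variant, the KG-Classifier adapter representation). Below is a minimal single-head PyTorch sketch of that fusion layer; the class name, shape conventions, and the unscaled dot-product are my own simplifications, not the authors' AdapterHub-based implementation.

```python
import torch
import torch.nn as nn

class ZeroShotFusion(nn.Module):
    """Attention-like fusion over K frozen expert-adapter states (cf. Eq. (6)-(8) above)."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.W_q = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.W_k = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.W_v = nn.Linear(hidden_dim, hidden_dim, bias=False)

    def forward(self, h_query, h_experts):
        # h_query:   (batch, seq, hidden)     -- PLM or KG-Classifier adapter states
        # h_experts: (batch, seq, K, hidden)  -- one representation per expert adapter
        q = self.W_q(h_query).unsqueeze(2)               # (batch, seq, 1, hidden)
        k = self.W_k(h_experts)                          # (batch, seq, K, hidden)
        v = self.W_v(h_experts)
        attn = torch.softmax((q * k).sum(-1), dim=-1)    # weights over the K experts
        return (attn.unsqueeze(-1) * v).sum(dim=2)       # fused states, (batch, seq, hidden)

# Example: fuse four KG experts (e.g. ATOMIC, ConceptNet, WikiData, WordNet).
fusion = ZeroShotFusion(hidden_dim=768)
h_plm = torch.randn(2, 16, 768)
h_experts = torch.randn(2, 16, 4, 768)
fused = fusion(h_plm, h_experts)   # (2, 16, 768), passed on to the next Transformer layer
```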
Just like AEVB, the wake-sleep algorithm employs a recognition model that approximates the true posterior. Is this true?
Yes, it is true [20]. Both of them employ a recognition model that approximates the true posterior [28]. The authors compare the performance of AEVB to the wake-sleep algorithm [HDFN95], employing the same encoder (also called the recognition model) for both the wake-sleep algorithm and the variational autoencoder [33].
[ 20, 28, 33 ]
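The answer above and the AEVB contexts below center on a recognition model q_φ(z|x) trained jointly with the generative model p_θ(x|z); with a Gaussian approximate posterior, the reparameterization z = μ + σ·ε lets gradients of the lower bound flow to the variational parameters φ. The following is a minimal PyTorch sketch of such an encoder/decoder pair and the per-datapoint estimator corresponding to Eq. (7); the layer sizes, tanh nonlinearities, and Bernoulli decoder are illustrative choices, not prescribed by the excerpt.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=400, z_dim=20):
        super().__init__()
        self.enc = nn.Linear(x_dim, h_dim)                 # recognition model q_phi(z|x)
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.Tanh(),
                                 nn.Linear(h_dim, x_dim))  # generative model p_theta(x|z)

    def forward(self, x):
        h = torch.tanh(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        eps = torch.randn_like(mu)                   # auxiliary noise epsilon ~ N(0, I)
        z = mu + torch.exp(0.5 * logvar) * eps       # reparameterization z = g_phi(eps, x)
        return self.dec(z), mu, logvar

def neg_elbo(x, logits, mu, logvar):
    # Negative of the estimator in Eq. (7): reconstruction term plus analytic KL to N(0, I).
    recon = F.binary_cross_entropy_with_logits(logits, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

vae = VAE()
x = torch.rand(8, 784)                               # stand-in minibatch
logits, mu, logvar = vae(x)
loss = neg_elbo(x, logits, mu, logvar)
loss.backward()                                      # gradients flow through the sample via eps
```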
[ { "id": "1312.6114_all_0", "text": " How can we perform efficient approximate inference and learning with directed probabilistic models whose continuous latent variables and/or parameters have intractable posterior distributions? The variational Bayesian (VB) approach involves the optimization of an approximation to the intractable posterior. Unfortunately, the common mean-field approach requires analytical solutions of expectations w.r.t. the approximate posterior, which are also intractable in the general case. We show how a reparameterization of the variational lower bound yields a simple differentiable unbiased estimator of the lower bound; this SGVB (Stochastic Gradient Variational Bayes) estimator can be used for efficient approximate posterior inference in almost any model with continuous latent variables and/or parameters, and is straightforward to optimize using standard stochastic gradient ascent techniques. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_1", "text": " For the case of an i.i.d. dataset and continuous latent variables per datapoint, we propose the Auto-Encoding VB (AEVB) algorithm. In the AEVB algorithm we make inference and learning especially efficient by using the SGVB estimator to optimize a recognition model that allows us to perform very efficient approximate posterior inference using simple ancestral sampling, which in turn allows us to efficiently learn the model parameters, without the need of expensive iterative inference schemes (such as MCMC) per datapoint. The learned approximate posterior inference model can also be used for a host of tasks such as recognition, denoising, representation and visualization purposes. When a neural network is used for the recognition model, we arrive at the variational auto-encoder. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_2", "text": " The strategy in this section can be used to derive a lower bound estimator (a stochastic objective function) for a variety of directed graphical models with continuous latent variables. We will restrict ourselves here to the common case where we have an i.i.d. dataset with latent variables per datapoint, and where we like to perform maximum likelihood (ML) or maximum a posteriori (MAP) inference on the (global) parameters, and variational inference on the latent variables. It is, for example, straightforward to extend this scenario to the case where we also perform variational inference on the global parameters; that algorithm is put in the appendix, but experiments with that case are left to future work. Note that our method can be applied to online, non-stationary settings, e.g. streaming data, but here we assume a fixed dataset for simplicity. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_3", "text": " Let us consider some dataset 𝐗={𝐱(i)}i=1N𝐗superscriptsubscriptsuperscript𝐱𝑖𝑖1𝑁\\mathbf{X}=\\{\\mathbf{x}^{(i)}\\}_{i=1}^{N} consisting of N𝑁N i.i.d. samples of some continuous or discrete variable 𝐱𝐱\\mathbf{x}. We assume that the data are generated by some random process, involving an unobserved continuous random variable 𝐳𝐳\\mathbf{z}. 
The process consists of two steps: (1) a value 𝐳(i)superscript𝐳𝑖\\mathbf{z}^{(i)} is generated from some prior distribution p𝜽∗​(𝐳)subscript𝑝superscript𝜽𝐳p_{\\boldsymbol{\\theta}^{*}}(\\mathbf{z}); (2) a value 𝐱(i)superscript𝐱𝑖\\mathbf{x}^{(i)} is generated from some conditional distribution p𝜽∗​(𝐱|𝐳)subscript𝑝superscript𝜽conditional𝐱𝐳p_{\\boldsymbol{\\theta}^{*}}(\\mathbf{x}|\\mathbf{z}). We assume that the prior p𝜽∗​(𝐳)subscript𝑝superscript𝜽𝐳p_{\\boldsymbol{\\theta}^{*}}(\\mathbf{z}) and likelihood p𝜽∗​(𝐱|𝐳)subscript𝑝superscript𝜽conditional𝐱𝐳p_{\\boldsymbol{\\theta}^{*}}(\\mathbf{x}|\\mathbf{z}) come from parametric families of distributions p𝜽​(𝐳)subscript𝑝𝜽𝐳p_{\\boldsymbol{\\theta}}(\\mathbf{z}) and p𝜽​(𝐱|𝐳)subscript𝑝𝜽conditional𝐱𝐳p_{\\boldsymbol{\\theta}}(\\mathbf{x}|\\mathbf{z}), and that their PDFs are differentiable almost everywhere w.r.t. both 𝜽𝜽\\boldsymbol{\\theta} and 𝐳𝐳\\mathbf{z}. Unfortunately, a lot of this process is hidden from our view: the true parameters 𝜽∗superscript𝜽\\boldsymbol{\\theta}^{*} as well as the values of the latent variables 𝐳(i)superscript𝐳𝑖\\mathbf{z}^{(i)} are unknown to us. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_4", "text": " Very importantly, we do not make the common simplifying assumptions about the marginal or posterior probabilities. Conversely, we are here interested in a general algorithm that even works efficiently in the case of: 1. Intractability: the case where the integral of the marginal likelihood p𝜽​(𝐱)=∫p𝜽​(𝐳)​p𝜽​(𝐱|𝐳)​𝑑𝐳subscript𝑝𝜽𝐱subscript𝑝𝜽𝐳subscript𝑝𝜽conditional𝐱𝐳differential-d𝐳p_{\\boldsymbol{\\theta}}(\\mathbf{x})=\\int p_{\\boldsymbol{\\theta}}(\\mathbf{z})p_{\\boldsymbol{\\theta}}(\\mathbf{x}|\\mathbf{z})\\,d\\mathbf{z} is intractable (so we cannot evaluate or differentiate the marginal likelihood), where the true posterior density p𝜽​(𝐳|𝐱)=p𝜽​(𝐱|𝐳)​p𝜽​(𝐳)/p𝜽​(𝐱)subscript𝑝𝜽conditional𝐳𝐱subscript𝑝𝜽conditional𝐱𝐳subscript𝑝𝜽𝐳subscript𝑝𝜽𝐱p_{\\boldsymbol{\\theta}}(\\mathbf{z}|\\mathbf{x})=p_{\\boldsymbol{\\theta}}(\\mathbf{x}|\\mathbf{z})p_{\\boldsymbol{\\theta}}(\\mathbf{z})/p_{\\boldsymbol{\\theta}}(\\mathbf{x}) is intractable (so the EM algorithm cannot be used), and where the required integrals for any reasonable mean-field VB algorithm are also intractable. These intractabilities are quite common and appear in cases of moderately complicated likelihood functions p𝜽​(𝐱|𝐳)subscript𝑝𝜽conditional𝐱𝐳p_{\\boldsymbol{\\theta}}(\\mathbf{x}|\\mathbf{z}), e.g. a neural network with a nonlinear hidden layer. 2. A large dataset: we have so much data that batch optimization is too costly; we would like to make parameter updates using small minibatches or even single datapoints. Sampling-based solutions, e.g. Monte Carlo EM, would in general be too slow, since it involves a typically expensive sampling loop per datapoint. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_5", "text": " We are interested in, and propose a solution to, three related problems in the above scenario: 1. Efficient approximate ML or MAP estimation for the parameters 𝜽𝜽\\boldsymbol{\\theta}. The parameters can be of interest themselves, e.g. if we are analyzing some natural process. They also allow us to mimic the hidden random process and generate artificial data that resembles the real data. 2. Efficient approximate posterior inference of the latent variable 𝐳𝐳\\mathbf{z} given an observed value 𝐱𝐱\\mathbf{x} for a choice of parameters 𝜽𝜽\\boldsymbol{\\theta}. 
This is useful for coding or data representation tasks. 3. Efficient approximate marginal inference of the variable 𝐱𝐱\\mathbf{x}. This allows us to perform all kinds of inference tasks where a prior over 𝐱𝐱\\mathbf{x} is required. Common applications in computer vision include image denoising, inpainting and super-resolution. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_6", "text": " For the purpose of solving the above problems, let us introduce a recognition model qϕ​(𝐳|𝐱)subscript𝑞bold-italic-ϕconditional𝐳𝐱q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}): an approximation to the intractable true posterior p𝜽​(𝐳|𝐱)subscript𝑝𝜽conditional𝐳𝐱p_{\\boldsymbol{\\theta}}(\\mathbf{z}|\\mathbf{x}). Note that in contrast with the approximate posterior in mean-field variational inference, it is not necessarily factorial and its parameters ϕbold-italic-ϕ\\boldsymbol{\\phi} are not computed from some closed-form expectation. Instead, we’ll introduce a method for learning the recognition model parameters ϕbold-italic-ϕ\\boldsymbol{\\phi} jointly with the generative model parameters 𝜽𝜽\\boldsymbol{\\theta}. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_7", "text": " From a coding theory perspective, the unobserved variables 𝐳𝐳\\mathbf{z} have an interpretation as a latent representation or code. In this paper we will therefore also refer to the recognition model qϕ​(𝐳|𝐱)subscript𝑞bold-italic-ϕconditional𝐳𝐱q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}) as a probabilistic encoder, since given a datapoint 𝐱𝐱\\mathbf{x} it produces a distribution (e.g. a Gaussian) over the possible values of the code 𝐳𝐳\\mathbf{z} from which the datapoint 𝐱𝐱\\mathbf{x} could have been generated. In a similar vein we will refer to p𝜽​(𝐱|𝐳)subscript𝑝𝜽conditional𝐱𝐳p_{\\boldsymbol{\\theta}}(\\mathbf{x}|\\mathbf{z}) as a probabilistic decoder, since given a code 𝐳𝐳\\mathbf{z} it produces a distribution over the possible corresponding values of 𝐱𝐱\\mathbf{x}. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_8", "text": " The marginal likelihood is composed of a sum over the marginal likelihoods of individual datapoints log⁡p𝜽​(𝐱(1),⋯,𝐱(N))=∑i=1Nlog⁡p𝜽​(𝐱(i))subscript𝑝𝜽superscript𝐱1⋯superscript𝐱𝑁superscriptsubscript𝑖1𝑁subscript𝑝𝜽superscript𝐱𝑖\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x}^{(1)},\\cdots,\\mathbf{x}^{(N)})=\\sum_{i=1}^{N}\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x}^{(i)}), which can each be rewritten as: logp𝜽(𝐱(i))=DK​L(qϕ(𝐳|𝐱(i))||p𝜽(𝐳|𝐱(i)))+ℒ(𝜽,ϕ;𝐱(i))\\displaystyle\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x}^{(i)})=D_{KL}(q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}^{(i)})||p_{\\boldsymbol{\\theta}}(\\mathbf{z}|\\mathbf{x}^{(i)}))+\\mathcal{L}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)}) (1) The first RHS term is the KL divergence of the approximate from the true posterior. 
Since this KL-divergence is non-negative, the second RHS term ℒ​(𝜽,ϕ;𝐱(i))ℒ𝜽bold-italic-ϕsuperscript𝐱𝑖\\mathcal{L}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)}) is called the (variational) lower bound on the marginal likelihood of datapoint i𝑖i, and can be written as: log⁡p𝜽​(𝐱(i))≥ℒ​(𝜽,ϕ;𝐱(i))subscript𝑝𝜽superscript𝐱𝑖ℒ𝜽bold-italic-ϕsuperscript𝐱𝑖\\displaystyle\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x}^{(i)})\\geq\\mathcal{L}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)}) =𝔼qϕ​(𝐳|𝐱)​(−log⁡qϕ​(𝐳|𝐱)+log⁡p𝜽​(𝐱,𝐳))absentsubscript𝔼subscript𝑞bold-italic-ϕconditional𝐳𝐱delimited-()subscript𝑞bold-italic-ϕconditional𝐳𝐱subscript𝑝𝜽𝐱𝐳\\displaystyle=\\mathbb{E}_{q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x})}\\left(-\\log q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x})+\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x},\\mathbf{z})\\right) (2) which can also be written as: ℒ(𝜽,ϕ;𝐱(i))=−DK​L(qϕ(𝐳|𝐱(i))||p𝜽(𝐳))+𝔼qϕ​(𝐳|𝐱(i))(logp𝜽(𝐱(i)|𝐳))\\displaystyle\\mathcal{L}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)})=-D_{KL}(q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}^{(i)})||p_{\\boldsymbol{\\theta}}(\\mathbf{z}))+\\mathbb{E}_{q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}^{(i)})}\\left(\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x}^{(i)}|\\mathbf{z})\\right) (3) We want to differentiate and optimize the lower bound ℒ​(𝜽,ϕ;𝐱(i))ℒ𝜽bold-italic-ϕsuperscript𝐱𝑖\\mathcal{L}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)}) w.r.t. both the variational parameters ϕbold-italic-ϕ\\boldsymbol{\\phi} and generative parameters 𝜽𝜽\\boldsymbol{\\theta}. However, the gradient of the lower bound w.r.t. ϕbold-italic-ϕ\\boldsymbol{\\phi} is a bit problematic. The usual (naïve) Monte Carlo gradient estimator for this type of problem is: ∇ϕ𝔼qϕ​(𝐳)​(f​(𝐳))=𝔼qϕ​(𝐳)​(f​(𝐳)​∇qϕ​(𝐳)log⁡qϕ​(𝐳))≃1L​∑l=1Lf​(𝐳)​∇qϕ​(𝐳(l))log⁡qϕ​(𝐳(l))subscript∇bold-italic-ϕsubscript𝔼subscript𝑞bold-italic-ϕ𝐳delimited-()𝑓𝐳subscript𝔼subscript𝑞bold-italic-ϕ𝐳delimited-()𝑓𝐳subscript∇subscript𝑞bold-italic-ϕ𝐳subscript𝑞bold-italic-ϕ𝐳similar-to-or-equals1𝐿superscriptsubscript𝑙1𝐿𝑓𝐳subscript∇subscript𝑞bold-italic-ϕsuperscript𝐳𝑙subscript𝑞bold-italic-ϕsuperscript𝐳𝑙\\nabla_{\\boldsymbol{\\phi}}\\mathbb{E}_{q_{\\boldsymbol{\\phi}}(\\mathbf{z})}\\left(f(\\mathbf{z})\\right)=\\mathbb{E}_{q_{\\boldsymbol{\\phi}}(\\mathbf{z})}\\left(f(\\mathbf{z})\\nabla_{q_{\\boldsymbol{\\phi}}(\\mathbf{z})}\\log q_{\\boldsymbol{\\phi}}(\\mathbf{z})\\right)\\simeq\\frac{1}{L}\\sum_{l=1}^{L}f(\\mathbf{z})\\nabla_{q_{\\boldsymbol{\\phi}}(\\mathbf{z}^{(l)})}\\log q_{\\boldsymbol{\\phi}}(\\mathbf{z}^{(l)}) where 𝐳(l)∼qϕ​(𝐳|𝐱(i))similar-tosuperscript𝐳𝑙subscript𝑞bold-italic-ϕconditional𝐳superscript𝐱𝑖\\mathbf{z}^{(l)}\\sim q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}^{(i)}). This gradient estimator exhibits exhibits very high variance (see e.g.  (BJP12)) and is impractical for our purposes. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_9", "text": " In this section we introduce a practical estimator of the lower bound and its derivatives w.r.t. the parameters. We assume an approximate posterior in the form qϕ​(𝐳|𝐱)subscript𝑞bold-italic-ϕconditional𝐳𝐱q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}), but please note that the technique can be applied to the case qϕ​(𝐳)subscript𝑞bold-italic-ϕ𝐳q_{\\boldsymbol{\\phi}}(\\mathbf{z}), i.e. where we do not condition on 𝐱𝐱\\mathbf{x}, as well. The fully variational Bayesian method for inferring a posterior over the parameters is given in the appendix. 
", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_10", "text": " Under certain mild conditions outlined in section 2.4 for a chosen approximate posterior qϕ​(𝐳|𝐱)subscript𝑞bold-italic-ϕconditional𝐳𝐱q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}) we can reparameterize the random variable 𝐳~∼qϕ​(𝐳|𝐱)similar-to~𝐳subscript𝑞bold-italic-ϕconditional𝐳𝐱\\widetilde{\\mathbf{z}}\\sim q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}) using a differentiable transformation gϕ​(ϵ,𝐱)subscript𝑔bold-italic-ϕbold-italic-ϵ𝐱g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon},\\mathbf{x}) of an (auxiliary) noise variable ϵbold-italic-ϵ\\boldsymbol{\\epsilon}: 𝐳~=gϕ​(ϵ,𝐱)​ with ​ϵ∼p​(ϵ)~𝐳subscript𝑔bold-italic-ϕbold-italic-ϵ𝐱 with bold-italic-ϵsimilar-to𝑝bold-italic-ϵ\\displaystyle\\widetilde{\\mathbf{z}}=g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon},\\mathbf{x})\\text{\\quad with \\quad}\\boldsymbol{\\epsilon}\\sim p(\\boldsymbol{\\epsilon}) (4) See section 2.4 for general strategies for chosing such an approriate distribution p​(ϵ)𝑝bold-italic-ϵp(\\boldsymbol{\\epsilon}) and function gϕ​(ϵ,𝐱)subscript𝑔bold-italic-ϕbold-italic-ϵ𝐱g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon},\\mathbf{x}). We can now form Monte Carlo estimates of expectations of some function f​(𝐳)𝑓𝐳f(\\mathbf{z}) w.r.t. qϕ​(𝐳|𝐱)subscript𝑞bold-italic-ϕconditional𝐳𝐱q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}) as follows: 𝔼qϕ​(𝐳|𝐱(i))​(f​(𝐳))=𝔼p​(ϵ)​(f​(gϕ​(ϵ,𝐱(i))))subscript𝔼subscript𝑞bold-italic-ϕconditional𝐳superscript𝐱𝑖delimited-()𝑓𝐳subscript𝔼𝑝bold-italic-ϵdelimited-()𝑓subscript𝑔bold-italic-ϕbold-italic-ϵsuperscript𝐱𝑖\\displaystyle\\mathbb{E}_{q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}^{(i)})}\\left(f(\\mathbf{z})\\right)=\\mathbb{E}_{p(\\boldsymbol{\\epsilon})}\\left(f(g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon},\\mathbf{x}^{(i)}))\\right) ≃1L​∑l=1Lf​(gϕ​(ϵ(l),𝐱(i)))​ where ​ϵ(l)∼p​(ϵ)similar-to-or-equalsabsent1𝐿superscriptsubscript𝑙1𝐿𝑓subscript𝑔bold-italic-ϕsuperscriptbold-italic-ϵ𝑙superscript𝐱𝑖 where superscriptbold-italic-ϵ𝑙similar-to𝑝bold-italic-ϵ\\displaystyle\\simeq\\frac{1}{L}\\sum_{l=1}^{L}{f(g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon}^{(l)},\\mathbf{x}^{(i)}))}\\text{\\quad where \\quad}\\boldsymbol{\\epsilon}^{(l)}\\sim p(\\boldsymbol{\\epsilon}) (5) We apply this technique to the variational lower bound (eq. 
(2)), yielding our generic Stochastic Gradient Variational Bayes (SGVB) estimator ℒ~A​(𝜽,ϕ;𝐱(i))≃ℒ​(𝜽,ϕ;𝐱(i))similar-to-or-equalssuperscript~ℒ𝐴𝜽bold-italic-ϕsuperscript𝐱𝑖ℒ𝜽bold-italic-ϕsuperscript𝐱𝑖\\widetilde{\\mathcal{L}}^{A}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)})\\simeq\\mathcal{L}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)}): ℒ~A​(𝜽,ϕ;𝐱(i))superscript~ℒ𝐴𝜽bold-italic-ϕsuperscript𝐱𝑖\\displaystyle\\widetilde{\\mathcal{L}}^{A}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)}) =1L​∑l=1Llog⁡p𝜽​(𝐱(i),𝐳(i,l))−log⁡qϕ​(𝐳(i,l)|𝐱(i))absent1𝐿superscriptsubscript𝑙1𝐿subscript𝑝𝜽superscript𝐱𝑖superscript𝐳𝑖𝑙subscript𝑞bold-italic-ϕconditionalsuperscript𝐳𝑖𝑙superscript𝐱𝑖\\displaystyle=\\frac{1}{L}\\sum_{l=1}^{L}\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x}^{(i)},\\mathbf{z}^{(i,l)})-\\log q_{\\boldsymbol{\\phi}}(\\mathbf{z}^{(i,l)}|\\mathbf{x}^{(i)}) where ​𝐳(i,l)where superscript𝐳𝑖𝑙\\displaystyle\\text{where \\quad}\\mathbf{z}^{(i,l)} =gϕ​(ϵ(i,l),𝐱(i))​ and ​ϵ(l)∼p​(ϵ)absentsubscript𝑔bold-italic-ϕsuperscriptbold-italic-ϵ𝑖𝑙superscript𝐱𝑖 and superscriptbold-italic-ϵ𝑙similar-to𝑝bold-italic-ϵ\\displaystyle=g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon}^{(i,l)},\\mathbf{x}^{(i)})\\text{\\quad and \\quad}\\boldsymbol{\\epsilon}^{(l)}\\sim p(\\boldsymbol{\\epsilon}) (6) Often, the KL-divergence DK​L(qϕ(𝐳|𝐱(i))||p𝜽(𝐳))D_{KL}(q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}^{(i)})||p_{\\boldsymbol{\\theta}}(\\mathbf{z})) of eq. (3) can be integrated analytically (see appendix B), such that only the expected reconstruction error 𝔼qϕ​(𝐳|𝐱(i))​(log⁡p𝜽​(𝐱(i)|𝐳))subscript𝔼subscript𝑞bold-italic-ϕconditional𝐳superscript𝐱𝑖delimited-()subscript𝑝𝜽conditionalsuperscript𝐱𝑖𝐳\\mathbb{E}_{q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}^{(i)})}\\left(\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x}^{(i)}|\\mathbf{z})\\right) requires estimation by sampling. The KL-divergence term can then be interpreted as regularizing ϕbold-italic-ϕ\\boldsymbol{\\phi}, encouraging the approximate posterior to be close to the prior p𝜽​(𝐳)subscript𝑝𝜽𝐳p_{\\boldsymbol{\\theta}}(\\mathbf{z}). This yields a second version of the SGVB estimator ℒ~B​(𝜽,ϕ;𝐱(i))≃ℒ​(𝜽,ϕ;𝐱(i))similar-to-or-equalssuperscript~ℒ𝐵𝜽bold-italic-ϕsuperscript𝐱𝑖ℒ𝜽bold-italic-ϕsuperscript𝐱𝑖\\widetilde{\\mathcal{L}}^{B}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)})\\simeq\\mathcal{L}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)}), corresponding to eq. 
(3), which typically has less variance than the generic estimator: $\\widetilde{\\mathcal{L}}^{B}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)})=-D_{KL}(q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}^{(i)})\\,||\\,p_{\\boldsymbol{\\theta}}(\\mathbf{z}))+\\frac{1}{L}\\sum_{l=1}^{L}\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x}^{(i)}|\\mathbf{z}^{(i,l)})$ where $\\mathbf{z}^{(i,l)}=g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon}^{(i,l)},\\mathbf{x}^{(i)})$ and $\\boldsymbol{\\epsilon}^{(l)}\\sim p(\\boldsymbol{\\epsilon})$ (7). Given multiple datapoints from a dataset $\\mathbf{X}$ with $N$ datapoints, we can construct an estimator of the marginal likelihood lower bound of the full dataset, based on minibatches: $\\mathcal{L}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{X})\\simeq\\widetilde{\\mathcal{L}}^{M}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{X}^{M})=\\frac{N}{M}\\sum_{i=1}^{M}\\widetilde{\\mathcal{L}}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)})$ (8), where the minibatch $\\mathbf{X}^{M}=\\{\\mathbf{x}^{(i)}\\}_{i=1}^{M}$ is a randomly drawn sample of $M$ datapoints from the full dataset $\\mathbf{X}$ with $N$ datapoints. In our experiments we found that the number of samples $L$ per datapoint can be set to $1$ as long as the minibatch size $M$ was large enough, e.g. $M=100$. Derivatives $\\nabla_{\\boldsymbol{\\theta},\\boldsymbol{\\phi}}\\widetilde{\\mathcal{L}}(\\boldsymbol{\\theta};\\mathbf{X}^{M})$ can be taken, and the resulting gradients can be used in conjunction with stochastic optimization methods such as SGD or Adagrad (DHS10). See algorithm 1 for a basic approach to compute the stochastic gradients. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_11", "text": " A connection with auto-encoders becomes clear when looking at the objective function given at eq. (7). The first term (the KL divergence of the approximate posterior from the prior) acts as a regularizer, while the second term is an expected negative reconstruction error. The function $g_{\\boldsymbol{\\phi}}(.)$ is chosen such that it maps a datapoint $\\mathbf{x}^{(i)}$ and a random noise vector $\\boldsymbol{\\epsilon}^{(l)}$ to a sample from the approximate posterior for that datapoint: $\\mathbf{z}^{(i,l)}=g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon}^{(l)},\\mathbf{x}^{(i)})$ where $\\mathbf{z}^{(i,l)}\\sim q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}^{(i)})$.
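To make the reparameterized Monte Carlo estimate of eqs. (5)-(6) above concrete, here is a minimal NumPy sketch; the Gaussian choice of the transformation and the helper names (`g_phi`, `f`, `mu`, `sigma`) are illustrative assumptions, not anything prescribed by the quoted paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def g_phi(eps, mu, sigma):
    # Differentiable transformation z = g_phi(eps, x); here the Gaussian
    # location-scale form z = mu(x) + sigma(x) * eps (assumed for illustration).
    return mu + sigma * eps

def f(z):
    # Any function whose expectation under q_phi(z|x) we want to estimate.
    return np.sum(z ** 2, axis=-1)

mu, sigma = np.array([0.5, -1.0]), np.array([1.0, 0.3])  # stand-in encoder outputs
L = 1000                                                 # MC samples per datapoint
eps = rng.standard_normal((L, 2))                        # eps^(l) ~ p(eps) = N(0, I)
z = g_phi(eps, mu, sigma)                                # z^(l) = g_phi(eps^(l), x)
estimate = f(z).mean()                                   # (1/L) * sum_l f(g_phi(eps^(l), x)), eq. (5)
print(estimate)
```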
Subsequently, the sample $\\mathbf{z}^{(i,l)}$ is then input to function $\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x}^{(i)}|\\mathbf{z}^{(i,l)})$, which equals the probability density (or mass) of datapoint $\\mathbf{x}^{(i)}$ under the generative model, given $\\mathbf{z}^{(i,l)}$. This term is a negative reconstruction error in auto-encoder parlance. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_12", "text": " In order to solve our problem we invoked an alternative method for generating samples from $q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x})$. The essential parameterization trick is quite simple. Let $\\mathbf{z}$ be a continuous random variable, and $\\mathbf{z}\\sim q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x})$ be some conditional distribution. It is then often possible to express the random variable $\\mathbf{z}$ as a deterministic variable $\\mathbf{z}=g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon},\\mathbf{x})$, where $\\boldsymbol{\\epsilon}$ is an auxiliary variable with independent marginal $p(\\boldsymbol{\\epsilon})$, and $g_{\\boldsymbol{\\phi}}(.)$ is some vector-valued function parameterized by $\\boldsymbol{\\phi}$. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_13", "text": " This reparameterization is useful for our case since it can be used to rewrite an expectation w.r.t $q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x})$ such that the Monte Carlo estimate of the expectation is differentiable w.r.t. $\\boldsymbol{\\phi}$. A proof is as follows. Given the deterministic mapping $\\mathbf{z}=g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon},\\mathbf{x})$ we know that $q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x})\\prod_{i}dz_{i}=p(\\boldsymbol{\\epsilon})\\prod_{i}d\\epsilon_{i}$. Therefore (for infinitesimals we use the notational convention $d\\mathbf{z}=\\prod_{i}dz_{i}$), $\\int q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x})f(\\mathbf{z})\\,d\\mathbf{z}=\\int p(\\boldsymbol{\\epsilon})f(\\mathbf{z})\\,d\\boldsymbol{\\epsilon}=\\int p(\\boldsymbol{\\epsilon})f(g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon},\\mathbf{x}))\\,d\\boldsymbol{\\epsilon}$.
It follows that a differentiable estimator can be constructed: $\\int q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x})f(\\mathbf{z})\\,d\\mathbf{z}\\simeq\\frac{1}{L}\\sum_{l=1}^{L}f(g_{\\boldsymbol{\\phi}}(\\mathbf{x},\\boldsymbol{\\epsilon}^{(l)}))$ where $\\boldsymbol{\\epsilon}^{(l)}\\sim p(\\boldsymbol{\\epsilon})$. In section 2.3 we applied this trick to obtain a differentiable estimator of the variational lower bound. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_14", "text": " Take, for example, the univariate Gaussian case: let $z\\sim p(z|x)=\\mathcal{N}(\\mu,\\sigma^{2})$. In this case, a valid reparameterization is $z=\\mu+\\sigma\\epsilon$, where $\\epsilon$ is an auxiliary noise variable $\\epsilon\\sim\\mathcal{N}(0,1)$. Therefore, $\\mathbb{E}_{\\mathcal{N}(z;\\mu,\\sigma^{2})}\\left(f(z)\\right)=\\mathbb{E}_{\\mathcal{N}(\\epsilon;0,1)}\\left(f(\\mu+\\sigma\\epsilon)\\right)\\simeq\\frac{1}{L}\\sum_{l=1}^{L}f(\\mu+\\sigma\\epsilon^{(l)})$ where $\\epsilon^{(l)}\\sim\\mathcal{N}(0,1)$. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_15", "text": " For which $q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x})$ can we choose such a differentiable transformation $g_{\\boldsymbol{\\phi}}(.)$ and auxiliary variable $\\boldsymbol{\\epsilon}\\sim p(\\boldsymbol{\\epsilon})$? Three basic approaches are: 1. Tractable inverse CDF. In this case, let $\\boldsymbol{\\epsilon}\\sim\\mathcal{U}(\\mathbf{0},\\mathbf{I})$, and let $g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon},\\mathbf{x})$ be the inverse CDF of $q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x})$. Examples: Exponential, Cauchy, Logistic, Rayleigh, Pareto, Weibull, Reciprocal, Gompertz, Gumbel and Erlang distributions. 2. Analogous to the Gaussian example, for any “location-scale” family of distributions we can choose the standard distribution (with $\\text{location}=0$, $\\text{scale}=1$) as the auxiliary variable $\\boldsymbol{\\epsilon}$, and let $g(.)=\\text{location}+\\text{scale}\\cdot\\boldsymbol{\\epsilon}$. Examples: Laplace, Elliptical, Student’s t, Logistic, Uniform, Triangular and Gaussian distributions. 3. Composition: It is often possible to express random variables as different transformations of auxiliary variables. Examples: Log-Normal (exponentiation of normally distributed variable), Gamma (a sum over exponentially distributed variables), Dirichlet (weighted sum of Gamma variates), Beta, Chi-Squared, and F distributions.
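The three approaches listed above (tractable inverse CDF, location-scale families, and composition) can be sketched in a few lines of NumPy; the specific distributions below (Exponential, Gaussian, Log-Normal) are chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 5

# 1. Tractable inverse CDF: eps ~ U(0, 1), z = F^{-1}(eps).
#    For an Exponential(rate) distribution, F^{-1}(u) = -log(1 - u) / rate.
eps_u = rng.uniform(size=L)
rate = 2.0
z_exponential = -np.log(1.0 - eps_u) / rate

# 2. Location-scale family: eps drawn from the standard member, z = location + scale * eps.
#    For a Gaussian this is z = mu + sigma * eps with eps ~ N(0, 1).
eps_n = rng.standard_normal(L)
mu, sigma = 0.5, 1.5
z_gaussian = mu + sigma * eps_n

# 3. Composition: express z as a transformation of other reparameterizable variables.
#    A Log-Normal is the exponential of a reparameterized Gaussian.
z_lognormal = np.exp(mu + sigma * eps_n)

print(z_exponential, z_gaussian, z_lognormal)
```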
", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_16", "text": " When all three approaches fail, good approximations to the inverse CDF exist requiring computations with time complexity comparable to the PDF (see e.g.  (Dev86) for some methods). ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_17", "text": " In this section we’ll give an example where we use a neural network for the probabilistic encoder qϕ​(𝐳|𝐱)subscript𝑞bold-italic-ϕconditional𝐳𝐱q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}) (the approximation to the posterior of the generative model p𝜽​(𝐱,𝐳)subscript𝑝𝜽𝐱𝐳p_{\\boldsymbol{\\theta}}(\\mathbf{x},\\mathbf{z})) and where the parameters ϕbold-italic-ϕ\\boldsymbol{\\phi} and 𝜽𝜽\\boldsymbol{\\theta} are optimized jointly with the AEVB algorithm. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_18", "text": " Let the prior over the latent variables be the centered isotropic multivariate Gaussian p𝜽​(𝐳)=𝒩​(𝐳;𝟎,𝐈)subscript𝑝𝜽𝐳𝒩𝐳0𝐈p_{\\boldsymbol{\\theta}}(\\mathbf{z})=\\mathcal{N}(\\mathbf{z};\\mathbf{0},\\mathbf{I}). Note that in this case, the prior lacks parameters. We let p𝜽​(𝐱|𝐳)subscript𝑝𝜽conditional𝐱𝐳p_{\\boldsymbol{\\theta}}(\\mathbf{x}|\\mathbf{z}) be a multivariate Gaussian (in case of real-valued data) or Bernoulli (in case of binary data) whose distribution parameters are computed from 𝐳𝐳\\mathbf{z} with a MLP (a fully-connected neural network with a single hidden layer, see appendix C). Note the true posterior p𝜽​(𝐳|𝐱)subscript𝑝𝜽conditional𝐳𝐱p_{\\boldsymbol{\\theta}}(\\mathbf{z}|\\mathbf{x}) is in this case intractable. While there is much freedom in the form qϕ​(𝐳|𝐱)subscript𝑞bold-italic-ϕconditional𝐳𝐱q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}), we’ll assume the true (but intractable) posterior takes on a approximate Gaussian form with an approximately diagonal covariance. In this case, we can let the variational approximate posterior be a multivariate Gaussian with a diagonal covariance structure222Note that this is just a (simplifying) choice, and not a limitation of our method.: log⁡qϕ​(𝐳|𝐱(i))subscript𝑞bold-italic-ϕconditional𝐳superscript𝐱𝑖\\displaystyle\\log q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}^{(i)}) =log⁡𝒩​(𝐳;𝝁(i),𝝈2​(i)​𝐈)absent𝒩𝐳superscript𝝁𝑖superscript𝝈2𝑖𝐈\\displaystyle=\\log\\mathcal{N}(\\mathbf{z};\\boldsymbol{\\mu}^{(i)},\\boldsymbol{\\sigma}^{2(i)}\\mathbf{I}) (9) where the mean and s.d. of the approximate posterior, 𝝁(i)superscript𝝁𝑖\\boldsymbol{\\mu}^{(i)} and 𝝈(i)superscript𝝈𝑖\\boldsymbol{\\sigma}^{(i)}, are outputs of the encoding MLP, i.e. nonlinear functions of datapoint 𝐱(i)superscript𝐱𝑖\\mathbf{x}^{(i)} and the variational parameters ϕbold-italic-ϕ\\boldsymbol{\\phi} (see appendix C). 
", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_19", "text": " As explained in section 2.4, we sample from the posterior 𝐳(i,l)∼qϕ​(𝐳|𝐱(i))similar-tosuperscript𝐳𝑖𝑙subscript𝑞bold-italic-ϕconditional𝐳superscript𝐱𝑖\\mathbf{z}^{(i,l)}\\sim q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}^{(i)}) using 𝐳(i,l)=gϕ​(𝐱(i),ϵ(l))=𝝁(i)+𝝈(i)⊙ϵ(l)superscript𝐳𝑖𝑙subscript𝑔bold-italic-ϕsuperscript𝐱𝑖superscriptbold-italic-ϵ𝑙superscript𝝁𝑖direct-productsuperscript𝝈𝑖superscriptbold-italic-ϵ𝑙\\mathbf{z}^{(i,l)}=g_{\\boldsymbol{\\phi}}(\\mathbf{x}^{(i)},\\boldsymbol{\\epsilon}^{(l)})=\\boldsymbol{\\mu}^{(i)}+\\boldsymbol{\\sigma}^{(i)}\\odot\\boldsymbol{\\epsilon}^{(l)} where ϵ(l)∼𝒩​(𝟎,𝐈)similar-tosuperscriptbold-italic-ϵ𝑙𝒩0𝐈\\boldsymbol{\\epsilon}^{(l)}\\sim\\mathcal{N}(\\mathbf{0},\\mathbf{I}). With ⊙direct-product\\odot we signify an element-wise product. In this model both p𝜽​(𝐳)subscript𝑝𝜽𝐳p_{\\boldsymbol{\\theta}}(\\mathbf{z}) (the prior) and qϕ​(𝐳|𝐱)subscript𝑞bold-italic-ϕconditional𝐳𝐱q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}) are Gaussian; in this case, we can use the estimator of eq. (7) where the KL divergence can be computed and differentiated without estimation (see appendix B). The resulting estimator for this model and datapoint 𝐱(i)superscript𝐱𝑖\\mathbf{x}^{(i)} is: ℒ​(𝜽,ϕ;𝐱(i))ℒ𝜽bold-italic-ϕsuperscript𝐱𝑖\\displaystyle\\mathcal{L}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)}) ≃12​∑j=1J(1+log⁡((σj(i))2)−(μj(i))2−(σj(i))2)+1L​∑l=1Llog⁡p𝜽​(𝐱(i)|𝐳(i,l))similar-to-or-equalsabsent12superscriptsubscript𝑗1𝐽1superscriptsuperscriptsubscript𝜎𝑗𝑖2superscriptsuperscriptsubscript𝜇𝑗𝑖2superscriptsuperscriptsubscript𝜎𝑗𝑖21𝐿superscriptsubscript𝑙1𝐿subscript𝑝𝜽conditionalsuperscript𝐱𝑖superscript𝐳𝑖𝑙\\displaystyle\\simeq\\frac{1}{2}\\sum_{j=1}^{J}\\left(1+\\log((\\sigma_{j}^{(i)})^{2})-(\\mu_{j}^{(i)})^{2}-(\\sigma_{j}^{(i)})^{2}\\right)+\\frac{1}{L}\\sum_{l=1}^{L}\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x}^{(i)}|\\mathbf{z}^{(i,l)}) where ​𝐳(i,l)where superscript𝐳𝑖𝑙\\displaystyle\\text{where\\quad}\\mathbf{z}^{(i,l)} =𝝁(i)+𝝈(i)⊙ϵ(l)​ and ​ϵ(l)∼𝒩​(0,𝐈)absentsuperscript𝝁𝑖direct-productsuperscript𝝈𝑖superscriptbold-italic-ϵ𝑙 and superscriptbold-italic-ϵ𝑙similar-to𝒩0𝐈\\displaystyle=\\boldsymbol{\\mu}^{(i)}+\\boldsymbol{\\sigma}^{(i)}\\odot\\boldsymbol{\\epsilon}^{(l)}\\text{\\quad and \\quad}\\boldsymbol{\\epsilon}^{(l)}\\sim\\mathcal{N}(0,\\mathbf{I}) (10) As explained above and in appendix C, the decoding term log⁡p𝜽​(𝐱(i)|𝐳(i,l))subscript𝑝𝜽conditionalsuperscript𝐱𝑖superscript𝐳𝑖𝑙\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x}^{(i)}|\\mathbf{z}^{(i,l)}) is a Bernoulli or Gaussian MLP, depending on the type of data we are modelling. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_20", "text": " The wake-sleep algorithm (HDFN95) is, to the best of our knowledge, the only other on-line learning method in the literature that is applicable to the same general class of continuous latent variable models. Like our method, the wake-sleep algorithm employs a recognition model that approximates the true posterior. A drawback of the wake-sleep algorithm is that it requires a concurrent optimization of two objective functions, which together do not correspond to optimization of (a bound of) the marginal likelihood. An advantage of wake-sleep is that it also applies to models with discrete latent variables. Wake-Sleep has the same computational complexity as AEVB per datapoint. 
", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_21", "text": " Stochastic variational inference (HBWP13) has recently received increasing interest. Recently, (BJP12) introduced a control variate schemes to reduce the high variance of the naïve gradient estimator discussed in section 2.1, and applied to exponential family approximations of the posterior. In (RGB13) some general methods, i.e. a control variate scheme, were introduced for reducing the variance of the original gradient estimator. In (SK13), a similar reparameterization as in this paper was used in an efficient version of a stochastic variational inference algorithm for learning the natural parameters of exponential-family approximating distributions. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_22", "text": " The AEVB algorithm exposes a connection between directed probabilistic models (trained with a variational objective) and auto-encoders. A connection between linear auto-encoders and a certain class of generative linear-Gaussian models has long been known. In  (Row98) it was shown that PCA corresponds to the maximum-likelihood (ML) solution of a special case of the linear-Gaussian model with a prior p​(𝐳)=𝒩​(0,𝐈)𝑝𝐳𝒩0𝐈p(\\mathbf{z})=\\mathcal{N}(0,\\mathbf{I}) and a conditional distribution p​(𝐱|𝐳)=𝒩​(𝐱;𝐖𝐳,ϵ​𝐈)𝑝conditional𝐱𝐳𝒩𝐱𝐖𝐳italic-ϵ𝐈p(\\mathbf{x}|\\mathbf{z})=\\mathcal{N}(\\mathbf{x};\\mathbf{W}\\mathbf{z},\\epsilon\\mathbf{I}), specifically the case with infinitesimally small ϵitalic-ϵ\\epsilon. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_23", "text": " In relevant recent work on autoencoders (VLL+10) it was shown that the training criterion of unregularized autoencoders corresponds to maximization of a lower bound (see the infomax principle (Lin89)) of the mutual information between input X𝑋X and latent representation Z𝑍Z. Maximizing (w.r.t. parameters) of the mutual information is equivalent to maximizing the conditional entropy, which is lower bounded by the expected loglikelihood of the data under the autoencoding model (VLL+10), i.e. the negative reconstrution error. However, it is well known that this reconstruction criterion is in itself not sufficient for learning useful representations (BCV13). Regularization techniques have been proposed to make autoencoders learn useful representations, such as denoising, contractive and sparse autoencoder variants  (BCV13). The SGVB objective contains a regularization term dictated by the variational bound (e.g. eq. (10)), lacking the usual nuisance regularization hyperparameter required to learn useful representations. Related are also encoder-decoder architectures such as the predictive sparse decomposition (PSD) (KRL08), from which we drew some inspiration. Also relevant are the recently introduced Generative Stochastic Networks (BTL13) where noisy auto-encoders learn the transition operator of a Markov chain that samples from the data distribution. In (SL10) a recognition model was employed for efficient learning with Deep Boltzmann Machines. These methods are targeted at either unnormalized models (i.e. undirected models like Boltzmann machines) or limited to sparse coding models, in contrast to our proposed algorithm for learning a general class of directed probabilistic models. 
", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_24", "text": " The recently proposed DARN method  (GMW13), also learns a directed probabilistic model using an auto-encoding structure, however their method applies to binary latent variables. Even more recently,  (RMW14) also make the connection between auto-encoders, directed proabilistic models and stochastic variational inference using the reparameterization trick we describe in this paper. Their work was developed independently of ours and provides an additional perspective on AEVB. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_25", "text": " We trained generative models of images from the MNIST and Frey Face datasets333Available at http://www.cs.nyu.edu/~roweis/data.html and compared learning algorithms in terms of the variational lower bound, and the estimated marginal likelihood. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_26", "text": " The generative model (encoder) and variational approximation (decoder) from section 3 were used, where the described encoder and decoder have an equal number of hidden units. Since the Frey Face data are continuous, we used a decoder with Gaussian outputs, identical to the encoder, except that the means were constrained to the interval (0,1)01(0,1) using a sigmoidal activation function at the decoder output. Note that with hidden units we refer to the hidden layer of the neural networks of the encoder and decoder. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_27", "text": " Parameters are updated using stochastic gradient ascent where gradients are computed by differentiating the lower bound estimator ∇𝜽,ϕℒ​(𝜽,ϕ;𝐗)subscript∇𝜽bold-italic-ϕℒ𝜽bold-italic-ϕ𝐗\\nabla_{\\boldsymbol{\\theta},\\boldsymbol{\\phi}}\\mathcal{L}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{X}) (see algorithm  1), plus a small weight decay term corresponding to a prior p​(𝜽)=𝒩​(0,𝐈)𝑝𝜽𝒩0𝐈p(\\boldsymbol{\\theta})=\\mathcal{N}(0,\\mathbf{I}). Optimization of this objective is equivalent to approximate MAP estimation, where the likelihood gradient is approximated by the gradient of the lower bound. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_28", "text": " We compared performance of AEVB to the wake-sleep algorithm (HDFN95). We employed the same encoder (also called recognition model) for the wake-sleep algorithm and the variational auto-encoder. All parameters, both variational and generative, were initialized by random sampling from 𝒩​(0,0.01)𝒩00.01\\mathcal{N}(0,0.01), and were jointly stochastically optimized using the MAP criterion. Stepsizes were adapted with Adagrad (DHS10); the Adagrad global stepsize parameters were chosen from {0.01, 0.02, 0.1} based on performance on the training set in the first few iterations. Minibatches of size M=100𝑀100M=100 were used, with L=1𝐿1L=1 samples per datapoint. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_29", "text": " We trained generative models (decoders) and corresponding encoders (a.k.a. recognition models) having 500500500 hidden units in case of MNIST, and 200200200 hidden units in case of the Frey Face dataset (to prevent overfitting, since it is a considerably smaller dataset). The chosen number of hidden units is based on prior literature on auto-encoders, and the relative performance of different algorithms was not very sensitive to these choices. Figure 2 shows the results when comparing the lower bounds. 
Interestingly, superfluous latent variables did not result in overfitting, which is explained by the regularizing nature of the variational bound. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_30", "text": " For very low-dimensional latent space it is possible to estimate the marginal likelihood of the learned generative models using an MCMC estimator. More information about the marginal likelihood estimator is available in the appendix. For the encoder and decoder we again used neural networks, this time with 100 hidden units, and 3 latent variables; for higher dimensional latent space the estimates became unreliable. Again, the MNIST dataset was used. The AEVB and Wake-Sleep methods were compared to Monte Carlo EM (MCEM) with a Hybrid Monte Carlo (HMC) (DKPR87) sampler; details are in the appendix. We compared the convergence speed for the three algorithms, for a small and large training set size. Results are in figure 3. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_31", "text": " If we choose a low-dimensional latent space (e.g. 2D), we can use the learned encoders (recognition model) to project high-dimensional data to a low-dimensional manifold. See appendix A for visualisations of the 2D latent manifolds for the MNIST and Frey Face datasets. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_32", "text": " We have introduced a novel estimator of the variational lower bound, Stochastic Gradient VB (SGVB), for efficient approximate inference with continuous latent variables. The proposed estimator can be straightforwardly differentiated and optimized using standard stochastic gradient methods. For the case of i.i.d. datasets and continuous latent variables per datapoint we introduce an efficient algorithm for efficient inference and learning, Auto-Encoding VB (AEVB), that learns an approximate inference model using the SGVB estimator. The theoretical advantages are reflected in experimental results. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_33", "text": " Since the SGVB estimator and the AEVB algorithm can be applied to almost any inference and learning problem with continuous latent variables, there are plenty of future directions: (i) learning hierarchical generative architectures with deep neural networks (e.g. convolutional networks) used for the encoders and decoders, trained jointly with AEVB; (ii) time-series models (i.e. dynamic Bayesian networks); (iii) application of SGVB to the global parameters; (iv) supervised models with latent variables, useful for learning complicated noise distributions. ", "title": "Auto-Encoding Variational Bayes" } ]
High accuracy is crucial for safety in autonomous vehicles. Would deploying smaller models using over-the-air updates in Tesla result in a trade-off with accuracy (and hence safety)?
Accuracy is crucial for safety, but it is not only a matter of the accuracy-versus-size trade-off [0]. We should consider more aspects [27].
[ 0, 27 ]
[ { "id": "1602.07360_all_0", "text": " Much of the recent research on deep convolutional neural networks (CNNs) has focused on increasing accuracy on computer vision datasets. For a given accuracy level, there typically exist multiple CNN architectures that achieve that accuracy level. Given equivalent accuracy, a CNN architecture with fewer parameters has several advantages: ∙∙\\bullet More efficient distributed training. Communication among servers is the limiting factor to the scalability of distributed CNN training. For distributed data-parallel training, communication overhead is directly proportional to the number of parameters in the model Iandola et al. (2016). In short, small models train faster due to requiring less communication. ∙∙\\bullet Less overhead when exporting new models to clients. For autonomous driving, companies such as Tesla periodically copy new models from their servers to customers’ cars. This practice is often referred to as an over-the-air update. Consumer Reports has found that the safety of Tesla’s Autopilot semi-autonomous driving functionality has incrementally improved with recent over-the-air updates Consumer Reports (2016). However, over-the-air updates of today’s typical CNN/DNN models can require large data transfers. With AlexNet, this would require 240MB of communication from the server to the car. Smaller models require less communication, making frequent updates more feasible. ∙∙\\bullet Feasible FPGA and embedded deployment. FPGAs often have less than 10MB111For example, the Xilinx Vertex-7 FPGA has a maximum of 8.5 MBytes (i.e. 68 Mbits) of on-chip memory and does not provide off-chip memory. of on-chip memory and no off-chip memory or storage. For inference, a sufficiently small model could be stored directly on the FPGA instead of being bottlenecked by memory bandwidth Qiu et al. (2016), while video frames stream through the FPGA in real time. Further, when deploying CNNs on Application-Specific Integrated Circuits (ASICs), a sufficiently small model could be stored directly on-chip, and smaller models may enable the ASIC to fit on a smaller die. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_1", "text": " As you can see, there are several advantages of smaller CNN architectures. With this in mind, we focus directly on the problem of identifying a CNN architecture with fewer parameters but equivalent accuracy compared to a well-known model. We have discovered such an architecture, which we call SqueezeNet. In addition, we present our attempt at a more disciplined approach to searching the design space for novel CNN architectures. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_2", "text": " The rest of the paper is organized as follows. In Section 2 we review the related work. Then, in Sections 3 and 4 we describe and evaluate the SqueezeNet architecture. After that, we turn our attention to understanding how CNN architectural design choices impact model size and accuracy. We gain this understanding by exploring the design space of SqueezeNet-like architectures. In Section 5, we do design space exploration on the CNN microarchitecture, which we define as the organization and dimensionality of individual layers and modules. In Section 6, we do design space exploration on the CNN macroarchitecture, which we define as high-level organization of layers in a CNN. Finally, we conclude in Section 7. 
In short, Sections 3 and 4 are useful for CNN researchers as well as practitioners who simply want to apply SqueezeNet to a new application. The remaining sections are aimed at advanced researchers who intend to design their own CNN architectures. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_3", "text": " The overarching goal of our work is to identify a model that has very few parameters while preserving accuracy. To address this problem, a sensible approach is to take an existing CNN model and compress it in a lossy fashion. In fact, a research community has emerged around the topic of model compression, and several approaches have been reported. A fairly straightforward approach by Denton et al. is to apply singular value decomposition (SVD) to a pretrained CNN model Denton et al. (2014). Han et al. developed Network Pruning, which begins with a pretrained model, then replaces parameters that are below a certain threshold with zeros to form a sparse matrix, and finally performs a few iterations of training on the sparse CNN Han et al. (2015b). Recently, Han et al. extended their work by combining Network Pruning with quantization (to 8 bits or less) and huffman encoding to create an approach called Deep Compression Han et al. (2015a), and further designed a hardware accelerator called EIE Han et al. (2016a) that operates directly on the compressed model, achieving substantial speedups and energy savings. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_4", "text": " Convolutions have been used in artificial neural networks for at least 25 years; LeCun et al. helped to popularize CNNs for digit recognition applications in the late 1980s LeCun et al. (1989). In neural networks, convolution filters are typically 3D, with height, width, and channels as the key dimensions. When applied to images, CNN filters typically have 3 channels in their first layer (i.e. RGB), and in each subsequent layer Lisubscript𝐿𝑖L_{i} the filters have the same number of channels as Li−1subscript𝐿𝑖1L_{i-1} has filters. The early work by LeCun et al. LeCun et al. (1989) uses 5x5xChannels222From now on, we will simply abbreviate HxWxChannels to HxW. filters, and the recent VGG Simonyan & Zisserman (2014) architectures extensively use 3x3 filters. Models such as Network-in-Network Lin et al. (2013) and the GoogLeNet family of architectures Szegedy et al. (2014); Ioffe & Szegedy (2015); Szegedy et al. (2015; 2016) use 1x1 filters in some layers. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_5", "text": " With the trend of designing very deep CNNs, it becomes cumbersome to manually select filter dimensions for each layer. To address this, various higher level building blocks, or modules, comprised of multiple convolution layers with a specific fixed organization have been proposed. For example, the GoogLeNet papers propose Inception modules, which are comprised of a number of different dimensionalities of filters, usually including 1x1 and 3x3, plus sometimes 5x5 Szegedy et al. (2014) and sometimes 1x3 and 3x1 Szegedy et al. (2015). Many such modules are then combined, perhaps with additional ad-hoc layers, to form a complete network. We use the term CNN microarchitecture to refer to the particular organization and dimensions of the individual modules. 
", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_6", "text": " While the CNN microarchitecture refers to individual layers and modules, we define the CNN macroarchitecture as the system-level organization of multiple modules into an end-to-end CNN architecture. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_7", "text": " Perhaps the mostly widely studied CNN macroarchitecture topic in the recent literature is the impact of depth (i.e. number of layers) in networks. Simoyan and Zisserman proposed the VGG Simonyan & Zisserman (2014) family of CNNs with 12 to 19 layers and reported that deeper networks produce higher accuracy on the ImageNet-1k dataset Deng et al. (2009). K. He et al. proposed deeper CNNs with up to 30 layers that deliver even higher ImageNet accuracy He et al. (2015a). ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_8", "text": " The choice of connections across multiple layers or modules is an emerging area of CNN macroarchitectural research. Residual Networks (ResNet) He et al. (2015b) and Highway Networks Srivastava et al. (2015) each propose the use of connections that skip over multiple layers, for example additively connecting the activations from layer 3 to the activations from layer 6. We refer to these connections as bypass connections. The authors of ResNet provide an A/B comparison of a 34-layer CNN with and without bypass connections; adding bypass connections delivers a 2 percentage-point improvement on Top-5 ImageNet accuracy. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_9", "text": " Neural networks (including deep and convolutional NNs) have a large design space, with numerous options for microarchitectures, macroarchitectures, solvers, and other hyperparameters. It seems natural that the community would want to gain intuition about how these factors impact a NN’s accuracy (i.e. the shape of the design space). Much of the work on design space exploration (DSE) of NNs has focused on developing automated approaches for finding NN architectures that deliver higher accuracy. These automated DSE approaches include bayesian optimization Snoek et al. (2012), simulated annealing Ludermir et al. (2006), randomized search Bergstra & Bengio (2012), and genetic algorithms Stanley & Miikkulainen (2002). To their credit, each of these papers provides a case in which the proposed DSE approach produces a NN architecture that achieves higher accuracy compared to a representative baseline. However, these papers make no attempt to provide intuition about the shape of the NN design space. Later in this paper, we eschew automated approaches – instead, we refactor CNNs in such a way that we can do principled A/B comparisons to investigate how CNN architectural decisions influence model size and accuracy. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_10", "text": " In the following sections, we first propose and evaluate the SqueezeNet architecture with and without model compression. Then, we explore the impact of design choices in microarchitecture and macroarchitecture for SqueezeNet-like CNN architectures. 
", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_11", "text": " In this section, we begin by outlining our design strategies for CNN architectures with few parameters. Then, we introduce the Fire module, our new building block out of which to build CNN architectures. Finally, we use our design strategies to construct SqueezeNet, which is comprised mainly of Fire modules. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_12", "text": " Our overarching objective in this paper is to identify CNN architectures that have few parameters while maintaining competitive accuracy. To achieve this, we employ three main strategies when designing CNN architectures: ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_13", "text": " Strategy 1. Replace 3x3 filters with 1x1 filters. Given a budget of a certain number of convolution filters, we will choose to make the majority of these filters 1x1, since a 1x1 filter has 9X fewer parameters than a 3x3 filter. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_14", "text": " Strategy 2. Decrease the number of input channels to 3x3 filters. Consider a convolution layer that is comprised entirely of 3x3 filters. The total quantity of parameters in this layer is (number of input channels) * (number of filters) * (3*3). So, to maintain a small total number of parameters in a CNN, it is important not only to decrease the number of 3x3 filters (see Strategy 1 above), but also to decrease the number of input channels to the 3x3 filters. We decrease the number of input channels to 3x3 filters using squeeze layers, which we describe in the next section. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_15", "text": " Strategy 3. Downsample late in the network so that convolution layers have large activation maps. In a convolutional network, each convolution layer produces an output activation map with a spatial resolution that is at least 1x1 and often much larger than 1x1. The height and width of these activation maps are controlled by: (1) the size of the input data (e.g. 256x256 images) and (2) the choice of layers in which to downsample in the CNN architecture. Most commonly, downsampling is engineered into CNN architectures by setting the (stride >> 1) in some of the convolution or pooling layers (e.g. Szegedy et al. (2014); Simonyan & Zisserman (2014); Krizhevsky et al. (2012)). If early333In our terminology, an “early” layer is close to the input data. layers in the network have large strides, then most layers will have small activation maps. Conversely, if most layers in the network have a stride of 1, and the strides greater than 1 are concentrated toward the end444In our terminology, the “end” of the network is the classifier. of the network, then many layers in the network will have large activation maps. Our intuition is that large activation maps (due to delayed downsampling) can lead to higher classification accuracy, with all else held equal. Indeed, K. He and H. Sun applied delayed downsampling to four different CNN architectures, and in each case delayed downsampling led to higher classification accuracy He & Sun (2015). 
", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_16", "text": " Strategies 1 and 2 are about judiciously decreasing the quantity of parameters in a CNN while attempting to preserve accuracy. Strategy 3 is about maximizing accuracy on a limited budget of parameters. Next, we describe the Fire module, which is our building block for CNN architectures that enables us to successfully employ Strategies 1, 2, and 3. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_17", "text": " We define the Fire module as follows. A Fire module is comprised of: a squeeze convolution layer (which has only 1x1 filters), feeding into an expand layer that has a mix of 1x1 and 3x3 convolution filters; we illustrate this in Figure 1. The liberal use of 1x1 filters in Fire modules is an application of Strategy 1 from Section 3.1. We expose three tunable dimensions (hyperparameters) in a Fire module: s1​x​1subscript𝑠1𝑥1s_{1x1}, e1​x​1subscript𝑒1𝑥1e_{1x1}, and e3​x​3subscript𝑒3𝑥3e_{3x3}. In a Fire module, s1​x​1subscript𝑠1𝑥1s_{1x1} is the number of filters in the squeeze layer (all 1x1), e1​x​1subscript𝑒1𝑥1e_{1x1} is the number of 1x1 filters in the expand layer, and e3​x​3subscript𝑒3𝑥3e_{3x3} is the number of 3x3 filters in the expand layer. When we use Fire modules we set s1​x​1subscript𝑠1𝑥1s_{1x1} to be less than (e1​x​1subscript𝑒1𝑥1e_{1x1} + e3​x​3subscript𝑒3𝑥3e_{3x3}), so the squeeze layer helps to limit the number of input channels to the 3x3 filters, as per Strategy 2 from Section 3.1. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_18", "text": " We now describe the SqueezeNet CNN architecture. We illustrate in Figure 2 that SqueezeNet begins with a standalone convolution layer (conv1), followed by 8 Fire modules (fire2-9), ending with a final conv layer (conv10). We gradually increase the number of filters per fire module from the beginning to the end of the network. SqueezeNet performs max-pooling with a stride of 2 after layers conv1, fire4, fire8, and conv10; these relatively late placements of pooling are per Strategy 3 from Section 3.1. We present the full SqueezeNet architecture in Table 1. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_19", "text": " For brevity, we have omitted number of details and design choices about SqueezeNet from Table 1 and Figure 2. We provide these design choices in the following. The intuition behind these choices may be found in the papers cited below. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_20", "text": " ∙∙\\bullet So that the output activations from 1x1 and 3x3 filters have the same height and width, we add a 1-pixel border of zero-padding in the input data to 3x3 filters of expand modules. ∙∙\\bullet ReLU Nair & Hinton (2010) is applied to activations from squeeze and expand layers. ∙∙\\bullet Dropout Srivastava et al. (2014) with a ratio of 50% is applied after the fire9 module. ∙∙\\bullet Note the lack of fully-connected layers in SqueezeNet; this design choice was inspired by the NiN Lin et al. (2013) architecture. ∙∙\\bullet When training SqueezeNet, we begin with a learning rate of 0.04, and we linearly decrease the learning rate throughout training, as described in Mishkin et al. (2016). 
For details on the training protocol (e.g. batch size, learning rate, parameter initialization), please refer to our Caffe-compatible configuration files located here: https://github.com/DeepScale/SqueezeNet. ∙∙\\bullet The Caffe framework does not natively support a convolution layer that contains multiple filter resolutions (e.g. 1x1 and 3x3) Jia et al. (2014). To get around this, we implement our expand layer with two separate convolution layers: a layer with 1x1 filters, and a layer with 3x3 filters. Then, we concatenate the outputs of these layers together in the channel dimension. This is numerically equivalent to implementing one layer that contains both 1x1 and 3x3 filters. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_21", "text": " We released the SqueezeNet configuration files in the format defined by the Caffe CNN framework. However, in addition to Caffe, several other CNN frameworks have emerged, including MXNet Chen et al. (2015a), Chainer Tokui et al. (2015), Keras Chollet (2016), and Torch Collobert et al. (2011). Each of these has its own native format for representing a CNN architecture. That said, most of these libraries use the same underlying computational back-ends such as cuDNN Chetlur et al. (2014) and MKL-DNN Das et al. (2016). The research community has ported the SqueezeNet CNN architecture for compatibility with a number of other CNN software frameworks: • MXNet Chen et al. (2015a) port of SqueezeNet: Haria (2016) • Chainer Tokui et al. (2015) port of SqueezeNet: Bell (2016) • Keras Chollet (2016) port of SqueezeNet: DT42 (2016) • Torch Collobert et al. (2011) port of SqueezeNet’s Fire Modules: Waghmare (2016) ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_22", "text": " We now turn our attention to evaluating SqueezeNet. In each of the CNN model compression papers reviewed in Section 2.1, the goal was to compress an AlexNet Krizhevsky et al. (2012) model that was trained to classify images using the ImageNet Deng et al. (2009) (ILSVRC 2012) dataset. Therefore, we use AlexNet555Our baseline is bvlc_alexnet from the Caffe codebase Jia et al. (2014). and the associated model compression results as a basis for comparison when evaluating SqueezeNet. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_23", "text": " In Table 2, we review SqueezeNet in the context of recent model compression results. The SVD-based approach is able to compress a pretrained AlexNet model by a factor of 5x, while diminishing top-1 accuracy to 56.0% Denton et al. (2014). Network Pruning achieves a 9x reduction in model size while maintaining the baseline of 57.2% top-1 and 80.3% top-5 accuracy on ImageNet Han et al. (2015b). Deep Compression achieves a 35x reduction in model size while still maintaining the baseline accuracy level Han et al. (2015a). Now, with SqueezeNet, we achieve a 50X reduction in model size compared to AlexNet, while meeting or exceeding the top-1 and top-5 accuracy of AlexNet. We summarize all of the aforementioned results in Table 2. 
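Based on the Fire module description above (a 1x1 squeeze layer feeding an expand layer with a mix of 1x1 and 3x3 filters) and on the note about implementing the expand layer as two separate convolutions concatenated along the channel dimension, a minimal PyTorch sketch might look as follows; the layer sizes are illustrative and this is not the authors' released code.

```python
import torch
import torch.nn as nn

class Fire(nn.Module):
    """Squeeze (1x1) -> expand (1x1 and 3x3 in parallel) -> channel-wise concat."""
    def __init__(self, in_channels, s1x1, e1x1, e3x3):
        super().__init__()
        self.squeeze = nn.Conv2d(in_channels, s1x1, kernel_size=1)
        self.expand1x1 = nn.Conv2d(s1x1, e1x1, kernel_size=1)
        # 1-pixel zero padding so the 1x1 and 3x3 outputs share the same spatial size.
        self.expand3x3 = nn.Conv2d(s1x1, e3x3, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.squeeze(x))
        return torch.cat([self.relu(self.expand1x1(x)),
                          self.relu(self.expand3x3(x))], dim=1)

# Example with illustrative sizes (s1x1=16, e1x1=64, e3x3=64 on a 96-channel input).
y = Fire(96, 16, 64, 64)(torch.randn(1, 96, 55, 55))
print(y.shape)  # torch.Size([1, 128, 55, 55])
```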
", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_24", "text": " It appears that we have surpassed the state-of-the-art results from the model compression community: even when using uncompressed 32-bit values to represent the model, SqueezeNet has a 1.4×1.4\\times smaller model size than the best efforts from the model compression community while maintaining or exceeding the baseline accuracy. Until now, an open question has been: are small models amenable to compression, or do small models “need” all of the representational power afforded by dense floating-point values? To find out, we applied Deep Compression Han et al. (2015a) to SqueezeNet, using 33% sparsity666Note that, due to the storage overhead of storing sparse matrix indices, 33% sparsity leads to somewhat less than a 3×3\\times decrease in model size. and 8-bit quantization. This yields a 0.66 MB model (363×363\\times smaller than 32-bit AlexNet) with equivalent accuracy to AlexNet. Further, applying Deep Compression with 6-bit quantization and 33% sparsity on SqueezeNet, we produce a 0.47MB model (510×510\\times smaller than 32-bit AlexNet) with equivalent accuracy. Our small model is indeed amenable to compression. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_25", "text": " In addition, these results demonstrate that Deep Compression Han et al. (2015a) not only works well on CNN architectures with many parameters (e.g. AlexNet and VGG), but it is also able to compress the already compact, fully convolutional SqueezeNet architecture. Deep Compression compressed SqueezeNet by 10×10\\times while preserving the baseline accuracy. In summary: by combining CNN architectural innovation (SqueezeNet) with state-of-the-art compression techniques (Deep Compression), we achieved a 510×510\\times reduction in model size with no decrease in accuracy compared to the baseline. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_26", "text": " Finally, note that Deep Compression Han et al. (2015b) uses a codebook as part of its scheme for quantizing CNN parameters to 6- or 8-bits of precision. Therefore, on most commodity processors, it is not trivial to achieve a speedup of 328=4​x3284𝑥\\frac{32}{8}=4x with 8-bit quantization or 326=5.3​x3265.3𝑥\\frac{32}{6}=5.3x with 6-bit quantization using the scheme developed in Deep Compression. However, Han et al. developed custom hardware – Efficient Inference Engine (EIE) – that can compute codebook-quantized CNNs more efficiently Han et al. (2016a). In addition, in the months since we released SqueezeNet, P. Gysel developed a strategy called Ristretto for linearly quantizing SqueezeNet to 8 bits Gysel (2016). Specifically, Ristretto does computation in 8 bits, and it stores parameters and activations in 8-bit data types. Using the Ristretto strategy for 8-bit computation in SqueezeNet inference, Gysel observed less than 1 percentage-point of drop in accuracy when using 8-bit instead of 32-bit data types. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_27", "text": " So far, we have proposed architectural design strategies for small models, followed these principles to create SqueezeNet, and discovered that SqueezeNet is 50x smaller than AlexNet with equivalent accuracy. 
However, SqueezeNet and other models reside in a broad and largely unexplored design space of CNN architectures. Now, in Sections 5 and 6, we explore several aspects of the design space. We divide this architectural exploration into two main topics: microarchitectural exploration (per-module layer dimensions and configurations) and macroarchitectural exploration (high-level end-to-end organization of modules and other layers). ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_28", "text": " In this section, we design and execute experiments with the goal of providing intuition about the shape of the microarchitectural design space with respect to the design strategies that we proposed in Section 3.1. Note that our goal here is not to maximize accuracy in every experiment, but rather to understand the impact of CNN architectural choices on model size and accuracy. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_29", "text": " In SqueezeNet, each Fire module has three dimensional hyperparameters that we defined in Section 3.2: $s_{1x1}$, $e_{1x1}$, and $e_{3x3}$. SqueezeNet has 8 Fire modules with a total of 24 dimensional hyperparameters. To do broad sweeps of the design space of SqueezeNet-like architectures, we define the following set of higher level metaparameters which control the dimensions of all Fire modules in a CNN. We define $base_{e}$ as the number of expand filters in the first Fire module in a CNN. After every $freq$ Fire modules, we increase the number of expand filters by $incr_{e}$. In other words, for Fire module $i$, the number of expand filters is $e_{i}=base_{e}+(incr_{e}*\\left\\lfloor\\frac{i}{freq}\\right\\rfloor)$. In the expand layer of a Fire module, some filters are 1x1 and some are 3x3; we define $e_{i}=e_{i,1x1}+e_{i,3x3}$ with $pct_{3x3}$ (in the range $(0,1)$, shared over all Fire modules) as the percentage of expand filters that are 3x3. In other words, $e_{i,3x3}=e_{i}*pct_{3x3}$, and $e_{i,1x1}=e_{i}*(1-pct_{3x3})$. Finally, we define the number of filters in the squeeze layer of a Fire module using a metaparameter called the squeeze ratio (SR) (again, in the range $(0,1)$, shared by all Fire modules): $s_{i,1x1}=SR*e_{i}$ (or equivalently $s_{i,1x1}=SR*(e_{i,1x1}+e_{i,3x3})$). SqueezeNet (Table 1) is an example architecture that we generated with the aforementioned set of metaparameters. Specifically, SqueezeNet has the following metaparameters: $base_{e}=128$, $incr_{e}=128$, $pct_{3x3}=0.5$, $freq=2$, and $SR=0.125$. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_30", "text": " In Section 3.1, we proposed decreasing the number of parameters by using squeeze layers to decrease the number of input channels seen by 3x3 filters.
We defined the squeeze ratio (SR) as the ratio between the number of filters in squeeze layers and the number of filters in expand layers. We now design an experiment to investigate the effect of the squeeze ratio on model size and accuracy. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_31", "text": " In these experiments, we use SqueezeNet (Figure 2) as a starting point. As in SqueezeNet, these experiments use the following metaparameters: $base_{e}=128$, $incr_{e}=128$, $pct_{3x3}=0.5$, and $freq=2$. We train multiple models, where each model has a different squeeze ratio (SR) in the range (0.125, 1.0); note that, for a given model, all Fire layers share the same squeeze ratio. In Figure 3(a), we show the results of this experiment, where each point on the graph is an independent model that was trained from scratch. SqueezeNet is the SR=0.125 point in this figure; we named it SqueezeNet because it has a low squeeze ratio (SR), i.e. the squeeze layers in SqueezeNet have 0.125x the number of filters as the expand layers. From this figure, we learn that increasing SR beyond 0.125 can further increase ImageNet top-5 accuracy from 80.3% (i.e. AlexNet-level) with a 4.8MB model to 86.0% with a 19MB model. Accuracy plateaus at 86.0% with SR=0.75 (a 19MB model), and setting SR=1.0 further increases model size without improving accuracy. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_32", "text": " In Section 3.1, we proposed decreasing the number of parameters in a CNN by replacing some 3x3 filters with 1x1 filters. An open question is, how important is spatial resolution in CNN filters? The VGG Simonyan & Zisserman (2014) architectures have 3x3 spatial resolution in most layers’ filters; GoogLeNet Szegedy et al. (2014) and Network-in-Network (NiN) Lin et al. (2013) have 1x1 filters in some layers. In GoogLeNet and NiN, the authors simply propose a specific quantity of 1x1 and 3x3 filters without further analysis (to be clear, each filter is 1x1xChannels or 3x3xChannels, which we abbreviate to 1x1 and 3x3). Here, we attempt to shed light on how the proportion of 1x1 and 3x3 filters affects model size and accuracy. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_33", "text": " We use the following metaparameters in this experiment: $base_{e}=incr_{e}=128$, $freq=2$, $SR=0.500$, and we vary $pct_{3x3}$ from 1% to 99%. In other words, each Fire module’s expand layer has a predefined number of filters partitioned between 1x1 and 3x3, and here we turn the knob on these filters from “mostly 1x1” to “mostly 3x3”. As in the previous experiment, these models have 8 Fire modules, following the same organization of layers as in Figure 2. We show the results of this experiment in Figure 3(b). Note that the 13MB models in Figure 3(a) and Figure 3(b) are the same architecture: $SR=0.500$ and $pct_{3x3}=50\\%$. We see in Figure 3(b) that the top-5 accuracy plateaus at 85.6% using 50% 3x3 filters, and further increasing the percentage of 3x3 filters leads to a larger model size but provides no improvement in accuracy on ImageNet.
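The metaparameters described above ($base_e$, $incr_e$, $freq$, $pct_{3x3}$, and SR) fully determine the per-module filter counts; here is a small Python sketch that expands the quoted SqueezeNet values into ($s_{1x1}$, $e_{1x1}$, $e_{3x3}$) triples, with the integer rounding being my own assumption.

```python
def fire_dimensions(num_modules=8, base_e=128, incr_e=128, freq=2, pct3x3=0.5, sr=0.125):
    """Expand the SqueezeNet metaparameters into (s_1x1, e_1x1, e_3x3) per Fire module."""
    dims = []
    for i in range(num_modules):
        e_i = base_e + incr_e * (i // freq)        # e_i = base_e + incr_e * floor(i / freq)
        e_3x3 = int(e_i * pct3x3)                  # e_{i,3x3} = e_i * pct_3x3
        e_1x1 = e_i - e_3x3                        # e_{i,1x1} = e_i * (1 - pct_3x3)
        s_1x1 = int(sr * e_i)                      # s_{i,1x1} = SR * e_i
        dims.append((s_1x1, e_1x1, e_3x3))
    return dims

for i, d in enumerate(fire_dimensions(), start=2):
    print(f"fire{i}: s1x1={d[0]}, e1x1={d[1]}, e3x3={d[2]}")
# fire2/3 -> (16, 64, 64), fire4/5 -> (32, 128, 128), ..., fire8/9 -> (64, 256, 256)
```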
", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_34", "text": " So far we have explored the design space at the microarchitecture level, i.e. the contents of individual modules of the CNN. Now, we explore design decisions at the macroarchitecture level concerning the high-level connections among Fire modules. Inspired by ResNet He et al. (2015b), we explored three different architectures: ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_35", "text": " ∙∙\\bullet Vanilla SqueezeNet (as per the prior sections). ∙∙\\bullet SqueezeNet with simple bypass connections between some Fire modules. (Inspired by Srivastava et al. (2015); He et al. (2015b).) ∙∙\\bullet SqueezeNet with complex bypass connections between the remaining Fire modules. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_36", "text": " We illustrate these three variants of SqueezeNet in Figure 2. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_37", "text": " Our simple bypass architecture adds bypass connections around Fire modules 3, 5, 7, and 9, requiring these modules to learn a residual function between input and output. As in ResNet, to implement a bypass connection around Fire3, we set the input to Fire4 equal to (output of Fire2 + output of Fire3), where the + operator is elementwise addition. This changes the regularization applied to the parameters of these Fire modules, and, as per ResNet, can improve the final accuracy and/or ability to train the full model. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_38", "text": " One limitation is that, in the straightforward case, the number of input channels and number of output channels has to be the same; as a result, only half of the Fire modules can have simple bypass connections, as shown in the middle diagram of Fig 2. When the “same number of channels” requirement can’t be met, we use a complex bypass connection, as illustrated on the right of Figure 2. While a simple bypass is “just a wire,” we define a complex bypass as a bypass that includes a 1x1 convolution layer with the number of filters set equal to the number of output channels that are needed. Note that complex bypass connections add extra parameters to the model, while simple bypass connections do not. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_39", "text": " In addition to changing the regularization, it is intuitive to us that adding bypass connections would help to alleviate the representational bottleneck introduced by squeeze layers. In SqueezeNet, the squeeze ratio (SR) is 0.125, meaning that every squeeze layer has 8x fewer output channels than the accompanying expand layer. Due to this severe dimensionality reduction, a limited amount of information can pass through squeeze layers. However, by adding bypass connections to SqueezeNet, we open up avenues for information to flow around the squeeze layers. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_40", "text": " We trained SqueezeNet with the three macroarchitectures in Figure 2 and compared the accuracy and model size in Table 3. 
We fixed the microarchitecture to match SqueezeNet as described in Table 1 throughout the macroarchitecture exploration. Complex and simple bypass connections both yielded an accuracy improvement over the vanilla SqueezeNet architecture. Interestingly, the simple bypass enabled a higher accuracy improvement than complex bypass. Adding the simple bypass connections yielded an increase of 2.9 percentage-points in top-1 accuracy and 2.2 percentage-points in top-5 accuracy without increasing model size. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_41", "text": " In this paper, we have proposed steps toward a more disciplined approach to the design-space exploration of convolutional neural networks. Toward this goal we have presented SqueezeNet, a CNN architecture that has 50× fewer parameters than AlexNet and maintains AlexNet-level accuracy on ImageNet. We also compressed SqueezeNet to less than 0.5MB, or 510× smaller than AlexNet without compression. Since we released this paper as a technical report in 2016, Song Han and his collaborators have experimented further with SqueezeNet and model compression. Using a new approach called Dense-Sparse-Dense (DSD) Han et al. (2016b), Han et al. use model compression during training as a regularizer to further improve accuracy, producing a compressed set of SqueezeNet parameters that is 1.2 percentage-points more accurate on ImageNet-1k, and also producing an uncompressed set of SqueezeNet parameters that is 4.3 percentage-points more accurate, compared to our results in Table 2. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_42", "text": " We mentioned near the beginning of this paper that small models are more amenable to on-chip implementations on FPGAs. Since we released the SqueezeNet model, Gschwend has developed a variant of SqueezeNet and implemented it on an FPGA Gschwend (2016). As we anticipated, Gschwend was able to store the parameters of a SqueezeNet-like model entirely within the FPGA and eliminate the need for off-chip memory accesses to load model parameters. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_43", "text": " In the context of this paper, we focused on ImageNet as a target dataset. However, it has become common practice to apply ImageNet-trained CNN representations to a variety of applications such as fine-grained object recognition Zhang et al. (2013); Donahue et al. (2013), logo identification in images Iandola et al. (2015), and generating sentences about images Fang et al. (2015). ImageNet-trained CNNs have also been applied to a number of applications pertaining to autonomous driving, including pedestrian and vehicle detection in images Iandola et al. (2014); Girshick et al. (2015); Ashraf et al. (2016) and videos Chen et al. (2015b), as well as segmenting the shape of the road Badrinarayanan et al. (2015). We think SqueezeNet will be a good candidate CNN architecture for a variety of applications, especially those in which small model size is of importance. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_44", "text": " SqueezeNet is one of several new CNNs that we have discovered while broadly exploring the design space of CNN architectures. 
We hope that SqueezeNet will inspire the reader to consider and explore the broad range of possibilities in the design space of CNN architectures and to perform that exploration in a more systematic manner. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" } ]
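To make the Fire-module metaparameters discussed in the SqueezeNet contexts above concrete (the squeeze ratio SR, the 1x1/3x3 expand split pct_3x3, and the simple bypass), here is a minimal PyTorch sketch. It is an illustration under assumptions rather than the reference SqueezeNet code: the class name Fire, the ReLU placement, and the integer rounding of the filter split are illustrative choices.

```python
# Minimal sketch of a Fire module parameterized by SR and pct_3x3 (illustrative only).
import torch
import torch.nn as nn

class Fire(nn.Module):
    def __init__(self, in_ch, expand_ch, sr=0.125, pct_3x3=0.5, simple_bypass=False):
        super().__init__()
        squeeze_ch = max(1, int(sr * expand_ch))  # SR = squeeze filters / expand filters
        e3 = int(pct_3x3 * expand_ch)             # share of 3x3 expand filters
        e1 = expand_ch - e3                       # remaining 1x1 expand filters
        self.squeeze = nn.Conv2d(in_ch, squeeze_ch, kernel_size=1)
        self.expand1 = nn.Conv2d(squeeze_ch, e1, kernel_size=1)
        self.expand3 = nn.Conv2d(squeeze_ch, e3, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)
        # A simple bypass is "just a wire": only possible when the input channel
        # count equals the output (expand) channel count.
        self.use_bypass = simple_bypass and in_ch == expand_ch

    def forward(self, x):
        s = self.relu(self.squeeze(x))
        out = torch.cat([self.relu(self.expand1(s)), self.relu(self.expand3(s))], dim=1)
        return out + x if self.use_bypass else out

# Example: SR=0.125 (the SqueezeNet setting) with half of the expand filters being 3x3.
fire = Fire(in_ch=256, expand_ch=256, sr=0.125, pct_3x3=0.5, simple_bypass=True)
y = fire(torch.randn(1, 256, 54, 54))  # -> torch.Size([1, 256, 54, 54])
```

Sweeping sr over (0.125, 1.0) or pct_3x3 from 1% to 99% mirrors the model-size/accuracy experiments described above; the simple bypass is only enabled when input and output channel counts match, which is why only half of the Fire modules can use it.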
How can LSTMs, which are not symmetric, deal with neighborhoods that have no order?
The authors adapt LSTMs to unordered neighbor sets by applying them to a random permutation of each node's neighbors [22].
[ 22 ]
[ { "id": "1706.02216_all_0", "text": " Low-dimensional vector embeddings of nodes in large graphs111While it is common to refer to these data structures as social or biological networks, we use the term graph to avoid ambiguity with neural network terminology. have proved extremely useful as feature inputs for a wide variety of prediction and graph analysis tasks (5, 11, 28, 35, 36). The basic idea behind node embedding approaches is to use dimensionality reduction techniques to distill the high-dimensional information about a node’s graph neighborhood into a dense vector embedding. These node embeddings can then be fed to downstream machine learning systems and aid in tasks such as node classification, clustering, and link prediction (11, 28, 35). ", "title": "Inductive Representation Learning on Large Graphs" }, { "id": "1706.02216_all_1", "text": " However, previous works have focused on embedding nodes from a single fixed graph, and many real-world applications require embeddings to be quickly generated for unseen nodes, or entirely new (sub)graphs. This inductive capability is essential for high-throughput, production machine learning systems, which operate on evolving graphs and constantly encounter unseen nodes (e.g., posts on Reddit, users and videos on Youtube). An inductive approach to generating node embeddings also facilitates generalization across graphs with the same form of features: for example, one could train an embedding generator on protein-protein interaction graphs derived from a model organism, and then easily produce node embeddings for data collected on new organisms using the trained model. ", "title": "Inductive Representation Learning on Large Graphs" }, { "id": "1706.02216_all_2", "text": " The inductive node embedding problem is especially difficult, compared to the transductive setting, because generalizing to unseen nodes requires “aligning” newly observed subgraphs to the node embeddings that the algorithm has already optimized on. An inductive framework must learn to recognize structural properties of a node’s neighborhood that reveal both the node’s local role in the graph, as well as its global position. ", "title": "Inductive Representation Learning on Large Graphs" }, { "id": "1706.02216_all_3", "text": " Most existing approaches to generating node embeddings are inherently transductive. The majority of these approaches directly optimize the embeddings for each node using matrix-factorization-based objectives, and do not naturally generalize to unseen data, since they make predictions on nodes in a single, fixed graph (5, 11, 23, 28, 35, 36, 37, 39). These approaches can be modified to operate in an inductive setting (e.g., ), but these modifications tend to be computationally expensive, requiring additional rounds of gradient descent before new predictions can be made. There are also recent approaches to learning over graph structures using convolution operators that offer promise as an embedding methodology . So far, graph convolutional networks (GCNs) have only been applied in the transductive setting with fixed graphs (17, 18). In this work we both extend GCNs to the task of inductive unsupervised learning and propose a framework that generalizes the GCN approach to use trainable aggregation functions (beyond simple convolutions). ", "title": "Inductive Representation Learning on Large Graphs" }, { "id": "1706.02216_all_4", "text": " Present work. 
We propose a general framework, called GraphSAGE (sample and aggregate), for inductive node embedding. Unlike embedding approaches that are based on matrix factorization, we leverage node features (e.g., text attributes, node profile information, node degrees) in order to learn an embedding function that generalizes to unseen nodes. By incorporating node features in the learning algorithm, we simultaneously learn the topological structure of each node’s neighborhood as well as the distribution of node features in the neighborhood. While we focus on feature-rich graphs (e.g., citation data with text attributes, biological data with functional/molecular markers), our approach can also make use of structural features that are present in all graphs (e.g., node degrees). Thus, our algorithm can also be applied to graphs without node features. ", "title": "Inductive Representation Learning on Large Graphs" }, { "id": "1706.02216_all_5", "text": " Instead of training a distinct embedding vector for each node, we train a set of aggregator functions that learn to aggregate feature information from a node’s local neighborhood (Figure 1). Each aggregator function aggregates information from a different number of hops, or search depth, away from a given node. At test, or inference time, we use our trained system to generate embeddings for entirely unseen nodes by applying the learned aggregation functions. Following previous work on generating node embeddings, we design an unsupervised loss function that allows GraphSAGE to be trained without task-specific supervision. We also show that GraphSAGE can be trained in a fully supervised manner. ", "title": "Inductive Representation Learning on Large Graphs" }, { "id": "1706.02216_all_6", "text": " We evaluate our algorithm on three node-classification benchmarks, which test GraphSAGE’s ability to generate useful embeddings on unseen data. We use two evolving document graphs based on citation data and Reddit post data (predicting paper and post categories, respectively), and a multi-graph generalization experiment based on a dataset of protein-protein interactions (predicting protein functions). Using these benchmarks, we show that our approach is able to effectively generate representations for unseen nodes and outperform relevant baselines by a significant margin: across domains, our supervised approach improves classification F1-scores by an average of 51% compared to using node features alone and GraphSAGE consistently outperforms a strong, transductive baseline , despite this baseline taking ∼100×{\\sim}100\\times longer to run on unseen nodes. We also show that the new aggregator architectures we propose provide significant gains (7.4% on average) compared to an aggregator inspired by graph convolutional networks . Lastly, we probe the expressive capability of our approach and show, through theoretical analysis, that GraphSAGE is capable of learning structural information about a node’s role in a graph, despite the fact that it is inherently based on features (Section 5). 
", "title": "Inductive Representation Learning on Large Graphs" }, { "id": "1706.02216_all_7", "text": " Our algorithm is conceptually related to previous node embedding approaches, general supervised approaches to learning over graphs, and recent advancements in applying convolutional neural networks to graph-structured data.222In the time between this papers original submission to NIPS 2017 and the submission of the final, accepted (i.e., “camera-ready”) version, there have been a number of closely related (e.g., follow-up) works published on pre-print servers. For temporal clarity, we do not review or compare against these papers in detail. ", "title": "Inductive Representation Learning on Large Graphs" }, { "id": "1706.02216_all_8", "text": " Factorization-based embedding approaches. There are a number of recent node embedding approaches that learn low-dimensional embeddings using random walk statistics and matrix factorization-based learning objectives (5, 11, 28, 35, 36). These methods also bear close relationships to more classic approaches to spectral clustering , multi-dimensional scaling , as well as the PageRank algorithm . Since these embedding algorithms directly train node embeddings for individual nodes, they are inherently transductive and, at the very least, require expensive additional training (e.g., via stochastic gradient descent) to make predictions on new nodes. In addition, for many of these approaches (e.g., (11, 28, 35, 36)) the objective function is invariant to orthogonal transformations of the embeddings, which means that the embedding space does not naturally generalize between graphs and can drift during re-training. One notable exception to this trend is the Planetoid-I algorithm introduced by Yang et al. , which is an inductive, embedding-based approach to semi-supervised learning. However, Planetoid-I does not use any graph structural information during inference; instead, it uses the graph structure as a form of regularization during training. Unlike these previous approaches, we leverage feature information in order to train a model to produce embeddings for unseen nodes. ", "title": "Inductive Representation Learning on Large Graphs" }, { "id": "1706.02216_all_9", "text": " Supervised learning over graphs. Beyond node embedding approaches, there is a rich literature on supervised learning over graph-structured data. This includes a wide variety of kernel-based approaches, where feature vectors for graphs are derived from various graph kernels (see and references therein). There are also a number of recent neural network approaches to supervised learning over graph structures (7, 10, 21, 31). Our approach is conceptually inspired by a number of these algorithms. However, whereas these previous approaches attempt to classify entire graphs (or subgraphs), the focus of this work is generating useful representations for individual nodes. ", "title": "Inductive Representation Learning on Large Graphs" }, { "id": "1706.02216_all_10", "text": " Graph convolutional networks. In recent years, several convolutional neural network architectures for learning over graphs have been proposed (e.g., (4, 9, 8, 17, 24)). The majority of these methods do not scale to large graphs or are designed for whole-graph classification (or both) (4, 9, 8, 24). However, our approach is closely related to the graph convolutional network (GCN), introduced by Kipf et al. (17, 18). 
The original GCN algorithm is designed for semi-supervised learning in a transductive setting, and the exact algorithm requires that the full graph Laplacian is known during training. A simple variant of our algorithm can be viewed as an extension of the GCN framework to the inductive setting, a point which we revisit in Section 3.3. ", "title": "Inductive Representation Learning on Large Graphs" }, { "id": "1706.02216_all_11", "text": " The key idea behind our approach is that we learn how to aggregate feature information from a node’s local neighborhood (e.g., the degrees or text attributes of nearby nodes). We first describe the GraphSAGE embedding generation (i.e., forward propagation) algorithm, which generates embeddings for nodes assuming that the GraphSAGE model parameters are already learned (Section 3.1). We then describe how the GraphSAGE model parameters can be learned using standard stochastic gradient descent and backpropagation techniques (Section 3.2). ", "title": "Inductive Representation Learning on Large Graphs" }, { "id": "1706.02216_all_12", "text": " In this section, we describe the embedding generation, or forward propagation algorithm (Algorithm 1), which assumes that the model has already been trained and that the parameters are fixed. In particular, we assume that we have learned the parameters of K𝐾K aggregator functions (denoted aggregatek,∀k∈{1,…,K}subscriptaggregate𝑘for-all𝑘1…𝐾\\textsc{aggregate}_{k},\\forall k\\in\\{1,...,K\\}), which aggregate information from node neighbors, as well as a set of weight matrices 𝐖k,∀k∈{1,…,K}superscript𝐖𝑘for-all𝑘1…𝐾\\mathbf{W}^{k},\\forall k\\in\\{1,...,K\\}, which are used to propagate information between different layers of the model or “search depths”. Section 3.2 describes how we train these parameters. ", "title": "Inductive Representation Learning on Large Graphs" }, { "id": "1706.02216_all_13", "text": " The intuition behind Algorithm 1 is that at each iteration, or search depth, nodes aggregate information from their local neighbors, and as this process iterates, nodes incrementally gain more and more information from further reaches of the graph. ", "title": "Inductive Representation Learning on Large Graphs" }, { "id": "1706.02216_all_14", "text": " Algorithm 1 describes the embedding generation process in the case where the entire graph, 𝒢=(𝒱,ℰ)𝒢𝒱ℰ\\mathcal{G}=(\\mathcal{V},\\mathcal{E}), and features for all nodes 𝐱v,∀v∈𝒱subscript𝐱𝑣for-all𝑣𝒱\\mathbf{x}_{v},\\forall v\\in\\mathcal{V}, are provided as input. We describe how to generalize this to the minibatch setting below. Each step in the outer loop of Algorithm 1 proceeds as follows, where k𝑘k denotes the current step in the outer loop (or the depth of the search) and 𝐡ksuperscript𝐡𝑘\\mathbf{h}^{k} denotes a node’s representation at this step: First, each node v∈𝒱𝑣𝒱v\\in\\mathcal{V} aggregates the representations of the nodes in its immediate neighborhood, {𝐡uk−1,∀u∈𝒩​(v)}subscriptsuperscript𝐡𝑘1𝑢for-all𝑢𝒩𝑣\\{\\mathbf{h}^{k-1}_{u},\\forall u\\in\\mathcal{N}(v)\\}, into a single vector 𝐡𝒩​(v)k−1subscriptsuperscript𝐡𝑘1𝒩𝑣\\mathbf{h}^{k-1}_{\\mathcal{N}(v)}. Note that this aggregation step depends on the representations generated at the previous iteration of the outer loop (i.e., k−1𝑘1k-1), and the k=0𝑘0k=0 (“base case”) representations are defined as the input node features. 
After aggregating the neighboring feature vectors, GraphSAGE then concatenates the node’s current representation, 𝐡vk−1superscriptsubscript𝐡𝑣𝑘1\\mathbf{h}_{v}^{k-1}, with the aggregated neighborhood vector, 𝐡𝒩​(v)k−1subscriptsuperscript𝐡𝑘1𝒩𝑣\\mathbf{h}^{k-1}_{\\mathcal{N}(v)}, and this concatenated vector is fed through a fully connected layer with nonlinear activation function σ𝜎\\sigma, which transforms the representations to be used at the next step of the algorithm (i.e., 𝐡vk,∀v∈𝒱superscriptsubscript𝐡𝑣𝑘for-all𝑣𝒱\\mathbf{h}_{v}^{k},\\forall v\\in\\mathcal{V}). For notational convenience, we denote the final representations output at depth K𝐾K as 𝐳v≡𝐡vK,∀v∈𝒱formulae-sequencesubscript𝐳𝑣subscriptsuperscript𝐡𝐾𝑣for-all𝑣𝒱\\mathbf{z}_{v}\\equiv\\mathbf{h}^{K}_{v},\\forall v\\in\\mathcal{V}. The aggregation of the neighbor representations can be done by a variety of aggregator architectures (denoted by the aggregate placeholder in Algorithm 1), and we discuss different architecture choices in Section 3.3 below. ", "title": "Inductive Representation Learning on Large Graphs" }, { "id": "1706.02216_all_15", "text": " To extend Algorithm 1 to the minibatch setting, given a set of input nodes, we first forward sample the required neighborhood sets (up to depth K𝐾K) and then we run the inner loop (line 3 in Algorithm 1), but instead of iterating over all nodes, we compute only the representations that are necessary to satisfy the recursion at each depth (Appendix A contains complete minibatch pseudocode). ", "title": "Inductive Representation Learning on Large Graphs" }, { "id": "1706.02216_all_16", "text": " Relation to the Weisfeiler-Lehman Isomorphism Test. The GraphSAGE algorithm is conceptually inspired by a classic algorithm for testing graph isomorphism. If, in Algorithm 1, we (i) set K=|𝒱|𝐾𝒱K=|\\mathcal{V}|, (ii) set the weight matrices as the identity, and (iii) use an appropriate hash function as an aggregator (with no non-linearity), then Algorithm 1 is an instance of the Weisfeiler-Lehman (WL) isomorphism test, also known as “naive vertex refinement” . If the set of representations {𝐳v,∀v∈𝒱}subscript𝐳𝑣for-all𝑣𝒱\\{\\mathbf{z}_{v},\\forall v\\in\\mathcal{V}\\} output by Algorithm 1 for two subgraphs are identical then the WL test declares the two subgraphs to be isomorphic. This test is known to fail in some cases, but is valid for a broad class of graphs . GraphSAGE is a continuous approximation to the WL test, where we replace the hash function with trainable neural network aggregators. Of course, we use GraphSAGE to generate useful node representations–not to test graph isomorphism. Nevertheless, the connection between GraphSAGE and the classic WL test provides theoretical context for our algorithm design to learn the topological structure of node neighborhoods. ", "title": "Inductive Representation Learning on Large Graphs" }, { "id": "1706.02216_all_17", "text": " Neighborhood definition. In this work, we uniformly sample a fixed-size set of neighbors, instead of using full neighborhood sets in Algorithm 1, in order to keep the computational footprint of each batch fixed.333Exploring non-uniform samplers is an important direction for future work. That is, using overloaded notation, we define 𝒩​(v)𝒩𝑣\\mathcal{N}(v) as a fixed-size, uniform draw from the set {u∈𝒱:(u,v)∈ℰ}conditional-set𝑢𝒱𝑢𝑣ℰ\\{u\\in\\mathcal{V}:(u,v)\\in\\mathcal{E}\\}, and we draw different uniform samples at each iteration, k𝑘k, in Algorithm 1. 
Without this sampling the memory and expected runtime of a single batch is unpredictable and in the worst case O​(|𝒱|)𝑂𝒱O(|\\mathcal{V}|). In contrast, the per-batch space and time complexity for GraphSAGE is fixed at O​(∏i=1KSi)𝑂superscriptsubscriptproduct𝑖1𝐾subscript𝑆𝑖O(\\prod_{i=1}^{K}S_{i}), where Si,i∈{1,…,K}subscript𝑆𝑖𝑖1…𝐾S_{i},i\\in\\{1,...,K\\} and K𝐾K are user-specified constants. Practically speaking we found that our approach could achieve high performance with K=2𝐾2K=2 and S1⋅S2≤500⋅subscript𝑆1subscript𝑆2500S_{1}\\cdot S_{2}\\leq 500 (see Section 4.4 for details). ", "title": "Inductive Representation Learning on Large Graphs" }, { "id": "1706.02216_all_18", "text": " In order to learn useful, predictive representations in a fully unsupervised setting, we apply a graph-based loss function to the output representations, 𝐳u,∀u∈𝒱subscript𝐳𝑢for-all𝑢𝒱\\mathbf{z}_{u},\\forall u\\in{\\mathcal{V}}, and tune the weight matrices, 𝐖k,∀k∈{1,…,K}superscript𝐖𝑘for-all𝑘1…𝐾\\mathbf{W}^{k},\\forall k\\in\\{1,...,K\\}, and parameters of the aggregator functions via stochastic gradient descent. The graph-based loss function encourages nearby nodes to have similar representations, while enforcing that the representations of disparate nodes are highly distinct: J𝒢​(𝐳u)=−log⁡(σ​(𝐳u⊤​𝐳v))−Q⋅𝔼vn∼Pn​(v)​log⁡(σ​(−𝐳u⊤​𝐳vn)),subscript𝐽𝒢subscript𝐳𝑢𝜎subscriptsuperscript𝐳top𝑢subscript𝐳𝑣⋅𝑄subscript𝔼similar-tosubscript𝑣𝑛subscript𝑃𝑛𝑣𝜎subscriptsuperscript𝐳top𝑢subscript𝐳subscript𝑣𝑛J_{\\mathcal{G}}(\\mathbf{z}_{u})=-\\log\\left(\\sigma(\\mathbf{z}^{\\top}_{u}\\mathbf{z}_{v})\\right)-Q\\cdot\\mathbb{E}_{v_{n}\\sim P_{n}(v)}\\log\\left(\\sigma(-\\mathbf{z}^{\\top}_{u}\\mathbf{z}_{v_{n}})\\right), (1) where v𝑣v is a node that co-occurs near u𝑢u on fixed-length random walk, σ𝜎\\sigma is the sigmoid function, Pnsubscript𝑃𝑛P_{n} is a negative sampling distribution, and Q𝑄Q defines the number of negative samples. Importantly, unlike previous embedding approaches, the representations 𝐳usubscript𝐳𝑢\\mathbf{z}_{u} that we feed into this loss function are generated from the features contained within a node’s local neighborhood, rather than training a unique embedding for each node (via an embedding look-up). ", "title": "Inductive Representation Learning on Large Graphs" }, { "id": "1706.02216_all_19", "text": " This unsupervised setting emulates situations where node features are provided to downstream machine learning applications, as a service or in a static repository. In cases where representations are to be used only on a specific downstream task, the unsupervised loss (Equation 1) can simply be replaced, or augmented, by a task-specific objective (e.g., cross-entropy loss). ", "title": "Inductive Representation Learning on Large Graphs" }, { "id": "1706.02216_all_20", "text": " Unlike machine learning over N-D lattices (e.g., sentences, images, or 3-D volumes), a node’s neighbors have no natural ordering; thus, the aggregator functions in Algorithm 1 must operate over an unordered set of vectors. Ideally, an aggregator function would be symmetric (i.e., invariant to permutations of its inputs) while still being trainable and maintaining high representational capacity. The symmetry property of the aggregation function ensures that our neural network model can be trained and applied to arbitrarily ordered node neighborhood feature sets. We examined three candidate aggregator functions: ", "title": "Inductive Representation Learning on Large Graphs" }, { "id": "1706.02216_all_21", "text": " Mean aggregator. 
Our first candidate aggregator function is the mean operator, where we simply take the elementwise mean of the vectors in \\{\\mathbf{h}_{u}^{k-1},\\forall u\\in\\mathcal{N}(v)\\}. The mean aggregator is nearly equivalent to the convolutional propagation rule used in the transductive GCN framework . In particular, we can derive an inductive variant of the GCN approach by replacing lines 4 and 5 in Algorithm 1 with the following (note that this differs from Kipf et al’s exact equation by a minor normalization constant): \\mathbf{h}^{k}_{v}\\leftarrow\\sigma(\\mathbf{W}\\cdot\\textsc{mean}(\\{\\mathbf{h}^{k-1}_{v}\\}\\cup\\{\\mathbf{h}_{u}^{k-1},\\forall u\\in\\mathcal{N}(v)\\})). (2) We call this modified mean-based aggregator convolutional since it is a rough, linear approximation of a localized spectral convolution . An important distinction between this convolutional aggregator and our other proposed aggregators is that it does not perform the concatenation operation in line 5 of Algorithm 1—i.e., the convolutional aggregator does not concatenate the node’s previous layer representation \\mathbf{h}^{k-1}_{v} with the aggregated neighborhood vector \\mathbf{h}^{k}_{\\mathcal{N}(v)}. This concatenation can be viewed as a simple form of a “skip connection” between the different “search depths”, or “layers” of the GraphSAGE algorithm, and it leads to significant gains in performance (Section 4). ", "title": "Inductive Representation Learning on Large Graphs" }, { "id": "1706.02216_all_22", "text": " LSTM aggregator. We also examined a more complex aggregator based on an LSTM architecture . Compared to the mean aggregator, LSTMs have the advantage of larger expressive capability. However, it is important to note that LSTMs are not inherently symmetric (i.e., they are not permutation invariant), since they process their inputs in a sequential manner. We adapt LSTMs to operate on an unordered set by simply applying the LSTMs to a random permutation of the node’s neighbors. ", "title": "Inductive Representation Learning on Large Graphs" }, { "id": "1706.02216_all_23", "text": " Pooling aggregator. The final aggregator we examine is both symmetric and trainable. In this pooling approach, each neighbor’s vector is independently fed through a fully-connected neural network; following this transformation, an elementwise max-pooling operation is applied to aggregate information across the neighbor set: \\textsc{aggregate}_{k}^{\\textrm{pool}}=\\max(\\{\\sigma\\left(\\mathbf{W}_{\\textrm{pool}}\\mathbf{h}^{k}_{u_{i}}+\\mathbf{b}\\right),\\forall u_{i}\\in\\mathcal{N}(v)\\}), (3) where \\max denotes the element-wise max operator and \\sigma is a nonlinear activation function. In principle, the function applied before the max pooling can be an arbitrarily deep multi-layer perceptron, but we focus on simple single-layer architectures in this work. This approach is inspired by recent advancements in applying neural network architectures to learn over general point sets . Intuitively, the multi-layer perceptron can be thought of as a set of functions that compute features for each of the node representations in the neighbor set. 
By applying the max-pooling operator to each of the computed features, the model effectively captures different aspects of the neighborhood set. Note also that, in principle, any symmetric vector function could be used in place of the max\\max operator (e.g., an element-wise mean). We found no significant difference between max- and mean-pooling in developments test and thus focused on max-pooling for the rest of our experiments. ", "title": "Inductive Representation Learning on Large Graphs" }, { "id": "1706.02216_all_24", "text": " We test the performance of GraphSAGE on three benchmark tasks: (i) classifying academic papers into different subjects using the Web of Science citation dataset, (ii) classifying Reddit posts as belonging to different communities, and (iii) classifying protein functions across various biological protein-protein interaction (PPI) graphs. Sections 4.1 and 4.2 summarize the datasets, and the supplementary material contains additional information. In all these experiments, we perform predictions on nodes that are not seen during training, and, in the case of the PPI dataset, we test on entirely unseen graphs. ", "title": "Inductive Representation Learning on Large Graphs" }, { "id": "1706.02216_all_25", "text": " Experimental set-up. To contextualize the empirical results on our inductive benchmarks, we compare against four baselines: a random classifer, a logistic regression feature-based classifier (that ignores graph structure), the DeepWalk algorithm as a representative factorization-based approach, and a concatenation of the raw features and DeepWalk embeddings. We also compare four variants of GraphSAGE that use the different aggregator functions (Section 3.3). Since, the “convolutional” variant of GraphSAGE is an extended, inductive version of Kipf et al’s semi-supervised GCN , we term this variant GraphSAGE-GCN. We test unsupervised variants of GraphSAGE  trained according to the loss in Equation (1), as well as supervised variants that are trained directly on classification cross-entropy loss. For all the GraphSAGE variants we used rectified linear units as the non-linearity and set K=2𝐾2K=2 with neighborhood sample sizes S1=25subscript𝑆125S_{1}=25 and S2=10subscript𝑆210S_{2}=10 (see Section 4.4 for sensitivity analyses). ", "title": "Inductive Representation Learning on Large Graphs" }, { "id": "1706.02216_all_26", "text": " For the Reddit and citation datasets, we use “online” training for DeepWalk as described in Perozzi et al. , where we run a new round of SGD optimization to embed the new test nodes before making predictions (see the Appendix for details). In the multi-graph setting, we cannot apply DeepWalk, since the embedding spaces generated by running the DeepWalk algorithm on different disjoint graphs can be arbitrarily rotated with respect to each other (Appendix D). ", "title": "Inductive Representation Learning on Large Graphs" }, { "id": "1706.02216_all_27", "text": " All models were implemented in TensorFlow with the Adam optimizer (except DeepWalk, which performed better with the vanilla gradient descent optimizer). We designed our experiments with the goals of (i) verifying the improvement of GraphSAGE over the baseline approaches (i.e., raw features and DeepWalk) and (ii) providing a rigorous comparison of the different GraphSAGE aggregator architectures. In order to provide a fair comparison, all models share an identical implementation of their minibatch iterators, loss function and neighborhood sampler (when applicable). 
Moreover, in order to guard against unintentional “hyperparameter hacking” in the comparisons between GraphSAGE aggregators, we sweep over the same set of hyperparameters for all GraphSAGE variants (choosing the best setting for each variant according to performance on a validation set). The set of possible hyperparameter values was determined on early validation tests using subsets of the citation and Reddit data that we then discarded from our analyses. The appendix contains further implementation details.555Code and links to the datasets: http://snap.stanford.edu/graphsage/ ", "title": "Inductive Representation Learning on Large Graphs" }, { "id": "1706.02216_all_28", "text": " Our first two experiments are on classifying nodes in evolving information graphs, a task that is especially relevant to high-throughput production systems, which constantly encounter unseen data. ", "title": "Inductive Representation Learning on Large Graphs" }, { "id": "1706.02216_all_29", "text": " Citation data. Our first task is predicting paper subject categories on a large citation dataset. We use an undirected citation graph dataset derived from the Thomson Reuters Web of Science Core Collection, corresponding to all papers in six biology-related fields for the years 2000-2005. The node labels for this dataset correspond to the six different field labels. In total, this is dataset contains 302,424 nodes with an average degree of 9.15. We train all the algorithms on the 2000-2004 data and use the 2005 data for testing (with 30% used for validation). For features, we used node degrees and processed the paper abstracts according Arora et al.’s  sentence embedding approach, with 300-dimensional word vectors trained using the GenSim word2vec implementation . ", "title": "Inductive Representation Learning on Large Graphs" }, { "id": "1706.02216_all_30", "text": " Reddit data. In our second task, we predict which community different Reddit posts belong to. Reddit is a large online discussion forum where users post and comment on content in different topical communities. We constructed a graph dataset from Reddit posts made in the month of September, 2014. The node label in this case is the community, or “subreddit”, that a post belongs to. We sampled 50 large communities and built a post-to-post graph, connecting posts if the same user comments on both. In total this dataset contains 232,965 posts with an average degree of 492. We use the first 20 days for training and the remaining days for testing (with 30% used for validation). For features, we use off-the-shelf 300-dimensional GloVe CommonCrawl word vectors ; for each post, we concatenated (i) the average embedding of the post title, (ii) the average embedding of all the post’s comments (iii) the post’s score, and (iv) the number of comments made on the post. ", "title": "Inductive Representation Learning on Large Graphs" }, { "id": "1706.02216_all_31", "text": " The first four columns of Table 1 summarize the performance of GraphSAGE  as well as the baseline approaches on these two datasets. We find that GraphSAGE outperforms all the baselines by a significant margin, and the trainable, neural network aggregators provide significant gains compared to the GCN approach. For example, the unsupervised variant GraphSAGE-pool outperforms the concatenation of the DeepWalk embeddings and the raw features by 13.8% on the citation data and 29.1% on the Reddit data, while the supervised version provides a gain of 19.7% and 37.2%, respectively. 
Interestingly, the LSTM based aggregator shows strong performance, despite the fact that it is designed for sequential data and not unordered sets. Lastly, we see that the performance of unsupervised GraphSAGE is reasonably competitive with the fully supervised version, indicating that our framework can achieve strong performance without task-specific fine-tuning. ", "title": "Inductive Representation Learning on Large Graphs" }, { "id": "1706.02216_all_32", "text": " We now consider the task of generalizing across graphs, which requires learning about node roles rather than community structure. We classify protein roles—in terms of their cellular functions from gene ontology—in various protein-protein interaction (PPI) graphs, with each graph corresponding to a different human tissue . We use positional gene sets, motif gene sets and immunological signatures as features and gene ontology sets as labels (121 in total), collected from the Molecular Signatures Database . The average graph contains 2373 nodes, with an average degree of 28.8. We train all algorithms on 20 graphs and then average prediction F1 scores on two test graphs (with two other graphs used for validation). ", "title": "Inductive Representation Learning on Large Graphs" }, { "id": "1706.02216_all_33", "text": " The final two columns of Table 1 summarize the accuracies of the various approaches on this data. Again we see that GraphSAGE significantly outperforms the baseline approaches, with the LSTM- and pooling-based aggregators providing substantial gains over the mean- and GCN-based aggregators.666Note that in very recent follow-up work Chen and Zhu achieve superior performance by optimizing the GraphSAGE hyperparameters specifically for the PPI task and implementing new training techniques (e.g., dropout, layer normalization, and a new sampling scheme). We refer the reader to their work for the current state-of-the-art numbers on the PPI dataset that are possible using a variant of the GraphSAGE approach. ", "title": "Inductive Representation Learning on Large Graphs" }, { "id": "1706.02216_all_34", "text": " Figure 2.A summarizes the training and test runtimes for the different approaches. The training time for the methods are comparable (with GraphSAGE-LSTM being the slowest). However, the need to sample new random walks and run new rounds of SGD to embed unseen nodes makes DeepWalk 100-500×100\\text{-}500\\times slower at test time. ", "title": "Inductive Representation Learning on Large Graphs" }, { "id": "1706.02216_all_35", "text": " For the GraphSAGE variants, we found that setting K=2𝐾2K=2 provided a consistent boost in accuracy of around 10​-​15%10-percent1510\\text{-}15\\%, on average, compared to K=1𝐾1K=1; however, increasing K𝐾K beyond 2 gave marginal returns in performance (0​-​5%0-percent50\\text{-}5\\%) while increasing the runtime by a prohibitively large factor of 10-100×10\\text{-}100{\\times}, depending on the neighborhood sample size. We also found diminishing returns for sampling large neighborhoods (Figure 2.B). Thus, despite the higher variance induced by sub-sampling neighborhoods, GraphSAGE is still able to maintain strong predictive accuracy, while significantly improving the runtime. ", "title": "Inductive Representation Learning on Large Graphs" }, { "id": "1706.02216_all_36", "text": " Overall, we found that the LSTM- and pool-based aggregators performed the best, in terms of both average performance and number of experimental settings where they were the top-performing method (Table 1). 
To give more quantitative insight into these trends, we consider each of the six different experimental settings (i.e., (3 datasets)×(unsupervised vs. supervised)(3 datasets)unsupervised vs. supervised\\textrm{(3 datasets)}\\times(\\textrm{unsupervised vs.\\ supervised})) as trials and consider what performance trends are likely to generalize. In particular, we use the non-parametric Wilcoxon Signed-Rank Test to quantify the differences between the different aggregators across trials, reporting the T𝑇T-statistic and p𝑝p-value where applicable. Note that this method is rank-based and essentially tests whether we would expect one particular approach to outperform another in a new experimental setting. Given our small sample size of only 6 different settings, this significance test is somewhat underpowered; nonetheless, the T𝑇T-statistic and associated p𝑝p-values are useful quantitative measures to assess the aggregators’ relative performances. ", "title": "Inductive Representation Learning on Large Graphs" }, { "id": "1706.02216_all_37", "text": " We see that LSTM-, pool- and mean-based aggregators all provide statistically significant gains over the GCN-based approach (T=1.0𝑇1.0T=1.0, p=0.02𝑝0.02p=0.02 for all three). However, the gains of the LSTM and pool approaches over the mean-based aggregator are more marginal (T=1.5𝑇1.5T=1.5, p=0.03𝑝0.03p=0.03, comparing LSTM to mean; T=4.5𝑇4.5T=4.5, p=0.10𝑝0.10p=0.10, comparing pool to mean). There is no significant difference between the LSTM and pool approaches (T=10.0𝑇10.0T=10.0, p=0.46𝑝0.46p=0.46). However, GraphSAGE-LSTM is significantly slower than GraphSAGE-pool (by a factor of ≈2×{\\approx}2{\\times}), perhaps giving the pooling-based aggregator a slight edge overall. ", "title": "Inductive Representation Learning on Large Graphs" }, { "id": "1706.02216_all_38", "text": " In this section, we probe the expressive capabilities of GraphSAGE in order to provide insight into how GraphSAGE can learn about graph structure, even though it is inherently based on features. As a case-study, we consider whether GraphSAGE can learn to predict the clustering coefficient of a node, i.e., the proportion of triangles that are closed within the node’s 1-hop neighborhood . The clustering coefficient is a popular measure of how clustered a node’s local neighborhood is, and it serves as a building block for many more complicated structural motifs . We can show that Algorithm 1 is capable of approximating clustering coefficients to an arbitrary degree of precision: ", "title": "Inductive Representation Learning on Large Graphs" }, { "id": "1706.02216_all_39", "text": " Theorem 1 states that for any graph there exists a parameter setting for Algorithm 1 such that it can approximate clustering coefficients in that graph to an arbitrary precision, if the features for every node are distinct (and if the model is sufficiently high-dimensional). The full proof of Theorem 1 is in the Appendix. Note that as a corollary of Theorem 1, GraphSAGE can learn about local graph structure, even when the node feature inputs are sampled from an absolutely continuous random distribution (see the Appendix for details). The basic idea behind the proof is that if each node has a unique feature representation, then we can learn to map nodes to indicator vectors and identify node neighborhoods. The proof of Theorem 1 relies on some properties of the pooling aggregator, which also provides insight into why GraphSAGE-pool outperforms the GCN and mean-based aggregators. 
", "title": "Inductive Representation Learning on Large Graphs" }, { "id": "1706.02216_all_40", "text": " We introduced a novel approach that allows embeddings to be efficiently generated for unseen nodes. GraphSAGE consistently outperforms state-of-the-art baselines, effectively trades off performance and runtime by sampling node neighborhoods, and our theoretical analysis provides insight into how our approach can learn about local graph structures. A number of extensions and potential improvements are possible, such as extending GraphSAGE to incorporate directed or multi-modal graphs. A particularly interesting direction for future work is exploring non-uniform neighborhood sampling functions, and perhaps even learning these functions as part of the GraphSAGE optimization. ", "title": "Inductive Representation Learning on Large Graphs" }, { "id": "1706.02216_all_41", "text": " The authors thank Austin Benson, Aditya Grover, Bryan He, Dan Jurafsky, Alex Ratner, Marinka Zitnik, and Daniel Selsam for their helpful discussions and comments on early drafts. The authors would also like to thank Ben Johnson for his many useful questions and comments on our code and Nikhil Mehta and Yuhui Ding for catching some minor errors in a previous version of the appendix. This research has been supported in part by NSF IIS-1149837, DARPA SIMPLEX, Stanford Data Science Initiative, Huawei, and Chan Zuckerberg Biohub. WLH was also supported by the SAP Stanford Graduate Fellowship and an NSERC PGS-D grant. The views and conclusions expressed in this material are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the above funding agencies, corporations, or the U.S. and Canadian governments. ", "title": "Inductive Representation Learning on Large Graphs" } ]
We also collect a test set of 300 prompts for zero-shot T2V human evaluation which we plan to release
They collected 300 text prompts by asking annotators what they would be interested in generating if there were a T2V system [29]. The prompt set is used for zero-shot T2V human evaluation, and they plan to release it [36].
[ 29, 36 ]
[ { "id": "2209.14792_all_0", "text": " The Internet has fueled collecting billions of (alt-text, image) pairs from HTML pages (Schuhmann et al., 2022), enabling the recent breakthroughs in Text-to-Image (T2I) modeling. However, replicating this success for videos is limited since a similarly sized (text, video) dataset cannot be easily collected. It would be wasteful to train Text-to-Video (T2V) models from scratch when there already exist models that can generate images. Moreover, unsupervised learning enables networks to learn from orders of magnitude more data. This large quantity of data is important to learn representations of more subtle, less common concepts in the world. Unsupervised learning has long had great success in advancing the field of natural language processing (NLP) (Liu et al., 2019a; Brown et al., 2020). Models pre-trained this way yield considerably higher performance than when solely trained in a supervised manner. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_1", "text": " Inspired by these motivations, we propose Make-A-Video. Make-A-Video leverages T2I models to learn the correspondence between text and the visual world, and uses unsupervised learning on unlabeled (unpaired) video data, to learn realistic motion. Together, Make-A-Video generates videos from text without leveraging paired text-video data. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_2", "text": " Clearly, text describing images does not capture the entirety of phenomena observed in videos. That said, one can often infer actions and events from static images (e.g. a woman drinking coffee, or an elephant kicking a football) as done in image-based action recognition systems (Girish et al., 2020). Moreover, even without text descriptions, unsupervised videos are sufficient to learn how different entities in the world move and interact (e.g. the motion of waves at the beach, or of an elephant’s trunk). As a result, a model that has only seen text describing images is surprisingly effective at generating short videos, as demonstrated by our temporal diffusion-based method. Make-A-Video sets the new state-of-the-art in T2V generation. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_3", "text": " Using function-preserving transformations, we extend the spatial layers at the model initialization stage, to include temporal information. The extended spatial-temporal network includes new attention modules that learn temporal world dynamics from a collection of videos. This procedure significantly accelerates the T2V training process by instantaneously transferring the knowledge from a previously trained T2I network to a new T2V one. To enhance the visual quality, we train spatial super-resolution models as well as frame interpolation models. This increases the resolution of the generated videos, as well as enables a higher (controllable) frame rate. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_4", "text": " Our main contributions are: • We present Make-A-Video – an effective method that extends a diffusion-based T2I model to T2V through a spatiotemporally factorized diffusion model. 
• We leverage joint text-image priors to bypass the need for paired text-video data, which in turn allows us to potentially scale to larger quantities of video data. • We present super-resolution strategies in space and time that, for the first time, generate high-definition, high frame-rate videos given a user-provided textual input. • We evaluate Make-A-Video against existing T2V systems and present: (a) State-of-the-art results in quantitative as well as qualitative measures, and (b) A more thorough evaluation than existing literature in T2V. We also collect a test set of 300 prompts for zero-shot T2V human evaluation which we plan to release. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_5", "text": " Text-to-Image Generation.  (Reed et al., 2016) is among the first methods to extend unconditional Generative Adversairal Network (GAN) (Goodfellow et al., 2014) to T2I generation. Later GAN variants have focused on progressive generation (Zhang et al., 2017; Hong et al., 2018), or better text-image alignment (Xu et al., 2018; Zhang et al., 2021). The pioneering work of DALL-E (Ramesh et al., 2021) considers T2I generation as a sequence-to-sequence translation problem using a discrete variational auto-encoder (VQVAE) and Transformer (Vaswani et al., 2017). Additional variants (Ding et al., 2022) have been proposed since then. For example, Make-A-Scene (Gafni et al., 2022) explores controllable T2I generation using semantic maps. Parti (Yu et al., 2022a) aims for more diverse content generation through an encoder-decoder architecture and an improved image tokenizer (Yu et al., 2021). On the other hand, Denoising Diffusion Probabilistic Models (DDPMs) (Ho et al., 2020) are successfully leveraged for T2I generation. GLIDE (Nichol et al., 2021) trained a T2I and an upsampling diffusion model for cascade generation. GLIDE’s proposed classifier-free guidance has been widely adopted in T2I generation to improve image quality and text faithfulness. DALLE-2 (Ramesh et al., 2022) leverages the CLIP (Radford et al., 2021) latent space and a prior model. VQ-diffusion (Gu et al., 2022) and stable diffusion (Rombach et al., 2022) performs T2I generation in the latent space instead of pixel space to improve efficiency. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_6", "text": " Text-to-Video Generation. While there is remarkable progress in T2I generation, the progress of T2V generation lags behind largely due to two main reasons: the lack of large-scale datasets with high-quality text-video pairs, and the complexity of modeling higher-dimensional video data. Early works (Mittal et al., 2017; Pan et al., 2017; Marwah et al., 2017; Li et al., 2018; Gupta et al., 2018; Liu et al., 2019b) are mainly focused on video generation in simple domains, such as moving digits or specific human actions. To our knowledge, Sync-DRAW (Mittal et al., 2017) is the first T2V generation approach that leverages a VAE with recurrent attention. (Pan et al., 2017) and (Li et al., 2018) extend GANs from image generation to T2V generation. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_7", "text": " More recently, GODIVA (Wu et al., 2021a) is the first to use 2D VQVAE and sparse attention for T2V generation supporting more realistic scenes. 
NÜWA (Wu et al., 2021b) extends GODIVA, and presents a unified representation for various generation tasks in a multitask learning scheme. To further improve the performance of T2V generation, CogVideo (Hong et al., 2022) is built on top of a frozen CogView-2 (Ding et al., 2022) T2I model by adding additional temporal attention modules. Video Diffusion Models (VDM) (Ho et al., 2022) uses a space-time factorized U-Net with joint image and video data training. While both CogVideo and VDM collected 10M private text-video pairs for training, our work uses solely open-source datasets, making it easier to reproduce. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_8", "text": " Leveraging Image Priors for Video Generation. Due to the complexity of modeling videos and the challenges in high-quality video data collection, it is natural to consider leveraging image priors for videos to simplifying the learning process. After all, an image is a video with a single frame (Bain et al., 2021). In unconditional video generation, MoCoGAN-HD (Tian et al., 2021) formulates video generation as the task of finding a trajectory in the latent space of a pre-trained and fixed image generation model. In T2V generation, NÜWA (Wu et al., 2021b) combines image and video datasets in a multitask pre-training stage to improve model generalization for fine-tuning. CogVideo (Hong et al., 2022) uses a pre-trained and fixed T2I model for T2V generation with only a small number of trainable parameters to reduce memory usage during training. But the fixed autoencoder and T2I models can be restrictive for T2V generation. The architecture of VDM (Ho et al., 2022) can enable joint image and video generation. However, they sample random independent images from random videos as their source of images, and do not leverage the massive text-image datasets. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_9", "text": " Make-A-Video differs from previous works in several aspects. First, our architecture breaks the dependency on text-video pairs for T2V generation. This is a significant advantage compared to prior work, that has to be restricted to narrow domains (Mittal et al., 2017; Gupta et al., 2018; Ge et al., 2022; Hayes et al., 2022), or require large-scale paired text-video data (Hong et al., 2022; Ho et al., 2022). Second, we fine-tune the T2I model for video generation, gaining the advantage of adapting the model weights effectively, compared to freezing the weights as in CogVideo (Hong et al., 2022). Third, motivated from prior work on efficient architectures for video and 3D vision tasks (Ye et al., 2019; Qiu et al., 2017; Xie et al., 2018), our use of pseudo-3D convolution (Qiu et al., 2017) and temporal attention layers not only better leverage a T2I architecture, it also allows for better temporal information fusion compared to VDM (Ho et al., 2022). ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_10", "text": " Make-A-Video consists of three main components: (i) A base T2I model trained on text-image pairs (Sec. 3.1), (ii) spatiotemporal convolution and attention layers that extend the networks’ building blocks to the temporal dimension (Sec. 
3.2), and (iii) spatiotemporal networks that consist of both spatiotemporal layers, as well as another crucial element needed for T2V generation - a frame interpolation network for high frame rate generation (Sec. 3.3). ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_11", "text": " Make-A-Video’s final T2V inference scheme (depicted in Fig. 2) can be formulated as: yt^=SRh∘SRlt∘↑F∘Dt∘P∘(x^,Cx(x)),\\hat{y_{t}}=\\operatorname{SR}_{h}\\circ\\operatorname{SR}_{l}^{t}\\circ\\uparrow_{F}\\circ\\operatorname{D}^{t}\\circ\\operatorname{P}\\circ(\\hat{x},\\operatorname{C}_{x}(x)), (1) where yt^^subscript𝑦𝑡\\hat{y_{t}} is the generated video, SRh,SRlsubscriptSRℎsubscriptSR𝑙\\operatorname{SR}_{h},\\operatorname{SR}_{l} are the spatial and spatiotemporal super-resolution networks (Sec. 3.2), ↑Fsubscript↑𝐹\\uparrow_{F} is a frame interpolation network (Sec. 3.3), DtsuperscriptD𝑡\\operatorname{D}^{t} is the spatiotemporal decoder (Sec. 3.2), PP\\operatorname{P} is the prior (Sec. 3.1), x^^𝑥\\hat{x} is the BPE-encoded text, CxsubscriptC𝑥\\operatorname{C}_{x} is the CLIP text encoder (Radford et al., 2021), and x𝑥x is the input text. The three main components are described in detail in the following sections. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_12", "text": " Prior to the addition of the temporal components, we train the backbone of our method: a T2I model trained on text-image pairs, sharing the core components with the work of (Ramesh et al., 2022). ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_13", "text": " We use the following networks to produce high-resolution images from text: (i) A prior network PP\\operatorname{\\textbf{P}}, that during inference generates image embeddings yesubscript𝑦𝑒y_{e} given text embeddings xesubscript𝑥𝑒x_{e} and BPE encoded text tokens x^^𝑥\\hat{x}, (ii) a decoder network DD\\operatorname{\\textbf{D}} that generates a low-resolution 64×64646464\\times 64 RGB image y^lsubscript^𝑦𝑙\\hat{y}_{l}, conditioned on the image embeddings yesubscript𝑦𝑒y_{e}, and (iii) two super-resolution networks SRlsubscriptSRl\\operatorname{\\textbf{SR}}_{\\textbf{l}},SRhsubscriptSRh\\operatorname{\\textbf{SR}}_{\\textbf{h}} that increase the generated image y^lsubscript^𝑦𝑙\\hat{y}_{l} resolution to 256×256256256256\\times 256 and 768×768768768768\\times 768 pixels respectively, resulting in the final222We then downsample to 512 using bicubic interpolation for a cleaner aesthetic. Maintaining a clean aesthetic for high definition videos is part of future work. generated image y^^𝑦\\hat{y}. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_14", "text": " In order to expand the two-dimensional (2D) conditional network into the temporal dimension, we modify the two key building blocks that now require not just spatial but also temporal dimensions in order to generate videos: (i) Convolutional layers (Sec. 3.2.1), and (ii) attention layers (Sec. 3.2.2), discussed in the following two subsections. Other layers, such as fully-connected layers, do not require specific handling when adding an additional dimension, as they are agnostic to structured spatial and temporal information. 
Temporal modifications are made in most U-Net-based diffusion networks: the spatiotemporal decoder DtsuperscriptDt\\operatorname{D^{t}} now generating 161616 RGB frames, each of size 64×64646464\\times 64, the newly added frame interpolation network ↑Fsubscript↑𝐹\\uparrow_{F}, increasing the effective frame rate by interpolating between the 161616 generated frames (as depicted in Fig. 2), and the super-resolution networks SRltsuperscriptsubscriptSR𝑙𝑡\\operatorname{SR}_{l}^{t}. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_15", "text": " Note that super resolution involves hallucinating information. In order to not have flickering artifacts, the hallucination must be consistent across frames. As a result, our SRltsuperscriptsubscriptSR𝑙𝑡\\operatorname{SR}_{l}^{t} module operates across spatial and temporal dimensions. In qualitative inspection we found this to significantly outperform per-frame super resolution. It is challenging to extend SRhsubscriptSRℎ\\operatorname{SR}_{h} to the temporal dimension due to memory and compute constraints, as well as a scarcity of high resolution video data. So SRhsubscriptSRℎ\\operatorname{SR}_{h} operates only along the spatial dimensions. But to encourage consistent detail hallucination across frames, we use the same noise initialization for each frame. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_16", "text": " Motivated by separable convolutions (Chollet, 2017), we stack a 1D convolution following each 2D convolutional (conv) layer, as shown in Fig. 3. This facilitates information sharing between the spatial and temporal axes, without succumbing to the heavy computational load of 3D conv layers. In addition, it creates a concrete partition between the pre-trained 2D conv layers and the newly initialized 1D conv layers, allowing us to train the temporal convolutions from scratch, while retaining the previously learned spatial knowledge in the spatial convolutions’ weights. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_17", "text": " Given an input tensor h∈ℝB×C×F×H×Wℎsuperscriptℝ𝐵𝐶𝐹𝐻𝑊h\\in\\mathbb{R}^{B\\times C\\times F\\times H\\times W}, where B𝐵B, C𝐶C, F𝐹F, H𝐻H, W𝑊W are the batch, channels, frames, height, and width dimensions respectively, the Pseudo-3D convolutional layer is defined as: ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_18", "text": " C​o​n​vP​3​D​(h):=C​o​n​v1​D​(C​o​n​v2​D​(h)∘T)∘T,assign𝐶𝑜𝑛subscript𝑣𝑃3𝐷ℎ𝐶𝑜𝑛subscript𝑣1𝐷𝐶𝑜𝑛subscript𝑣2𝐷ℎ𝑇𝑇Conv_{P3D}(h):=Conv_{1D}(Conv_{2D}(h)\\circ T)\\circ T, (2) ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_19", "text": " where the transpose operator ∘Tabsent𝑇\\circ T swaps between the spatial and temporal dimensions. For smooth initialization, while the C​o​n​v2​D𝐶𝑜𝑛subscript𝑣2𝐷Conv_{2D} layer is initialized from the pre-trained T2I model, the C​o​n​v1​D𝐶𝑜𝑛subscript𝑣1𝐷Conv_{1D} layer is initialized as the identity function, enabling a seamless transition from training spatial-only layers, to spatiotemporal layers. Note that at initialization, the network will generate K different images (due to random noise), each faithful to the input text but lacking temporal coherence. 
", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_20", "text": " A crucial component of T2I networks is the attention layer, where in addition to self-attending to extracted features, text information is injected to several network hierarchies, alongside other relevant information, such as the diffusion time-step. While using 3D convolutional layers is computationally heavy, adding the temporal dimension to attention layers is outright infeasible in terms of memory consumption. Inspired by the work of (Ho et al., 2022), we extend our dimension decomposition strategy to attention layers as well. Following each (pre-trained) spatial attention layer, we stack a temporal attention layer, which as with the convolutional layers, approximates a full spatiotemporal attention layer. Specifically, given an input tensor hℎh, we define f​l​a​t​t​e​n𝑓𝑙𝑎𝑡𝑡𝑒𝑛flatten as a matrix operator that flattens the spatial dimension into h′∈RB×C×F×H​Wsuperscriptℎ′superscript𝑅𝐵𝐶𝐹𝐻𝑊h^{\\prime}\\in R^{B\\times C\\times F\\times HW}. u​n​f​l​a​t​t​e​n𝑢𝑛𝑓𝑙𝑎𝑡𝑡𝑒𝑛unflatten is defined as the inverse matrix operator. The Pseudo-3D attention layer therefore is therefore defined as: ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_21", "text": " A​T​T​NP​3​D​(h)=u​n​f​l​a​t​t​e​n​(A​T​T​N1​D​(A​T​T​N2​D​(f​l​a​t​t​e​n​(h))∘T)∘T).𝐴𝑇𝑇subscript𝑁𝑃3𝐷ℎ𝑢𝑛𝑓𝑙𝑎𝑡𝑡𝑒𝑛𝐴𝑇𝑇subscript𝑁1𝐷𝐴𝑇𝑇subscript𝑁2𝐷𝑓𝑙𝑎𝑡𝑡𝑒𝑛ℎ𝑇𝑇ATTN_{P3D}(h)=unflatten(ATTN_{1D}(ATTN_{2D}(flatten(h))\\circ T)\\circ T). (3) ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_22", "text": " Similarly to C​o​n​vP​3​D𝐶𝑜𝑛subscript𝑣𝑃3𝐷Conv_{P3D}, to allow for smooth spatiotemporal initialization, the A​T​T​N2​D𝐴𝑇𝑇subscript𝑁2𝐷ATTN_{2D} layer is initialized from the pre-trained T2I model and the A​T​T​N1​D𝐴𝑇𝑇subscript𝑁1𝐷ATTN_{1D} layer is initialized as the identity function. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_23", "text": " Factorized space-time attention layers have also been used in VDM (Ho et al., 2022) and CogVideo (Hong et al., 2022). CogVideo has added temporal layers to each (frozen) spatial layers whereas we train them jointly. In order to force their network to train for images and videos interchangeably, VDM has extended their 2D U-Net to 3D through unflattened 1x3x3 convolution filters, such that the subsequent spatial attention remains 2D, and added 1D temporal attention through relative position embeddings. In contrast, we apply an additional 3x1x1 convolution projection (after each 1x3x3) such that the temporal information will also be passed through each convolution layer. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_24", "text": " Frame rate conditioning. In addition to the T2I conditionings, similar to CogVideo (Hong et al., 2022), we add an additional conditioning parameter f​p​s𝑓𝑝𝑠fps, representing the number of frames-per-second in a generated video. Conditioning on a varying number of frames-per-second, enables an additional augmentation method to tackle the limited volume of available videos at training time, and provides additional control on the generated video at inference time. 
", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_25", "text": " In addition to the spatiotemporal modifications discussed in Sec. 3.2, we train a new masked frame interpolation and extrapolation network ↑Fsubscript↑𝐹\\uparrow_{F}, capable of increasing the number of frames of the generated video either by frame interpolation for a smoother generated video, or by pre/post frame extrapolation for extending the video length. In order to increase the frame rate within memory and compute constraints, we fine-tune a spatiotemporal decoder DtsuperscriptDt\\operatorname{D^{t}} on the task of masked frame interpolation, by zero-padding the masked input frames, enabling video upsampling. When fine-tuning on masked frame interpolation, we add an additional 4 channels to the input of the U-Net: 3 channels for the RGB masked video input and an additional binary channel indicating which frames are masked. We fine-tune with variable frame-skips and f​p​s𝑓𝑝𝑠fps conditioning to enable multiple temporal upsample rates at inference time. We denote ↑Fsubscript↑𝐹\\uparrow_{F} as the operator that expands the given video tensor through masked frame interpolation. For all of our experiments we applied ↑Fsubscript↑𝐹\\uparrow_{F} with frame skip 5 to upsample a 16 frame video to 76 frames ((16-1)×\\times5+1). Note that we can use the same architecture for video extrapolation or image animation by masking frames at the beginning or end of a video. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_26", "text": " The different components of Make-A-Video described above are trained independently. The only component that receives text as input is the prior PP\\operatorname{P}. We train it on paired text-image data and do not fine-tune it on videos. The decoder, prior, and two super-resolution components are first trained on images alone (no aligned text). Recall that the decoder receives CLIP image embedding as input, and the super-resolution components receive downsampled images as input during training. After training on images, we add and initialize the new temporal layers and fine-tune them over unlabeled video data. 16 frames are sampled from the original video with random f​p​s𝑓𝑝𝑠fps ranging from 111 to 303030. We use the beta function for sampling and while training the decoder, start from higher FPS ranges (less motion) and then transition to lower FPS ranges (more motion). The masked-frame-interpolation component is fine-tuned from the temporal decoder. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_27", "text": " Datasets. To train the image models, we use a 2.32.32.3B subset of the dataset from  (Schuhmann et al., ) where the text is English. We filter out sample pairs with NSFW images 333We used this model: https://github.com/GantMan/nsfw_model, toxic words in the text, or images with a watermark probability larger than 0.50.50.5. We use WebVid-10M (Bain et al., 2021) and a 101010M subset from HD-VILA-100M (Xue et al., 2022) 444These 100100100M clips are sourced from 3.13.13.1M videos. We randomly downloaded 333 clips per video to form our HD-VILA-10M subset. to train our video generation models. Note that only the videos (no aligned text) are used. The decoder DtsuperscriptD𝑡\\operatorname{D}^{t} and the interpolation model is trained on WebVid-10M. 
SR_l^t is trained on both WebVid-10M and HD-VILA-10M. While prior work (Hong et al., 2022; Ho et al., 2022) have collected private text-video pairs for T2V generation, we use only public datasets (and no paired text for videos). We conduct automatic evaluation on UCF-101 (Soomro et al., 2012) and MSR-VTT (Xu et al., 2016) in a zero-shot setting. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_28", "text": " Automatic Metrics. For UCF-101, we write one template sentence for each class (without generating any video) and fix it for evaluation. We report Frechet Video Distance (FVD) and Inception Score (IS) on 10K samples following (Ho et al., 2022). We generate samples that follow the same class distribution as the training set. For MSR-VTT, we report Frechet Inception Distance (FID) (Parmar et al., 2022) and CLIPSIM (average CLIP similarity between video frames and text) (Wu et al., 2021a), where all 59,794 captions from the test set are used, following (Wu et al., 2021b). ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_29", "text": " Human Evaluation Set and Metrics. We collect an evaluation set from Amazon Mechanical Turk (AMT) that consists of 300 prompts. We asked annotators what they would be interested in generating if there were a T2V system. We filtered out prompts that were incomplete (e.g., “jump into water”), too abstract (e.g., “climate change”), or offensive. We then identified 5 categories (animals, fantasy, people, nature and scenes, food and beverage) and selected prompts for these categories. These prompts were selected without generating any videos for them, and were kept fixed. In addition, we also used the DrawBench prompts from Imagen (Saharia et al., 2022) for human evaluation. We evaluate video quality and text-video faithfulness. For video quality, we show two videos in random order and ask annotators which one is of higher quality. For faithfulness, we additionally show the text and ask annotators which video has a better correspondence with the text (we suggest them to ignore quality issues). In addition, we also conducted human evaluation to compare video motion realism of our interpolation model and FILM (Reda et al., 2022). For each comparison, we use the majority vote from 5 different annotators as the final result. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_30", "text": " Automatic Evaluation on MSR-VTT. In addition to GODIVA and NÜWA that report on MSR-VTT, we also perform inference on the officially released CogVideo model with both Chinese and English inputs for comparison. For CogVideo and Make-A-Video, we only generate one sample for each prompt in a zero-shot setting. We only generate videos that are at 16×256×256 as the evaluation models do not expect higher resolutions and frame rate. The results are shown in Table 1. Make-A-Video’s zero-shot performance is much better than GODIVA and NÜWA which are trained on MSR-VTT. We also outperform CogVideo in both Chinese and English settings. Thus, Make-A-Video has significantly better generalization capabilities than prior work. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_31", "text": " Automatic Evaluation on UCF-101. 
UCF-101 is a popular benchmark to evaluate video generation and has been recently used in T2V models. CogVideo performed finetuning of their pretrained model for class-conditional video generation. VDM (Ho et al., 2022) performed unconditional video generation and trained from scratch on UCF-101. We argue that both settings are not ideal and is not a direct evaluation of the T2V generation capabilities. Moreover, the FVD evaluation model expects the videos to be 0.50.50.5 second (161616 frames), which is too short to be used for video generation in practice. Nevertheless, in order to compare to prior work, we conducted evaluation on UCF-101 in both zero-shot and finetuning settings. As shown in Table 2, Make-A-Video’s zero-shot performance is already competitive than other approaches that are trained on UCF-101, and is much better than CogVideo, which indicates that Make-A-Video can generalize better even to such a specific domain. Our finetuning setting achieves state-of-the-art results with a significant reduction in FVD, which suggests that Make-A-Video can generate more coherent videos than prior work. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_32", "text": " Human Evaluation. We compare to CogVideo (the only public zero-shot T2V generation model) on DrawBench and our test set. We also evaluate on the 282828 videos shown on the webpage of VDM (Ho et al., 2022) (which may be biased towards showcasing the model’s strengths). Since this is a very small test set, we randomly generate 888 videos for each input and perform evaluation 888 times and report the average results. We generate videos at 76×256×2567625625676\\times 256\\times 256 resolution for human evaluation. The results are shown in Table 3. Make-A-Video achieves much better performance in both video quality and text-video faithfulness in all benchmarks and comparisons. For CogVideo, the results are similar on DrawBench and our evaluation set. For VDM, it is worth noting that we have achieved significantly better results without any cherry-picking. We also evaluate our frame interpolation network in comparison to FILM (Reda et al., 2022). We first generate low frame rate videos (1 FPS) from text prompts in DrawBench and our evaluation set, then use each method to upsample to 4 FPS. Raters choose our method for more realistic motion 62% of the time on our evaluation set and 54% of the time on DrawBench. We observe that our method excels when there are large differences between frames where having real-world knowledge of how objects move is crucial. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_33", "text": " Examples of Make-A-Video’s generations are shown in Figure 1. In this section, we will show T2V generation comparison to CogVideo (Hong et al., 2022) and VDM (Ho et al., 2022), and video interpolation comparison to FILM (Reda et al., 2022). In addition, our models can be used for a variety of other tasks such as image animation, video variation, etc. Due to space constraint, we only show a single example of each. Figure 4 (a) shows the comparison of Make-A-Video to CogVideo and VDM. Make-A-Video can generate richer content with motion consistency and text correspondence. 
Figure 4 (b) shows an example of image animation where we condition the masked frame interpolation and extrapolation network ↑Fsubscript↑𝐹\\uparrow_{F} on the image and CLIP image embedding to extrapolate the rest of the video. This allows a user to generate a video using their own image – giving them the opportunity to personalize and directly control the generated video. Figure 4 (c) shows a comparison of our approach to FILM (Reda et al., 2022) on the task of interpolation between two images. We achieve this by using the interpolation model that takes the two images as the beginning and end frames and masks 141414 frames in between for generation. Our model generates more semantically meaningful interpolation while FILM seems to primarily smoothly transition between frames without semantic real-world understanding of what is moving. Figure 4 (d) shows an example for video variation. We take the average CLIP embedding of all frames from a video as the condition to generate a semantically similar video. More video generation examples and applications can be found here: make-a-video.github.io. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_34", "text": " Learning from the world around us is one of the greatest strengths of human intelligence. Just as we quickly learn to recognize people, places, things, and actions through observation, generative systems will be more creative and useful if they can mimic the way humans learn. Learning world dynamics from orders of magnitude more videos using unsupervised learning helps researchers break away from the reliance on labeled data. The presented work has shown how labeled images combined effectively with unlabeled video footage can achieve that. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_35", "text": " As a next step we plan to address several of the technical limitations. As discussed earlier, our approach can not learn associations between text and phenomenon that can only be inferred in videos. How to incorporate these (e.g., generating a video of a person waving their hand left-to-right or right-to-left), along with generating longer videos, with multiple scenes and events, depicting more detailed stories, is left for future work. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_36", "text": " As with all large-scale models trained on data from the web, our models have learnt and likely exaggerated social biases, including harmful ones. Our T2I generation model was trained on data that removed NSFW content and toxic words. All our data (image as well as videos) is publicly available, adding a layer of transparency to our models, and making it possible for the community to reproduce our work. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_37", "text": " Mustafa Said Mehmetoglu, Jacob Xu, Katayoun Zand, Jia-Bin-Huang, Jiebo Luo, Shelly Sheynin, Angela Fan, Kelly Freed. Thank you for your contributions! ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" } ]
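The pseudo-3D factorization described in the contexts above (a 2D spatial convolution followed by a 1D temporal convolution initialized to the identity, so that Conv_P3D(h) = Conv_1D(Conv_2D(h) ∘ T) ∘ T) can be sketched as follows. This is a minimal illustration rather than the authors' implementation; it assumes PyTorch is available, and the class name PseudoConv3d is hypothetical.

```python
# Minimal sketch of the pseudo-3D convolution described above: a 2D spatial
# convolution applied per frame, followed by a 1D temporal convolution applied
# per pixel location. The temporal convolution is initialized as the identity
# so that, at initialization, the module behaves exactly like the spatial
# layer on every frame. Hypothetical illustration (assumes PyTorch); this is
# not the authors' code.
import torch
import torch.nn as nn


class PseudoConv3d(nn.Module):  # hypothetical name
    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2
        # Spatial (k x k) convolution, playing the role of the pre-trained 2D layer.
        self.spatial = nn.Conv2d(channels, channels, kernel_size, padding=pad)
        # Temporal (k) convolution over the frame axis, trained from scratch.
        self.temporal = nn.Conv1d(channels, channels, kernel_size, padding=pad)
        # Identity (Dirac) initialization: output equals input at the start.
        nn.init.dirac_(self.temporal.weight)
        nn.init.zeros_(self.temporal.bias)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h has shape (B, C, F, H, W): batch, channels, frames, height, width.
        b, c, f, height, width = h.shape
        # Fold frames into the batch dimension and apply the 2D spatial conv.
        x = h.permute(0, 2, 1, 3, 4).reshape(b * f, c, height, width)
        x = self.spatial(x)
        # "Transpose" spatial and temporal axes: fold pixels into the batch
        # dimension and apply the 1D conv along the frame axis.
        x = x.reshape(b, f, c, height, width).permute(0, 3, 4, 2, 1)
        x = x.reshape(b * height * width, c, f)
        x = self.temporal(x)
        # Restore the original (B, C, F, H, W) layout.
        x = x.reshape(b, height, width, c, f).permute(0, 3, 4, 1, 2)
        return x


if __name__ == "__main__":
    layer = PseudoConv3d(channels=8)
    video = torch.randn(2, 8, 16, 64, 64)  # 16 frames of 64x64 feature maps
    print(layer(video).shape)  # torch.Size([2, 8, 16, 64, 64])
```

Because the temporal convolution starts as the identity, the module initially reproduces the spatial layer frame by frame, matching the described seamless transition from spatial-only to spatiotemporal training.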
Do α and β control the strength of the length normalization and the strength of the coverage penalty, respectively?
Yes. α controls the strength of the length normalization and β controls the strength of the coverage penalty. The authors found that both α and β are less effective for models with RL refinement, and their formulation improves on the original heuristic of simply dividing by the length raised to the power of α, with 0 < α < 1, where α ∈ [0.6 − 0.7] (tuned on the development set) was usually found to be best [57].
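As a rough illustration of the heuristic mentioned in the answer above (not the full GNMT scoring function, which additionally applies a β-weighted coverage penalty over attention weights), the sketch below divides a hypothesis' summed log-probability by its length raised to the power α. The hypotheses and numbers are made up for illustration.

```python
# Minimal sketch of length normalization in beam rescoring: the summed
# log-probability of a hypothesis is divided by length**alpha, 0 < alpha < 1,
# with alpha around 0.6-0.7 typically reported as best on a development set.
# Illustrative only; the coverage penalty weighted by beta is omitted here.

def length_normalized_score(log_prob: float, length: int, alpha: float = 0.6) -> float:
    """Divide the summed log-probability by length**alpha."""
    return log_prob / (length ** alpha)


# Two hypothetical hypotheses: without normalization the short one scores
# higher (-4.0 > -6.0); with alpha = 0.6 the longer one is preferred
# (-6.0 / 12**0.6 ~ -1.35 vs -4.0 / 5**0.6 ~ -1.52).
hypotheses = {"short": (5, -4.0), "longer": (12, -6.0)}
for name, (length, log_prob) in hypotheses.items():
    print(name, round(length_normalized_score(log_prob, length), 2))
```

Setting α = 0 recovers the unnormalized score, which tends to favor shorter hypotheses.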
[ 57 ]
[ { "id": "1609.08144_all_0", "text": " Neural Machine Translation (NMT) (41, 2) has recently been introduced as a promising approach with the potential of addressing many shortcomings of traditional machine translation systems. The strength of NMT lies in its ability to learn directly, in an end-to-end fashion, the mapping from input text to associated output text. Its architecture typically consists of two recurrent neural networks (RNNs), one to consume the input text sequence and one to generate translated output text. NMT is often accompanied by an attention mechanism  which helps it cope effectively with long input sequences. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_1", "text": " An advantage of Neural Machine Translation is that it sidesteps many brittle design choices in traditional phrase-based machine translation . In practice, however, NMT systems used to be worse in accuracy than phrase-based translation systems, especially when training on very large-scale datasets as used for the very best publicly available translation systems. Three inherent weaknesses of Neural Machine Translation are responsible for this gap: its slower training and inference speed, ineffectiveness in dealing with rare words, and sometimes failure to translate all words in the source sentence. Firstly, it generally takes a considerable amount of time and computational resources to train an NMT system on a large-scale translation dataset, thus slowing the rate of experimental turnaround time and innovation. For inference they are generally much slower than phrase-based systems due to the large number of parameters used. Secondly, NMT lacks robustness in translating rare words. Though this can be addressed in principle by training a “copy model” to mimic a traditional alignment model , or by using the attention mechanism to copy rare words , these approaches are both unreliable at scale, since the quality of the alignments varies across languages, and the latent alignments produced by the attention mechanism are unstable when the network is deep. Also, simple copying may not always be the best strategy to cope with rare words, for example when a transliteration is more appropriate. Finally, NMT systems sometimes produce output sentences that do not translate all parts of the input sentence – in other words, they fail to completely “cover” the input, which can result in surprising translations. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_2", "text": " This work presents the design and implementation of GNMT, a production NMT system at Google, that aims to provide solutions to the above problems. In our implementation, the recurrent networks are Long Short-Term Memory (LSTM) RNNs (23, 17). Our LSTM RNNs have 8 layers, with residual connections between layers to encourage gradient flow . For parallelism, we connect the attention from the bottom layer of the decoder network to the top layer of the encoder network. To improve inference time, we employ low-precision arithmetic for inference, which is further accelerated by special hardware (Google’s Tensor Processing Unit, or TPU). To effectively deal with rare words, we use sub-word units (also known as “wordpieces”) for inputs and outputs in our system. 
Using wordpieces gives a good balance between the flexibility of single characters and the efficiency of full words for decoding, and also sidesteps the need for special treatment of unknown words. Our beam search technique includes a length normalization procedure to deal efficiently with the problem of comparing hypotheses of different lengths during decoding, and a coverage penalty to encourage the model to translate all of the provided input. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_3", "text": " Our implementation is robust, and performs well on a range of datasets across many pairs of languages without the need for language-specific adjustments. Using the same implementation, we are able to achieve results comparable to or better than previous state-of-the-art systems on standard benchmarks, while delivering great improvements over Google’s phrase-based production translation system. Specifically, on WMT’14 English-to-French, our single model scores 38.95 BLEU, an improvement of 7.5 BLEU from a single model without an external alignment model reported in  and an improvement of 1.2 BLEU from a single model without an external alignment model reported in . Our single model is also comparable to a single model in , while not making use of any alignment model as being used in . Likewise on WMT’14 English-to-German, our single model scores 24.17 BLEU, which is 3.4 BLEU better than a previous competitive baseline . On production data, our implementation is even more effective. Human evaluations show that GNMT has reduced translation errors by 60% compared to our previous phrase-based system on many pairs of languages: English ↔↔\\leftrightarrow French, English ↔↔\\leftrightarrow Spanish, and English ↔↔\\leftrightarrow Chinese. Additional experiments suggest the quality of the resulting translation system gets closer to that of average human translators. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_4", "text": " Statistical Machine Translation (SMT) has been the dominant translation paradigm for decades (3, 4, 5). Practical implementations of SMT are generally phrase-based systems (PBMT) which translate sequences of words or phrases where the lengths may differ . ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_5", "text": " Even prior to the advent of direct Neural Machine Translation, neural networks have been used as a component within SMT systems with some success. Perhaps one of the most notable attempts involved the use of a joint language model to learn phrase representations  which yielded an impressive improvement when combined with phrase-based translation. This approach, however, still makes use of phrase-based translation systems at its core, and therefore inherits their shortcomings. Other proposed approaches for learning phrase representations  or learning end-to-end translation with neural networks  offered encouraging hints, but ultimately delivered worse overall accuracy compared to standard phrase-based systems. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_6", "text": " The concept of end-to-end learning for machine translation has been attempted in the past (e.g., ) with limited success. 
Following seminal papers in the area (41, 2), NMT translation quality has crept closer to the level of phrase-based translation systems for common research benchmarks. Perhaps the first successful attempt at surpassing phrase-based translation was described in . On WMT’14 English-to-French, this system achieved a 0.5 BLEU improvement compared to a state-of-the-art phrase-based system. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_7", "text": " Since then, many novel techniques have been proposed to further improve NMT: using an attention mechanism to deal with rare words , a mechanism to model translation coverage , multi-task and semi-supervised training to incorporate more data (14, 29), a character decoder , a character encoder , subword units  also to deal with rare word outputs, different kinds of attention mechanisms , and sentence-level loss minimization (39, 34). While the translation accuracy of these systems has been encouraging, systematic comparison with large scale, production quality phrase-based translation systems has been lacking. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_8", "text": " Our model (see Figure 1) follows the common sequence-to-sequence learning framework  with attention . It has three components: an encoder network, a decoder network, and an attention network. The encoder transforms a source sentence into a list of vectors, one vector per input symbol. Given this list of vectors, the decoder produces one symbol at a time, until the special end-of-sentence symbol (EOS) is produced. The encoder and decoder are connected through an attention module which allows the decoder to focus on different regions of the source sentence during the course of decoding. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_9", "text": " For notation, we use bold lower case to denote vectors (e.g., 𝐯,𝐨𝐢𝐯subscript𝐨𝐢\\mathbf{v,o_{i}}), bold upper case to represent matrices (e.g., 𝐔,𝐖𝐔𝐖\\mathbf{U,W}), cursive upper case to represent sets (e.g., 𝒱,𝒯𝒱𝒯\\mathscr{V,T}), capital letters to represent sequences (e.g. X𝑋X, Y𝑌Y), and lower case to represent individual symbols in a sequence, (e.g., x1subscript𝑥1x_{1}, x2subscript𝑥2x_{2}). ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_10", "text": " Let (X,Y)𝑋𝑌(X,Y) be a source and target sentence pair. Let X=x1,x2,x3,…,xM𝑋subscript𝑥1subscript𝑥2subscript𝑥3…subscript𝑥𝑀X=x_{1},x_{2},x_{3},...,x_{M} be the sequence of M𝑀M symbols in the source sentence and let Y=y1,y2,y3,…,yN𝑌subscript𝑦1subscript𝑦2subscript𝑦3…subscript𝑦𝑁Y=y_{1},y_{2},y_{3},...,y_{N} be the sequence of N𝑁N symbols in the target sentence. The encoder is simply a function of the following form: ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_11", "text": " 𝐱𝟏,𝐱𝟐,…,𝐱𝐌=E​n​c​o​d​e​r​R​N​N​(x1,x2,x3,…,xM)subscript𝐱1subscript𝐱2…subscript𝐱𝐌𝐸𝑛𝑐𝑜𝑑𝑒𝑟𝑅𝑁𝑁subscript𝑥1subscript𝑥2subscript𝑥3…subscript𝑥𝑀\\mathbf{x_{1},x_{2},...,x_{M}}=EncoderRNN(x_{1},x_{2},x_{3},...,x_{M}) (1) In this equation, 𝐱𝟏,𝐱𝟐,…,𝐱𝐌subscript𝐱1subscript𝐱2…subscript𝐱𝐌\\mathbf{x_{1},x_{2},...,x_{M}} is a list of fixed size vectors. 
The number of members in the list is the same as the number of symbols in the source sentence (M𝑀M in this example). Using the chain rule the conditional probability of the sequence P​(Y|X)𝑃conditional𝑌𝑋P(Y|X) can be decomposed as: ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_12", "text": " P​(Y|X)=P​(Y|𝐱𝟏,𝐱𝟐,𝐱𝟑,…,𝐱𝐌)=∏i=1NP​(yi|y0,y1,y2,…,yi−1;𝐱𝟏,𝐱𝟐,𝐱𝟑,…,𝐱𝐌)𝑃conditional𝑌𝑋𝑃conditional𝑌subscript𝐱1subscript𝐱2subscript𝐱3…subscript𝐱𝐌superscriptsubscriptproduct𝑖1𝑁𝑃conditionalsubscript𝑦𝑖subscript𝑦0subscript𝑦1subscript𝑦2…subscript𝑦𝑖1subscript𝐱1subscript𝐱2subscript𝐱3…subscript𝐱𝐌\\begin{split}P(Y|X)&=P(Y|\\mathbf{x_{1},x_{2},x_{3},...,x_{M}})\\\\ &=\\prod_{i=1}^{N}P(y_{i}|y_{0},y_{1},y_{2},...,y_{i-1};\\mathbf{x_{1},x_{2},x_{3},...,x_{M}})\\end{split} (2) where y0subscript𝑦0y_{0} is a special “beginning of sentence” symbol that is prepended to every target sentence. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_13", "text": " During inference we calculate the probability of the next symbol given the source sentence encoding and the decoded target sequence so far: P​(yi|y0,y1,y2,y3,…,yi−1;𝐱𝟏,𝐱𝟐,𝐱𝟑,…,𝐱𝐌)𝑃conditionalsubscript𝑦𝑖subscript𝑦0subscript𝑦1subscript𝑦2subscript𝑦3…subscript𝑦𝑖1subscript𝐱1subscript𝐱2subscript𝐱3…subscript𝐱𝐌P(y_{i}|y_{0},y_{1},y_{2},y_{3},...,y_{i-1};\\mathbf{x_{1}},\\mathbf{x_{2}},\\mathbf{x_{3}},...,\\mathbf{x_{M}}) (3) Our decoder is implemented as a combination of an RNN network and a softmax layer. The decoder RNN network produces a hidden state 𝐲𝐢subscript𝐲𝐢\\mathbf{y_{i}} for the next symbol to be predicted, which then goes through the softmax layer to generate a probability distribution over candidate output symbols. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_14", "text": " In our experiments we found that for NMT systems to achieve good accuracy, both the encoder and decoder RNNs have to be deep enough to capture subtle irregularities in the source and target languages. This observation is similar to previous observations that deep LSTMs significantly outperform shallow LSTMs . In that work, each additional layer reduced perplexity by nearly 10%. Similar to , we use a deep stacked Long Short Term Memory (LSTM)  network for both the encoder RNN and the decoder RNN. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_15", "text": " Our attention module is similar to . More specifically, let 𝐲i−1subscript𝐲𝑖1\\mathbf{y}_{i-1} be the decoder-RNN output from the past decoding time step (in our implementation, we use the output from the bottom decoder layer). Attention context 𝐚isubscript𝐚𝑖\\mathbf{a}_{i} for the current time step is computed according to the following formulas: st=A​t​t​e​n​t​i​o​n​F​u​n​c​t​i​o​n​(𝐲i−1,𝐱t)∀t,1≤t≤Mpt=exp⁡(st)/∑t=1Mexp⁡(st)∀t,1≤t≤M𝐚i=∑t=1Mpt.𝐱t\\begin{split}s_{t}&=AttentionFunction(\\mathbf{y}_{i-1},\\mathbf{x}_{t})\\quad\\forall t,\\quad 1\\leq t\\leq M\\\\ p_{t}&=\\exp(s_{t})/\\sum_{t=1}^{M}\\exp(s_{t})\\quad\\quad\\forall t,\\quad 1\\leq t\\leq M\\\\ \\mathbf{a}_{i}&=\\sum_{t=1}^{M}p_{t}.\\mathbf{x}_{t}\\end{split} (4) where A​t​t​e​n​t​i​o​n​F​u​n​c​t​i​o​n𝐴𝑡𝑡𝑒𝑛𝑡𝑖𝑜𝑛𝐹𝑢𝑛𝑐𝑡𝑖𝑜𝑛AttentionFunction in our implementation is a feed forward network with one hidden layer. 
", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_16", "text": " As mentioned above, deep stacked LSTMs often give better accuracy over shallower models. However, simply stacking more layers of LSTM works only to a certain number of layers, beyond which the network becomes too slow and difficult to train, likely due to exploding and vanishing gradient problems (33, 22). In our experience with large-scale translation tasks, simple stacked LSTM layers work well up to 4 layers, barely with 6 layers, and very poorly beyond 8 layers. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_17", "text": " Motivated by the idea of modeling differences between an intermediate layer’s output and the targets, which has shown to work well for many projects in the past (16, 21, 40), we introduce residual connections among the LSTM layers in a stack (see Figure 2). More concretely, let LSTMisubscriptLSTM𝑖\\mathrm{LSTM}_{i} and LSTMi+1subscriptLSTM𝑖1\\mathrm{LSTM}_{i+1} be the i𝑖i-th and (i+1)𝑖1(i+1)-th LSTM layers in a stack, whose parameters are 𝐖isuperscript𝐖𝑖\\mathbf{W}^{i} and 𝐖i+1superscript𝐖𝑖1\\mathbf{W}^{i+1} respectively. At the t𝑡t-th time step, for the stacked LSTM without residual connections, we have: ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_18", "text": " 𝐜ti,𝐦ti=LSTMi​(𝐜t−1i,𝐦t−1i,𝐱ti−1;𝐖i)𝐱ti=𝐦ti𝐜ti+1,𝐦ti+1=LSTMi+1​(𝐜t−1i+1,𝐦t−1i+1,𝐱ti;𝐖i+1)formulae-sequencesuperscriptsubscript𝐜𝑡𝑖superscriptsubscript𝐦𝑡𝑖subscriptLSTM𝑖superscriptsubscript𝐜𝑡1𝑖superscriptsubscript𝐦𝑡1𝑖superscriptsubscript𝐱𝑡𝑖1superscript𝐖𝑖superscriptsubscript𝐱𝑡𝑖superscriptsubscript𝐦𝑡𝑖superscriptsubscript𝐜𝑡𝑖1superscriptsubscript𝐦𝑡𝑖1subscriptLSTM𝑖1superscriptsubscript𝐜𝑡1𝑖1superscriptsubscript𝐦𝑡1𝑖1superscriptsubscript𝐱𝑡𝑖superscript𝐖𝑖1\\begin{split}\\mathbf{c}_{t}^{i},\\mathbf{m}_{t}^{i}&=\\mathrm{LSTM}_{i}(\\mathbf{c}_{t-1}^{i},\\mathbf{m}_{t-1}^{i},\\mathbf{x}_{t}^{i-1};\\mathbf{W}^{i})\\\\ \\mathbf{x}_{t}^{i}&=\\mathbf{m}_{t}^{i}\\\\ \\mathbf{c}_{t}^{i+1},\\mathbf{m}_{t}^{i+1}&=\\mathrm{LSTM}_{i+1}(\\mathbf{c}_{t-1}^{i+1},\\mathbf{m}_{t-1}^{i+1},\\mathbf{x}_{t}^{i};\\mathbf{W}^{i+1})\\end{split} (5) where 𝐱tisuperscriptsubscript𝐱𝑡𝑖\\mathbf{x}_{t}^{i} is the input to LSTMisubscriptLSTM𝑖\\mathrm{LSTM}_{i} at time step t𝑡t, and 𝐦tisuperscriptsubscript𝐦𝑡𝑖\\mathbf{m}_{t}^{i} and 𝐜tisuperscriptsubscript𝐜𝑡𝑖\\mathbf{c}_{t}^{i} are the hidden states and memory states of LSTMisubscriptLSTM𝑖\\mathrm{LSTM}_{i} at time step t𝑡t, respectively. 
", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_19", "text": " With residual connections between LSTMisubscriptLSTM𝑖\\mathrm{LSTM}_{i} and LSTMi+1subscriptLSTM𝑖1\\mathrm{LSTM}_{i+1}, the above equations become: ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_20", "text": " 𝐜ti,𝐦ti=LSTMi​(𝐜t−1i,𝐦t−1i,𝐱ti−1;𝐖i)𝐱ti=𝐦ti+𝐱ti−1𝐜ti+1,𝐦ti+1=LSTMi+1​(𝐜t−1i+1,𝐦t−1i+1,𝐱ti;𝐖i+1)formulae-sequencesuperscriptsubscript𝐜𝑡𝑖superscriptsubscript𝐦𝑡𝑖subscriptLSTM𝑖superscriptsubscript𝐜𝑡1𝑖superscriptsubscript𝐦𝑡1𝑖superscriptsubscript𝐱𝑡𝑖1superscript𝐖𝑖superscriptsubscript𝐱𝑡𝑖superscriptsubscript𝐦𝑡𝑖superscriptsubscript𝐱𝑡𝑖1superscriptsubscript𝐜𝑡𝑖1superscriptsubscript𝐦𝑡𝑖1subscriptLSTM𝑖1superscriptsubscript𝐜𝑡1𝑖1superscriptsubscript𝐦𝑡1𝑖1superscriptsubscript𝐱𝑡𝑖superscript𝐖𝑖1\\begin{split}\\mathbf{c}_{t}^{i},\\mathbf{m}_{t}^{i}&=\\mathrm{LSTM}_{i}(\\mathbf{c}_{t-1}^{i},\\mathbf{m}_{t-1}^{i},\\mathbf{x}_{t}^{i-1};\\mathbf{W}^{i})\\\\ \\mathbf{x}_{t}^{i}&=\\mathbf{m}_{t}^{i}+\\mathbf{x}_{t}^{i-1}\\\\ \\mathbf{c}_{t}^{i+1},\\mathbf{m}_{t}^{i+1}&=\\mathrm{LSTM}_{i+1}(\\mathbf{c}_{t-1}^{i+1},\\mathbf{m}_{t-1}^{i+1},\\mathbf{x}_{t}^{i};\\mathbf{W}^{i+1})\\end{split} (6) Residual connections greatly improve the gradient flow in the backward pass, which allows us to train very deep encoder and decoder networks. In most of our experiments, we use 8 LSTM layers for the encoder and decoder, though residual connections can allow us to train substantially deeper networks (similar to what was observed in ). ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_21", "text": " For translation systems, the information required to translate certain words on the output side can appear anywhere on the source side. Often the source side information is approximately left-to-right, similar to the target side, but depending on the language pair the information for a particular output word can be distributed and even be split up in certain regions of the input side. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_22", "text": " To have the best possible context at each point in the encoder network it makes sense to use a bi-directional RNN  for the encoder, which was also used in . To allow for maximum possible parallelization during computation (to be discussed in more detail in section 3.3), bi-directional connections are only used for the bottom encoder layer – all other encoder layers are uni-directional. Figure 3 illustrates our use of bi-directional LSTMs at the bottom encoder layer. The layer LSTMfsubscriptLSTM𝑓\\mathrm{LSTM}_{f} processes the source sentence from left to right, while the layer LSTMbsubscriptLSTM𝑏\\mathrm{LSTM}_{b} processes the source sentence from right to left. Outputs from LSTMfsubscriptLSTM𝑓\\mathrm{LSTM}_{f} (𝐱𝐭𝐟→→superscriptsubscript𝐱𝐭𝐟\\overrightarrow{\\mathbf{x_{t}^{f}}}) and LSTMbsubscriptLSTM𝑏\\mathrm{LSTM}_{b} (𝐱𝐭𝐛←←superscriptsubscript𝐱𝐭𝐛\\overleftarrow{\\mathbf{x_{t}^{b}}}) are first concatenated and then fed to the next layer LSTM1subscriptLSTM1\\mathrm{LSTM}_{1}. 
", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_23", "text": " Due to the complexity of our model, we make use of both model parallelism and data parallelism to speed up training. Data parallelism is straightforward: we train n𝑛n model replicas concurrently using a Downpour SGD algorithm . The n𝑛n replicas all share one copy of model parameters, with each replica asynchronously updating the parameters using a combination of Adam and SGD algorithms. In our experiments, n𝑛n is often around 10. Each replica works on a mini-batch of m𝑚m sentence pairs at a time, which is often 128 in our experiments. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_24", "text": " In addition to data parallelism, model parallelism is used to improve the speed of the gradient computation on each replica. The encoder and decoder networks are partitioned along the depth dimension and are placed on multiple GPUs, effectively running each layer on a different GPU. Since all but the first encoder layer are uni-directional, layer i+1𝑖1i+1 can start its computation before layer i𝑖i is fully finished, which improves training speed. The softmax layer is also partitioned, with each partition responsible for a subset of symbols in the output vocabulary. Figure 1 shows more details of how partitioning is done. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_25", "text": " Model parallelism places certain constraints on the model architectures we can use. For example, we cannot afford to have bi-directional LSTM layers for all the encoder layers, since doing so would reduce parallelism among subsequent layers, as each layer would have to wait until both forward and backward directions of the previous layer have finished. This would effectively constrain us to make use of only 2 GPUs in parallel (one for the forward direction and one for the backward direction). For the attention portion of the model, we chose to align the bottom decoder output to the top encoder output to maximize parallelism when running the decoder network. Had we aligned the top decoder layer to the top encoder layer, we would have removed all parallelism in the decoder network and would not benefit from using more than one GPU for decoding. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_26", "text": " Neural Machine Translation models often operate with fixed word vocabularies even though translation is fundamentally an open vocabulary problem (names, numbers, dates etc.). There are two broad categories of approaches to address the translation of out-of-vocabulary (OOV) words. One approach is to simply copy rare words from source to target (as most rare words are names or numbers where the correct translation is just a copy), either based on the attention model , using an external alignment model , or even using a more complicated special purpose pointing network . Another broad category of approaches is to use sub-word units, e.g., chararacters , mixed word/characters , or more intelligent sub-words . 
", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_27", "text": " Our most successful approach falls into the second category (sub-word units), and we adopt the wordpiece model (WPM) implementation initially developed to solve a Japanese/Korean segmentation problem for the Google speech recognition system . This approach is completely data-driven and guaranteed to generate a deterministic segmentation for any possible sequence of characters. It is similar to the method used in to deal with rare words in Neural Machine Translation. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_28", "text": " For processing arbitrary words, we first break words into wordpieces given a trained wordpiece model. Special word boundary symbols are added before training of the model such that the original word sequence can be recovered from the wordpiece sequence without ambiguity. At decoding time, the model first produces a wordpiece sequence, which is then converted into the corresponding word sequence. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_29", "text": " Here is an example of a word sequence and the corresponding wordpiece sequence: • Word: Jet makers feud over seat width with big orders at stake • wordpieces: _J et _makers _fe ud _over _seat _width _with _big _orders _at _stake ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_30", "text": " In the above example, the word “Jet” is broken into two wordpieces “_J” and “et”, and the word “feud” is broken into two wordpieces “_fe” and “ud”. The other words remain as single wordpieces. “_” is a special character added to mark the beginning of a word. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_31", "text": " The wordpiece model is generated using a data-driven approach to maximize the language-model likelihood of the training data, given an evolving word definition. Given a training corpus and a number of desired tokens D𝐷D, the optimization problem is to select D𝐷D wordpieces such that the resulting corpus is minimal in the number of wordpieces when segmented according to the chosen wordpiece model. Our greedy algorithm to this optimization problem is similar to  and is described in more detail in . Compared to the original implementation used in , we use a special symbol only at the beginning of the words and not at both ends. We also cut the number of basic characters to a manageable number depending on the data (roughly 500 for Western languages, more for Asian languages) and map the rest to a special unknown character to avoid polluting the given wordpiece vocabulary with very rare characters. We find that using a total vocabulary of between 8k and 32k wordpieces achieves both good accuracy (BLEU scores) and fast decoding speed across all pairs of language pairs we have tried. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_32", "text": " As mentioned above, in translation it often makes sense to copy rare entity names or numbers directly from the source to the target. 
To facilitate this type of direct copying, we always use a shared wordpiece model for both the source language and target language. Using this approach, it is guaranteed that the same string in source and target sentence will be segmented in exactly the same way, making it easier for the system to learn to copy these tokens. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_33", "text": " Wordpieces achieve a balance between the flexibility of characters and efficiency of words. We also find that our models get better overall BLEU scores when using wordpieces – possibly due to the fact that our models now deal efficiently with an essentially infinite vocabulary without resorting to characters only. The latter would make the average lengths of the input and output sequences much longer, and therefore would require more computation. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_34", "text": " A second approach we use is the mixed word/character model. As in a word model, we keep a fixed-size word vocabulary. However, unlike in a conventional word model where OOV words are collapsed into a single UNK symbol, we convert OOV words into the sequence of its constituent characters. Special prefixes are prepended to the characters, to 1) show the location of the characters in a word, and 2) to distinguish them from normal in-vocabulary characters. There are three prefixes: <B>,<M>, and <E>, indicating beginning of the word, middle of the word and end of the word, respectively. For example, let’s assume the word Miki is not in the vocabulary. It will be preprocessed into a sequence of special tokens: <B>M <M>i <M>k <E>i. The process is done on both the source and the target sentences. During decoding, the output may also contain sequences of special tokens. With the prefixes, it is trivial to reverse the tokenization to the original words as part of a post-processing step. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_35", "text": " Given a dataset of parallel text containing N𝑁N input-output sequence pairs, denoted 𝒟≡{(X(i),Y∗(i))}i=1N𝒟superscriptsubscriptsuperscript𝑋𝑖superscript𝑌absent𝑖𝑖1𝑁\\mathcal{D}\\equiv\\left\\{(X^{(i)},Y^{*(i)})\\right\\}_{i=1}^{N}, standard maximum-likelihood training aims at maximizing the sum of log probabilities of the ground-truth outputs given the corresponding inputs, 𝒪ML​(𝜽)=∑i=1Nlog⁡Pθ​(Y∗(i)∣X(i)).subscript𝒪ML𝜽superscriptsubscript𝑖1𝑁subscript𝑃𝜃conditionalsuperscript𝑌absent𝑖superscript𝑋𝑖\\mathcal{O}_{\\mathrm{ML}}(\\bm{\\mathbf{\\theta}})=\\sum_{i=1}^{N}\\log{P}_{\\theta}(Y^{*(i)}\\mid X^{(i)})~{}. (7) The main problem with this objective is that it does not reflect the task reward function as measured by the BLEU score in translation. Further, this objective does not explicitly encourage a ranking among incorrect output sequences – where outputs with higher BLEU scores should still obtain higher probabilities under the model – since incorrect outputs are never observed during training. In other words, using maximum-likelihood training only, the model will not learn to be robust to errors made during decoding since they are never observed, which is quite a mismatch between the training and testing procedure. 
", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_36", "text": " Several recent papers (34, 39, 32) have considered different ways of incorporating the task reward into optimization of neural sequence-to-sequence models. In this work, we also attempt to refine a model pre-trained on the maximum likelihood objective to directly optimize for the task reward. We show that, even on large datasets, refinement of state-of-the-art maximum-likelihood models using task reward improves the results considerably. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_37", "text": " We consider model refinement using the expected reward objective (also used in ), which can be expressed as 𝒪RL​(𝜽)=∑i=1N∑Y∈𝒴Pθ​(Y∣X(i))​r​(Y,Y∗(i)).subscript𝒪RL𝜽superscriptsubscript𝑖1𝑁subscript𝑌𝒴subscript𝑃𝜃conditional𝑌superscript𝑋𝑖𝑟𝑌superscript𝑌absent𝑖\\mathcal{O}_{\\mathrm{RL}}(\\bm{\\mathbf{\\theta}})=\\sum_{i=1}^{N}\\sum_{Y\\in\\mathcal{Y}}{P}_{\\theta}(Y\\mid X^{(i)})~{}r(Y,Y^{*(i)}). (8) Here, r​(Y,Y∗(i))𝑟𝑌superscript𝑌absent𝑖r(Y,Y^{*(i)}) denotes the per-sentence score, and we are computing an expectation over all of the output sentences Y𝑌Y, up to a certain length. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_38", "text": " The BLEU score has some undesirable properties when used for single sentences, as it was designed to be a corpus measure. We therefore use a slightly different score for our RL experiments which we call the “GLEU score”. For the GLEU score, we record all sub-sequences of 1, 2, 3 or 4 tokens in output and target sequence (n-grams). We then compute a recall, which is the ratio of the number of matching n-grams to the number of total n-grams in the target (ground truth) sequence, and a precision, which is the ratio of the number of matching n-grams to the number of total n-grams in the generated output sequence. Then GLEU score is simply the minimum of recall and precision. This GLEU score’s range is always between 0 (no matches) and 1 (all match) and it is symmetrical when switching output and target. According to our experiments, GLEU score correlates quite well with the BLEU metric on a corpus level but does not have its drawbacks for our per sentence reward objective. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_39", "text": " As is common practice in reinforcement learning, we subtract the mean reward from r​(Y,Y∗(i))𝑟𝑌superscript𝑌absent𝑖r(Y,Y^{*(i)}) in equation 8. The mean is estimated to be the sample mean of m𝑚m sequences drawn independently from distribution Pθ​(Y∣X(i))subscript𝑃𝜃conditional𝑌superscript𝑋𝑖{P}_{\\theta}(Y\\mid X^{(i)}). In our implementation, m𝑚m is set to be 15. To further stabilize training, we optimize a linear combination of ML (equation 7) and RL (equation 8) objectives as follows: 𝒪Mixed​(𝜽)=α∗𝒪ML​(𝜽)+𝒪RL​(𝜽)subscript𝒪Mixed𝜽𝛼subscript𝒪ML𝜽subscript𝒪RL𝜽\\mathcal{O}_{\\mathrm{Mixed}}(\\bm{\\mathbf{\\theta}})=\\alpha*\\mathcal{O}_{\\mathrm{ML}}(\\bm{\\mathbf{\\theta}})+\\mathcal{O}_{\\mathrm{RL}}(\\bm{\\mathbf{\\theta}}) (9) α𝛼\\alpha in our implementation is typically set to be 0.0170.0170.017. 
", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_40", "text": " In our setup, we first train a model using the maximum likelihood objective (equation 7) until convergence. We then refine this model using a mixed maximum likelihood and expected reward objective (equation 9), until BLEU score on a development set is no longer improving. The second step is optional. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_41", "text": " One of the main challenges in deploying our Neural Machine Translation model to our interactive production translation service is that it is computationally intensive at inference, making low latency translation difficult, and high volume deployment computationally expensive. Quantized inference using reduced precision arithmetic is one technique that can significantly reduce the cost of inference for these models, often providing efficiency improvements on the same computational devices. For example, in , it is demonstrated that a convolutional neural network model can be sped up by a factor of 4-6 with minimal loss on classification accuracy on the ILSVRC-12 benchmark. In , it is demonstrated that neural network model weights can be quantized to only three states, -1, 0, and +1. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_42", "text": " Many of those previous studies (19, 20, 43, 27) however mostly focus on CNN models with relatively few layers. Deep LSTMs with long sequences pose a novel challenge in that quantization errors can be significantly amplified after many unrolled steps or after going through a deep LSTM stack. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_43", "text": " In this section, we present our approach to speed up inference with quantized arithmetic. Our solution is tailored towards the hardware options available at Google. To reduce quantization errors, additional constraints are added to our model during training so that it is quantizable with minimal impact on the output of the model. That is, once a model is trained with these additional constraints, it can be subsequently quantized without loss to translation quality. Our experimental results suggest that those additional constraints do not hurt model convergence nor the quality of a model once it has converged. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_44", "text": " Recall from equation 6 that in an LSTM stack with residual connections there are two accumulators: 𝐜tisuperscriptsubscript𝐜𝑡𝑖\\mathbf{c}_{t}^{i} along the time axis and 𝐱tisuperscriptsubscript𝐱𝑡𝑖\\mathbf{x}_{t}^{i} along the depth axis. In theory, both of the accumulators are unbounded, but in practice, we noticed their values remain quite small. For quantized inference, we explicitly constrain the values of these accumulators to be within (-δ𝛿\\delta, δ𝛿\\delta) to guarantee a certain range that can be used for quantization later. 
The forward computation of an LSTM stack with residual connections is modified to the following: ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_45", "text": " 
\begin{split}
\mathbf{c^{\prime}}_{t}^{i},\mathbf{m}_{t}^{i}&=\mathrm{LSTM}_{i}(\mathbf{c}_{t-1}^{i},\mathbf{m}_{t-1}^{i},\mathbf{x}_{t}^{i-1};\mathbf{W}^{i})\\
\mathbf{c}_{t}^{i}&=\max(-\delta,\min(\delta,\mathbf{c^{\prime}}_{t}^{i}))\\
\mathbf{x^{\prime}}_{t}^{i}&=\mathbf{m}_{t}^{i}+\mathbf{x}_{t}^{i-1}\\
\mathbf{x}_{t}^{i}&=\max(-\delta,\min(\delta,\mathbf{x^{\prime}}_{t}^{i}))\\
\mathbf{c^{\prime}}_{t}^{i+1},\mathbf{m}_{t}^{i+1}&=\mathrm{LSTM}_{i+1}(\mathbf{c}_{t-1}^{i+1},\mathbf{m}_{t-1}^{i+1},\mathbf{x}_{t}^{i};\mathbf{W}^{i+1})\\
\mathbf{c}_{t}^{i+1}&=\max(-\delta,\min(\delta,\mathbf{c^{\prime}}_{t}^{i+1}))
\end{split}   (10)
Let us expand $\mathrm{LSTM}_{i}$ in equation 10 to include the internal gating logic. For brevity, we drop all the superscripts $i$. 
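A minimal NumPy sketch of where equation 10 clips the two accumulators; `lstm_cell` is a placeholder for any function that returns the raw cell state and output of layer i, and `delta=1.0` follows the inference-time value quoted later in the text. It only illustrates the clipping, not the full GNMT implementation:

```python
import numpy as np

def clip(v, delta):
    """Elementwise max(-delta, min(delta, v))."""
    return np.clip(v, -delta, delta)

def residual_lstm_step(lstm_cell, c_prev, m_prev, x_below, W, delta=1.0):
    """One layer of the clipped residual LSTM stack in equation 10."""
    c_raw, m = lstm_cell(c_prev, m_prev, x_below, W)   # c'_t^i, m_t^i
    c = clip(c_raw, delta)                             # clip the time-axis accumulator c_t^i
    x = clip(m + x_below, delta)                       # residual add, clip depth-axis accumulator x_t^i
    return c, m, x                                     # x becomes the input of layer i+1
```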
", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_46", "text": " 𝐖=(𝐖1,𝐖2,𝐖3,𝐖4,𝐖5,𝐖6,𝐖7,𝐖8)𝐢t=sigmoid​(𝐖1​𝐱t+𝐖2​𝐦t)𝐢′t=tanh⁡(𝐖3​𝐱t+𝐖4​𝐦t)𝐟t=sigmoid​(𝐖5​𝐱t+𝐖6​𝐦t)𝐨t=sigmoid​(𝐖7​𝐱t+𝐖8​𝐦t)𝐜t=𝐜t−1⊙𝐟t+𝐢′t⊙𝐢t𝐦t=𝐜t⊙𝐨t𝐖subscript𝐖1subscript𝐖2subscript𝐖3subscript𝐖4subscript𝐖5subscript𝐖6subscript𝐖7subscript𝐖8subscript𝐢𝑡sigmoidsubscript𝐖1subscript𝐱𝑡subscript𝐖2subscript𝐦𝑡subscriptsuperscript𝐢′𝑡subscript𝐖3subscript𝐱𝑡subscript𝐖4subscript𝐦𝑡subscript𝐟𝑡sigmoidsubscript𝐖5subscript𝐱𝑡subscript𝐖6subscript𝐦𝑡subscript𝐨𝑡sigmoidsubscript𝐖7subscript𝐱𝑡subscript𝐖8subscript𝐦𝑡subscript𝐜𝑡direct-productsubscript𝐜𝑡1subscript𝐟𝑡direct-productsubscriptsuperscript𝐢′𝑡subscript𝐢𝑡subscript𝐦𝑡direct-productsubscript𝐜𝑡subscript𝐨𝑡\\begin{split}\\mathbf{W}&=(\\mathbf{W}_{1},\\mathbf{W}_{2},\\mathbf{W}_{3},\\mathbf{W}_{4},\\mathbf{W}_{5},\\mathbf{W}_{6},\\mathbf{W}_{7},\\mathbf{W}_{8})\\\\ \\mathbf{i}_{t}&=\\text{sigmoid}(\\mathbf{W}_{1}\\mathbf{x}_{t}+\\mathbf{W}_{2}\\mathbf{m}_{t})\\\\ \\mathbf{i^{\\prime}}_{t}&=\\tanh(\\mathbf{W}_{3}\\mathbf{x}_{t}+\\mathbf{W}_{4}\\mathbf{m}_{t})\\\\ \\mathbf{f}_{t}&=\\text{sigmoid}(\\mathbf{W}_{5}\\mathbf{x}_{t}+\\mathbf{W}_{6}\\mathbf{m}_{t})\\\\ \\mathbf{o}_{t}&=\\text{sigmoid}(\\mathbf{W}_{7}\\mathbf{x}_{t}+\\mathbf{W}_{8}\\mathbf{m}_{t})\\\\ \\mathbf{c}_{t}&=\\mathbf{c}_{t-1}\\odot\\mathbf{f}_{t}+\\mathbf{i^{\\prime}}_{t}\\odot\\mathbf{i}_{t}\\\\ \\mathbf{m}_{t}&=\\mathbf{c}_{t}\\odot\\mathbf{o}_{t}\\end{split} (11) When doing quantized inference, we replace all the floating point operations in equations 10 and 11 with fixed-point integer operations with either 8-bit or 16-bit resolution. The weight matrix 𝐖𝐖\\mathbf{W} above is represented using an 8-bit integer matrix 𝐖𝐐𝐖𝐐\\mathbf{WQ} and a float vector 𝐬𝐬\\mathbf{s}, as shown below: ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_47", "text": " 𝐬i=max⁡(abs​(𝐖​(i,:)))𝐖𝐐​(i,j)=round​(𝐖​(i,j)/𝐬i×127.0)subscript𝐬𝑖abs𝐖𝑖:𝐖𝐐𝑖𝑗round𝐖𝑖𝑗subscript𝐬𝑖127.0\\begin{split}\\mathbf{s}_{i}&=\\max(\\text{abs}(\\mathbf{W}(i,:)))\\\\ \\mathbf{WQ}(i,j)&=\\text{round}(\\mathbf{W}(i,j)/\\mathbf{s}_{i}\\times 127.0)\\end{split} (12) All accumulator values (𝐜tisuperscriptsubscript𝐜𝑡𝑖\\mathbf{c}_{t}^{i} and 𝐱tisuperscriptsubscript𝐱𝑡𝑖\\mathbf{x}_{t}^{i}) are represented using 16-bit integers representing the range (−δ,δ)𝛿𝛿(-\\delta,\\delta). All matrix multiplications (e.g., 𝐖1​𝐱tsubscript𝐖1subscript𝐱𝑡\\mathbf{W}_{1}\\mathbf{x}_{t}, 𝐖2​𝐦tsubscript𝐖2subscript𝐦𝑡\\mathbf{W}_{2}\\mathbf{m}_{t}, etc.) in equation 11 are done using 8-bit integer multiplication accumulated into larger accumulators. All other operations, including all the activations (sigmoid, tanh\\tanh) and elementwise operations (⊙direct-product\\odot, ++) are done using 16-bit integer operations. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_48", "text": " We now turn our attention to the log-linear softmax layer. 
During training, given the decoder RNN network output $\mathbf{y_{t}}$, we compute the probability vector $\mathbf{p_{t}}$ over all candidate output symbols as follows: ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_49", "text": " 
\begin{split}
\mathbf{v_{t}}&=\mathbf{W_{s}}*\mathbf{y_{t}}\\
\mathbf{v_{t}^{\prime}}&=\max(-\gamma,\min(\gamma,\mathbf{v_{t}}))\\
\mathbf{p_{t}}&=\text{softmax}(\mathbf{v_{t}^{\prime}})
\end{split}   (13)
In equation 13, $\mathbf{W_{s}}$ is the weight matrix for the linear layer, which has the same number of rows as the number of symbols in the target vocabulary with each row corresponding to one unique target symbol. $\mathbf{v}$ represents the raw logits, which are first clipped to be between $-\gamma$ and $\gamma$ and then normalized into a probability vector $\mathbf{p}$. Input $\mathbf{y_{t}}$ is guaranteed to be between $-\delta$ and $\delta$ due to the quantization scheme we applied to the decoder RNN. The clipping range $\gamma$ for the logits $\mathbf{v}$ is determined empirically, and in our case, it is set to $25$. In quantized inference, the weight matrix $\mathbf{W_{s}}$ is quantized into 8 bits as in equation 12, and the matrix multiplication is done using 8 bit arithmetic. The calculations within the softmax function and the attention model are not quantized during inference. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_50", "text": " It is worth emphasizing that during training of the model we use full-precision floating point numbers. The only constraints we add to the model during training are the clipping of the RNN accumulator values into $(-\delta,\delta)$ and softmax logits into $(-\gamma,\gamma)$. $\gamma$ is fixed to be at $25.0$, while the value for $\delta$ is gradually annealed from a generous bound of $\delta=8.0$ at the beginning of training, to a rather stringent bound of $\delta=1.0$ towards the end of training. At inference time, $\delta$ is fixed at $1.0$. Those additional constraints do not degrade model convergence nor the decoding quality of the model when it has converged. In Figure 4, we compare the loss vs. steps for an unconstrained model (the blue curve) and a constrained model (the red curve) on WMT’14 English-to-French. We can see that the loss for the constrained model is slightly better, possibly due to regularization roles those constraints play. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_51", "text": " Our solution strikes a good balance between efficiency and accuracy. Since the computationally expensive operations (the matrix multiplications) are done using 8-bit integer operations, our quantized inference is quite efficient. Also, since error-sensitive accumulator values are stored using 16-bit integers, our solution is very accurate and is robust to quantization errors. 
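A short sketch of the clipped softmax of equation 13, with gamma = 25.0 as quoted above; the shapes and variable names are illustrative only:

```python
import numpy as np

def clipped_softmax_probs(W_s, y_t, gamma=25.0):
    """p_t = softmax(clip(W_s @ y_t, -gamma, gamma)) as in eq. 13."""
    v = W_s @ y_t                      # raw logits, one entry per target symbol
    v = np.clip(v, -gamma, gamma)      # clip logits into (-gamma, gamma)
    e = np.exp(v - v.max())            # numerically stable softmax
    return e / e.sum()

W_s = np.random.randn(1000, 64).astype(np.float32)   # (vocab, hidden); toy sizes
y_t = np.random.randn(64).astype(np.float32)
print(clipped_softmax_probs(W_s, y_t).sum())         # ~1.0
```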
", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_52", "text": " In Table 1 we compare the inference speed and quality when decoding the WMT’14 English-to-French development set (a concatenation of newstest2012 and newstest2013 test sets for a total of 6003 sentences) on CPU, GPU and Google’s Tensor Processing Unit (TPU) respectively.111https://cloudplatform.googleblog.com/2016/05/Google-supercharges-machine-learning-tasks-with-custom-chip.html The model used here for comparison is trained with quantization constraints on the ML objective only (i.e., without reinforcement learning based model refinement). When the model is decoded on CPU and GPU, it is not quantized and all operations are done using full-precision floats. When it is decoded on TPU, certain operations, such as embedding lookup and attention module, remain on the CPU, and all other quantized operations are off-loaded to the TPU. In all cases, decoding is done on a single machine with two Intel Haswell CPUs, which consists in total of 88 CPU cores (hyperthreads). The machine is equipped with an NVIDIA GPU (Tesla k80) for the experiment with GPU or a single Google TPU for the experiment with TPU. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_53", "text": " Table 1 shows that decoding using reduced precision arithmetics on the TPU suffers a very minimal loss of 0.0072 on log perplexity, and no loss on BLEU at all. This result matches previous work reporting that quantizing convolutional neural network models can retain most of the model quality. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_54", "text": " Table 1 also shows that decoding our model on CPU is actually 2.3 times faster than on GPU. Firstly, our dual-CPUs host machine offers a theoretical peak FLOP performance which is more than two thirds that of the GPU. Secondly, the beam search algorithm forces the decoder to incur a non-trivial amount of data transfer between the host and the GPU at every decoding step. Hence, our current decoder implementation is not fully utilizing the computation capacities that a GPU can theoretically offer during inference. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_55", "text": " Finally, Table 1 shows that decoding on TPUs is 3.4 times faster than decoding on CPUs, demonstrating that quantized arithmetics is much faster on TPUs than both CPUs or GPUs. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_56", "text": " Unless otherwise noted, we always train and evaluate quantized models in our experiments. Because there is little difference from a quality perspective between a model decoded on CPUs and one decoded on TPUs, we use CPUs to decode for model evaluation during training and experimentation and use TPUs to serve production traffic. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_57", "text": " We use beam search during decoding to find the sequence Y𝑌Y that maximizes a score function s​(Y,X)𝑠𝑌𝑋s(Y,X) given a trained model. 
We introduce two important refinements to the pure max-probability based beam search algorithm: a coverage penalty and length normalization. With length normalization, we aim to account for the fact that we have to compare hypotheses of different length. Without some form of length-normalization, regular beam search will favor shorter results over longer ones on average since a negative log-probability is added at each step, yielding lower (more negative) scores for longer sentences. We first tried to simply divide by the length to normalize. We then improved on that original heuristic by dividing by $length^{\alpha}$, with $0<\alpha<1$ where $\alpha$ is optimized on a development set ($\alpha$ between 0.6 and 0.7 was usually found to be best). Eventually we designed the empirically-better scoring function below, which also includes a coverage penalty to favor translations that fully cover the source sentence according to the attention module. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_58", "text": " More concretely, the scoring function $s(Y,X)$ that we employ to rank candidate translations is defined as follows: ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_59", "text": " 
\begin{split}
s(Y,X)&=\log(P(Y|X))/lp(Y)+cp(X;Y)\\
lp(Y)&=\frac{(5+|Y|)^{\alpha}}{(5+1)^{\alpha}}\\
cp(X;Y)&=\beta*\sum_{i=1}^{|X|}\log(\min(\sum_{j=1}^{|Y|}p_{i,j},1.0)),
\end{split}   (14)
where $p_{i,j}$ is the attention probability of the $j$-th target word $y_{j}$ on the $i$-th source word $x_{i}$. By construction (equation 4), $\sum_{i=0}^{|X|}p_{i,j}$ is equal to 1. Parameters $\alpha$ and $\beta$ control the strength of the length normalization and the coverage penalty. When $\alpha=0$ and $\beta=0$, our decoder falls back to pure beam search by probability. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_60", "text": " During beam search, we typically keep 8-12 hypotheses but we find that using fewer (4 or 2) has only slight negative effects on BLEU scores. Besides pruning the number of considered hypotheses, two other forms of pruning are used. Firstly, at each step, we only consider tokens that have local scores that are not more than beamsize below the best token for this step. Secondly, after a normalized best score has been found according to equation 14, we prune all hypotheses that are more than beamsize below the best normalized score so far. The latter type of pruning only applies to full hypotheses because it compares scores in the normalized space, which is only available when a hypothesis ends. This latter form of pruning also has the effect that very quickly no more hypotheses will be generated once a sufficiently good hypothesis has been found, so the search will end quickly. 
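The scoring function of equation 14 maps directly onto code. The sketch below assumes `log_prob` is the hypothesis log-probability log P(Y|X) and `attn` is a |X| x |Y| matrix of attention probabilities p_ij (these names are ours); the defaults alpha = beta = 0.2 follow the values reported further below:

```python
import math

def length_penalty(length, alpha=0.2):
    """lp(Y) = ((5 + |Y|) ** alpha) / ((5 + 1) ** alpha)."""
    return ((5.0 + length) ** alpha) / (6.0 ** alpha)

def coverage_penalty(attn, beta=0.2):
    """cp(X;Y) = beta * sum_i log(min(sum_j p_ij, 1.0)); attn rows are source positions."""
    return beta * sum(math.log(min(sum(row), 1.0)) for row in attn)

def score(log_prob, output_length, attn, alpha=0.2, beta=0.2):
    """s(Y,X) = log P(Y|X) / lp(Y) + cp(X;Y), as in eq. 14."""
    return log_prob / length_penalty(output_length, alpha) + coverage_penalty(attn, beta)

# toy example: 2 source words, 3 target words
attn = [[0.5, 0.3, 0.1],    # attention mass received by source word 1
        [0.5, 0.7, 0.2]]    # attention mass received by source word 2
print(score(log_prob=-8.2, output_length=3, attn=attn))
```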
The pruning speeds up search by 30%−40%percent30percent4030\\%-40\\% when run on CPUs compared to not pruning (where we simply stop decoding after a predetermined maximum output length of twice the source length). Typically we use b​e​a​m​s​i​z​e=3.0𝑏𝑒𝑎𝑚𝑠𝑖𝑧𝑒3.0beamsize=3.0, unless otherwise noted. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_61", "text": " To improve throughput during decoding we can put many sentences (typically up to 35) of similar length into a batch and decode all of those in parallel to make use of available hardware optimized for parallel computations. In this case the beam search only finishes if all hypotheses for all sentences in the batch are out of beam, which is slightly less efficient theoretically, but in practice is of negligible additional computational cost. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_62", "text": " Table 2 shows the impact of α𝛼\\alpha and β𝛽\\beta on the BLEU score when decoding the WMT’14 English-to-French development set. The model used here for experiments is trained using the ML objective only (without RL refinement). As can be seen from the results, having some length normalization and coverage penalty improves BLEU score considerably (from 30.3 to 31.4). ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_63", "text": " We find that length normalization (α𝛼\\alpha) and coverage penalty (β𝛽\\beta) are less effective for models with RL refinement. Table 3 summarizes our results. This is understandable, as during RL refinement, the models already learn to pay attention to the full source sentence to not under-translate or over-translate, which would result in a penalty on the BLEU (or GLEU) scores. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_64", "text": " We found that the optimal α𝛼\\alpha and β𝛽\\beta vary slightly for different models. Based on tuning results using internal Google datasets, we use α=0.2𝛼0.2\\alpha=0.2 and β=0.2𝛽0.2\\beta=0.2 in our experiments, unless noted otherwise. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_65", "text": " In this section, we present our experimental results on two publicly available corpora used extensively as benchmarks for Neural Machine Translation systems: WMT’14 English-to-French (WMT En→→\\rightarrowFr) and English-to-German (WMT En→→\\rightarrowDe). On these two datasets, we benchmark GNMT models with word-based, character-based, and wordpiece-based vocabularies. We also present the improved accuracy of our models after fine-tuning with RL and model ensembling. Our main objective with these datasets is to show the contributions of various components in our implementation, in particular the wordpiece model, RL model refinement, and model ensembling. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_66", "text": " In addition to testing on publicly available corpora, we also test GNMT on Google’s translation production corpora, which are two to three decimal orders of magnitudes bigger than the WMT corpora for a given language pair. 
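The two pruning rules described above can be sketched as simple filters. Here `beam_size` is the score margin (3.0 in the text), and the dictionary and list data structures are illustrative assumptions:

```python
def prune_expansions(token_scores, beam_size=3.0):
    """Keep only candidate tokens whose local score is within beam_size of the best one."""
    best = max(token_scores.values())
    return {tok: s for tok, s in token_scores.items() if s >= best - beam_size}

def prune_finished(finished, beam_size=3.0):
    """Drop finished hypotheses more than beam_size below the best normalized score."""
    best = max(score for _, score in finished)
    return [(hyp, score) for hyp, score in finished if score >= best - beam_size]

print(prune_expansions({"cat": -1.2, "dog": -2.0, "tree": -9.5}))   # "tree" is pruned
print(prune_finished([("a b c", -0.8), ("a b d", -5.1)]))           # second hypothesis pruned
```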
We compare the accuracy of our model against human accuracy and the best Phrase-Based Machine Translation (PBMT) production system for Google Translate. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_67", "text": " In all experiments, our models consist of 8 encoder layers and 8 decoder layers. (Since the bottom encoder layer is actually bi-directional, in total there are 9 logically distinct LSTM passes in the encoder.) The attention network is a simple feedforward network with one hidden layer with 1024 nodes. All of the models use 1024 LSTM nodes per encoder and decoder layers. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_68", "text": " We evaluate our model on the WMT En→→\\rightarrowFr dataset, the WMT En→→\\rightarrowDe dataset, as well as many Google-internal production datasets. On WMT En→→\\rightarrowFr, the training set contains 36M sentence pairs. On WMT En→→\\rightarrowDe, the training set contains 5M sentence pairs. In both cases, we use newstest2014 as the test sets to compare against previous work (31, 37, 45). The combination of newstest2012 and newstest2013 is used as the development set. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_69", "text": " In addition to WMT, we also evaluate our model on some Google-internal datasets representing a wider spectrum of languages with distinct linguistic properties: English ↔↔\\leftrightarrow French, English ↔↔\\leftrightarrow Spanish and English ↔↔\\leftrightarrow Chinese. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_70", "text": " We evaluate our models using the standard BLEU score metric. To be comparable to previous work (41, 31, 45), we report tokenized BLEU score as computed by the multi-bleu.pl script, downloaded from the public implementation of Moses (on Github), which is also used in . ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_71", "text": " As is well-known, BLEU score does not fully capture the quality of a translation. For that reason we also carry out side-by-side (SxS) evaluations where we have human raters evaluate and compare the quality of two translations presented side by side for a given source sentence. Side-by-side scores range from 0 to 6, with a score of 0 meaning “completely nonsense translation”, and a score of 6 meaning “perfect translation: the meaning of the translation is completely consistent with the source, and the grammar is correct”. A translation is given a score of 4 if “the sentence retains most of the meaning of the source sentence, but may have some grammar mistakes”, and a translation is given a score of 2 if “the sentence preserves some of the meaning of the source sentence but misses significant parts”. These scores are generated by human raters who are fluent in both languages and hence often capture translation quality better than BLEU scores. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_72", "text": " The models are trained by a system we implemented using TensorFlow. 
The training setup follows the classic data parallelism paradigm. There are 12 replicas running concurrently on separate machines. Every replica updates the shared parameters asynchronously. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_73", "text": " We initialize all trainable parameters uniformly between (-0.04, 0.04). As is common wisdom in training RNN models, we apply gradient clipping (similar to ): all gradients are uniformly scaled down such that the norm of the modified gradients is no larger than a fixed constant, which is 5.05.05.0 in our case. If the norm of the original gradients is already smaller than or equal to the given threshold, then gradients are not changed. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_74", "text": " For the first stage of maximum likelihood training (that is, to optimize for objective function 7), we use a combination of Adam and simple SGD learning algorithms provided by the TensorFlow runtime system. We run Adam for the first 60k steps, after which we switch to simple SGD. Each step in training is a mini-batch of 128 examples. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_75", "text": " We find that Adam accelerates training at the beginning, but Adam alone converges to a worse point than a combination of Adam first, followed by SGD (Figure 5). For the Adam part, we use a learning rate of 0.00020.00020.0002, and for the SGD part, we use a learning rate of 0.50.50.5. We find that it is important to also anneal the learning rate after a certain number of total steps. For the WMT En→→\\rightarrowFr dataset, we begin to anneal the learning rate after 1.2M steps, after which we halve the learning rate every 200k steps for an additional 800k steps. On WMT En→→\\rightarrowFr, it takes around 6 days to train a basic model using 96 NVIDIA K80 GPUs. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_76", "text": " Once a model is fully converged using the ML objective, we switch to RL based model refinement, i.e., we further optimize the objective function as in equation 9. We refine a model until the BLEU score does not change much on the development set. For this model refinement phase, we simply run the SGD optimization algorithm. The number of steps needed to refine a model varies from dataset to dataset. For WMT En→→\\rightarrowFr, it takes around 3 days to complete 400k steps. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_77", "text": " To prevent overfitting, we apply dropout during training with a scheme similar to . For the WMT En→→\\rightarrowFr and En→→\\rightarrowDe datasets, we set the dropout probability to be 0.20.20.2 and 0.30.30.3 respectively. Due to various technical reasons, dropout is only applied during the ML training phase, not during the RL refinement phase. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_78", "text": " The exact hyper-parameters vary from dataset to dataset and from model to model. 
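The gradient clipping described above (uniformly rescaling all gradients so that their global norm is at most 5.0) can be sketched as follows; in practice one would rely on the equivalent utility of the training framework rather than this NumPy version:

```python
import numpy as np

def clip_by_global_norm(grads, max_norm=5.0):
    """Uniformly rescale a list of gradient arrays so their global L2 norm is <= max_norm."""
    global_norm = np.sqrt(sum(float(np.sum(g ** 2)) for g in grads))
    if global_norm <= max_norm:
        return grads                          # already within the threshold: leave unchanged
    scale = max_norm / global_norm
    return [g * scale for g in grads]

grads = [np.random.randn(3, 3), np.random.randn(10)]
clipped = clip_by_global_norm(grads)
print(np.sqrt(sum(np.sum(g ** 2) for g in clipped)))   # <= 5.0
```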
For the WMT En→→\\rightarrowDe dataset, since it is significantly smaller than the WMT En→→\\rightarrowFr dataset, we use a higher dropout probability, and also train smaller models for fewer steps overall. On the production data sets, we typically do not use dropout, and we train the models for more steps. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_79", "text": " The models in our experiments are word-based, character-based, mixed word-character-based or several wordpiece models with varying vocabulary sizes. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_80", "text": " For the word model, we selected the most frequent 212K source words as the source vocabulary and the most popular 80k target words as the target vocabulary. Words not in the source vocabulary or the target vocabulary (unknown words) are converted into special <first_char>_UNK_<last_char> symbols. Note, in this case, there is more than one UNK (e.g., our production word models have roughly 5000 different UNKs in this case). We then use the attention mechanism to copy a corresponding word from the source to replace these unknown words during decoding . ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_81", "text": " The mixed word-character model is similar to the word model, except the out-of-vocabulary (OOV) words are converted into sequences of characters with special delimiters around them as described in section 4.2 in more detail. In our experiments, the vocabulary size for the mixed word-character model is 32K. For the pure character model, we simply split all words into constituent characters, resulting typically in a few hundred basic characters (including special symbols appearing in the data). For the wordpiece models, we train 3 different models with vocabulary sizes of 8K, 16K, and 32K. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_82", "text": " Table 4 summarizes our results on the WMT En→→\\rightarrowFr dataset. In this table, we also compare against other strong baselines without model ensembling. As can be seen from the table, “WPM-32K”, a wordpiece model with a shared source and target vocabulary of 32K wordpieces, performs well on this dataset and achieves the best quality as well as the fastest inference speed. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_83", "text": " The pure character model (char input, char output) works surprisingly well on this task, not much worse than the best wordpiece models in BLEU score. However, these models are rather slow to train and slow to use as the sequences are much longer. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_84", "text": " Our best model, WPM-32K, achieves a BLEU score of 38.95. Note that this BLEU score represents the averaged score of 8 models we trained. The maximum BLEU score of the 8 models is higher at 39.37. We point out that our models are completely self-contained, as opposed to previous models reported in , which depend on some external alignment models to achieve their best results. 
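As a toy illustration of the word-model OOV handling mentioned above, where unknown words are mapped to <first_char>_UNK_<last_char> symbols, the exact surface form of the symbol below is our assumption:

```python
def to_unk_symbol(word, vocab):
    """Map an OOV word to a first/last-character UNK symbol (surface form is illustrative)."""
    if word in vocab:
        return word
    return f"{word[0]}_UNK_{word[-1]}"

vocab = {"the", "cat"}
print([to_unk_symbol(w, vocab) for w in ["the", "Miki", "cat"]])
# ['the', 'M_UNK_i', 'cat']
```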
Also note that all our test set numbers were achieved by picking an optimal model on the development set which was then used to decode the test set. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_85", "text": " Note that the timing numbers for this section are obtained on CPUs, not TPUs. We use here the same CPU machine as described above, and run the decoder with a batchsize of 16 sentences in parallel and a maximum of 4 concurrent hypotheses at any time per sentence. The time per sentence is the total decoding time divided by the number of respective sentences in the test set. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_86", "text": " Similarly, the results of WMT En→→\\rightarrowDe are presented in Table 5. Again, we find that wordpiece models achieves the best BLEU scores. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_87", "text": " WMT En→→\\rightarrowDe is considered a more difficult task than WMT En→→\\rightarrowFr as it has much less training data, and German, as a more morphologically rich language, needs a huge vocabulary for word models. Thus it is more advantageous to use wordpiece or mixed word/character models, which provide a gain of more than 2 BLEU points on top of the word model and about 4 BLEU points on top of previously reported results in (6, 45). Our best model, WPM-32K, achieves a BLEU score of 24.61, which is averaged over 8 runs. Consistently, on the production corpora, wordpiece models tend to be better than other models both in terms of speed and accuracy. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_88", "text": " The models trained in the previous section are optimized for log-likelihood of the next step prediction which may not correlate well with translation quality, as discussed in section 5. We use RL training to fine-tune sentence BLEU scores after normal maximum-likelihood training. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_89", "text": " The results of RL fine-tuning on the best En→→\\rightarrowFr and En→→\\rightarrowDe models are presented in Table 6, which show that fine-tuning the models with RL can improve BLEU scores. On WMT En→→\\rightarrowFr, model refinement improves BLEU score by close to 1 point. On En→→\\rightarrowDe, RL-refinement slightly hurts the test performance even though we observe about 0.4 BLEU points improvement on the development set. The results presented in Table 6 are the average of 8 independent models. We also note that there is an overlap between the wins from the RL refinement and the decoder fine-tuning (i.e., the introduction of length normalization and coverage penalty). On a less fine-tuned decoder (e.g., if the decoder does beam search by log-probability only), the win from RL would have been bigger (as is evident from comparing results in Table 2 and Table 3). ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_90", "text": " We ensemble 8 RL-refined models to obtain a state-of-the-art result of 41.16 BLEU points on the WMT En→→\\rightarrowFr dataset. 
Our results are reported in Table 7. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_91", "text": " We ensemble 8 RL-refined models to obtain a state-of-the-art result of 26.30 BLEU points on the WMT En→→\\rightarrowDe dataset. Our results are reported in Table 8. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_92", "text": " Finally, to better understand the quality of our models and the effect of RL refinement, we carried out a four-way side-by-side human evaluation to compare our NMT translations against the reference translations and the best phrase-based statistical machine translations. During the side-by-side comparison, humans are asked to rate four translations given a source sentence. The four translations are: 1) the best phrase-based translations as downloaded from http://matrix.statmt.org/systems/show/2065, 2) an ensemble of 8 ML-trained models, 3) an ensemble of 8 ML-trained and then RL-refined models, and 4) reference human translations as taken directly from newstest2014, Our results are presented in Table 9. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_93", "text": " The results show that even though RL refinement can achieve better BLEU scores, it barely improves the human impression of the translation quality. This could be due to a combination of factors including: 1) the relatively small sample size for the experiment (only 500 examples for side-by-side), 2) the improvement in BLEU score by RL is relatively small after model ensembling (0.81), which may be at a scale that human side-by-side evaluations are insensitive to, and 3) the possible mismatch between BLEU as a metric and real translation quality as perceived by human raters. Table 11 contains some example translations from PBMT, \"NMT before RL\" and \"Human\", along with the side-by-side scores that human raters assigned to each translation (some of which we disagree with, see the table caption). ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_94", "text": " We have carried out extensive experiments on many Google-internal production data sets. As the experiments above cast doubt on whether RL improves the real translation quality or simply the BLEU metric, RL-based model refinement is not used during these experiments. Given the larger volume of training data available in the Google corpora, dropout is also not needed in these experiments. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_95", "text": " In this section we describe our experiments with human perception of the translation quality. We asked human raters to rate translations in a three-way side-by-side comparison. The three sides are from: 1) translations from the production phrase-based statistical translation system used by Google, 2) translations from our GNMT system, and 3) translations by humans fluent in both languages. Reported here in Table 10 are averaged rated scores for English ↔↔\\leftrightarrow French, English ↔↔\\leftrightarrow Spanish and English ↔↔\\leftrightarrow Chinese. 
All the GNMT models are wordpiece models, without model ensembling, and use a shared source and target vocabulary with 32K wordpieces. On each pair of languages, the evaluation data consist of 500 randomly sampled sentences from Wikipedia and news websites, and the corresponding human translations to the target language. The results show that our model reduces translation errors by more than 60% compared to the PBMT model on these major pairs of languages. A typical distribution of side-by-side scores is shown in Figure 6. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_96", "text": " As expected, on this metric the GNMT system improves also compared to the PBMT system. In some cases human and GNMT translations are nearly indistinguishable on the relatively simplistic and isolated sentences sampled from Wikipedia and news articles for this experiment. Note that we have observed that human raters, even though fluent in both languages, do not necessarily fully understand each randomly sampled sentence sufficiently and hence cannot necessarily generate the best possible translation or rate a given translation accurately. Also note that, although the scale for the scores goes from 0 (complete nonsense) to 6 (perfect translation) the human translations get an imperfect score of only around 5 in Table 10, which shows possible ambiguities in the translations and also possibly non-calibrated raters and translators with a varying level of proficiency. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_97", "text": " Testing our GNMT system on particularly difficult translation cases and longer inputs than just single sentences is the subject of future work. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_98", "text": " In this paper, we describe in detail the implementation of Google’s Neural Machine Translation (GNMT) system, including all the techniques that are critical to its accuracy, speed, and robustness. On the public WMT’14 translation benchmark, our system’s translation quality approaches or surpasses all currently published results. More importantly, we also show that our approach carries over to much larger production data sets, which have several orders of magnitude more data, to deliver high quality translations. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_99", "text": " Our key findings are: 1) that wordpiece modeling effectively handles open vocabularies and the challenge of morphologically rich languages for translation quality and inference speed, 2) that a combination of model and data parallelism can be used to efficiently train state-of-the-art sequence-to-sequence NMT models in roughly a week, 3) that model quantization drastically accelerates translation inference, allowing the use of these large models in a deployed production environment, and 4) that many additional details like length-normalization, coverage penalties, and similar are essential to making NMT systems work well on real data. 
", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" }, { "id": "1609.08144_all_100", "text": " Using human-rated side-by-side comparison as a metric, we show that our GNMT system approaches the accuracy achieved by average bilingual human translators on some of our test sets. In particular, compared to the previous phrase-based production system, this GNMT system delivers roughly a 60% reduction in translation errors on several popular language pairs. ", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" } ]
Why is deduplication chosen as one of the baselines?
Deduplicating the pretraining corpora has been shown to mitigate privacy risks for LMs [8].
[ 8 ]
[ { "id": "2210.01504_all_0", "text": " Recent work has shown that an adversary can extract training data from Pretrained Language Models (LMs) including Personally Identifiable Information (PII) such as names, phone numbers, and email addresses, and other information such as licensed code, private clinical notes, and 128-bit UUIDs (Carlini et al., 2021; Lee et al., 2022; Huang et al., 2022; Lehman et al., 2021). In 2021, an AI chatbot Iruda became the first AI system to be sued for violating the Personal Information Protection Act after generating the exact home addresses and bank account numbers of actual individuals unintentionally (Park, 2021).  Heikkilä (2022) has also shown that GPT-3 (Brown et al., 2020), one of the most well-known LM currently in commercial use, offered detailed private information about the Editor-in-Chief of MIT Technology Review including his family members, work address, and phone number. Considering findings that show extracting training data gets easier as LMs scale to larger sizes (Carlini et al., 2022a) and that it is common practice for practitioners to release billion parameters pretrained LMs for public use (Gao et al., 2020; Black et al., 2021; Zhang et al., 2022), it has become important to provide privacy guarantees for large LMs. ", "title": "Knowledge Unlearning for Mitigating Privacy Risks in Language Models" }, { "id": "2210.01504_all_1", "text": " Practitioners are required to delete personal information from the LMs by individuals’ request because each individual has the “Right To Be Forgotten (RTBF)” (Mantelero, 2013; Graves et al., 2021) and can limit the direct and indirect commercial use of their personal information (Villaronga et al., 2018). Previous methods addressing privacy risks for language models attempt to remove all private information from the training data (data preprocessing) (Aura et al., 2006; Dernoncourt et al., 2017; Lison et al., 2021; Kandpal et al., 2022) or attempt to design algorithms that ensure differential privacy (DP) (Dwork, 2008; Dwork et al., 2006; Abadi et al., 2016; Anil et al., 2021; Li et al., 2022; Yu et al., 2022). Both approaches require retraining the underlying LM every time individuals want to practice their RTBF, which makes them inadequate for large LMs that are extremely costly to retrain. Furthermore, as pointed out by Brown et al. (2022), data preprocessing methods assume private information to be easily identifiable, specified, and removed and DP algorithms can only guarantee protection for information that has clear privacy borders, which makes them inadequate in the real-world scenarios where the standard of privacy might differ by each individual. ", "title": "Knowledge Unlearning for Mitigating Privacy Risks in Language Models" }, { "id": "2210.01504_all_2", "text": " To this end, we propose knowledge unlearning (Figure 1) as an efficient solution that can be applied with just a few parameter updates instead of pretraining the underlying LM again. 
We perform experiments on GPT-Neo LMs (125M, 1.3B, 2.7B) (Black et al., 2021) and show that simply changing the gradient descent to the opposite direction during language modeling (which can also be seen as maximizing instead of minimizing the loss function) is effective at protecting target sequences from extraction attacks with little to no performance degradation on the initial LM capabilities measured via 9 common NLP classification benchmarks (Hellaswag (Zellers et al., 2019), Lambada (Paperno et al., 2016), Winogrande (Sakaguchi et al., 2021), COPA (Gordon et al., 2012), ARC-Easy (Clark et al., 2018), ARC-Challenge (Clark et al., 2018), Piqa (Bisk et al., 2020), MathQA (Amini et al., 2019), and PubmedQA (Jin et al., 2019)) and 4 dialogue tasks (Wizard of Wikipedia (Dinan et al., 2019), Empathetic Dialogues (Rashkin et al., 2019), Blended Skill Talk (Smith et al., 2020), and Wizard of Internet (Komeili et al., 2022)). For some cases, knowledge unlearning unexpectedly shows significant improvements in LM performance for some of the benchmarks. ", "title": "Knowledge Unlearning for Mitigating Privacy Risks in Language Models" }, { "id": "2210.01504_all_3", "text": " We compare our approach with data deduplication method (Kandpal et al., 2022) and differential privacy decoding method (Majmudar et al., 2022) which are both known to mitigate privacy risks and show the effectiveness of knowledge unlearning by providing strong privacy protection while being much more efficient and robust. We also provide a general guideline that can be used to quantify the memorization and extraction likelihood of target token sequences and suggest when we can empirically consider them to have been “forgotten”. Specifically, we introduce a novel metric that measures the extraction likelihood by varying the prefix length of the target token sequence and quantifying how much of the suffix is actually extracted from the LM. ", "title": "Knowledge Unlearning for Mitigating Privacy Risks in Language Models" }, { "id": "2210.01504_all_4", "text": " Surprisingly, for knowledge unlearning, we find that it is easier to forget a chunk of instances sequentially rather than trying to forget them all at once. We provide further analysis and show that the difficulty of knowledge unlearning depends heavily on the target data being forgotten, especially the domain of the target data. We also provide empirical examples of performing extraction attacks and how exactly knowledge unlearning provides privacy protection for the LM. ", "title": "Knowledge Unlearning for Mitigating Privacy Risks in Language Models" }, { "id": "2210.01504_all_5", "text": " To summarize, our main contributions are fourfold: • We compare knowledge unlearning with two approaches from literature known to mitigate privacy risks: a data preprocessing approach and a Differential Privacy (DP) Decoding approach. We show that our approach results in little to no performance degradation of general capabilities (sometimes resulting in improvement) while providing strong privacy protections in situations individuals practice their RTBF whereas the data preprocessing approach provides weaker privacy protection while being orders of magnitude computationally demanding and the DP Decoding approach results in severe degradation of modeling performance. 
• We perform additional experiments to determine which factors contribute to the difficulty of knowledge unlearning and find that (1) trying to forget many samples at once results in substantial LM performance degradation which can be mitigated by sequentially forgetting chunks of data and that (2) the domain of the target data (Code, License, Wikipedia, etc.) plays a critical role in determining how hard they are to forget. • We provide a novel metric and a general guideline for quantifying the privacy risks for LMs and determine when they should be considered to have “forgotten” a given target sequence. • Knowledge unlearning surprisingly seems to make LMs stronger where the extreme cases bring +8.0% (37.6% →→\\rightarrow 45.6%), +10.1% (57.4% →→\\rightarrow 67.5%), and +7.9% (62.2% →→\\rightarrow 70.1%) improvements on Lambada for GPT-Neo 125M, 1.3B, and 2.7B, respectively. ", "title": "Knowledge Unlearning for Mitigating Privacy Risks in Language Models" }, { "id": "2210.01504_all_6", "text": " Prior work that tries to mitigate privacy risks for LMs can be divided mainly into data pre/post-processing methods and differential privacy methods. ", "title": "Knowledge Unlearning for Mitigating Privacy Risks in Language Models" }, { "id": "2210.01504_all_7", "text": " Data preprocessing aims to sanitize the training data; it aims to get rid of all data that might violate any kind of privacy from the training data prior to training. These methods mostly utilize measures such as parsers and classification models that try to identify and predict patterns that constitute private information. This is effective at identifying well-formatted private information such as social security numbers or special forms of medical notes (Aura et al., 2006; Dernoncourt et al., 2017; Lison et al., 2021; Kandpal et al., 2022). However, as pointed out by Brown et al. (2022), considering that private information is mostly context-dependent and sometimes in a non-specific format, data preprocessing methods cannot fully claim that they provide privacy guarantees, especially guarantees that match each individual’s standards. Methods that attempt to utilize post-processing methods such as applying censorship to the LM outputs still face the same limitations. ", "title": "Knowledge Unlearning for Mitigating Privacy Risks in Language Models" }, { "id": "2210.01504_all_8", "text": " In this work, we compare our proposed method with a data preprocessing approach proposed by Kandpal et al. (2022) which shows that deduplicating the training corpora before pretraining helps pretrain LMs that show stronger robustness against extraction attacks than an LM pretrained under the same circumstances without deduplicating the pretraining corpora. However, we highlight that this approach, which may still be effective at mitigating the overall privacy risks, is not the most suitable approach when considering a realistic scenario of individuals requesting the removal of their information from the implicit parameters of the LMs. ", "title": "Knowledge Unlearning for Mitigating Privacy Risks in Language Models" }, { "id": "2210.01504_all_9", "text": " Differential Privacy (DP) aims to guarantee that the effect of an individual input on the output of a specific function is bounded (Dwork, 2008; Dwork et al., 2006). 
In the context of deep neural networks, DP, which needs to be applied during the training phase, aims to construct models that can provide general guarantees that the individual information within the training data cannot be inferred (Abadi et al., 2016). While DP has shown to be surprisingly effective at fine-tuning LMs (Li et al., 2022; Yu et al., 2022), pretraining LMs with DP still suffers from substantial performance gap, expensive computation, and slow convergence (Anil et al., 2021). Furthermore, as pointed out by Brown et al. (2022), DP can only provide limited guarantees for LMs because DP requires a unified definition for privacy boundaries, which is inherently impossible for natural language data. Most importantly, in a realistic scenario where individuals may practice their Right-To-Be-Forgotten (RTBF) dynamically after model deployment, it is nontrivial to apply existing descent-based DP algorithms such as DP-SGD to only protection against targeted extraction attacks. ", "title": "Knowledge Unlearning for Mitigating Privacy Risks in Language Models" }, { "id": "2210.01504_all_10", "text": " Machine unlearning has received attention as an alternative approach to overcome data privacy issues in machine learning (Cao & Yang, 2015; Ginart et al., 2019; Bourtoule et al., 2021; Graves et al., 2021). Several studies attempt to explore machine unlearning for deep neural networks (Golatkar et al., 2020; Mehta et al., 2022). However, they mostly focus on proposing algorithms for image classification models where they aim to forget a whole class; that is, achieve random performance for specific image classes such as “cats” or “ships”. We are the first, to the best of our knowledge, to explore unlearning a specific sequence of tokens for LMs which is a quite different set-up from traditional image classification models (∼similar-to\\simtens of image classes vs. a sequence of tokens that can each be classified into V∈ℝ∼50,000𝑉superscriptℝsimilar-toabsent50000V\\in\\mathbb{R}^{\\sim 50,000}). In this work, we coin this approach as knowledge unlearning since we are more focused on forgetting specific knowledge represented by sequences of tokens. ", "title": "Knowledge Unlearning for Mitigating Privacy Risks in Language Models" }, { "id": "2210.01504_all_11", "text": " Zhou et al. (2022) focus on how forgetting can be leveraged to improve the performance of the underlying model. They propose “forget-and-relearn” that unifies existing iterative training algorithms by selectively removing undesirable information and re-learning good features, helping boost performance for the task of image classification and multi-agent emergence communication. The underlying assumption is that it is often easier to define and stop unwanted behavior than to teach good behavior. We also show this phenomenon in Section 4 where we unintentionally find unlearning just a few sequences of tokens sometimes boosts general LM capabilities. ", "title": "Knowledge Unlearning for Mitigating Privacy Risks in Language Models" }, { "id": "2210.01504_all_12", "text": " Previous work that explores to which extent LMs have memorized their training data approach the phenomenon with two different viewpoints. Some work view memorization of LMs simply as a threat to individual privacy (Carlini et al., 2021; 2022a; Jagielski et al., 2022) and utilize metrics that quantify how much the LMs are susceptible to adversarial attacks. 
These metrics are mostly dependent on the specific types of attacks such as the membership inference attack (Shokri et al., 2017) and measure the privacy risks of LMs by quantifying the success rate of these attacks. In our work, we instead focus on more targeted extraction attacks. ", "title": "Knowledge Unlearning for Mitigating Privacy Risks in Language Models" }, { "id": "2210.01504_all_13", "text": " Another line of work simply quantifies how much knowledge is accumulated and forgotten during pretraining by extracting relational knowledge about the world (Petroni et al., 2019; Lazaridou et al., 2021; Jang et al., 2022b; a). This line of work does not view memorization as a negative trait, but as a positive one that can be leveraged to extract world knowledge from its implicit parameters and perform knowledge-intensive tasks such as question answering or training knowledgeable conversation agents. ", "title": "Knowledge Unlearning for Mitigating Privacy Risks in Language Models" }, { "id": "2210.01504_all_14", "text": " Our work is highly related to Jagielski et al. (2022)’s work where they also assert that forgetting can be a relaxed version of differential privacy. However, there are two main differences between our work and theirs. First, they only analyze forgetting as a passive form of mitigating privacy, asserting that data seen early in large-scale training obtain privacy benefits, whereas we suggest a more active form of forgetting. Second, they only show analysis results with image classification and audio generation models while we specifically focus on large LMs. ", "title": "Knowledge Unlearning for Mitigating Privacy Risks in Language Models" }, { "id": "2210.01504_all_15", "text": " We propose simply negating the original training objective of minimizing the negative log-likelihood of the token sequences as our main method of knowledge unlearning in LMs. Specifically, given a sequence of tokens 𝒙=(x1,…,xT)𝒙subscript𝑥1…subscript𝑥𝑇\\bm{x}=(x_{1},...,x_{T}), our unlearning training objective is simply maximizing the following loss function: ", "title": "Knowledge Unlearning for Mitigating Privacy Risks in Language Models" }, { "id": "2210.01504_all_16", "text": " ℒU​L​(fθ,𝒙)=−∑t=1Tlog​(pθ​(xt|x<t))subscriptℒ𝑈𝐿subscript𝑓𝜃𝒙superscriptsubscript𝑡1𝑇logsubscript𝑝𝜃conditionalsubscript𝑥𝑡subscript𝑥absent𝑡\\mathcal{L}_{UL}(f_{\\theta},\\bm{x})=-\\sum_{t=1}^{T}\\text{log}(p_{\\theta}(x_{t}|x_{<t})) (1) ", "title": "Knowledge Unlearning for Mitigating Privacy Risks in Language Models" }, { "id": "2210.01504_all_17", "text": " where x<tsubscript𝑥absent𝑡x_{<t} denotes the token sequence x=(x1,…,xt−1)𝑥subscript𝑥1…subscript𝑥𝑡1x=(x_{1},...,x_{t-1}) and pθ​(xt|x<t)subscript𝑝𝜃conditionalsubscript𝑥𝑡subscript𝑥absent𝑡p_{\\theta}(x_{t}|x_{<t}) denotes the conditional probability of predicting the next token to be xtsubscript𝑥𝑡x_{t} when given x<tsubscript𝑥absent𝑡x_{<t} to an LM f𝑓f with parameters θ𝜃\\theta. ", "title": "Knowledge Unlearning for Mitigating Privacy Risks in Language Models" }, { "id": "2210.01504_all_18", "text": " In this subsection, we introduce two metrics we use to quantify the privacy risks given a specific token sequence and how we empirically define the token sequence to be forgotten. In this work, we do not utilize metrics such as membership inference attack recall (Shokri et al., 2017) since we are not interested in quantifying the general privacy risks of LMs, but instead the privacy risks on the specific target token sequences. 
", "title": "Knowledge Unlearning for Mitigating Privacy Risks in Language Models" }, { "id": "2210.01504_all_19", "text": " We first introduce a new metric, EL. Given a sequence of tokens 𝒙=(x1,…,xT)𝒙subscript𝑥1…subscript𝑥𝑇\\bm{x}=(x_{1},...,x_{T}) and an LM f𝑓f with pre-trained parameters θ𝜃\\theta, we define EL to be as follows: ", "title": "Knowledge Unlearning for Mitigating Privacy Risks in Language Models" }, { "id": "2210.01504_all_20", "text": " ELn​(𝒙)=∑t=1T−nOverlapn​(fθ​(x<t),x≥t)T−nsubscriptEL𝑛𝒙superscriptsubscript𝑡1𝑇𝑛subscriptOverlap𝑛subscript𝑓𝜃subscript𝑥absent𝑡subscript𝑥absent𝑡𝑇𝑛\\textsc{EL}_{n}(\\bm{x})=\\dfrac{\\sum_{t=1}^{T-n}\\textsc{Overlap}_{n}(f_{\\theta}(x_{<t}),x_{\\geq t})}{T-n} (2) Overlapn​(𝒂,𝒃)=∑c∈n​-​g​r​a​m​s​(𝒂)𝟙​{c∈n​-​g​r​a​m​s​(𝒃)}|n​-​g​r​a​m​s​(𝒂)|subscriptOverlap𝑛𝒂𝒃subscript𝑐𝑛-𝑔𝑟𝑎𝑚𝑠𝒂1𝑐𝑛-𝑔𝑟𝑎𝑚𝑠𝒃𝑛-𝑔𝑟𝑎𝑚𝑠𝒂\\textsc{Overlap}_{n}(\\bm{a},\\bm{b})=\\dfrac{\\sum_{c\\in n\\mbox{-}grams(\\bm{a})}\\mathbbm{1}\\{c\\in n\\mbox{-}grams(\\bm{b})\\}}{|n\\mbox{-}grams(\\bm{a})|} (3) ", "title": "Knowledge Unlearning for Mitigating Privacy Risks in Language Models" }, { "id": "2210.01504_all_21", "text": " where n​-​g​r​a​m​s​()𝑛-𝑔𝑟𝑎𝑚𝑠n\\mbox{-}grams() denotes the list of n𝑛n-grams in the given token sequence and fθ​(x<t)subscript𝑓𝜃subscript𝑥absent𝑡f_{\\theta}(x_{<t}) denotes the output token sequences from the LM fθsubscript𝑓𝜃f_{\\theta} when given x<tsubscript𝑥absent𝑡x_{<t} as input that can have max lengths |x≥t|subscript𝑥absent𝑡|x_{\\geq t}| but may be shorter when the EOS (end-of-sequence) token is generated beforehand. ", "title": "Knowledge Unlearning for Mitigating Privacy Risks in Language Models" }, { "id": "2210.01504_all_22", "text": " The process of varying the prefix length |x<t|subscript𝑥absent𝑡|x_{<t}| can be seen as varying the strength of adversarial attacks. This is based on the assumption that the more prior information is provided about the target token sequence, the easier the LM will be able to extract it. Overall, EL can be seen as estimating the general extraction likelihood since we are measuring the average success rate of varying extraction attacks quantified via getting the n-gram overlap of generated and target token sequences. While previous metrics quantifying the privacy risks of LMs are dependent on specific adversarial attacks, this characteristic of EL allows it to quantify the general likelihood of extraction without any dependency on specific extraction attacks. ", "title": "Knowledge Unlearning for Mitigating Privacy Risks in Language Models" }, { "id": "2210.01504_all_23", "text": " We regard n𝑛n to be a hyper-parameter that can be varied depending on the stringency of privacy standards. The higher n𝑛n is set, the stricter we set the standard for a successful extraction attack. ", "title": "Knowledge Unlearning for Mitigating Privacy Risks in Language Models" }, { "id": "2210.01504_all_24", "text": " We define Memorization Accuracy (MA) as follows: ", "title": "Knowledge Unlearning for Mitigating Privacy Risks in Language Models" }, { "id": "2210.01504_all_25", "text": " MA​(𝒙)=∑t=1T−1𝟙{ argmax(pθ(⋅|x<t))=xt}T−1\\textsc{MA}(\\bm{x})=\\dfrac{\\sum_{t=1}^{T-1}\\mathbbm{1}\\{\\text{ argmax}(p_{\\theta}(\\cdot|x_{<t}))=x_{t}\\}}{T-1} (4) ", "title": "Knowledge Unlearning for Mitigating Privacy Risks in Language Models" }, { "id": "2210.01504_all_26", "text": " MA quantifies how much fθsubscript𝑓𝜃f_{\\theta} has memorized the given token sequences and was proposed by Tirumala et al. (2022) to analyze the training dynamics of large LMs. 
", "title": "Knowledge Unlearning for Mitigating Privacy Risks in Language Models" }, { "id": "2210.01504_all_27", "text": " By utilizing both ELnsubscriptEL𝑛\\textsc{EL}_{n} and MA, we empirically define a specific token sequence 𝒙𝒙\\bm{x} to be forgotten and is no longer susceptible to extraction attacks when the following conditions are met: ", "title": "Knowledge Unlearning for Mitigating Privacy Risks in Language Models" }, { "id": "2210.01504_all_28", "text": " ELn​(𝒙)≤1|D′|​∑𝒙′∈D′ELn​(𝒙′)​ and MA​(𝒙)≤1|D′|​∑𝒙′∈D′MA​(𝒙′)subscriptEL𝑛𝒙1superscript𝐷′subscriptsuperscript𝒙′superscript𝐷′subscriptEL𝑛superscript𝒙′ and MA𝒙1superscript𝐷′subscriptsuperscript𝒙′superscript𝐷′MAsuperscript𝒙′\\textsc{EL}_{n}(\\bm{x})\\leq\\dfrac{1}{|D^{\\prime}|}\\sum_{\\bm{x}^{\\prime}\\in D^{\\prime}}\\textsc{EL}_{n}(\\bm{x}^{\\prime})\\text{ and }\\textsc{MA}(\\bm{x})\\leq\\dfrac{1}{|D^{\\prime}|}\\sum_{\\bm{x}^{\\prime}\\in D^{\\prime}}\\textsc{MA}(\\bm{x}^{\\prime}) (5) ", "title": "Knowledge Unlearning for Mitigating Privacy Risks in Language Models" }, { "id": "2210.01504_all_29", "text": " where D′superscript𝐷′D^{\\prime} represents a validation corpora not seen during training. In other words, we define 𝒙𝒙\\bm{x} to be forgotten when the ELnsubscriptEL𝑛\\textsc{EL}_{n}(𝒙𝒙\\bm{x}) and MA(𝒙𝒙\\bm{x}) reach a value that is lower than the average ELnsubscriptEL𝑛\\textsc{EL}_{n} and MA on token sequences that were not seen during training. ", "title": "Knowledge Unlearning for Mitigating Privacy Risks in Language Models" }, { "id": "2210.01504_all_30", "text": " For the experiments, we use the GPT-Neo (125M, 1.3B, 2.7B) LMs (Black et al., 2021) initially pretrained on all of the Pile corpora (825GB) (Gao et al., 2020), and the OPT (125M, 1.3B, 2.7B) LMs (Zhang et al., 2022), pretrained on a subset of the deduplicated version of the Pile as well as other corpora from different domains. For the experiments, we perform unlearning the GPT-Neo LMs and quantify the privacy risks of the target data compared to the OPT LMs to measure how effective our proposed approach is in contrast to deduplicating the training corpora before pretraining the underlying LM Kandpal et al. (2022). We do not use the exact LMs from Kandpal et al. (2022) because the LMs were not open-sourced, and thus use the OPT LMs instead. We also consider the Differential Privacy (DP) Decoding (Majmudar et al., 2022) as one of the baselines; This approach proposes a decoding strategy that performs linear interpolation of the original logits with the uniform distribution and performs nucleus sampling, which they theoretically show provides DP guarantees. λ𝜆\\lambda is set as the linear interpolation weight where λ=0𝜆0\\lambda=0 performs nucleus sampling from the uniform distribution and λ=1𝜆1\\lambda=1 performs regular nucleus sampling, using the logits as weights during random sampling. ", "title": "Knowledge Unlearning for Mitigating Privacy Risks in Language Models" }, { "id": "2210.01504_all_31", "text": " For the actual target data used to quantify the privacy risks of the LMs, we sample instances from the Training Data Extraction Challenge 111https://github.com/google-research/lm-extraction-benchmark where 15,000 examples (each are 200 token sequences long) from 16 different domains of the Pile corpora that are identified to be somewhat easy-to-extract are provided. For our experiments, we randomly sample s samples from the 15,000 examples and make the underlying LM forget the s samples at once. 
As a default, we show the average results of 5 random samplings of s samples for all of our experimental settings. We only provide the average of the 5 samplings and do not separately report the standard deviation. Instead, we provide the results of each individual run in Appendix A. ", "title": "Knowledge Unlearning for Mitigating Privacy Risks in Language Models" }, { "id": "2210.01504_all_32", "text": " Providing stronger privacy protections for LMs may become meaningless if it requires sacrificing their original capabilities. Thus, while quantifying the privacy risks of LMs, we also quantify the original LM capabilities by evaluating the LMs on 9 different classification tasks quantifying the general capabilities: Hellaswag (Zellers et al., 2019) and Lambada (Paperno et al., 2016) benchmarks to measure linguistic reasoning abilities, Winogrande (Sakaguchi et al., 2021) and COPA (Gordon et al., 2012) to measure commonsense reasoning abilities, and ARC-Easy (Clark et al., 2018), ARC-Challenge (Clark et al., 2018), Piqa (Bisk et al., 2020), MathQA (Amini et al., 2019), PubmedQA (Jin et al., 2019) benchmarks to measure the scientific reasoning abilities. We also evaluate on 4 dialogue tasks (Wizard of Wikipedia (Dinan et al., 2019), Empathetic Dialogues (Rashkin et al., 2019), Blended Skill Talk (Smith et al., 2020), and Wizard of Internet (Komeili et al., 2022)) to evaluate the generation capabilities of the LMs. We use the test set for Lambada and the validation set for the rest of the datasets. We also show the results of measuring the perplexity on the validation corpora of Pile and Wikitext in Appendix B. We do not include measuring perplexity as one of the main evaluations because perplexity might not be the most suitable metric for quantifying general LM performance, especially in the case of unlearning (further explanation given in Appendix B. We evaluate DP Decoding only on the 4 dialogue tasks because the decoding strategy cannot be applied for performing the classification tasks which is evaluated by utilizing a verbalizer. ", "title": "Knowledge Unlearning for Mitigating Privacy Risks in Language Models" }, { "id": "2210.01504_all_33", "text": " For the learning rate, we set it to 5e-5. We show the effect of varying learning rates in Appendix D. We use a constant learning rate scheduling throughout the run. We fix the global batch size to be the same as s (how many samples are forgotten at once) because having global batch sizes smaller than s𝑠s proved to degrade general LM capabilities 222In Section 4.3, We show that s𝑠s plays a critical role in determining how much the unlearning will degrade in general capabilities of the LM since s=128𝑠128s=128 shows to result in much degradation. Method to mitigate this is proposed in Section 4.3 as well.. For ELnsubscriptEL𝑛\\textsc{EL}_{n}, we set n=10 which means EL measures the extraction likelihood of extracting n consecutive tokens of varying extraction attack 333We set the n𝑛n value to 10 since we empirically consider an extraction to be successful when 10 consecutive token sequences are successfully generated by the LM. We show varying the n𝑛n with values from (5,10,20,40) in Appendix H.. For calculating EL10subscriptEL10\\textsc{EL}_{10} and MA, we use a naïve greedy decoding strategy. We set both the dropout and weight decay rates to 0. 
Lastly, while we provide a guideline of empirically deciding a single token sequence to be forgotten in Section 3.2, for considering a chunk of s𝑠s token sequences to be forgotten, we use the average EL10subscriptEL10\\textsc{EL}_{10} and MA as an approximation of the individual EL10subscriptEL10\\textsc{EL}_{10} and MA. ", "title": "Knowledge Unlearning for Mitigating Privacy Risks in Language Models" }, { "id": "2210.01504_all_34", "text": " First, we show how we get the Forgetting Threshold for EL10subscriptEL10\\textsc{EL}_{10} and MA, the values where we consider the token sequence to be forgotten and unsusceptible from extraction attacks, for all model sizes of GPT-Neo LMs in Table 1. For D′superscript𝐷′D^{\\prime}, we perform weighted sampling (same domain distribution as the Pile training corpora) of 10,000 instances each with token lengths 200 from the Pile validation corpora, and measure the average EL10subscriptEL10\\textsc{EL}_{10} and MA (Equation 5), which are empirically set as the Forgetting Threshold values. ", "title": "Knowledge Unlearning for Mitigating Privacy Risks in Language Models" }, { "id": "2210.01504_all_35", "text": " Table 2 shows the main results of performing unlearning on LMs of varying sizes and the baselines. While we provide the average performances of the 5 random samplings in Table 2, we provide each individual runs in Appendix A for reference. ", "title": "Knowledge Unlearning for Mitigating Privacy Risks in Language Models" }, { "id": "2210.01504_all_36", "text": " We highlight five main observations regarding the results. (1) OPT LMs show a much lower EL10subscriptEL10\\textsc{EL}_{10} and MA than GPT-Neo LMs, confirming that deduplicating the pretraining corpora is indeed helpful for mitigating privacy risks. (2) Neo + DPD+ enables effective protection against extraction attacks demonstrated via the lowest EL and MA score; however, it brings severe degradation of generation capabilities measured via the Average F1 score of the 4 dialogue generation tasks. (3) Neo + UL+superscriptNeo + UL\\textsc{Neo + UL}^{+} results in severe degradation of both classification and dialogue tasks for the 125M, only severe degradation of dialogue tasks for 1.3B LM while for the 2.7B LMs, it enables retaining most of its previous capabilities. (4) While the LMs scale to larger sizes, it takes fewer epochs for the target sequences to be forgotten. Together with (3), this implies that larger LMs are strong unlearners. (5) While Neo + UL+superscriptNeo + UL\\textsc{Neo + UL}^{+} provides stronger privacy protection than OPT without sacrificing its performance from Neo for the 2.7B LM, it is much more computationally efficient (3,500,000x) than re-training the underlying LM, which is required for all data preprocessing approaches 444Computational efficiency is measured via FLOPs which is calculated by (6 × Total Training Tokens × Parameter Size) as in Brown et al. (2020). FLOPs for OPT LMs were estimated using information from Zhang et al. (2022). We provide the FLOPs for the methods in Appendix C.. ", "title": "Knowledge Unlearning for Mitigating Privacy Risks in Language Models" }, { "id": "2210.01504_all_37", "text": " Overall, results show unlearning to be an effective approach to providing a strong privacy protection while retaining and sometimes even improving general LM capabilities. 
", "title": "Knowledge Unlearning for Mitigating Privacy Risks in Language Models" }, { "id": "2210.01504_all_38", "text": " We show the effect of varying s𝑠s (the # of data instances to be forgotten at once) in Figure 2 across model scales. We denote this approach as batch unlearning. As shown by the s=128𝑠128s=128 results, it is harder to forget more samples at once, resulting in substantial degradation of average LM performance regardless of how large the LM is. Since s≤32𝑠32s\\leq 32 does not show much degradation, we explore if sequentially unlearning can be a solution. In Figure 2b, we show the result of dividing the 128 samples into 4 chunks of 32 and performing sequential unlearning; we unlearn each chunk at a time until the chunk reaches the forgetting threshold. Surprisingly, as shown by the performance gap at s=128𝑠128s=128 between the dotted lines (the s=128𝑠128s=128 performance of Figure 2a) and straight lines, the end result is vastly different even though exactly the same instances were forgotten. Sequential unlearning shows almost no degradation of average LM performance. In Appendix G, we show that chunks once forgotten stay forgotten and that later chunks are forgotten much faster compared to the initial chunk. This result hints at the generalization of unlearning, which we do not further explore in the scope of this work. The result also suggests that knowledge unlearning can be continually applied to LMs when needed. ", "title": "Knowledge Unlearning for Mitigating Privacy Risks in Language Models" }, { "id": "2210.01504_all_39", "text": " To show exactly what happens to the LM during knowledge unlearning, we show how the performance of each of the LM benchmarks changes as we perform 10 runs of unlearning to the GPT-Neo (1.3B) model (each run with s=1𝑠1s=1) in Figure 3. As shown in the figure, the LM performance for each benchmark varies tremendously on which sample is chosen to be forgotten. Furthermore, the ending time of each run is different, indicating that some samples are forgotten faster than others. ", "title": "Knowledge Unlearning for Mitigating Privacy Risks in Language Models" }, { "id": "2210.01504_all_40", "text": " To provide a better intuition of exactly how knowledge unlearning guarantees privacy, we perform an extraction attack with a token sequence sample in Table 3 where we show the model-generated text from the extraction attack before and after applying knowledge unlearning. While the extraction attack is extremely successful at extracting the rest of the suffix before unlearning (100% of the token sequence), only a small portion (∼similar-to\\sim3% of the token sequence) of the suffix is extracted after applying unlearning. ", "title": "Knowledge Unlearning for Mitigating Privacy Risks in Language Models" }, { "id": "2210.01504_all_41", "text": " To measure why some instances are harder to forget, we perform 5 random samplings of s=8𝑠8s=8 from 8 different domains from the Training Data Extraction Challenge555https://github.com/google-research/lm-extraction-benchmark and perform unlearning on the GPT-Neo 1.3B LM. We also show the results of each individual run in Appendix A. As shown in Table 4, despite undergoing the same number of token updates (10 epochs of unlearning), different domains result in vastly different outcomes; enron emails results in the average LM performance degradation of only -0.4% while uspto backgrounds results in -4.5% degradation. 
Furthermore, the final EL10subscriptEL10\\textsc{EL}_{10} varies depending on the domain, suggesting that some domains (e.g., Freelaw) are harder to forget than others. Lastly, domains that are more structured, which means the data consists of some kind of patterns such as a list of emails (enron emails) or code (github (code)), seem to result in less degradation of LM performance in contrast to domains that are more unstructured, which means the data consist of mostly raw English text such as a review for journal submission (pubmed central). We provide examples from each domain in Appendix E. However, further analysis of understanding exactly which components make unlearning work should be made in future work. ", "title": "Knowledge Unlearning for Mitigating Privacy Risks in Language Models" }, { "id": "2210.01504_all_42", "text": " In this paper, we propose knowledge unlearning as a method for mitigating privacy risks in LMs that provides strong privacy protection with little to no degradation of general LM capabilities measured by evaluating on 9 common LM classification benchmarks and 4 dialogue benchmarks for the larger sized LMs. As large LMs expand their use cases, potentially affecting the daily lives of people, the research community should make sure that the privacy of individuals is not violated intentionally or unintentionally by the knowledge stored in the implicit parameters of these models. Since it is inherently impossible to prevent and predict all future privacy concerns prior to pretraining the LM, we suggest the community consider knowledge unlearning for ensuring privacy upon individuals’ requests post hoc pretraining. 666We provide some limitations of our work in Appendix I.. ", "title": "Knowledge Unlearning for Mitigating Privacy Risks in Language Models" }, { "id": "2210.01504_all_43", "text": " We thank Hanseok Oh, Minsu Kim, James Thorne, and Hyunji Lee for the useful discussion and feedback while preparing the paper draft. This work was supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government(MSIT) (No.2019-0-00075, Artificial Intelligence Graduate School Program(KAIST)). ", "title": "Knowledge Unlearning for Mitigating Privacy Risks in Language Models" } ]
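The unlearning objective quoted in these passages (Eq. 1, maximized rather than minimized) amounts to plain gradient ascent on the usual token-level negative log-likelihood. Below is a minimal sketch of one such update, assuming a Hugging Face-style causal LM; the model id, optimizer, and hyperparameters are illustrative stand-ins, not the paper's exact configuration.

```python
# Sketch: one gradient-ascent ("unlearning") step on the NLL of a target sequence.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-neo-125M"   # assumption: any causal LM would do here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

def unlearning_step(text: str) -> float:
    """One step that maximizes Eq. 1 (the token-level NLL) by minimizing its negation."""
    batch = tokenizer(text, return_tensors="pt")
    outputs = model(**batch, labels=batch["input_ids"])
    loss = -outputs.loss            # negate the standard LM loss ...
    optimizer.zero_grad()
    loss.backward()                 # ... so a vanilla minimizer ascends the NLL
    optimizer.step()
    return outputs.loss.item()      # report the un-negated NLL for monitoring
```

In practice, the forgetting condition of Eq. 5 (EL_n and MA dropping below their validation-set averages) would decide when to stop repeating this step for a given chunk of target sequences.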
How can we make the network invariant to gray value variations?
Data augmentation and drop-out layers can make the network invariant to gray value variations [15].
[ 15 ]
[ { "id": "1505.04597_all_0", "text": " In the last two years, deep convolutional networks have outperformed the state of the art in many visual recognition tasks, e.g. (7, 3). While convolutional networks have already existed for a long time , their success was limited due to the size of the available training sets and the size of the considered networks. The breakthrough by Krizhevsky et al.  was due to supervised training of a large network with 8 layers and millions of parameters on the ImageNet dataset with 1 million training images. Since then, even larger and deeper networks have been trained . ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_1", "text": " The typical use of convolutional networks is on classification tasks, where the output to an image is a single class label. However, in many visual tasks, especially in biomedical image processing, the desired output should include localization, i.e., a class label is supposed to be assigned to each pixel. Moreover, thousands of training images are usually beyond reach in biomedical tasks. Hence, Ciresan et al.  trained a network in a sliding-window setup to predict the class label of each pixel by providing a local region (patch) around that pixel as input. First, this network can localize. Secondly, the training data in terms of patches is much larger than the number of training images. The resulting network won the EM segmentation challenge at ISBI 2012 by a large margin. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_2", "text": " Obviously, the strategy in Ciresan et al.  has two drawbacks. First, it is quite slow because the network must be run separately for each patch, and there is a lot of redundancy due to overlapping patches. Secondly, there is a trade-off between localization accuracy and the use of context. Larger patches require more max-pooling layers that reduce the localization accuracy, while small patches allow the network to see only little context. More recent approaches (11, 4) proposed a classifier output that takes into account the features from multiple layers. Good localization and the use of context are possible at the same time. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_3", "text": " In this paper, we build upon a more elegant architecture, the so-called “fully convolutional network” . We modify and extend this architecture such that it works with very few training images and yields more precise segmentations; see Figure 1. The main idea in is to supplement a usual contracting network by successive layers, where pooling operators are replaced by upsampling operators. Hence, these layers increase the resolution of the output. In order to localize, high resolution features from the contracting path are combined with the upsampled output. A successive convolution layer can then learn to assemble a more precise output based on this information. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_4", "text": " One important modification in our architecture is that in the upsampling part we have also a large number of feature channels, which allow the network to propagate context information to higher resolution layers. As a consequence, the expansive path is more or less symmetric to the contracting path, and yields a u-shaped architecture. 
The network does not have any fully connected layers and only uses the valid part of each convolution, i.e., the segmentation map only contains the pixels, for which the full context is available in the input image. This strategy allows the seamless segmentation of arbitrarily large images by an overlap-tile strategy (see Figure 2). To predict the pixels in the border region of the image, the missing context is extrapolated by mirroring the input image. This tiling strategy is important to apply the network to large images, since otherwise the resolution would be limited by the GPU memory. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_5", "text": " As for our tasks there is very little training data available, we use excessive data augmentation by applying elastic deformations to the available training images. This allows the network to learn invariance to such deformations, without the need to see these transformations in the annotated image corpus. This is particularly important in biomedical segmentation, since deformation used to be the most common variation in tissue and realistic deformations can be simulated efficiently. The value of data augmentation for learning invariance has been shown in Dosovitskiy et al.  in the scope of unsupervised feature learning. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_6", "text": " Another challenge in many cell segmentation tasks is the separation of touching objects of the same class; see Figure 3. To this end, we propose the use of a weighted loss, where the separating background labels between touching cells obtain a large weight in the loss function. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_7", "text": " The resulting network is applicable to various biomedical segmentation problems. In this paper, we show results on the segmentation of neuronal structures in EM stacks (an ongoing competition started at ISBI 2012), where we outperformed the network of Ciresan et al. . Furthermore, we show results for cell segmentation in light microscopy images from the ISBI cell tracking challenge 2015. Here we won with a large margin on the two most challenging 2D transmitted light datasets. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_8", "text": " The network architecture is illustrated in Figure 1. It consists of a contracting path (left side) and an expansive path (right side). The contracting path follows the typical architecture of a convolutional network. It consists of the repeated application of two 3x3 convolutions (unpadded convolutions), each followed by a rectified linear unit (ReLU) and a 2x2 max pooling operation with stride 2 for downsampling. At each downsampling step we double the number of feature channels. Every step in the expansive path consists of an upsampling of the feature map followed by a 2x2 convolution (“up-convolution”) that halves the number of feature channels, a concatenation with the correspondingly cropped feature map from the contracting path, and two 3x3 convolutions, each followed by a ReLU. The cropping is necessary due to the loss of border pixels in every convolution. At the final layer a 1x1 convolution is used to map each 64-component feature vector to the desired number of classes. In total the network has 23 convolutional layers. 
", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_9", "text": " To allow a seamless tiling of the output segmentation map (see Figure 2), it is important to select the input tile size such that all 2x2 max-pooling operations are applied to a layer with an even x- and y-size. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_10", "text": " The input images and their corresponding segmentation maps are used to train the network with the stochastic gradient descent implementation of Caffe . Due to the unpadded convolutions, the output image is smaller than the input by a constant border width. To minimize the overhead and make maximum use of the GPU memory, we favor large input tiles over a large batch size and hence reduce the batch to a single image. Accordingly we use a high momentum (0.99) such that a large number of the previously seen training samples determine the update in the current optimization step. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_11", "text": " The energy function is computed by a pixel-wise soft-max over the final feature map combined with the cross entropy loss function. The soft-max is defined as pk​(𝐱)=exp⁡(ak​(𝐱))/(∑k′=1Kexp⁡(ak′​(𝐱)))subscript𝑝𝑘𝐱subscript𝑎𝑘𝐱superscriptsubscriptsuperscript𝑘′1𝐾subscript𝑎superscript𝑘′𝐱{p}_{k}(\\boldsymbol{\\mathbf{x}})=\\exp({a_{k}(\\boldsymbol{\\mathbf{x}})})/\\left(\\sum_{k^{\\prime}=1}^{K}\\exp(a_{k^{\\prime}}(\\boldsymbol{\\mathbf{x}}))\\right) where ak​(𝐱)subscript𝑎𝑘𝐱a_{k}(\\boldsymbol{\\mathbf{x}}) denotes the activation in feature channel k𝑘k at the pixel position 𝐱∈Ω𝐱Ω\\boldsymbol{\\mathbf{x}}\\in\\Omega with Ω⊂ℤ2Ωsuperscriptℤ2\\Omega\\subset\\mathbb{Z}^{2}. K𝐾K is the number of classes and pk​(𝐱)subscript𝑝𝑘𝐱{p}_{k}(\\boldsymbol{\\mathbf{x}}) is the approximated maximum-function. I.e. pk​(𝐱)≈1subscript𝑝𝑘𝐱1{p}_{k}(\\boldsymbol{\\mathbf{x}})\\approx 1 for the k𝑘k that has the maximum activation ak​(𝐱)subscript𝑎𝑘𝐱a_{k}(\\boldsymbol{\\mathbf{x}}) and pk​(𝐱)≈0subscript𝑝𝑘𝐱0{p}_{k}(\\boldsymbol{\\mathbf{x}})\\approx 0 for all other k𝑘k. The cross entropy then penalizes at each position the deviation of pℓ​(𝐱)​(𝐱)subscript𝑝ℓ𝐱𝐱{p}_{\\ell(\\boldsymbol{\\mathbf{x}})}(\\boldsymbol{\\mathbf{x}}) from 1 using E=∑𝐱∈Ωw​(𝐱)​log⁡(pℓ​(𝐱)​(𝐱))𝐸subscript𝐱Ω𝑤𝐱subscript𝑝ℓ𝐱𝐱E=\\sum_{\\boldsymbol{\\mathbf{x}}\\in\\Omega}w(\\boldsymbol{\\mathbf{x}})\\log({p}_{\\ell(\\boldsymbol{\\mathbf{x}})}(\\boldsymbol{\\mathbf{x}})) (1) where ℓ:Ω→{1,…,K}:ℓ→Ω1…𝐾\\ell:\\Omega\\rightarrow\\{1,\\dots,K\\} is the true label of each pixel and w:Ω→ℝ:𝑤→Ωℝw:\\Omega\\rightarrow\\mathds{R} is a weight map that we introduced to give some pixels more importance in the training. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_12", "text": " We pre-compute the weight map for each ground truth segmentation to compensate the different frequency of pixels from a certain class in the training data set, and to force the network to learn the small separation borders that we introduce between touching cells (See Figure 3c and d). ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_13", "text": " The separation border is computed using morphological operations. 
The weight map is then computed as w​(𝐱)=wc​(𝐱)+w0⋅exp⁡(−(d1​(𝐱)+d2​(𝐱))22​σ2)𝑤𝐱subscript𝑤𝑐𝐱⋅subscript𝑤0superscriptsubscript𝑑1𝐱subscript𝑑2𝐱22superscript𝜎2w(\\boldsymbol{\\mathbf{x}})=w_{c}(\\boldsymbol{\\mathbf{x}})+w_{0}\\cdot\\exp\\left(-\\frac{(d_{1}(\\boldsymbol{\\mathbf{x}})+d_{2}(\\boldsymbol{\\mathbf{x}}))^{2}}{2\\sigma^{2}}\\right) (2) where wc:Ω→ℝ:subscript𝑤𝑐→Ωℝw_{c}:\\Omega\\rightarrow\\mathds{R} is the weight map to balance the class frequencies, d1:Ω→ℝ:subscript𝑑1→Ωℝd_{1}:\\Omega\\rightarrow\\mathds{R} denotes the distance to the border of the nearest cell and d2:Ω→ℝ:subscript𝑑2→Ωℝd_{2}:\\Omega\\rightarrow\\mathds{R} the distance to the border of the second nearest cell. In our experiments we set w0=10subscript𝑤010w_{0}=10 and σ≈5𝜎5\\sigma\\approx 5 pixels. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_14", "text": " In deep networks with many convolutional layers and different paths through the network, a good initialization of the weights is extremely important. Otherwise, parts of the network might give excessive activations, while other parts never contribute. Ideally the initial weights should be adapted such that each feature map in the network has approximately unit variance. For a network with our architecture (alternating convolution and ReLU layers) this can be achieved by drawing the initial weights from a Gaussian distribution with a standard deviation of 2/N2𝑁\\sqrt{2/N}, where N𝑁N denotes the number of incoming nodes of one neuron . E.g. for a 3x3 convolution and 64 feature channels in the previous layer N=9⋅64=576𝑁⋅964576N=9\\cdot 64=576. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_15", "text": " Data augmentation is essential to teach the network the desired invariance and robustness properties, when only few training samples are available. In case of microscopical images we primarily need shift and rotation invariance as well as robustness to deformations and gray value variations. Especially random elastic deformations of the training samples seem to be the key concept to train a segmentation network with very few annotated images. We generate smooth deformations using random displacement vectors on a coarse 3 by 3 grid. The displacements are sampled from a Gaussian distribution with 10 pixels standard deviation. Per-pixel displacements are then computed using bicubic interpolation. Drop-out layers at the end of the contracting path perform further implicit data augmentation. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_16", "text": " We demonstrate the application of the u-net to three different segmentation tasks. The first task is the segmentation of neuronal structures in electron microscopic recordings. An example of the data set and our obtained segmentation is displayed in Figure 2. We provide the full result as Supplementary Material. The data set is provided by the EM segmentation challenge  that was started at ISBI 2012 and is still open for new contributions. The training data is a set of 30 images (512x512 pixels) from serial section transmission electron microscopy of the Drosophila first instar larva ventral nerve cord (VNC). Each image comes with a corresponding fully annotated ground truth segmentation map for cells (white) and membranes (black). The test set is publicly available, but its segmentation maps are kept secret. 
An evaluation can be obtained by sending the predicted membrane probability map to the organizers. The evaluation is done by thresholding the map at 10 different levels and computation of the “warping error”, the “Rand error” and the “pixel error” . ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_17", "text": " The u-net (averaged over 7 rotated versions of the input data) achieves without any further pre- or postprocessing a warping error of 0.0003529 (the new best score, see Table 1) and a rand-error of 0.0382. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_18", "text": " This is significantly better than the sliding-window convolutional network result by Ciresan et al. , whose best submission had a warping error of 0.000420 and a rand error of 0.0504. In terms of rand error the only better performing algorithms on this data set use highly data set specific post-processing methods111The authors of this algorithm have submitted 78 different solutions to achieve this result. applied to the probability map of Ciresan et al. . ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_19", "text": " We also applied the u-net to a cell segmentation task in light microscopic images. This segmenation task is part of the ISBI cell tracking challenge 2014 and 2015 (10, 13). The first data set “PhC-U373”222Data set provided by Dr. Sanjay Kumar. Department of Bioengineering University of California at Berkeley. Berkeley CA (USA) contains Glioblastoma-astrocytoma U373 cells on a polyacrylimide substrate recorded by phase contrast microscopy (see Figure 4a,b and Supp. Material). It contains 35 partially annotated training images. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_20", "text": " Here we achieve an average IOU (“intersection over union”) of 92%, which is significantly better than the second best algorithm with 83% (see Table 2). ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_21", "text": " The second data set “DIC-HeLa”333Data set provided by Dr. Gert van Cappellen Erasmus Medical Center. Rotterdam. The Netherlands are HeLa cells on a flat glass recorded by differential interference contrast (DIC) microscopy (see Figure 3, Figure 4c,d and Supp. Material). It contains 20 partially annotated training images. Here we achieve an average IOU of 77.5% which is significantly better than the second best algorithm with 46%. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_22", "text": " The u-net architecture achieves very good performance on very different biomedical segmentation applications. Thanks to data augmentation with elastic deformations, it only needs very few annotated images and has a very reasonable training time of only 10 hours on a NVidia Titan GPU (6 GB). We provide the full Caffe-based implementation and the trained networks444U-net implementation, trained networks and supplementary material available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. We are sure that the u-net architecture can be applied easily to many more tasks. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" } ]
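The border-weighting term of the weight map in Eq. 2 of the quoted U-Net passages can be computed from an instance-labeled mask with one distance transform per cell. The sketch below is a rough approximation under stated assumptions (integer cell labels with 0 as background, a simple inverse-frequency choice for the class-balancing term w_c, and the border term applied only to background pixels); it is not the authors' original Caffe implementation.

```python
# Sketch: U-Net-style pixel weight map, w(x) = w_c(x) + w0 * exp(-(d1 + d2)^2 / (2*sigma^2)).
import numpy as np
from scipy.ndimage import distance_transform_edt

def unet_weight_map(labels: np.ndarray, w0: float = 10.0, sigma: float = 5.0) -> np.ndarray:
    # Class-balancing term w_c: inverse frequency of foreground vs. background pixels (assumption).
    fg = labels > 0
    w_c = np.where(fg, 0.5 / max(fg.mean(), 1e-8), 0.5 / max(1.0 - fg.mean(), 1e-8))
    cell_ids = np.unique(labels)
    cell_ids = cell_ids[cell_ids > 0]
    if len(cell_ids) < 2:
        return w_c                      # the border term needs at least two cells
    # For every pixel, distance to each cell (EDT of the cell's complement).
    dists = np.stack([distance_transform_edt(labels != cid) for cid in cell_ids])
    dists.sort(axis=0)
    d1, d2 = dists[0], dists[1]         # nearest and second-nearest cell
    border = w0 * np.exp(-((d1 + d2) ** 2) / (2.0 * sigma ** 2))
    return w_c + border * (~fg)         # emphasize background gaps between touching cells
```

A typical use is to multiply this map into the per-pixel cross-entropy, as in the weighted energy function of the same passages.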
What are the main findings of the experiments with respect to plain networks and residual networks?
Residual networks achieve lower training error than plain networks [33].
[ 33 ]
[ { "id": "1512.03385_all_0", "text": " Deep convolutional neural networks (22, 21) have led to a series of breakthroughs for image classification (21, 50, 40). Deep networks naturally integrate low/mid/high-level features and classifiers in an end-to-end multi-layer fashion, and the “levels” of features can be enriched by the number of stacked layers (depth). Recent evidence (41, 44) reveals that network depth is of crucial importance, and the leading results (41, 44, 13, 16) on the challenging ImageNet dataset all exploit “very deep” models, with a depth of sixteen to thirty . Many other nontrivial visual recognition tasks (8, 12, 7, 32, 27) have also greatly benefited from very deep models. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_1", "text": " Driven by the significance of depth, a question arises: Is learning better networks as easy as stacking more layers? An obstacle to answering this question was the notorious problem of vanishing/exploding gradients (1, 9), which hamper convergence from the beginning. This problem, however, has been largely addressed by normalized initialization (23, 9, 37, 13) and intermediate normalization layers , which enable networks with tens of layers to start converging for stochastic gradient descent (SGD) with backpropagation . ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_2", "text": " When deeper networks are able to start converging, a degradation problem has been exposed: with the network depth increasing, accuracy gets saturated (which might be unsurprising) and then degrades rapidly. Unexpectedly, such degradation is not caused by overfitting, and adding more layers to a suitably deep model leads to higher training error, as reported in (11, 42) and thoroughly verified by our experiments. Fig. 1 shows a typical example. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_3", "text": " The degradation (of training accuracy) indicates that not all systems are similarly easy to optimize. Let us consider a shallower architecture and its deeper counterpart that adds more layers onto it. There exists a solution by construction to the deeper model: the added layers are identity mapping, and the other layers are copied from the learned shallower model. The existence of this constructed solution indicates that a deeper model should produce no higher training error than its shallower counterpart. But experiments show that our current solvers on hand are unable to find solutions that are comparably good or better than the constructed solution (or unable to do so in feasible time). ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_4", "text": " In this paper, we address the degradation problem by introducing a deep residual learning framework. Instead of hoping each few stacked layers directly fit a desired underlying mapping, we explicitly let these layers fit a residual mapping. Formally, denoting the desired underlying mapping as ℋ​(𝐱)ℋ𝐱\\mathcal{H}(\\mathbf{x}), we let the stacked nonlinear layers fit another mapping of ℱ​(𝐱):=ℋ​(𝐱)−𝐱assignℱ𝐱ℋ𝐱𝐱\\mathcal{F}(\\mathbf{x}):=\\mathcal{H}(\\mathbf{x})-\\mathbf{x}. The original mapping is recast into ℱ​(𝐱)+𝐱ℱ𝐱𝐱\\mathcal{F}(\\mathbf{x})+\\mathbf{x}. We hypothesize that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. 
To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_5", "text": " The formulation of ℱ​(𝐱)+𝐱ℱ𝐱𝐱\\mathcal{F}(\\mathbf{x})+\\mathbf{x} can be realized by feedforward neural networks with “shortcut connections” (Fig. 2). Shortcut connections (2, 34, 49) are those skipping one or more layers. In our case, the shortcut connections simply perform identity mapping, and their outputs are added to the outputs of the stacked layers (Fig. 2). Identity shortcut connections add neither extra parameter nor computational complexity. The entire network can still be trained end-to-end by SGD with backpropagation, and can be easily implemented using common libraries (e.g., Caffe ) without modifying the solvers. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_6", "text": " We present comprehensive experiments on ImageNet to show the degradation problem and evaluate our method. We show that: 1) Our extremely deep residual nets are easy to optimize, but the counterpart “plain” nets (that simply stack layers) exhibit higher training error when the depth increases; 2) Our deep residual nets can easily enjoy accuracy gains from greatly increased depth, producing results substantially better than previous networks. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_7", "text": " Similar phenomena are also shown on the CIFAR-10 set , suggesting that the optimization difficulties and the effects of our method are not just akin to a particular dataset. We present successfully trained models on this dataset with over 100 layers, and explore models with over 1000 layers. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_8", "text": " On the ImageNet classification dataset , we obtain excellent results by extremely deep residual nets. Our 152-layer residual net is the deepest network ever presented on ImageNet, while still having lower complexity than VGG nets . Our ensemble has 3.57% top-5 error on the ImageNet test set, and won the 1st place in the ILSVRC 2015 classification competition. The extremely deep representations also have excellent generalization performance on other recognition tasks, and lead us to further win the 1st places on: ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation in ILSVRC & COCO 2015 competitions. This strong evidence shows that the residual learning principle is generic, and we expect that it is applicable in other vision and non-vision problems. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_9", "text": " Residual Representations. In image recognition, VLAD is a representation that encodes by the residual vectors with respect to a dictionary, and Fisher Vector can be formulated as a probabilistic version of VLAD. Both of them are powerful shallow representations for image retrieval and classification (4, 48). For vector quantization, encoding residual vectors is shown to be more effective than encoding original vectors. 
", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_10", "text": " In low-level vision and computer graphics, for solving Partial Differential Equations (PDEs), the widely used Multigrid method reformulates the system as subproblems at multiple scales, where each subproblem is responsible for the residual solution between a coarser and a finer scale. An alternative to Multigrid is hierarchical basis preconditioning (45, 46), which relies on variables that represent residual vectors between two scales. It has been shown (3, 45, 46) that these solvers converge much faster than standard solvers that are unaware of the residual nature of the solutions. These methods suggest that a good reformulation or preconditioning can simplify the optimization. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_11", "text": " Shortcut Connections. Practices and theories that lead to shortcut connections (2, 34, 49) have been studied for a long time. An early practice of training multi-layer perceptrons (MLPs) is to add a linear layer connected from the network input to the output (34, 49). In (44, 24), a few intermediate layers are directly connected to auxiliary classifiers for addressing vanishing/exploding gradients. The papers of (39, 38, 31, 47) propose methods for centering layer responses, gradients, and propagated errors, implemented by shortcut connections. In , an “inception” layer is composed of a shortcut branch and a few deeper branches. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_12", "text": " Concurrent with our work, “highway networks” (42, 43) present shortcut connections with gating functions . These gates are data-dependent and have parameters, in contrast to our identity shortcuts that are parameter-free. When a gated shortcut is “closed” (approaching zero), the layers in highway networks represent non-residual functions. On the contrary, our formulation always learns residual functions; our identity shortcuts are never closed, and all information is always passed through, with additional residual functions to be learned. In addition, highway networks have not demonstrated accuracy gains with extremely increased depth (e.g., over 100 layers). ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_13", "text": " Let us consider ℋ​(𝐱)ℋ𝐱\\mathcal{H}(\\mathbf{x}) as an underlying mapping to be fit by a few stacked layers (not necessarily the entire net), with 𝐱𝐱\\mathbf{x} denoting the inputs to the first of these layers. If one hypothesizes that multiple nonlinear layers can asymptotically approximate complicated functions222This hypothesis, however, is still an open question. See ., then it is equivalent to hypothesize that they can asymptotically approximate the residual functions, i.e., ℋ​(𝐱)−𝐱ℋ𝐱𝐱\\mathcal{H}(\\mathbf{x})-\\mathbf{x} (assuming that the input and output are of the same dimensions). So rather than expect stacked layers to approximate ℋ​(𝐱)ℋ𝐱\\mathcal{H}(\\mathbf{x}), we explicitly let these layers approximate a residual function ℱ​(𝐱):=ℋ​(𝐱)−𝐱assignℱ𝐱ℋ𝐱𝐱\\mathcal{F}(\\mathbf{x}):=\\mathcal{H}(\\mathbf{x})-\\mathbf{x}. The original function thus becomes ℱ​(𝐱)+𝐱ℱ𝐱𝐱\\mathcal{F}(\\mathbf{x})+\\mathbf{x}. Although both forms should be able to asymptotically approximate the desired functions (as hypothesized), the ease of learning might be different. 
", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_14", "text": " This reformulation is motivated by the counterintuitive phenomena about the degradation problem (Fig. 1, left). As we discussed in the introduction, if the added layers can be constructed as identity mappings, a deeper model should have training error no greater than its shallower counterpart. The degradation problem suggests that the solvers might have difficulties in approximating identity mappings by multiple nonlinear layers. With the residual learning reformulation, if identity mappings are optimal, the solvers may simply drive the weights of the multiple nonlinear layers toward zero to approach identity mappings. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_15", "text": " In real cases, it is unlikely that identity mappings are optimal, but our reformulation may help to precondition the problem. If the optimal function is closer to an identity mapping than to a zero mapping, it should be easier for the solver to find the perturbations with reference to an identity mapping, than to learn the function as a new one. We show by experiments (Fig. 7) that the learned residual functions in general have small responses, suggesting that identity mappings provide reasonable preconditioning. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_16", "text": " We adopt residual learning to every few stacked layers. A building block is shown in Fig. 2. Formally, in this paper we consider a building block defined as: 𝐲=ℱ​(𝐱,{Wi})+𝐱.𝐲ℱ𝐱subscript𝑊𝑖𝐱\\mathbf{y}=\\mathcal{F}(\\mathbf{x},\\{W_{i}\\})+\\mathbf{x}. (1) Here 𝐱𝐱\\mathbf{x} and 𝐲𝐲\\mathbf{y} are the input and output vectors of the layers considered. The function ℱ​(𝐱,{Wi})ℱ𝐱subscript𝑊𝑖\\mathcal{F}(\\mathbf{x},\\{W_{i}\\}) represents the residual mapping to be learned. For the example in Fig. 2 that has two layers, ℱ=W2​σ​(W1​𝐱)ℱsubscript𝑊2𝜎subscript𝑊1𝐱\\mathcal{F}=W_{2}\\sigma(W_{1}\\mathbf{x}) in which σ𝜎\\sigma denotes ReLU and the biases are omitted for simplifying notations. The operation ℱ+𝐱ℱ𝐱\\mathcal{F}+\\mathbf{x} is performed by a shortcut connection and element-wise addition. We adopt the second nonlinearity after the addition (i.e., σ​(𝐲)𝜎𝐲\\sigma(\\mathbf{y}), see Fig. 2). ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_17", "text": " The shortcut connections in Eqn.(1) introduce neither extra parameter nor computation complexity. This is not only attractive in practice but also important in our comparisons between plain and residual networks. We can fairly compare plain/residual networks that simultaneously have the same number of parameters, depth, width, and computational cost (except for the negligible element-wise addition). ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_18", "text": " The dimensions of 𝐱𝐱\\mathbf{x} and ℱℱ\\mathcal{F} must be equal in Eqn.(1). If this is not the case (e.g., when changing the input/output channels), we can perform a linear projection Wssubscript𝑊𝑠W_{s} by the shortcut connections to match the dimensions: 𝐲=ℱ​(𝐱,{Wi})+Ws​𝐱.𝐲ℱ𝐱subscript𝑊𝑖subscript𝑊𝑠𝐱\\mathbf{y}=\\mathcal{F}(\\mathbf{x},\\{W_{i}\\})+W_{s}\\mathbf{x}. (2) We can also use a square matrix Wssubscript𝑊𝑠W_{s} in Eqn.(1). 
But we will show by experiments that the identity mapping is sufficient for addressing the degradation problem and is economical, and thus Wssubscript𝑊𝑠W_{s} is only used when matching dimensions. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_19", "text": " The form of the residual function ℱℱ\\mathcal{F} is flexible. Experiments in this paper involve a function ℱℱ\\mathcal{F} that has two or three layers (Fig. 5), while more layers are possible. But if ℱℱ\\mathcal{F} has only a single layer, Eqn.(1) is similar to a linear layer: 𝐲=W1​𝐱+𝐱𝐲subscript𝑊1𝐱𝐱\\mathbf{y}=W_{1}\\mathbf{x}+\\mathbf{x}, for which we have not observed advantages. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_20", "text": " We also note that although the above notations are about fully-connected layers for simplicity, they are applicable to convolutional layers. The function ℱ​(𝐱,{Wi})ℱ𝐱subscript𝑊𝑖\\mathcal{F}(\\mathbf{x},\\{W_{i}\\}) can represent multiple convolutional layers. The element-wise addition is performed on two feature maps, channel by channel. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_21", "text": " We have tested various plain/residual nets, and have observed consistent phenomena. To provide instances for discussion, we describe two models for ImageNet as follows. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_22", "text": " Plain Network. Our plain baselines (Fig. 3, middle) are mainly inspired by the philosophy of VGG nets (Fig. 3, left). The convolutional layers mostly have 3×\\times3 filters and follow two simple design rules: (i) for the same output feature map size, the layers have the same number of filters; and (ii) if the feature map size is halved, the number of filters is doubled so as to preserve the time complexity per layer. We perform downsampling directly by convolutional layers that have a stride of 2. The network ends with a global average pooling layer and a 1000-way fully-connected layer with softmax. The total number of weighted layers is 34 in Fig. 3 (middle). ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_23", "text": " It is worth noticing that our model has fewer filters and lower complexity than VGG nets (Fig. 3, left). Our 34-layer baseline has 3.6 billion FLOPs (multiply-adds), which is only 18% of VGG-19 (19.6 billion FLOPs). ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_24", "text": " Residual Network. Based on the above plain network, we insert shortcut connections (Fig. 3, right) which turn the network into its counterpart residual version. The identity shortcuts (Eqn.(1)) can be directly used when the input and output are of the same dimensions (solid line shortcuts in Fig. 3). When the dimensions increase (dotted line shortcuts in Fig. 3), we consider two options: (A) The shortcut still performs identity mapping, with extra zero entries padded for increasing dimensions. This option introduces no extra parameter; (B) The projection shortcut in Eqn.(2) is used to match dimensions (done by 1×\\times1 convolutions). For both options, when the shortcuts go across feature maps of two sizes, they are performed with a stride of 2. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_25", "text": " Our implementation for ImageNet follows the practice in (21, 41). 
The image is resized with its shorter side randomly sampled in (256,480)256480(256,480) for scale augmentation . A 224×\\times224 crop is randomly sampled from an image or its horizontal flip, with the per-pixel mean subtracted . The standard color augmentation in is used. We adopt batch normalization (BN) right after each convolution and before activation, following . We initialize the weights as in and train all plain/residual nets from scratch. We use SGD with a mini-batch size of 256. The learning rate starts from 0.1 and is divided by 10 when the error plateaus, and the models are trained for up to 60×10460superscript10460\\times 10^{4} iterations. We use a weight decay of 0.0001 and a momentum of 0.9. We do not use dropout , following the practice in . ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_26", "text": " In testing, for comparison studies we adopt the standard 10-crop testing . For best results, we adopt the fully-convolutional form as in (41, 13), and average the scores at multiple scales (images are resized such that the shorter side is in {224,256,384,480,640}224256384480640\\{224,256,384,480,640\\}). ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_27", "text": " We evaluate our method on the ImageNet 2012 classification dataset that consists of 1000 classes. The models are trained on the 1.28 million training images, and evaluated on the 50k validation images. We also obtain a final result on the 100k test images, reported by the test server. We evaluate both top-1 and top-5 error rates. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_28", "text": " Plain Networks. We first evaluate 18-layer and 34-layer plain nets. The 34-layer plain net is in Fig. 3 (middle). The 18-layer plain net is of a similar form. See Table 1 for detailed architectures. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_29", "text": " The results in Table 2 show that the deeper 34-layer plain net has higher validation error than the shallower 18-layer plain net. To reveal the reasons, in Fig. 4 (left) we compare their training/validation errors during the training procedure. We have observed the degradation problem - the 34-layer plain net has higher training error throughout the whole training procedure, even though the solution space of the 18-layer plain network is a subspace of that of the 34-layer one. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_30", "text": " We argue that this optimization difficulty is unlikely to be caused by vanishing gradients. These plain networks are trained with BN , which ensures forward propagated signals to have non-zero variances. We also verify that the backward propagated gradients exhibit healthy norms with BN. So neither forward nor backward signals vanish. In fact, the 34-layer plain net is still able to achieve competitive accuracy (Table 3), suggesting that the solver works to some extent. We conjecture that the deep plain nets may have exponentially low convergence rates, which impact the reducing of the training error333We have experimented with more training iterations (3×\\times) and still observed the degradation problem, suggesting that this problem cannot be feasibly addressed by simply using more iterations.. The reason for such optimization difficulties will be studied in the future. 
", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_31", "text": " Residual Networks. Next we evaluate 18-layer and 34-layer residual nets (ResNets). The baseline architectures are the same as the above plain nets, expect that a shortcut connection is added to each pair of 3×\\times3 filters as in Fig. 3 (right). In the first comparison (Table 2 and Fig. 4 right), we use identity mapping for all shortcuts and zero-padding for increasing dimensions (option A). So they have no extra parameter compared to the plain counterparts. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_32", "text": " We have three major observations from Table 2 and Fig. 4. First, the situation is reversed with residual learning – the 34-layer ResNet is better than the 18-layer ResNet (by 2.8%). More importantly, the 34-layer ResNet exhibits considerably lower training error and is generalizable to the validation data. This indicates that the degradation problem is well addressed in this setting and we manage to obtain accuracy gains from increased depth. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_33", "text": " Second, compared to its plain counterpart, the 34-layer ResNet reduces the top-1 error by 3.5% (Table 2), resulting from the successfully reduced training error (Fig. 4 right vs. left). This comparison verifies the effectiveness of residual learning on extremely deep systems. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_34", "text": " Last, we also note that the 18-layer plain/residual nets are comparably accurate (Table 2), but the 18-layer ResNet converges faster (Fig. 4 right vs. left). When the net is “not overly deep” (18 layers here), the current SGD solver is still able to find good solutions to the plain net. In this case, the ResNet eases the optimization by providing faster convergence at the early stage. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_35", "text": " Identity vs. Projection Shortcuts. We have shown that parameter-free, identity shortcuts help with training. Next we investigate projection shortcuts (Eqn.(2)). In Table 3 we compare three options: (A) zero-padding shortcuts are used for increasing dimensions, and all shortcuts are parameter-free (the same as Table 2 and Fig. 4 right); (B) projection shortcuts are used for increasing dimensions, and other shortcuts are identity; and (C) all shortcuts are projections. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_36", "text": " Table 3 shows that all three options are considerably better than the plain counterpart. B is slightly better than A. We argue that this is because the zero-padded dimensions in A indeed have no residual learning. C is marginally better than B, and we attribute this to the extra parameters introduced by many (thirteen) projection shortcuts. But the small differences among A/B/C indicate that projection shortcuts are not essential for addressing the degradation problem. So we do not use option C in the rest of this paper, to reduce memory/time complexity and model sizes. Identity shortcuts are particularly important for not increasing the complexity of the bottleneck architectures that are introduced below. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_37", "text": " Deeper Bottleneck Architectures. Next we describe our deeper nets for ImageNet. 
Because of concerns on the training time that we can afford, we modify the building block as a bottleneck design444Deeper non-bottleneck ResNets (e.g., Fig. 5 left) also gain accuracy from increased depth (as shown on CIFAR-10), but are not as economical as the bottleneck ResNets. So the usage of bottleneck designs is mainly due to practical considerations. We further note that the degradation problem of plain nets is also witnessed for the bottleneck designs.. For each residual function ℱℱ\\mathcal{F}, we use a stack of 3 layers instead of 2 (Fig. 5). The three layers are 1×\\times1, 3×\\times3, and 1×\\times1 convolutions, where the 1×\\times1 layers are responsible for reducing and then increasing (restoring) dimensions, leaving the 3×\\times3 layer a bottleneck with smaller input/output dimensions. Fig. 5 shows an example, where both designs have similar time complexity. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_38", "text": " The parameter-free identity shortcuts are particularly important for the bottleneck architectures. If the identity shortcut in Fig. 5 (right) is replaced with projection, one can show that the time complexity and model size are doubled, as the shortcut is connected to the two high-dimensional ends. So identity shortcuts lead to more efficient models for the bottleneck designs. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_39", "text": " 50-layer ResNet: We replace each 2-layer block in the 34-layer net with this 3-layer bottleneck block, resulting in a 50-layer ResNet (Table 1). We use option B for increasing dimensions. This model has 3.8 billion FLOPs. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_40", "text": " 101-layer and 152-layer ResNets: We construct 101-layer and 152-layer ResNets by using more 3-layer blocks (Table 1). Remarkably, although the depth is significantly increased, the 152-layer ResNet (11.3 billion FLOPs) still has lower complexity than VGG-16/19 nets (15.3/19.6 billion FLOPs). ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_41", "text": " The 50/101/152-layer ResNets are more accurate than the 34-layer ones by considerable margins (Table 3 and 5). We do not observe the degradation problem and thus enjoy significant accuracy gains from considerably increased depth. The benefits of depth are witnessed for all evaluation metrics (Table 3 and 5). ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_42", "text": " Comparisons with State-of-the-art Methods. In Table 5 we compare with the previous best single-model results. Our baseline 34-layer ResNets have achieved very competitive accuracy. Our 152-layer ResNet has a single-model top-5 validation error of 4.49%. This single-model result outperforms all previous ensemble results (Table 5). We combine six models of different depth to form an ensemble (only with two 152-layer ones at the time of submitting). This leads to 3.57% top-5 error on the test set (Table 5). This entry won the 1st place in ILSVRC 2015. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_43", "text": " We conducted more studies on the CIFAR-10 dataset , which consists of 50k training images and 10k testing images in 10 classes. We present experiments trained on the training set and evaluated on the test set. 
Our focus is on the behaviors of extremely deep networks, but not on pushing the state-of-the-art results, so we intentionally use simple architectures as follows. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_44", "text": " The plain/residual architectures follow the form in Fig. 3 (middle/right). The network inputs are 32×32 images, with the per-pixel mean subtracted. The first layer is 3×3 convolutions. Then we use a stack of 6n layers with 3×3 convolutions on the feature maps of sizes {32, 16, 8} respectively, with 2n layers for each feature map size. The numbers of filters are {16, 32, 64} respectively. The subsampling is performed by convolutions with a stride of 2. The network ends with a global average pooling, a 10-way fully-connected layer, and softmax. There are totally 6n+2 stacked weighted layers. The following table summarizes the architecture: output map size 32×32 / 16×16 / 8×8; # layers 1+2n / 2n / 2n; # filters 16 / 32 / 64. When shortcut connections are used, they are connected to the pairs of 3×3 layers (totally 3n shortcuts). On this dataset we use identity shortcuts in all cases (i.e., option A), so our residual models have exactly the same depth, width, and number of parameters as the plain counterparts. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_45", "text": " We use a weight decay of 0.0001 and momentum of 0.9, and adopt the weight initialization in and BN but with no dropout. These models are trained with a mini-batch size of 128 on two GPUs. We start with a learning rate of 0.1, divide it by 10 at 32k and 48k iterations, and terminate training at 64k iterations, which is determined on a 45k/5k train/val split. We follow the simple data augmentation in for training: 4 pixels are padded on each side, and a 32×32 crop is randomly sampled from the padded image or its horizontal flip. For testing, we only evaluate the single view of the original 32×32 image. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_46", "text": " We compare n = {3, 5, 7, 9}, leading to 20, 32, 44, and 56-layer networks. Fig. 6 (left) shows the behaviors of the plain nets. The deep plain nets suffer from increased depth, and exhibit higher training error when going deeper. This phenomenon is similar to that on ImageNet (Fig. 4, left) and on MNIST (see ), suggesting that such an optimization difficulty is a fundamental problem. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_47", "text": " Fig. 6 (middle) shows the behaviors of ResNets. Also similar to the ImageNet cases (Fig. 4, right), our ResNets manage to overcome the optimization difficulty and demonstrate accuracy gains when the depth increases. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_48", "text": " We further explore n = 18 that leads to a 110-layer ResNet. In this case, we find that the initial learning rate of 0.1 is slightly too large to start converging (with an initial learning rate of 0.1, it starts converging (<90% error) after several epochs, but still reaches similar accuracy). So we use 0.01 to warm up the training until the training error is below 80% (about 400 iterations), and then go back to 0.1 and continue training. The rest of the learning schedule is as done previously. 
This 110-layer network converges well (Fig. 6, middle). It has fewer parameters than other deep and thin networks such as FitNet and Highway (Table 6), yet is among the state-of-the-art results (6.43%, Table 6). ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_49", "text": " Analysis of Layer Responses. Fig. 7 shows the standard deviations (std) of the layer responses. The responses are the outputs of each 3×\\times3 layer, after BN and before other nonlinearity (ReLU/addition). For ResNets, this analysis reveals the response strength of the residual functions. Fig. 7 shows that ResNets have generally smaller responses than their plain counterparts. These results support our basic motivation (Sec.3.1) that the residual functions might be generally closer to zero than the non-residual functions. We also notice that the deeper ResNet has smaller magnitudes of responses, as evidenced by the comparisons among ResNet-20, 56, and 110 in Fig. 7. When there are more layers, an individual layer of ResNets tends to modify the signal less. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_50", "text": " Exploring Over 1000 layers. We explore an aggressively deep model of over 1000 layers. We set n=200𝑛200n=200 that leads to a 1202-layer network, which is trained as described above. Our method shows no optimization difficulty, and this 103superscript10310^{3}-layer network is able to achieve training error <<0.1% (Fig. 6, right). Its test error is still fairly good (7.93%, Table 6). ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_51", "text": " But there are still open problems on such aggressively deep models. The testing result of this 1202-layer network is worse than that of our 110-layer network, although both have similar training error. We argue that this is because of overfitting. The 1202-layer network may be unnecessarily large (19.4M) for this small dataset. Strong regularization such as maxout or dropout is applied to obtain the best results ((10, 25, 24, 35)) on this dataset. In this paper, we use no maxout/dropout and just simply impose regularization via deep and thin architectures by design, without distracting from the focus on the difficulties of optimization. But combining with stronger regularization may improve results, which we will study in the future. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_52", "text": " Our method has good generalization performance on other recognition tasks. Table 8 and  8 show the object detection baseline results on PASCAL VOC 2007 and 2012 and COCO . We adopt Faster R-CNN as the detection method. Here we are interested in the improvements of replacing VGG-16 with ResNet-101. The detection implementation (see appendix) of using both models is the same, so the gains can only be attributed to better networks. Most remarkably, on the challenging COCO dataset we obtain a 6.0% increase in COCO’s standard metric (mAP@(.5, .95)), which is a 28% relative improvement. This gain is solely due to the learned representations. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_53", "text": " Based on deep residual nets, we won the 1st places in several tracks in ILSVRC & COCO 2015 competitions: ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. The details are in the appendix. ", "title": "Deep Residual Learning for Image Recognition" } ]
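The ResNet passages above describe the residual building block of Eqn.(1) and the projection shortcut of Eqn.(2) only in prose. Below is a minimal PyTorch sketch of such a block, added for illustration only; the class and argument names are ours, and details such as the placement of batch normalization follow the description in the excerpts (BN after each convolution, second ReLU after the addition) rather than any particular released implementation.

```python
import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """Two 3x3 conv layers plus a shortcut: y = F(x, {W_i}) + x (Eqn. 1)."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, kernel_size=3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        if stride != 1 or in_ch != out_ch:
            # Projection shortcut W_s of Eqn. 2 (option B), used only to match dimensions.
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(out_ch),
            )
        else:
            # Parameter-free identity shortcut (Eqn. 1).
            self.shortcut = nn.Identity()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))   # first 3x3 conv + BN + ReLU
        out = self.bn2(self.conv2(out))            # second 3x3 conv + BN
        return self.relu(out + self.shortcut(x))   # addition, then the second nonlinearity

# Quick shape check: a block that doubles the channels and halves the resolution.
if __name__ == "__main__":
    block = BasicResidualBlock(64, 128, stride=2)
    print(block(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 128, 28, 28])
```

The bottleneck variant discussed in the excerpts replaces the two 3x3 layers with a 1x1–3x3–1x1 stack; the shortcut logic stays the same, which is why the parameter-free identity shortcut is especially important there.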
How would the effectiveness of SqueezeNet's model compression be affected if a significantly smaller CNN is used instead of AlexNet?
By combining CNN architectural innovation (SqueezeNet) with state-of-the-art compression techniques (Deep Compression), we achieved a 510× reduction in model size with no decrease in accuracy compared to the baseline [25].
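As a rough sanity check of the ratios quoted in this answer, the model sizes reported in the cited passages (≈240 MB for uncompressed 32-bit AlexNet, 4.8 MB for uncompressed SqueezeNet, 0.47 MB for SqueezeNet with Deep Compression at 6-bit quantization and 33% sparsity) reproduce the 50× and 510× figures; the short Python snippet below only restates that arithmetic.

```python
# Approximate model sizes (MB) taken from the SqueezeNet passages below.
alexnet_32bit    = 240    # uncompressed 32-bit AlexNet
squeezenet_32bit = 4.8    # uncompressed 32-bit SqueezeNet
squeezenet_dc    = 0.47   # SqueezeNet + Deep Compression (6-bit, 33% sparsity)

print(alexnet_32bit / squeezenet_32bit)  # ~50x from the architecture alone
print(alexnet_32bit / squeezenet_dc)     # ~510x once Deep Compression is applied on top
```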
[ 25 ]
[ { "id": "1602.07360_all_0", "text": " Much of the recent research on deep convolutional neural networks (CNNs) has focused on increasing accuracy on computer vision datasets. For a given accuracy level, there typically exist multiple CNN architectures that achieve that accuracy level. Given equivalent accuracy, a CNN architecture with fewer parameters has several advantages: ∙∙\\bullet More efficient distributed training. Communication among servers is the limiting factor to the scalability of distributed CNN training. For distributed data-parallel training, communication overhead is directly proportional to the number of parameters in the model Iandola et al. (2016). In short, small models train faster due to requiring less communication. ∙∙\\bullet Less overhead when exporting new models to clients. For autonomous driving, companies such as Tesla periodically copy new models from their servers to customers’ cars. This practice is often referred to as an over-the-air update. Consumer Reports has found that the safety of Tesla’s Autopilot semi-autonomous driving functionality has incrementally improved with recent over-the-air updates Consumer Reports (2016). However, over-the-air updates of today’s typical CNN/DNN models can require large data transfers. With AlexNet, this would require 240MB of communication from the server to the car. Smaller models require less communication, making frequent updates more feasible. ∙∙\\bullet Feasible FPGA and embedded deployment. FPGAs often have less than 10MB111For example, the Xilinx Vertex-7 FPGA has a maximum of 8.5 MBytes (i.e. 68 Mbits) of on-chip memory and does not provide off-chip memory. of on-chip memory and no off-chip memory or storage. For inference, a sufficiently small model could be stored directly on the FPGA instead of being bottlenecked by memory bandwidth Qiu et al. (2016), while video frames stream through the FPGA in real time. Further, when deploying CNNs on Application-Specific Integrated Circuits (ASICs), a sufficiently small model could be stored directly on-chip, and smaller models may enable the ASIC to fit on a smaller die. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_1", "text": " As you can see, there are several advantages of smaller CNN architectures. With this in mind, we focus directly on the problem of identifying a CNN architecture with fewer parameters but equivalent accuracy compared to a well-known model. We have discovered such an architecture, which we call SqueezeNet. In addition, we present our attempt at a more disciplined approach to searching the design space for novel CNN architectures. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_2", "text": " The rest of the paper is organized as follows. In Section 2 we review the related work. Then, in Sections 3 and 4 we describe and evaluate the SqueezeNet architecture. After that, we turn our attention to understanding how CNN architectural design choices impact model size and accuracy. We gain this understanding by exploring the design space of SqueezeNet-like architectures. In Section 5, we do design space exploration on the CNN microarchitecture, which we define as the organization and dimensionality of individual layers and modules. In Section 6, we do design space exploration on the CNN macroarchitecture, which we define as high-level organization of layers in a CNN. Finally, we conclude in Section 7. 
In short, Sections 3 and 4 are useful for CNN researchers as well as practitioners who simply want to apply SqueezeNet to a new application. The remaining sections are aimed at advanced researchers who intend to design their own CNN architectures. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_3", "text": " The overarching goal of our work is to identify a model that has very few parameters while preserving accuracy. To address this problem, a sensible approach is to take an existing CNN model and compress it in a lossy fashion. In fact, a research community has emerged around the topic of model compression, and several approaches have been reported. A fairly straightforward approach by Denton et al. is to apply singular value decomposition (SVD) to a pretrained CNN model Denton et al. (2014). Han et al. developed Network Pruning, which begins with a pretrained model, then replaces parameters that are below a certain threshold with zeros to form a sparse matrix, and finally performs a few iterations of training on the sparse CNN Han et al. (2015b). Recently, Han et al. extended their work by combining Network Pruning with quantization (to 8 bits or less) and huffman encoding to create an approach called Deep Compression Han et al. (2015a), and further designed a hardware accelerator called EIE Han et al. (2016a) that operates directly on the compressed model, achieving substantial speedups and energy savings. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_4", "text": " Convolutions have been used in artificial neural networks for at least 25 years; LeCun et al. helped to popularize CNNs for digit recognition applications in the late 1980s LeCun et al. (1989). In neural networks, convolution filters are typically 3D, with height, width, and channels as the key dimensions. When applied to images, CNN filters typically have 3 channels in their first layer (i.e. RGB), and in each subsequent layer Lisubscript𝐿𝑖L_{i} the filters have the same number of channels as Li−1subscript𝐿𝑖1L_{i-1} has filters. The early work by LeCun et al. LeCun et al. (1989) uses 5x5xChannels222From now on, we will simply abbreviate HxWxChannels to HxW. filters, and the recent VGG Simonyan & Zisserman (2014) architectures extensively use 3x3 filters. Models such as Network-in-Network Lin et al. (2013) and the GoogLeNet family of architectures Szegedy et al. (2014); Ioffe & Szegedy (2015); Szegedy et al. (2015; 2016) use 1x1 filters in some layers. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_5", "text": " With the trend of designing very deep CNNs, it becomes cumbersome to manually select filter dimensions for each layer. To address this, various higher level building blocks, or modules, comprised of multiple convolution layers with a specific fixed organization have been proposed. For example, the GoogLeNet papers propose Inception modules, which are comprised of a number of different dimensionalities of filters, usually including 1x1 and 3x3, plus sometimes 5x5 Szegedy et al. (2014) and sometimes 1x3 and 3x1 Szegedy et al. (2015). Many such modules are then combined, perhaps with additional ad-hoc layers, to form a complete network. We use the term CNN microarchitecture to refer to the particular organization and dimensions of the individual modules. 
", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_6", "text": " While the CNN microarchitecture refers to individual layers and modules, we define the CNN macroarchitecture as the system-level organization of multiple modules into an end-to-end CNN architecture. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_7", "text": " Perhaps the mostly widely studied CNN macroarchitecture topic in the recent literature is the impact of depth (i.e. number of layers) in networks. Simoyan and Zisserman proposed the VGG Simonyan & Zisserman (2014) family of CNNs with 12 to 19 layers and reported that deeper networks produce higher accuracy on the ImageNet-1k dataset Deng et al. (2009). K. He et al. proposed deeper CNNs with up to 30 layers that deliver even higher ImageNet accuracy He et al. (2015a). ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_8", "text": " The choice of connections across multiple layers or modules is an emerging area of CNN macroarchitectural research. Residual Networks (ResNet) He et al. (2015b) and Highway Networks Srivastava et al. (2015) each propose the use of connections that skip over multiple layers, for example additively connecting the activations from layer 3 to the activations from layer 6. We refer to these connections as bypass connections. The authors of ResNet provide an A/B comparison of a 34-layer CNN with and without bypass connections; adding bypass connections delivers a 2 percentage-point improvement on Top-5 ImageNet accuracy. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_9", "text": " Neural networks (including deep and convolutional NNs) have a large design space, with numerous options for microarchitectures, macroarchitectures, solvers, and other hyperparameters. It seems natural that the community would want to gain intuition about how these factors impact a NN’s accuracy (i.e. the shape of the design space). Much of the work on design space exploration (DSE) of NNs has focused on developing automated approaches for finding NN architectures that deliver higher accuracy. These automated DSE approaches include bayesian optimization Snoek et al. (2012), simulated annealing Ludermir et al. (2006), randomized search Bergstra & Bengio (2012), and genetic algorithms Stanley & Miikkulainen (2002). To their credit, each of these papers provides a case in which the proposed DSE approach produces a NN architecture that achieves higher accuracy compared to a representative baseline. However, these papers make no attempt to provide intuition about the shape of the NN design space. Later in this paper, we eschew automated approaches – instead, we refactor CNNs in such a way that we can do principled A/B comparisons to investigate how CNN architectural decisions influence model size and accuracy. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_10", "text": " In the following sections, we first propose and evaluate the SqueezeNet architecture with and without model compression. Then, we explore the impact of design choices in microarchitecture and macroarchitecture for SqueezeNet-like CNN architectures. 
", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_11", "text": " In this section, we begin by outlining our design strategies for CNN architectures with few parameters. Then, we introduce the Fire module, our new building block out of which to build CNN architectures. Finally, we use our design strategies to construct SqueezeNet, which is comprised mainly of Fire modules. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_12", "text": " Our overarching objective in this paper is to identify CNN architectures that have few parameters while maintaining competitive accuracy. To achieve this, we employ three main strategies when designing CNN architectures: ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_13", "text": " Strategy 1. Replace 3x3 filters with 1x1 filters. Given a budget of a certain number of convolution filters, we will choose to make the majority of these filters 1x1, since a 1x1 filter has 9X fewer parameters than a 3x3 filter. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_14", "text": " Strategy 2. Decrease the number of input channels to 3x3 filters. Consider a convolution layer that is comprised entirely of 3x3 filters. The total quantity of parameters in this layer is (number of input channels) * (number of filters) * (3*3). So, to maintain a small total number of parameters in a CNN, it is important not only to decrease the number of 3x3 filters (see Strategy 1 above), but also to decrease the number of input channels to the 3x3 filters. We decrease the number of input channels to 3x3 filters using squeeze layers, which we describe in the next section. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_15", "text": " Strategy 3. Downsample late in the network so that convolution layers have large activation maps. In a convolutional network, each convolution layer produces an output activation map with a spatial resolution that is at least 1x1 and often much larger than 1x1. The height and width of these activation maps are controlled by: (1) the size of the input data (e.g. 256x256 images) and (2) the choice of layers in which to downsample in the CNN architecture. Most commonly, downsampling is engineered into CNN architectures by setting the (stride >> 1) in some of the convolution or pooling layers (e.g. Szegedy et al. (2014); Simonyan & Zisserman (2014); Krizhevsky et al. (2012)). If early333In our terminology, an “early” layer is close to the input data. layers in the network have large strides, then most layers will have small activation maps. Conversely, if most layers in the network have a stride of 1, and the strides greater than 1 are concentrated toward the end444In our terminology, the “end” of the network is the classifier. of the network, then many layers in the network will have large activation maps. Our intuition is that large activation maps (due to delayed downsampling) can lead to higher classification accuracy, with all else held equal. Indeed, K. He and H. Sun applied delayed downsampling to four different CNN architectures, and in each case delayed downsampling led to higher classification accuracy He & Sun (2015). 
", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_16", "text": " Strategies 1 and 2 are about judiciously decreasing the quantity of parameters in a CNN while attempting to preserve accuracy. Strategy 3 is about maximizing accuracy on a limited budget of parameters. Next, we describe the Fire module, which is our building block for CNN architectures that enables us to successfully employ Strategies 1, 2, and 3. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_17", "text": " We define the Fire module as follows. A Fire module is comprised of: a squeeze convolution layer (which has only 1x1 filters), feeding into an expand layer that has a mix of 1x1 and 3x3 convolution filters; we illustrate this in Figure 1. The liberal use of 1x1 filters in Fire modules is an application of Strategy 1 from Section 3.1. We expose three tunable dimensions (hyperparameters) in a Fire module: s1​x​1subscript𝑠1𝑥1s_{1x1}, e1​x​1subscript𝑒1𝑥1e_{1x1}, and e3​x​3subscript𝑒3𝑥3e_{3x3}. In a Fire module, s1​x​1subscript𝑠1𝑥1s_{1x1} is the number of filters in the squeeze layer (all 1x1), e1​x​1subscript𝑒1𝑥1e_{1x1} is the number of 1x1 filters in the expand layer, and e3​x​3subscript𝑒3𝑥3e_{3x3} is the number of 3x3 filters in the expand layer. When we use Fire modules we set s1​x​1subscript𝑠1𝑥1s_{1x1} to be less than (e1​x​1subscript𝑒1𝑥1e_{1x1} + e3​x​3subscript𝑒3𝑥3e_{3x3}), so the squeeze layer helps to limit the number of input channels to the 3x3 filters, as per Strategy 2 from Section 3.1. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_18", "text": " We now describe the SqueezeNet CNN architecture. We illustrate in Figure 2 that SqueezeNet begins with a standalone convolution layer (conv1), followed by 8 Fire modules (fire2-9), ending with a final conv layer (conv10). We gradually increase the number of filters per fire module from the beginning to the end of the network. SqueezeNet performs max-pooling with a stride of 2 after layers conv1, fire4, fire8, and conv10; these relatively late placements of pooling are per Strategy 3 from Section 3.1. We present the full SqueezeNet architecture in Table 1. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_19", "text": " For brevity, we have omitted number of details and design choices about SqueezeNet from Table 1 and Figure 2. We provide these design choices in the following. The intuition behind these choices may be found in the papers cited below. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_20", "text": " ∙∙\\bullet So that the output activations from 1x1 and 3x3 filters have the same height and width, we add a 1-pixel border of zero-padding in the input data to 3x3 filters of expand modules. ∙∙\\bullet ReLU Nair & Hinton (2010) is applied to activations from squeeze and expand layers. ∙∙\\bullet Dropout Srivastava et al. (2014) with a ratio of 50% is applied after the fire9 module. ∙∙\\bullet Note the lack of fully-connected layers in SqueezeNet; this design choice was inspired by the NiN Lin et al. (2013) architecture. ∙∙\\bullet When training SqueezeNet, we begin with a learning rate of 0.04, and we linearly decrease the learning rate throughout training, as described in Mishkin et al. (2016). 
For details on the training protocol (e.g. batch size, learning rate, parameter initialization), please refer to our Caffe-compatible configuration files located here: https://github.com/DeepScale/SqueezeNet. ∙∙\\bullet The Caffe framework does not natively support a convolution layer that contains multiple filter resolutions (e.g. 1x1 and 3x3) Jia et al. (2014). To get around this, we implement our expand layer with two separate convolution layers: a layer with 1x1 filters, and a layer with 3x3 filters. Then, we concatenate the outputs of these layers together in the channel dimension. This is numerically equivalent to implementing one layer that contains both 1x1 and 3x3 filters. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_21", "text": " We released the SqueezeNet configuration files in the format defined by the Caffe CNN framework. However, in addition to Caffe, several other CNN frameworks have emerged, including MXNet Chen et al. (2015a), Chainer Tokui et al. (2015), Keras Chollet (2016), and Torch Collobert et al. (2011). Each of these has its own native format for representing a CNN architecture. That said, most of these libraries use the same underlying computational back-ends such as cuDNN Chetlur et al. (2014) and MKL-DNN Das et al. (2016). The research community has ported the SqueezeNet CNN architecture for compatibility with a number of other CNN software frameworks: • MXNet Chen et al. (2015a) port of SqueezeNet: Haria (2016) • Chainer Tokui et al. (2015) port of SqueezeNet: Bell (2016) • Keras Chollet (2016) port of SqueezeNet: DT42 (2016) • Torch Collobert et al. (2011) port of SqueezeNet’s Fire Modules: Waghmare (2016) ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_22", "text": " We now turn our attention to evaluating SqueezeNet. In each of the CNN model compression papers reviewed in Section 2.1, the goal was to compress an AlexNet Krizhevsky et al. (2012) model that was trained to classify images using the ImageNet Deng et al. (2009) (ILSVRC 2012) dataset. Therefore, we use AlexNet555Our baseline is bvlc_alexnet from the Caffe codebase Jia et al. (2014). and the associated model compression results as a basis for comparison when evaluating SqueezeNet. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_23", "text": " In Table 2, we review SqueezeNet in the context of recent model compression results. The SVD-based approach is able to compress a pretrained AlexNet model by a factor of 5x, while diminishing top-1 accuracy to 56.0% Denton et al. (2014). Network Pruning achieves a 9x reduction in model size while maintaining the baseline of 57.2% top-1 and 80.3% top-5 accuracy on ImageNet Han et al. (2015b). Deep Compression achieves a 35x reduction in model size while still maintaining the baseline accuracy level Han et al. (2015a). Now, with SqueezeNet, we achieve a 50X reduction in model size compared to AlexNet, while meeting or exceeding the top-1 and top-5 accuracy of AlexNet. We summarize all of the aforementioned results in Table 2. 
", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_24", "text": " It appears that we have surpassed the state-of-the-art results from the model compression community: even when using uncompressed 32-bit values to represent the model, SqueezeNet has a 1.4×1.4\\times smaller model size than the best efforts from the model compression community while maintaining or exceeding the baseline accuracy. Until now, an open question has been: are small models amenable to compression, or do small models “need” all of the representational power afforded by dense floating-point values? To find out, we applied Deep Compression Han et al. (2015a) to SqueezeNet, using 33% sparsity666Note that, due to the storage overhead of storing sparse matrix indices, 33% sparsity leads to somewhat less than a 3×3\\times decrease in model size. and 8-bit quantization. This yields a 0.66 MB model (363×363\\times smaller than 32-bit AlexNet) with equivalent accuracy to AlexNet. Further, applying Deep Compression with 6-bit quantization and 33% sparsity on SqueezeNet, we produce a 0.47MB model (510×510\\times smaller than 32-bit AlexNet) with equivalent accuracy. Our small model is indeed amenable to compression. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_25", "text": " In addition, these results demonstrate that Deep Compression Han et al. (2015a) not only works well on CNN architectures with many parameters (e.g. AlexNet and VGG), but it is also able to compress the already compact, fully convolutional SqueezeNet architecture. Deep Compression compressed SqueezeNet by 10×10\\times while preserving the baseline accuracy. In summary: by combining CNN architectural innovation (SqueezeNet) with state-of-the-art compression techniques (Deep Compression), we achieved a 510×510\\times reduction in model size with no decrease in accuracy compared to the baseline. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_26", "text": " Finally, note that Deep Compression Han et al. (2015b) uses a codebook as part of its scheme for quantizing CNN parameters to 6- or 8-bits of precision. Therefore, on most commodity processors, it is not trivial to achieve a speedup of 328=4​x3284𝑥\\frac{32}{8}=4x with 8-bit quantization or 326=5.3​x3265.3𝑥\\frac{32}{6}=5.3x with 6-bit quantization using the scheme developed in Deep Compression. However, Han et al. developed custom hardware – Efficient Inference Engine (EIE) – that can compute codebook-quantized CNNs more efficiently Han et al. (2016a). In addition, in the months since we released SqueezeNet, P. Gysel developed a strategy called Ristretto for linearly quantizing SqueezeNet to 8 bits Gysel (2016). Specifically, Ristretto does computation in 8 bits, and it stores parameters and activations in 8-bit data types. Using the Ristretto strategy for 8-bit computation in SqueezeNet inference, Gysel observed less than 1 percentage-point of drop in accuracy when using 8-bit instead of 32-bit data types. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_27", "text": " So far, we have proposed architectural design strategies for small models, followed these principles to create SqueezeNet, and discovered that SqueezeNet is 50x smaller than AlexNet with equivalent accuracy. 
However, SqueezeNet and other models reside in a broad and largely unexplored design space of CNN architectures. Now, in Sections 5 and 6, we explore several aspects of the design space. We divide this architectural exploration into two main topics: microarchitectural exploration (per-module layer dimensions and configurations) and macroarchitectural exploration (high-level end-to-end organization of modules and other layers). ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_28", "text": " In this section, we design and execute experiments with the goal of providing intuition about the shape of the microarchitectural design space with respect to the design strategies that we proposed in Section 3.1. Note that our goal here is not to maximize accuracy in every experiment, but rather to understand the impact of CNN architectural choices on model size and accuracy. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_29", "text": " In SqueezeNet, each Fire module has three dimensional hyperparameters that we defined in Section 3.2: s1​x​1subscript𝑠1𝑥1s_{1x1}, e1​x​1subscript𝑒1𝑥1e_{1x1}, and e3​x​3subscript𝑒3𝑥3e_{3x3}. SqueezeNet has 8 Fire modules with a total of 24 dimensional hyperparameters. To do broad sweeps of the design space of SqueezeNet-like architectures, we define the following set of higher level metaparameters which control the dimensions of all Fire modules in a CNN. We define b​a​s​ee𝑏𝑎𝑠subscript𝑒𝑒base_{e} as the number of expand filters in the first Fire module in a CNN. After every f​r​e​q𝑓𝑟𝑒𝑞freq Fire modules, we increase the number of expand filters by i​n​c​re𝑖𝑛𝑐subscript𝑟𝑒incr_{e}. In other words, for Fire module i𝑖i, the number of expand filters is ei=basee+(incre∗⌊if​r​e​q⌋e_{i}=base_{e}+(incr_{e}*{\\left\\lfloor{\\frac{i}{freq}}\\right\\rfloor}). In the expand layer of a Fire module, some filters are 1x1 and some are 3x3; we define ei=ei,1​x​1+ei,3​x​3subscript𝑒𝑖subscript𝑒𝑖1𝑥1subscript𝑒𝑖3𝑥3e_{i}=e_{i,{1x1}}+e_{i,{3x3}} with p​c​t3​x​3𝑝𝑐subscript𝑡3𝑥3pct_{3x3} (in the range (0,1)01(0,1), shared over all Fire modules) as the percentage of expand filters that are 3x3. In other words, ei,3​x​3=ei∗p​c​t3​x​3subscript𝑒𝑖3𝑥3subscript𝑒𝑖𝑝𝑐subscript𝑡3𝑥3e_{i,{3x3}}=e_{i}*pct_{3x3}, and ei,1​x​1=ei∗(1−p​c​t3​x​3)subscript𝑒𝑖1𝑥1subscript𝑒𝑖1𝑝𝑐subscript𝑡3𝑥3e_{i,{1x1}}=e_{i}*(1-pct_{3x3}). Finally, we define the number of filters in the squeeze layer of a Fire module using a metaparameter called the squeeze ratio (SR) (again, in the range (0,1)01(0,1), shared by all Fire modules): si,1​x​1=S​R∗eisubscript𝑠𝑖1𝑥1𝑆𝑅subscript𝑒𝑖s_{i,{1x1}}=SR*e_{i} (or equivalently si,1​x​1=S​R∗(ei,1​x​1+ei,3​x​3)subscript𝑠𝑖1𝑥1𝑆𝑅subscript𝑒𝑖1𝑥1subscript𝑒𝑖3𝑥3s_{i,{1x1}}=SR*(e_{i,{1x1}}+e_{i,{3x3}})). SqueezeNet (Table 1) is an example architecture that we generated with the aforementioned set of metaparameters. Specifically, SqueezeNet has the following metaparameters: b​a​s​ee=128𝑏𝑎𝑠subscript𝑒𝑒128base_{e}=128, i​n​c​re=128𝑖𝑛𝑐subscript𝑟𝑒128incr_{e}=128, p​c​t3​x​3=0.5𝑝𝑐subscript𝑡3𝑥30.5pct_{3x3}=0.5, f​r​e​q=2𝑓𝑟𝑒𝑞2freq=2, and S​R=0.125𝑆𝑅0.125SR=0.125. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_30", "text": " In Section 3.1, we proposed decreasing the number of parameters by using squeeze layers to decrease the number of input channels seen by 3x3 filters. 
We defined the squeeze ratio (SR) as the ratio between the number of filters in squeeze layers and the number of filters in expand layers. We now design an experiment to investigate the effect of the squeeze ratio on model size and accuracy. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_31", "text": " In these experiments, we use SqueezeNet (Figure 2) as a starting point. As in SqueezeNet, these experiments use the following metaparameters: b​a​s​ee=128𝑏𝑎𝑠subscript𝑒𝑒128base_{e}=128, i​n​c​re=128𝑖𝑛𝑐subscript𝑟𝑒128incr_{e}=128, p​c​t3​x​3=0.5𝑝𝑐subscript𝑡3𝑥30.5pct_{3x3}=0.5, and f​r​e​q=2𝑓𝑟𝑒𝑞2freq=2. We train multiple models, where each model has a different squeeze ratio (SR)777Note that, for a given model, all Fire layers share the same squeeze ratio. in the range (0.125, 1.0). In Figure 3(a), we show the results of this experiment, where each point on the graph is an independent model that was trained from scratch. SqueezeNet is the SR=0.125 point in this figure.888Note that we named it SqueezeNet because it has a low squeeze ratio (SR). That is, the squeeze layers in SqueezeNet have 0.125x the number of filters as the expand layers. From this figure, we learn that increasing SR beyond 0.125 can further increase ImageNet top-5 accuracy from 80.3% (i.e. AlexNet-level) with a 4.8MB model to 86.0% with a 19MB model. Accuracy plateaus at 86.0% with SR=0.75 (a 19MB model), and setting SR=1.0 further increases model size without improving accuracy. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_32", "text": " In Section 3.1, we proposed decreasing the number of parameters in a CNN by replacing some 3x3 filters with 1x1 filters. An open question is, how important is spatial resolution in CNN filters? The VGG Simonyan & Zisserman (2014) architectures have 3x3 spatial resolution in most layers’ filters; GoogLeNet Szegedy et al. (2014) and Network-in-Network (NiN) Lin et al. (2013) have 1x1 filters in some layers. In GoogLeNet and NiN, the authors simply propose a specific quantity of 1x1 and 3x3 filters without further analysis.999To be clear, each filter is 1x1xChannels or 3x3xChannels, which we abbreviate to 1x1 and 3x3. Here, we attempt to shed light on how the proportion of 1x1 and 3x3 filters affects model size and accuracy. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_33", "text": " We use the following metaparameters in this experiment: b​a​s​ee=i​n​c​re=128𝑏𝑎𝑠subscript𝑒𝑒𝑖𝑛𝑐subscript𝑟𝑒128base_{e}=incr_{e}=128, f​r​e​q=2𝑓𝑟𝑒𝑞2freq=2, S​R=0.500𝑆𝑅0.500SR=0.500, and we vary p​c​t3​x​3𝑝𝑐subscript𝑡3𝑥3pct_{3x3} from 1% to 99%. In other words, each Fire module’s expand layer has a predefined number of filters partitioned between 1x1 and 3x3, and here we turn the knob on these filters from “mostly 1x1” to “mostly 3x3”. As in the previous experiment, these models have 8 Fire modules, following the same organization of layers as in Figure 2. We show the results of this experiment in Figure 3(b). Note that the 13MB models in Figure 3(a) and Figure 3(b) are the same architecture: S​R=0.500𝑆𝑅0.500SR=0.500 and p​c​t3​x​3=50%𝑝𝑐subscript𝑡3𝑥3percent50pct_{3x3}=50\\%. We see in Figure 3(b) that the top-5 accuracy plateaus at 85.6% using 50% 3x3 filters, and further increasing the percentage of 3x3 filters leads to a larger model size but provides no improvement in accuracy on ImageNet. 
", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_34", "text": " So far we have explored the design space at the microarchitecture level, i.e. the contents of individual modules of the CNN. Now, we explore design decisions at the macroarchitecture level concerning the high-level connections among Fire modules. Inspired by ResNet He et al. (2015b), we explored three different architectures: ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_35", "text": " ∙∙\\bullet Vanilla SqueezeNet (as per the prior sections). ∙∙\\bullet SqueezeNet with simple bypass connections between some Fire modules. (Inspired by Srivastava et al. (2015); He et al. (2015b).) ∙∙\\bullet SqueezeNet with complex bypass connections between the remaining Fire modules. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_36", "text": " We illustrate these three variants of SqueezeNet in Figure 2. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_37", "text": " Our simple bypass architecture adds bypass connections around Fire modules 3, 5, 7, and 9, requiring these modules to learn a residual function between input and output. As in ResNet, to implement a bypass connection around Fire3, we set the input to Fire4 equal to (output of Fire2 + output of Fire3), where the + operator is elementwise addition. This changes the regularization applied to the parameters of these Fire modules, and, as per ResNet, can improve the final accuracy and/or ability to train the full model. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_38", "text": " One limitation is that, in the straightforward case, the number of input channels and number of output channels has to be the same; as a result, only half of the Fire modules can have simple bypass connections, as shown in the middle diagram of Fig 2. When the “same number of channels” requirement can’t be met, we use a complex bypass connection, as illustrated on the right of Figure 2. While a simple bypass is “just a wire,” we define a complex bypass as a bypass that includes a 1x1 convolution layer with the number of filters set equal to the number of output channels that are needed. Note that complex bypass connections add extra parameters to the model, while simple bypass connections do not. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_39", "text": " In addition to changing the regularization, it is intuitive to us that adding bypass connections would help to alleviate the representational bottleneck introduced by squeeze layers. In SqueezeNet, the squeeze ratio (SR) is 0.125, meaning that every squeeze layer has 8x fewer output channels than the accompanying expand layer. Due to this severe dimensionality reduction, a limited amount of information can pass through squeeze layers. However, by adding bypass connections to SqueezeNet, we open up avenues for information to flow around the squeeze layers. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_40", "text": " We trained SqueezeNet with the three macroarchitectures in Figure 2 and compared the accuracy and model size in Table 3. 
We fixed the microarchitecture to match SqueezeNet as described in Table 1 throughout the macroarchitecture exploration. Complex and simple bypass connections both yielded an accuracy improvement over the vanilla SqueezeNet architecture. Interestingly, the simple bypass enabled a higher accuracy accuracy improvement than complex bypass. Adding the simple bypass connections yielded an increase of 2.9 percentage-points in top-1 accuracy and 2.2 percentage-points in top-5 accuracy without increasing model size. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_41", "text": " In this paper, we have proposed steps toward a more disciplined approach to the design-space exploration of convolutional neural networks. Toward this goal we have presented SqueezeNet, a CNN architecture that has 50×50\\times fewer parameters than AlexNet and maintains AlexNet-level accuracy on ImageNet. We also compressed SqueezeNet to less than 0.5MB, or 510×510\\times smaller than AlexNet without compression. Since we released this paper as a technical report in 2016, Song Han and his collaborators have experimented further with SqueezeNet and model compression. Using a new approach called Dense-Sparse-Dense (DSD) Han et al. (2016b), Han et al. use model compression during training as a regularizer to further improve accuracy, producing a compressed set of SqueezeNet parameters that is 1.2 percentage-points more accurate on ImageNet-1k, and also producing an uncompressed set of SqueezeNet parameters that is 4.3 percentage-points more accurate, compared to our results in Table 2. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_42", "text": " We mentioned near the beginning of this paper that small models are more amenable to on-chip implementations on FPGAs. Since we released the SqueezeNet model, Gschwend has developed a variant of SqueezeNet and implemented it on an FPGA Gschwend (2016). As we anticipated, Gschwend was able to able to store the parameters of a SqueezeNet-like model entirely within the FPGA and eliminate the need for off-chip memory accesses to load model parameters. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_43", "text": " In the context of this paper, we focused on ImageNet as a target dataset. However, it has become common practice to apply ImageNet-trained CNN representations to a variety of applications such as fine-grained object recognition Zhang et al. (2013); Donahue et al. (2013), logo identification in images Iandola et al. (2015), and generating sentences about images Fang et al. (2015). ImageNet-trained CNNs have also been applied to a number of applications pertaining to autonomous driving, including pedestrian and vehicle detection in images Iandola et al. (2014); Girshick et al. (2015); Ashraf et al. (2016) and videos Chen et al. (2015b), as well as segmenting the shape of the road Badrinarayanan et al. (2015). We think SqueezeNet will be a good candidate CNN architecture for a variety of applications, especially those in which small model size is of importance. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" }, { "id": "1602.07360_all_44", "text": " SqueezeNet is one of several new CNNs that we have discovered while broadly exploring the design space of CNN architectures. 
We hope that SqueezeNet will inspire the reader to consider and explore the broad range of possibilities in the design space of CNN architectures and to perform that exploration in a more systematic manner. ", "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size" } ]
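The Fire module described in the SqueezeNet passages above (a 1x1 squeeze layer feeding concatenated 1x1 and 3x3 expand branches) can be sketched in a few lines of PyTorch. This is an illustrative sketch, not the released Caffe implementation; the example filter counts (16 squeeze filters, 64+64 expand filters) are an assumption chosen only so that the squeeze ratio comes out to SR = 16 / (64 + 64) = 0.125, the setting discussed in the excerpts.

```python
import torch
import torch.nn as nn

class Fire(nn.Module):
    """Squeeze (1x1) layer feeding an expand layer that mixes 1x1 and 3x3 filters."""
    def __init__(self, in_ch, s1x1, e1x1, e3x3):
        super().__init__()
        # Strategy 2: the squeeze layer limits the channels seen by the 3x3 filters.
        assert s1x1 < e1x1 + e3x3
        self.squeeze   = nn.Conv2d(in_ch, s1x1, kernel_size=1)
        self.expand1x1 = nn.Conv2d(s1x1, e1x1, kernel_size=1)
        self.expand3x3 = nn.Conv2d(s1x1, e3x3, kernel_size=3, padding=1)  # 1-pixel zero padding
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.squeeze(x))
        # Concatenating the two expand branches along the channel dimension is
        # numerically equivalent to a single layer with mixed filter resolutions.
        return torch.cat([self.relu(self.expand1x1(x)),
                          self.relu(self.expand3x3(x))], dim=1)

# Illustrative instantiation: squeeze ratio SR = 16 / (64 + 64) = 0.125.
fire = Fire(in_ch=96, s1x1=16, e1x1=64, e3x3=64)
print(fire(torch.randn(1, 96, 55, 55)).shape)  # torch.Size([1, 128, 55, 55])
```

A simple bypass connection, as in the macroarchitecture experiments above, would just add the module's input to its output when the channel counts match; a complex bypass inserts a 1x1 convolution on the shortcut to match channels, at the cost of extra parameters.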
Instead of concatenating the three parts, what would be a more effective way to consider these three behaviors collectively?
Retention indicates whether a set of apps is currently installed [20].
[ 20 ]
[ { "id": "2005.13303_all_0", "text": " Personalized mobile business, e.g., recommendations, and advertising, often require effective user representations. For better performance, user modeling in industrial applications often considers as much information as possible, including but not limited to gender, location, interested tags, accounts subscribed, and shopping interests (Liu et al., 2019). Among which, user behaviors on mobile app usage, including retention (which apps are currently installed on the phone), installation (when and which apps were ever installed recently), and uninstallation (when and which apps were removed from the phone recently), contain rich information about both long-term and short-term user interests (Lu et al., 2014). For example, if a user installs Google Photos, Snapseed, and Instagram, there is a good chance that she is an enthusiast of mobile photographing. If a user installs the popular game Honor of Kings, a.k.a. Arena of Valor recently, she might be a new gamer and is wondering how to play better. Such information is valuable for various downstream applications, and how to utilize them better is an exciting problem worthy of solving. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_1", "text": " Traditionally, mining from mobile app usage relies on task-specific handcrafted features. For example, recommending a new game app to users who have installed similar games can help avoid recommending to non-gamers. However, handcrafted feature engineering often requires substantial human efforts, and maybe sub-optimal when domain experts are absent. To improve efficiency and effectiveness, an automatic generation for general-purpose user representations from user behaviors on mobile app usage is in need. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_2", "text": " We have been working towards this goal since mid 2019, and several versions of models have been deployed. In this paper, we outline the most recent practice at Tencent. The main challenges of building general-purpose user representations for multiple downstream applications include: ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_3", "text": " • Retention, installation and uninstallation need to be modeled collectively. They represent the preference of users from different aspects, and building representations for the three parts separately and then concatenating them may limit the performance. For example, for users who have installed multiple games, uninstalling a game app may only indicate that she has finished the game and wants to start a new one. While for a user who has not installed other games, immediately uninstalling after installation may suggest that she does not like this kind of game at all. Modeling such complex relationships using traditional recurrent neural networks (RNNs) is challenging. • Actions of (un)installing apps are low-frequency and unevenly distributed over time. Figure 1 presents a demo of app installation and uninstallation records of a user. As excitement over the new phone fades, most users only install or uninstall apps when they need to. Moreover, users usually do not operate for even a month but may suddenly install or uninstall several apps in a single day. In this case, various intervals between every two behaviors are not omittable. 
Although RNN-based models have succeeded in analyzing user activities (Hidasi et al., 2016; Li et al., 2017), the behaviors in those scenarios are usually with notably higher-frequency and nearly even distribution over time. Therefore, traditional RNNs may not perform well for this task. • Many long-tailed apps suffer from serious sparsity. Popular apps like Wechat and Alipay have been installed on almost all the smartphones in China, while long-tailed apps may only have a few hundreds of installations among one million users. However, user’s behaviors over the long-tailed apps often reflect one’s personalized interests better. Building effective user representations need to utilize the information from long-tailed apps without suffering from severe sparsity. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_4", "text": " To achieve the goal, we design a tailored AutoEncoder-coupled Transformer Network (AETN) to analyze user behaviors on mobile app usage. The model follows a classical encoder-decoder framework with a bottleneck for user representation learning, and utilizes a multi-objective joint training scheme for parameter learning. Figure 2 shows the general framework. The model mainly consists of three parts, i.e., the retention autoencoder part, the (stacked) transformer encoder part, and the (stacked) transformer decoder part. The three parts are tied through parameter sharing and trained jointly. The proposed model is entirely unsupervised and carefully optimized for learning user embeddings from mobile app usage. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_5", "text": " The retention autoencoder serves as a foundational part of AETN. From the co-occurrence relationship of apps in retention data, it learns and shares effective app embeddings with the transformer network. As one of the designs to alleviate the problem of sparsity, we model the embeddings of apps with both app IDs and their corresponding category IDs. Therefore, if the usage of an app is gravely sparse, at least the category ID can provide some information. Another design is weight tying between the encoder and the decoder. Note that we only tie the first and the last layer of the autoencoder to leave enough flexibility. Weight tying can significantly reduce the number of free parameters, and hasten the convergence (Hidasi and Karatzoglou, 2018). Together with app embeddings, effective representations of user retention are obtained and provided to the transformer parts. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_6", "text": " On the other hand, the transformer parts model the retention, installation, and uninstallation collectively, and output the final user embeddings. Transformer networks have been proved effective for modeling (multiple) sequences and obtaining contextual representations in natural language processing (Devlin et al., 2019). Inspired by BERT (Devlin et al., 2019), in this paper, we use a stacked transformer network to consolidate different types of information. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_7", "text": " The transformer encoder part receives the user retention, shared app embeddings, date embeddings, and behavior type embeddings (retention, installation, and uninstallation) as input. 
Thus, the inputs altogether include complete information on when and whether users install or uninstall what apps as well as their current status of app usage. The date embeddings make the transformer suitable for modeling user behaviors that are low-frequency and distribute unevenly over time. Besides, we also introduce a masked app prediction task like BERT (Devlin et al., 2019) to help extract information more productively. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_8", "text": " After compressing all the input information at a bottleneck layer, the (stacked) transformer decoder part tries to reconstruct the (un)installation sequences. The reconstruction follows a manner similar to non-autoregressive translation (Gu et al., 2018). And the date embeddings, as well as the behavior type embeddings, are used as the queries. We also reconstruct the retention data from the bottleneck layer with a multi-layer perceptron network. The reconstruction processes force the bottleneck to retain as much information as possible from the original input through the transformer encoder. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_9", "text": " Besides, we use weight tying in the output layers of both the transformer encoder and the decoder. Moreover, to better encourage information interaction within the transformer network, we proposed a modified multi-head self-attention mechanism where the representations of retention or bottleneck are fed to attention mechanisms more directly during every attention step. All the components mentioned above are trained jointly over data from tens of millions of users of Tencent. Representations from the bottleneck of the transformer network are used as general-purpose user embeddings, which can fertilize many downstream applications that require user representations. The main contributions of our work are summarized as follows: • We introduce our recent practice of general-purpose user embedding learning based on mobile app usage for multiple downstream applications. • We propose a tailored model AETN to achieve the goal. With a carefully-designed neural network structure, the autoencoder-coupled transformer network overcomes the serious sparsity of long-tailed apps and the uneven distribution of activities, and models user behaviors on mobile app usage collectively. Our code is publicly available.111https://github.com/Junqi-Zhang/AETN • The cost of model training is acceptable in real application scenarios. Extensive online and offline experiments verify the effectiveness of the proposed model, which has been deployed in a real system at Tencent and achieved boosted performance in daily business. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_10", "text": " The rest of the paper is organized as follows. We introduce the background in Section 2. Section 3 and Section 4 describe our high-level system and the detailed design of AETN respectively. We present offline experiments and the online A/B testing in Section 5 and Section 6. The details of model deployment are presented in Section 7. Related work is discussed in Section 8, and Section 9 draws the conclusion. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_11", "text": " Tencent Mobile Manager is currently the most prevalent mobile security and management app in China, which serves nearly one billion users. 
We provide various auxiliary functionalities, including news recommendations, short video recommendations, app recommendations, etc. For example, users can reach personalized content feeds, including news, articles, and short videos, from the “Good Morning” tab of Tencent Mobile Manager, as well as from the “Find” tab of Tencent Wi-Fi Manager, a wingman app of Tencent Mobile Manager. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_12", "text": " We have built a data center to support various downstream applications. Traditional handcrafted feature engineering and shallow models may not maximize the value of data, therefore, in terms of user behaviors on mobile app usage, general-purpose user representations are desired. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_13", "text": " In this section, we introduce our AETN-based system from a high-level perspective and review its data processing, model training, and serving components. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_14", "text": " We need to preprocess the user data into a format suitable for subsequent models to handle and also reduce the noise in data. After data preprocessing, each user is represented with one’s “retention” and four sequences. “Retention” is a set of apps installed on one’s phone at present. Two of the sequences, representing recent “installation” operations, are composed of installed apps and corresponding dates. The rest two sequences represent recent “uninstallation” operations. To reduce the noise in user behaviors, we keep the most recent 10 installation or uninstallation operations in a week for each user. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_15", "text": " We use the following criteria to select the apps considered in the model. • We manually exclude some top-ranked apps which have been installed on almost every smartphone and can hardly represent user interests, such as Wechat. Meanwhile, we keep apps like Honor of Kings despite that they are popular, for they could still represent users’ personalized interests. • We exclude the apps pre-installed on smartphones by the manufacturers. • We also exclude the niche apps with installed capacities under a threshold. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_16", "text": " Besides, one app may have multiple package_names for different brands and models of smartphones. They are all merged to avoid duplication. For the categories of apps, we consider relatively finer-grained app categories, for example, we distinguish different subcategories of “Game” apps. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_17", "text": " After preprocessing data, we train the model and generate the user embeddings with the following steps, • Step 1: Model Training. We train the AETN using tens of millions of users. • Step 2: Inference. We extract user embeddings for all the users, and push the embeddings to a DCache system222https://github.com/Tencent/DCache for serving. • Step 3: Serving. Downstream applications can retrieval user embeddings using the feature ID and user IDs. Gradient boosting decision trees (GBDTs) and neutral networks (NNs) are typically used as downstream models. 
", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_18", "text": " More details about the deployment are in Section 7. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_19", "text": " In this section, we first define the notations of user behaviors, followed by the detailed structure of the proposed network. Then, we elaborate on our designs for alleviating the problem of sparsity and our modification to vanilla transformers. Finally, we present the multi-objective joint training scheme for model optimization. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_20", "text": " As stated in Section 3.1, behaviors of each user are preprocessed into one’s “retention” and four sequences defined as follows. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_21", "text": " Retention. The retention of user u𝑢u can be represented by a multi-hot vector 𝒙u∈ℝMsubscript𝒙𝑢superscriptℝ𝑀\\bm{x}_{u}\\in\\mathbb{R}^{M}, and xu​m=1subscript𝑥𝑢𝑚1x_{um}=1 when app m𝑚m is installed, where M𝑀M is the number of considered apps. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_22", "text": " Installation and Uninstallation. The four sequences, representing user u𝑢u’s latest I𝐼I operations on installing or uninstalling apps, are denoted by 𝒮usubscript𝒮𝑢\\mathcal{S}_{u}: 𝒮u={\\displaystyle\\mathcal{S}_{u}=\\big{\\{} (au,1n,…,au,in,…,au,In),(du,1n,…,du,in,…,du,In),subscriptsuperscript𝑎𝑛𝑢1…subscriptsuperscript𝑎𝑛𝑢𝑖…subscriptsuperscript𝑎𝑛𝑢𝐼subscriptsuperscript𝑑𝑛𝑢1…subscriptsuperscript𝑑𝑛𝑢𝑖…subscriptsuperscript𝑑𝑛𝑢𝐼\\displaystyle(a^{n}_{u,1},\\dots,a^{n}_{u,i},\\dots,a^{n}_{u,I}),(d^{n}_{u,1},\\dots,d^{n}_{u,i},\\dots,d^{n}_{u,I}), (au,1l,…,au,il,…,au,Il),(du,1l,…,du,il,…,du,Il)}.\\displaystyle(a^{l}_{u,1},\\dots,a^{l}_{u,i},\\dots,a^{l}_{u,I}),(d^{l}_{u,1},\\dots,d^{l}_{u,i},\\dots,d^{l}_{u,I})\\big{\\}}. Specifically, au,insubscriptsuperscript𝑎𝑛𝑢𝑖a^{n}_{u,i} represents the ID of i𝑖i-th newly installed app of u𝑢u, and du,insubscriptsuperscript𝑑𝑛𝑢𝑖d^{n}_{u,i} is the corresponding date. au,ilsubscriptsuperscript𝑎𝑙𝑢𝑖a^{l}_{u,i} and du,ilsubscriptsuperscript𝑑𝑙𝑢𝑖d^{l}_{u,i} are the counterparts for uninstallation. Additionally, 1≤au,in1subscriptsuperscript𝑎𝑛𝑢𝑖1\\leq a^{n}_{u,i}, au,il≤Msubscriptsuperscript𝑎𝑙𝑢𝑖𝑀a^{l}_{u,i}\\leq M, and all the operations happened in the latest T𝑇T time intervals. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_23", "text": " Note that in the rest of the paper, we omit the subscript u𝑢u indicating a user in most notations for simplification. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_24", "text": " As shown in Figure 2, an autoencoder for retention, a transformer encoder, and a transformer decoder are three main parts in the proposed model. We connect the latter two parts with a bottleneck layer. There is also an embedding layer for the transformer encoder and another one for the decoder. Details about the network structure are as follows. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_25", "text": " Retention Autoencoder. The AETN employs an autoencoder of three hidden layers to reconstruct and encode one’s retention. 
The autoencoder can be described with triple tuples (f^(p), W^(p), b^(p)), where p ∈ {1, 2, 3, 4}. W^(p) and b^(p) are the weights and biases of the p-th layer, and f^(p) represents the corresponding activation function. We choose the commonly used LeakyReLU (Maas et al., 2013) as the activation function for the former three layers, and f^(4) is the sigmoid function. Let x^(p) represent the output of each layer, which can be calculated as follows: (1) x^(p) = f^(p)(x^(p-1) W^(p) + b^(p)), p ∈ {1, 2, 3, 4}, where x^(0) is normalized from one’s retention x using the ℓ2 norm. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_26", "text": " The role of this autoencoder is twofold. Firstly, it helps to learn high-quality app embeddings from the co-occurrence relationship of apps. The weight matrix of the first hidden layer W^(1) acts as the shared app embedding matrix W^a for the whole network, i.e., we have (2) W^a = W^(1) ∈ ℝ^{M×d_model}. To further alleviate the problem of sparsity, the shared app embedding matrix is carefully designed and tied with some other weight matrices. More details are provided in Section 4.3. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_27", "text": " Secondly, this autoencoder provides effective representations of user retention for the transformer part. The transformer encoder part needs to be fed with the retention for compressing long-term interests into user embeddings. However, retention is originally in the form of high-dimensional sparse features. This autoencoder encodes retention into the first hidden layer x^(1) ∈ ℝ^{d_model}. As a low-dimensional dense encoding, x^(1) plays an important role in the transformer encoder part. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_28", "text": " Transformer Encoder & Its Embedding Layer. The transformer encoder is the core part of AETN to combine and compress all the information, which does not work without a suitable embedding layer. Inspired by positional encodings (Vaswani et al., 2017), we design an embedding layer for the transformer encoder based on the shared app embeddings, date embeddings, and behavior type embeddings, as illustrated in Figure 3. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_29", "text": " The date embeddings are the key to making the whole network suitable for modeling user behaviors that are low-frequency and unevenly distributed over time. Through date embeddings, the subsequent transformer encoder directly receives the information about when the behaviors happened rather than inferring it from the order of behaviors. 
We denote the date embedding matrix as 𝐖d∈ℝT×dm​o​d​e​lsuperscript𝐖𝑑superscriptℝ𝑇subscript𝑑𝑚𝑜𝑑𝑒𝑙\\mathbf{W}^{d}\\in\\mathbb{R}^{T\\times d_{model}}, and date t𝑡t is represented by 𝒘td∈ℝdm​o​d​e​lsubscriptsuperscript𝒘𝑑𝑡superscriptℝsubscript𝑑𝑚𝑜𝑑𝑒𝑙\\bm{w}^{d}_{t}\\in\\mathbb{R}^{d_{model}}. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_30", "text": " The behavior type embeddings help the model to distinguish different types of user behaviors when integrating them all. For the three user behavior types (retention, installation, and uninstallation), the embeddings are 𝒘x,𝒘n,𝒘l∈ℝdm​o​d​e​lsuperscript𝒘𝑥superscript𝒘𝑛superscript𝒘𝑙superscriptℝsubscript𝑑𝑚𝑜𝑑𝑒𝑙\\bm{w}^{x},\\bm{w}^{n},\\bm{w}^{l}\\in\\mathbb{R}^{d_{model}}. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_31", "text": " Through this embedding layer, we construct the input representations for the transformer encoder, and the input includes complete information about one’s mobile app usage. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_32", "text": " Our encoder blocks share a similar basic structure with the origin transformer encoder (Vaswani et al., 2017), and to encourage the information interaction among different types of behaviors, we make small modifications to the multi-head self-attention mechanism. More details are presented in Section 4.4. To better extract information from user behaviors, inspired by the masked language model task in BERT (Devlin et al., 2019), we apply a masked app prediction task to installation and uninstallation sequences. The weight matrix of the output softmax is denoted by 𝐖Ω∈ℝdm​o​d​e​l×Msuperscript𝐖Ωsuperscriptℝsubscript𝑑𝑚𝑜𝑑𝑒𝑙𝑀\\mathbf{W}^{\\Omega}\\in\\mathbb{R}^{d_{model}\\times M}. More details about this training task are provided in Section 4.5. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_33", "text": " Bottleneck Layer. The bottleneck layer is where (low dimensional) user embeddings, denoted as 𝒆~bold-~𝒆\\bm{\\widetilde{e}}, are generated. As the encoder and the decoder fuse in this layer, the compressed information from original inputs becomes the source of information for reconstruction tasks. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_34", "text": " On top of the final hidden vector 𝒆xΩsubscriptsuperscript𝒆Ω𝑥\\bm{e}^{\\Omega}_{x}, i.e., the representations corresponding to the retention output by the transformer encoder, we use a single hidden layer autoencoder to further reduce the dimension from dm​o​d​e​lsubscript𝑑𝑚𝑜𝑑𝑒𝑙d_{model} to de​m​bsubscript𝑑𝑒𝑚𝑏d_{emb}. The activation function for the bottleneck is tanh. Then the reconstructed input of this autoencoder is fed to the transformer decoder part. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_35", "text": " In the training scheme, we reconstruct one’s retention from her user embedding with a multi-layer perceptron network and the sigmoid activation function. The weight matrix of the output layer is denoted as 𝐖Θ∈ℝdm​o​d​e​l×Msuperscript𝐖Θsuperscriptℝsubscript𝑑𝑚𝑜𝑑𝑒𝑙𝑀\\mathbf{W}^{\\Theta}\\in\\mathbb{R}^{d_{model}\\times M}. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_36", "text": " Transformer Decoder & Its Embedding Layer. 
The transformer decoder serves our purpose of reconstructing installation and uninstallation in a non-autoregressive manner (Gu et al., 2018). More concretely, we use the date and the behavior type as queries to search for valuable information from the user embedding to reconstruct corresponding installed or uninstalled apps. For this purpose, we design a new embedding layer for the transformer decode, sharing date embeddings and behavior type embeddings with the embedding layer of the encoder. Figure 4 shows the details of this embedding layer and the input for the decoder. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_37", "text": " To accomplish the task of reconstructing entire installation and uninstallation sequences, we feed all hidden vectors, which is corresponding to the installation and uninstallation, of this decoder into an output softmax layer. The weight matrix of this layer is denoted as 𝐖Φ∈ℝdm​o​d​e​l×Msuperscript𝐖Φsuperscriptℝsubscript𝑑𝑚𝑜𝑑𝑒𝑙𝑀\\mathbf{W}^{\\Phi}\\in\\mathbb{R}^{d_{model}\\times M}. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_38", "text": " We carefully design our weight matrices for several parts of the model, which helps to solve the sparsity problem and tightly couple the autoencoder part and the transformer parts. As shown in Figure 5, the app embeddings are built based on both the app ID and its corresponding category ID. Even if the usage of some app is gravely sparse, its category can still provide valid information. This setting helps to overcome the problem of sparsity. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_39", "text": " As introduced previously, we repeatedly use the M×dm​o​d​e​l𝑀subscript𝑑𝑚𝑜𝑑𝑒𝑙M\\times d_{model} embedding matrix for apps, i.e., at the input and output of the retention autoencoder, the input of the transformer encoder, the output for the masked app prediction, the output of the transformer decoder, as well as the reconstruction part for retention from the user embeddings (bottleneck). We tie the weight matrices of all these parts together, i.e., (3) 𝐖Ω=𝐖Θ=𝐖Φ=𝐖(4)=𝐖aT.superscript𝐖Ωsuperscript𝐖Θsuperscript𝐖Φsuperscript𝐖4superscriptsuperscript𝐖𝑎T\\displaystyle\\mathbf{W}^{\\Omega}=\\mathbf{W}^{\\Theta}=\\mathbf{W}^{\\Phi}=\\mathbf{W}^{(4)}={\\mathbf{W}^{a}}^{\\mathrm{T}}. We reduce the total number of parameters by tying weight matrices of the above layers, which benefits of overcoming the problem of sparsity. Moreover, weight tying benefits the backpropagation of the gradient and speeds the convergence. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_40", "text": " In our scenario, retention, bottleneck (user embeddings), installation, and uninstallation are heterogeneous. Each installation or uninstallation represents a single operation, but the retention or bottleneck is a cumulation of all the installation and uninstallation operations. Therefore, to better encourage the information interaction among retention, bottleneck, and (un)installation, the multi-head self-attention is modified, as shown in Figure 6. 
", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_41", "text": " By concatenating the retention (for the transformer encoder part) or bottleneck (for the transformer decoder part) to each key and value for the scaled dot-product attention, we enforce the information interaction with retention or bottleneck in every attention step. In this way, the transformer encoder fuses the information from retention and (un)installation more efficiently, and the decoder extracts information from the bottleneck better for reconstruction tasks. This modification improves the quality of user embeddings, as shown by the experimental results. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_42", "text": " For model training, we apply a joint training scheme consisting of three tasks, i.e., ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_43", "text": " Task #1: Main Reconstruction. To generate general-purpose user embeddings on basis of their behaviors on mobile app usage, we train the proposed model to reconstruct all the retention, installation, and uninstallation information from the user embeddings. This task is indispensable in the joint training scheme and can be divided into two sub-tasks: (1) Reconstructing the retention data from the user embeddings (bottleneck layer) by a multi-layer perceptron network. We choose the sigmoid cross-entropy as the loss function. (2) Reconstructing the installation and uninstallation sequences by the transformer decoder. We calculate the loss of this sub-task by averaging the softmax cross-entropy loss of every (un)installation. The sum of the losses of these two sub-tasks is the loss of this main reconstruction task, and we denote the loss as ℒm​a​i​nsubscriptℒ𝑚𝑎𝑖𝑛\\mathcal{L}_{main}. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_44", "text": " Task #2: Auxiliary Retention Reconstruction. This auxiliary task is for the autoencoder part. We also choose the sigmoid cross-entropy as the loss function denoted as ℒa​u​xsubscriptℒ𝑎𝑢𝑥\\mathcal{L}_{aux}. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_45", "text": " Task #3: Masked App Prediction. This task is similar to the “Masked LM” task in BERT (Devlin et al., 2019). We randomly mask apps in installation and uninstallation but keep the corresponding date and behavior type. The transformer encoder is trained only to predict the masked apps. For simplicity, we just follow the masking rate in BERT and abandon the “random replacement or keep”. We calculate the loss of this task, denoted as ℒm​a​s​ksubscriptℒ𝑚𝑎𝑠𝑘\\mathcal{L}_{mask}, by averaging the softmax cross-entropy loss of every masked app. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_46", "text": " The final loss function of our model is the sum loss of above three tasks as well as the regularization loss, i.e., ℒ=ℒm​a​i​n+ℒa​u​x+ℒm​a​s​k+ℒr​e​gℒsubscriptℒ𝑚𝑎𝑖𝑛subscriptℒ𝑎𝑢𝑥subscriptℒ𝑚𝑎𝑠𝑘subscriptℒ𝑟𝑒𝑔\\mathcal{L}=\\mathcal{L}_{main}+\\mathcal{L}_{aux}+\\mathcal{L}_{mask}+\\mathcal{L}_{reg}. And ℒr​e​gsubscriptℒ𝑟𝑒𝑔\\mathcal{L}_{reg} is the ℓ​2ℓ2\\ell 2 norm regularization loss for all the trainable parameters. 
", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_47", "text": " In this section, we demonstrate the offline performance of AETN in generating general-purpose user embeddings. We compare the baseline with four different versions of AETN in three typical downstream offline experiments. Then we show that the auxiliary retention reconstruction task for the autoencoder part can help the convergency of the transformer parts. Finally, we compare the user embeddings generated by the baseline and AETN intuitively. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_48", "text": " We use real industrial data from Tencent for model training. Following the rules introduced in Section 3.1, we consider more than 10 thousand apps. Then we sample 20 million users and 500 million records of installation and uninstallation dated from 2019.07 to 2019.12. We randomly split out about 5 million users for validation. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_49", "text": " We train and evaluate 5 models, including a baseline and 4 different versions of AETN, as follows. • DAE. Denoising autoencoder (Vincent et al., 2008, 2010) is widely applied for unsupervised representation learning. We train it to generate user embeddings based on user retention data. • AETN w/o ℒm​a​s​ksubscriptℒ𝑚𝑎𝑠𝑘\\bm{\\mathcal{L}_{mask}}. A degenerated version of AETN trained without the task of masked app prediction. • AETN w/o ℒa​u​xsubscriptℒ𝑎𝑢𝑥\\bm{\\mathcal{L}_{aux}}. Another degenerated version of AETN trained without the auxiliary retention reconstruction task. • V-AETN. The AETN with vanilla multi-head self-attention proposed in (Vaswani et al., 2017). • AETN. The complete version of the model which is introduced in Section 4. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_50", "text": " Details of model settings and hyper-parameter configurations are listed in Appendix A.1. RNN-based models are not involved. In addition to the uneven distribution of user behaviors, the low efficiency of the training makes them infeasible in our scenario. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_51", "text": " We conduct our offline experiments on three typical downstream applications, including applications from both related domains and a different domain. The evaluation tasks are as follows: ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_52", "text": " Test #1: Next Week’s Installation Prediction. This task is to predict which users are going to install specific (niche) categories of apps in next week. We collect data from about 5 million users and then divide them into a training set, a validation set, and a testing set in a 3:1:1 ratio. After generating user embeddings, we train multi-layer perceptron networks to predict whether one would install apps of four categories in next week. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_53", "text": " Test #2: Look-alike Audience Extension. This is a common task in computational advertising (Zhang et al., 2016; Mangalampalli et al., 2011). We use a dataset containing about half a million users with about 10% seed users for an out-of-vocabulary niche app. 
Following the common practice, we train XGBoost (Chen and Guestrin, 2016) look-alike models to evaluate different user embeddings, and report the 10-fold cross-validation results. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_54", "text": " Test #3: Feed Recommendation. To evaluate the universal user embeddings in a cross-domain scenario, we use feed recommendation data from the “Find” tab of Tencent Wi-Fi manager. We select about 1.2 million users and extract their behaviors in 8 days, then use the data from the first 7 days for training and the data from the last day for validation and testing. The training set contains about 27 million records, and the validation set and the testing set contain approximately 2 million records, respectively. We train Deep & Cross Networks (Wang et al., 2017) based on the generated user embeddings as well as other features for feed recommendations. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_55", "text": " In all three tasks, we use the area under the ROC curve (AUC) as the metric. We run each test 5 times and report the average. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_56", "text": " Table 1 shows the results in all three downstream experiments. We can draw the following conclusions from the results. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_57", "text": " All versions of AETN perform better than DAE. In next week’s installation prediction, AETN brings an average AUC improvement by 0.0631 for all four categories. The rest two applications still enjoy improvements by 0.0134 and 0.0048. This is a significant improvement for industrial applications where 0.1% AUC gain is remarkable (Ma et al., 2018). Such improvement confirms two hypotheses. Firstly, short-term user interests contained in installation and uninstallation are valuable for various downstream applications, to different extent. Secondly, the proposed AETN is capable of extracting long-term and short-term user interests from all types of user behaviors and compressing them together into the user embeddings. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_58", "text": " The masked app prediction task makes an important contribution to improve the quality of user embeddings. It brings up an average AUC by 0.0115 in next week’s installation prediction. Even for the look-alike audience extension and the feed recommendation, the AUC lift brought by this task is over 0.0010. We owe this to that the masked app prediction not only helps the transformer encoder extract information more efficiently, but also brings a data augmentation effect in the training process. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_59", "text": " The modified multi-head self-attention performs better than the vanilla one. The simple modification, which encourages information interaction among retention, bottleneck, and (un)installation, contributes an AUC gain of 0.0069 to the next week’s installation prediction. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_60", "text": " The auxiliary retention reconstruction also benefits the quality of generated user embeddings. 
Without this auxiliary task for the autoencoder part, the AUC in the next week’s installation prediction goes down by 0.0023. Besides the improvement in user embeddings, we find that the training efficiency is also improved by this auxiliary retention reconstruction. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_61", "text": " When training AETN and AETN w/o ℒa​u​xsubscriptℒ𝑎𝑢𝑥\\mathcal{L}_{aux}, we monitor the sum of ℒm​a​i​nsubscriptℒ𝑚𝑎𝑖𝑛\\mathcal{L}_{main} and ℒm​a​s​ksubscriptℒ𝑚𝑎𝑠𝑘\\mathcal{L}_{mask} on the validation dataset, to confirm the improvement in training efficiency brought by the auxiliary retention reconstruction. As shown in Figure 7a, the auxiliary task makes the transformer parts in the AETN converge faster. With the autoencoder and weight tying, gradients from the output layer can be passed through fewer layers to the app embedding matrix. Moreover, the complete version of AETN also achieves a lower loss when both models eventually converge. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_62", "text": " To intuitively compare the output embeddings by AETN and DAE, we measure the overlap rate of apps between pairs of neighbor users based on the embeddings. For each user, we choose the corresponding user with the least Euclidean distance according to the embeddings as the neighbor. We randomly sample 10 thousand users and find their neighbors from 1 million randomly-selected users. For each pair of neighbors, we calculate the overlap rate of apps in retention, installation, and uninstallation. Figure 7b shows the average results of all the neighbor pairs for both the AETN embeddings and DAE embeddings. The results show that the AETN succeeds in injecting information from installation and uninstallation into user embeddings and maintaining the majority of the retention. At the same time, the DAE embeddings, which we extract only based on the retention information, cannot provide much information regarding the installation and uninstallation. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_63", "text": " To further verify the effectiveness of the output user embeddings, we conduct online feed recommendation A/B testing from 2020-02-01 to 2020-02-10, in the “Good Morning” tab of Tencent Mobile Manager and the “Find” tab of Tencent Wi-Fi Manager. We split online A/B test traffic by userIDs evenly for the tested models. We evaluate the base models, models with DAE embeddings, and models with AETN embeddings. The improvement results compared with the base models are reported in Table 2. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_64", "text": " We mainly consider the following metrics. UV CTR measures the click-through rate in terms of the user view, and PV CTR measures the click-through rate in terms of the page view. Engagement measures the average staying time of each user. Clicks measures the average number of articles each user reads. From the table, we can find that compared with the base models, all considered metrics enjoy improvements by AETN embeddings, ranging from 2% to 8%. 
Compared with DAE embeddings, PV CTR, and Engagement enjoy more substantial improvements brought by the AETN embeddings, and we hypothesize that AETN introduces the installation and uninstallation information, thus could capture short-term interests of users in addition to the long-term interest from retention, and this information is more critical to PV CTR and Engagement. Comparing the results of the “Good Morning” tab and the “Find” tab, we can find that the improvements in the “Find” tab are more significant. It may be due to that users tend to read articles in the “Find” tab all along the day, in contrast to the “Good Morning” tab where users majorly read the news after getting up in the morning. The exposure per user in the “Find” tab is significantly more. Therefore, better modeling for user interests is even more critical. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_65", "text": " We implement the model with Tensorflow (Abadi et al., 2016). It takes about 60 hours for training using 4 NVIDIA Tesla M40 GPUs. As the embeddings represent both long-term and short-term interests of users, it is crucial to keep updating the embeddings for the best performance. However, a large number of users bring challenges to update frequently. Generally, we have two strategies for updating: ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_66", "text": " • Model Updating. We may update the model for the best performance. This method takes into consideration the emerging apps with an utterly up-to-date app list and the distribution of the data. However, updating the model changes the semantic structure of user embeddings completely. Thus, we need to update all downstream models simultaneously. • Feature Updating. We can also keep the model fixed and only update the features of users. Thus we have the up-to-date behaviors of users taken into consideration, and the updated embeddings can still be in the same semantic space. This strategy makes the updating less expensive. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_67", "text": " In practice, we find that feature updating is more cost-effective for downstream applications, which is because the apps usually do not change drastically within a few months. However, updating the embeddings for billion-scale users is still challenging. To reduce computation, we only update the representations of active users of downstream applications every day. This strategy can reduce the number of users that need to be updated each time to the order of millions. The model could be updated much less frequently. Once the model is updated, we use a new feature ID to prevent confusion. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_68", "text": " We summarize the related work in three fields, including applications with app behavior data, unsupervised feature extraction and transformer networks. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_69", "text": " User behaviors on mobile apps usage contain rich preference information and have been used in a variety of applications (Lu et al., 2014). The most significant of which is app install advertisements (Gogel, 2018; Lee and Shin, 2016) and mobile app recommendations (Zhu et al., 2014). 
Yahoo posted a large scale prediction engine for app install advertising based on a two-step logistic regression model considering user features generated from behaviors on apps (Bhamidipati et al., 2017). For reducing sparseness, Yahoo also classifies apps into predefined interest taxonomies when understanding app usage patterns (Radosavljevic et al., 2016). Usage patterns of apps are learned for app purchase recommendations with a Deep Memory Network(Gligorijevic et al., 2018). Beyond app install advertising, users’ app-installation behaviors are also used for news recommendations (Liu et al., 2017), where the knowledge of the neighborhood of the cold-start users is transferred from an APP domain to a new domain. A large survey on mobile app user behaviors across main app markets around the world was conducted to instruct cross-country app competitions and analyze the challenges for software engineering (Lim et al., 2014). ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_70", "text": " In this paper, we address the real-life need of general-purpose user embeddings based on user behaviors on app usage. The user embeddings can be used for a variety of downstream applications. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_71", "text": " Unsupervised representation learning is a long-standing problem (Bengio et al., 2013; Zhang et al., 2018), and autoencoders have been deployed successfully in many real-world applications (Baldi, 2012). It follows an encoder-decoder structure and tries to reconstruct the input through a bottleneck layer. Sparse autoencoders (Liu et al., 2016), denoising autoencoders (Vincent et al., 2008, 2010), variational autoencoders (Pu et al., 2016), adversarial autoencoders (Makhzani et al., 2015), and so on, have been proposed as extensions. Recently, more advanced unsupervised representation learning has been proposed, including BERT (Devlin et al., 2019) for natural language processing, and MoCo (He et al., 2019) for computer vision, With a large amount of data and deep models, unsupervised representation learning is able to achieve comparable or even better performance with fewer annotations than traditional supervised learning (Devlin et al., 2019; He et al., 2019). ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_72", "text": " In this paper, an unsupervised representation learning from user behaviors on mobile apps is presented. We address the unique challenges of this problem with the tailored autoencoder-coupled transformer network, and demonstrate the effectiveness. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_73", "text": " The transformer model was first introduced in (Vaswani et al., 2017), and has been used widely for modeling sequences in natural language processing tasks (Devlin et al., 2019), recommendations (Sun et al., 2019; Chen et al., 2019), and music generations (Huang et al., 2019). Transformers can simultaneously attend to every token of their input sequence with self-attention mechanism, and it is proved that a multi-head self-attention layer with a sufficient number of heads is at least as expressive as any convolutional layer (Cordonnier et al., 2020). 
Compared with recurrent neural networks like long-short term memory (LSTM) (Hochreiter and Schmidhuber, 1997), transformers are more parallelizable and require significantly less time to train on large datasets (Vaswani et al., 2017). Transformer-XL (Dai et al., 2019) and reformer (Kitaev et al., 2020) are proposed to further reduce the complexity when the length of sequences is very long, e.g., sequences of length 10,000. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_74", "text": " In this paper, we couple a transformer network with an autoencoder to model the retention, installation, and uninstallation collectively. We modify the vanilla transformer in order to emphasize the retention state or user embeddings when the installation and uninstallation are being modeled. ", "title": "general-purpose user embeddings based on mobile app usage" }, { "id": "2005.13303_all_75", "text": " In this paper, we present our recent practice for unsupervised user embedding learning based on mobile app usage. To address the unique challenges of this problem in the real system, we propose a tailored model called AutoEncoder-coupled Transformer Network (AETN). Extensive online and offline experimental results demonstrate the effectiveness of the proposed model. We also introduce the details about the deployment. The output general-purpose user embeddings can fertilize multiple downstream applications that require user representations at Tencent. Now the output embeddings have been serving the feed recommendation scenes in Tencent Mobile Manager and Tencent Wi-Fi Manager. In the future, we plan to explore fine-tuning the transformer encoder part for learning task-specific user embeddings. ", "title": "general-purpose user embeddings based on mobile app usage" } ]
Each convolution operates only on its corresponding input channel group. If so, how does the model learn features from the entire input space?
When multiple group convolutions are stacked together, the authors use channel shuffle, which divides the channels of each group into several subgroups and shuffles them so that each group in the next layer receives subgroups from all other groups [10]. This lets the model learn from the entire input space despite the group convolution [11]. However, the paper does not explicitly report such side effects when group convolutions are not stacked together [9].
[ 10, 11, 9 ]
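As a minimal sketch of the channel shuffle operation summarized in the answer above and described in the contexts below (reshape the g×n output channels into (g, n), transpose, then flatten back), assuming an NCHW array layout; this is an illustration, not the authors' released implementation:

```python
import numpy as np

def channel_shuffle(x, groups):
    """Shuffle the channels of an NCHW feature map so that each group in the next
    layer receives subgroups of channels coming from every group in this layer."""
    n, c, h, w = x.shape
    assert c % groups == 0, "channel count must be divisible by the group number"
    x = x.reshape(n, groups, c // groups, h, w)   # split channels into (g, n)
    x = x.transpose(0, 2, 1, 3, 4)                # swap the group and subgroup axes
    return x.reshape(n, c, h, w)                  # flatten back to g*n channels

# Toy check: 6 channels in 3 groups -> channel order becomes [0, 2, 4, 1, 3, 5],
# so each output group now mixes channels drawn from all three input groups.
x = np.arange(6, dtype=np.float32).reshape(1, 6, 1, 1)
print(channel_shuffle(x, groups=3).reshape(-1))   # [0. 2. 4. 1. 3. 5.]
```

Because the operation is only a reshape and transpose, it is differentiable and adds essentially no computation, which is why it can be embedded into the network for end-to-end training.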
[ { "id": "1707.01083_all_0", "text": " Building deeper and larger convolutional neural networks (CNNs) is a primary trend for solving major visual recognition tasks (21, 9, 33, 5, 28, 24). The most accurate CNNs usually have hundreds of layers and thousands of channels (9, 34, 32, 40), thus requiring computation at billions of FLOPs. This report examines the opposite extreme: pursuing the best accuracy in very limited computational budgets at tens or hundreds of MFLOPs, focusing on common mobile platforms such as drones, robots, and smartphones. Note that many existing works (16, 22, 43, 42, 38, 27) focus on pruning, compressing, or low-bit representing a “basic” network architecture. Here we aim to explore a highly efficient basic architecture specially designed for our desired computing ranges. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_1", "text": " We notice that state-of-the-art basic architectures such as Xception  and ResNeXt  become less efficient in extremely small networks because of the costly dense 1×1111\\times 1 convolutions. We propose using pointwise group convolutions to reduce computation complexity of 1×1111\\times 1 convolutions. To overcome the side effects brought by group convolutions, we come up with a novel channel shuffle operation to help the information flowing across feature channels. Based on the two techniques, we build a highly efficient architecture called ShuffleNet. Compared with popular structures like  (30, 9, 40), for a given computation complexity budget, our ShuffleNet allows more feature map channels, which helps to encode more information and is especially critical to the performance of very small networks. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_2", "text": " We evaluate our models on the challenging ImageNet classification (4, 29) and MS COCO object detection  tasks. A series of controlled experiments shows the effectiveness of our design principles and the better performance over other structures. Compared with the state-of-the-art architecture MobileNet , ShuffleNet achieves superior performance by a significant margin, e.g. absolute 7.8% lower ImageNet top-1 error at level of 40 MFLOPs. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_3", "text": " We also examine the speedup on real hardware, i.e. an off-the-shelf ARM-based computing core. The ShuffleNet model achieves ∼similar-to\\sim13×\\times actual speedup (theoretical speedup is 18×\\times) over AlexNet  while maintaining comparable accuracy. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_4", "text": " The last few years have seen the success of deep neural networks in computer vision tasks (21, 36, 28), in which model designs play an important role. The increasing needs of running high quality deep neural networks on embedded devices encourage the study on efficient model designs . For example, GoogLeNet  increases the depth of networks with much lower complexity compared to simply stacking convolution layers. SqueezeNet  reduces parameters and computation significantly while maintaining accuracy. ResNet (9, 10) utilizes the efficient bottleneck structure to achieve impressive performance. 
SENet  introduces an architectural unit that boosts performance at slight computation cost. Concurrent with us, a very recent work  employs reinforcement learning and model search to explore efficient model designs. The proposed mobile NASNet model achieves comparable performance with our counterpart ShuffleNet model (26.0% @ 564 MFLOPs vs. 26.3% @ 524 MFLOPs for ImageNet classification error). But  do not report results on extremely tiny models (e.g. complexity less than 150 MFLOPs), nor evaluate the actual inference time on mobile devices. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_5", "text": " The concept of group convolution, which was first introduced in AlexNet  for distributing the model over two GPUs, has been well demonstrated its effectiveness in ResNeXt . Depthwise separable convolution proposed in Xception  generalizes the ideas of separable convolutions in Inception series (34, 32). Recently, MobileNet  utilizes the depthwise separable convolutions and gains state-of-the-art results among lightweight models. Our work generalizes group convolution and depthwise separable convolution in a novel form. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_6", "text": " To the best of our knowledge, the idea of channel shuffle operation is rarely mentioned in previous work on efficient model design, although CNN library cuda-convnet  supports “random sparse convolution” layer, which is equivalent to random channel shuffle followed by a group convolutional layer. Such “random shuffle” operation has different purpose and been seldom exploited later. Very recently, another concurrent work   also adopt this idea for a two-stage convolution. However,   did not specially investigate the effectiveness of channel shuffle itself and its usage in tiny model design. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_7", "text": " This direction aims to accelerate inference while preserving accuracy of a pre-trained model. Pruning network connections (6, 7) or channels  reduces redundant connections in a pre-trained model while maintaining performance. Quantization (31, 27, 39, 45, 44) and factorization (22, 16, 18, 37) are proposed in literature to reduce redundancy in calculations to speed up inference. Without modifying the parameters, optimized convolution algorithms implemented by FFT (25, 35) and other methods  decrease time consumption in practice. Distilling  transfers knowledge from large models into small ones, which makes training small models easier. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_8", "text": " Modern convolutional neural networks (30, 33, 34, 32, 9, 10) usually consist of repeated building blocks with the same structure. Among them, state-of-the-art networks such as Xception  and ResNeXt  introduce efficient depthwise separable convolutions or group convolutions into the building blocks to strike an excellent trade-off between representation capability and computational cost. However, we notice that both designs do not fully take the 1×1111\\times 1 convolutions (also called pointwise convolutions in  ) into account, which require considerable complexity. For example, in ResNeXt  only 3×3333\\times 3 layers are equipped with group convolutions. 
As a result, for each residual unit in ResNeXt the pointwise convolutions occupy 93.4% of the multiplication-adds (cardinality = 32 as suggested in ). In tiny networks, expensive pointwise convolutions result in a limited number of channels to meet the complexity constraint, which might significantly damage the accuracy. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_9", "text": " To address the issue, a straightforward solution is to apply channel sparse connections, for example group convolutions, also on 1×1 layers. By ensuring that each convolution operates only on the corresponding input channel group, group convolution significantly reduces computation cost. However, if multiple group convolutions stack together, there is one side effect: outputs from a certain channel are only derived from a small fraction of input channels. Fig 1 (a) illustrates a situation of two stacked group convolution layers. It is clear that outputs from a certain group only relate to the inputs within the group. This property blocks information flow between channel groups and weakens representation. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_10", "text": " If we allow group convolution to obtain input data from different groups (as shown in Fig 1 (b)), the input and output channels will be fully related. Specifically, for the feature map generated from the previous group layer, we can first divide the channels in each group into several subgroups, then feed each group in the next layer with different subgroups. This can be efficiently and elegantly implemented by a channel shuffle operation (Fig 1 (c)): suppose a convolutional layer with g groups whose output has g×n channels; we first reshape the output channel dimension into (g, n), then transpose and flatten it back as the input of the next layer. Note that the operation still takes effect even if the two convolutions have different numbers of groups. Moreover, channel shuffle is also differentiable, which means it can be embedded into network structures for end-to-end training. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_11", "text": " The channel shuffle operation makes it possible to build more powerful structures with multiple group convolutional layers. In the next subsection we will introduce an efficient network unit with channel shuffle and group convolution. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_12", "text": " Taking advantage of the channel shuffle operation, we propose a novel ShuffleNet unit specially designed for small networks. We start from the design principle of the bottleneck unit in Fig 2 (a). It is a residual block. In its residual branch, for the 3×3 layer, we apply a computationally economical 3×3 depthwise convolution on the bottleneck feature map. Then, we replace the first 1×1 layer with pointwise group convolution followed by a channel shuffle operation, to form a ShuffleNet unit, as shown in Fig 2 (b). The purpose of the second pointwise group convolution is to recover the channel dimension to match the shortcut path. For simplicity, we do not apply an extra channel shuffle operation after the second pointwise layer as it results in comparable scores. 
The usage of batch normalization (BN) and nonlinearity is similar to (9, 40), except that we do not use ReLU after depthwise convolution as suggested by . As for the case where ShuffleNet is applied with stride, we simply make two modifications (see Fig 2 (c)): (i) add a 3×3 average pooling on the shortcut path; (ii) replace the element-wise addition with channel concatenation, which makes it easy to enlarge the channel dimension with little extra computation cost. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_13", "text": " Thanks to pointwise group convolution with channel shuffle, all components in the ShuffleNet unit can be computed efficiently. Compared with ResNet (bottleneck design) and ResNeXt , our structure has less complexity under the same settings. For example, given the input size c×h×w and the bottleneck channels m, a ResNet unit requires hw(2cm + 9m^2) FLOPs and ResNeXt has hw(2cm + 9m^2/g) FLOPs, while our ShuffleNet unit requires only hw(2cm/g + 9m) FLOPs, where g means the number of groups for convolutions. In other words, given a computational budget, ShuffleNet can use wider feature maps. We find this is critical for small networks, as tiny networks usually have an insufficient number of channels to process the information. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_14", "text": " In addition, in ShuffleNet depthwise convolution is only performed on the bottleneck feature maps. Even though depthwise convolution usually has very low theoretical complexity, we find it difficult to implement efficiently on low-power mobile devices, which may result from a worse computation/memory access ratio compared with other dense operations. Such a drawback is also referred to in , which has a runtime library based on TensorFlow . In ShuffleNet units, we intentionally use depthwise convolution only on the bottleneck in order to prevent overhead as much as possible. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_15", "text": " Built on ShuffleNet units, we present the overall ShuffleNet architecture in Table 1. The proposed network is mainly composed of a stack of ShuffleNet units grouped into three stages. The first building block in each stage is applied with stride = 2. Other hyper-parameters within a stage stay the same, and for the next stage the output channels are doubled. Similar to , we set the number of bottleneck channels to 1/4 of the output channels for each ShuffleNet unit. Our intent is to provide a reference design as simple as possible, although we find that further hyper-parameter tuning might generate better results. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_16", "text": " In ShuffleNet units, the group number g controls the connection sparsity of pointwise convolutions. Table 1 explores different group numbers, and we adapt the output channels to keep the overall computation cost roughly unchanged (~140 MFLOPs). 
Obviously, larger group numbers result in more output channels (thus more convolutional filters) for a given complexity constraint, which helps to encode more information, though it might also lead to degradation for an individual convolutional filter due to limited corresponding input channels. In Sec 4.1.1 we will study the impact of this number subject to different computational constrains. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_17", "text": " To customize the network to a desired complexity, we can simply apply a scale factor s𝑠s on the number of channels. For example, we denote the networks in Table 1 as ”ShuffleNet 1×\\times”, then ”ShuffleNet s×s\\times” means scaling the number of filters in ShuffleNet 1×\\times by s𝑠s times thus overall complexity will be roughly s2superscript𝑠2s^{2} times of ShuffleNet 1×\\times. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_18", "text": " We mainly evaluate our models on the ImageNet 2012 classification dataset (29, 4). We follow most of the training settings and hyper-parameters used in  , with two exceptions: (i) we set the weight decay to 4e-5 instead of 1e-4 and use linear-decay learning rate policy (decreased from 0.5 to 0); (ii) we use slightly less aggressive scale augmentation for data preprocessing. Similar modifications are also referenced in   because such small networks usually suffer from underfitting rather than overfitting. It takes 1 or 2 days to train a model for 3×1053superscript1053\\times 10^{5} iterations on 4 GPUs, whose batch size is set to 1024. To benchmark, we compare single crop top-1 performance on ImageNet validation set, i.e. cropping 224×224224224224\\times 224 center view from 256×256\\times input image and evaluating classification accuracy. We use exactly the same settings for all models to ensure fair comparisons. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_19", "text": " The core idea of ShuffleNet lies in pointwise group convolution and channel shuffle operation. In this subsection we evaluate them respectively. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_20", "text": " To evaluate the importance of pointwise group convolutions, we compare ShuffleNet models of the same complexity whose numbers of groups range from 1 to 8. If the group number equals 1, no pointwise group convolution is involved and then the ShuffleNet unit becomes an ”Xception-like”  structure. For better understanding, we also scale the width of the networks to 3 different complexities and compare their classification performance respectively. Results are shown in Table 2. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_21", "text": " From the results, we see that models with group convolutions (g>1𝑔1g>1) consistently perform better than the counterparts without pointwise group convolutions (g=1𝑔1g=1). Smaller models tend to benefit more from groups. For example, for ShuffleNet 1×\\times the best entry (g=8𝑔8g=8) is 1.2% better than the counterpart, while for ShuffleNet 0.5×\\times and 0.25×\\times the gaps become 3.5% and 4.4% respectively. 
Note that group convolution allows more feature map channels for a given complexity constraint, so we hypothesize that the performance gain comes from wider feature maps which help to encode more information. In addition, a smaller network involves thinner feature maps, meaning it benefits more from enlarged feature maps. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_22", "text": " Table 2 also shows that for some models (e.g. ShuffleNet 0.5×\\times) when group numbers become relatively large (e.g. g=8𝑔8g=8), the classification score saturates or even drops. With an increase in group number (thus wider feature maps), input channels for each convolutional filter become fewer, which may harm representation capability. Interestingly, we also notice that for smaller models such as ShuffleNet 0.25×\\times larger group numbers tend to better results consistently, which suggests wider feature maps bring more benefits for smaller models. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_23", "text": " The purpose of shuffle operation is to enable cross-group information flow for multiple group convolution layers. Table 3 compares the performance of ShuffleNet structures (group number is set to 3 or 8 for instance) with/without channel shuffle. The evaluations are performed under three different scales of complexity. It is clear that channel shuffle consistently boosts classification scores for different settings. Especially, when group number is relatively large (e.g. g=8𝑔8g=8), models with channel shuffle outperform the counterparts by a significant margin, which shows the importance of cross-group information interchange. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_24", "text": " Recent leading convolutional units in VGG , ResNet , GoogleNet , ResNeXt  and Xception  have pursued state-of-the-art results with large models (e.g. ≥1absent1\\geq 1GFLOPs), but do not fully explore low-complexity conditions. In this section we survey a variety of building blocks and make comparisons with ShuffleNet under the same complexity constraint. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_25", "text": " For fair comparison, we use the overall network architecture as shown in Table 1. We replace the ShuffleNet units in Stage 2-4 with other structures, then adapt the number of channels to ensure the complexity remains unchanged. The structures we explored include: ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_26", "text": " • VGG-like. Following the design principle of VGG net , we use a two-layer 3×\\times3 convolutions as the basic building block. Different from  , we add a Batch Normalization layer  after each of the convolutions to make end-to-end training easier. • ResNet. We adopt the ”bottleneck” design in our experiment, which has been demonstrated more efficient in   . Same as  , the bottleneck ratio111In the bottleneck-like units (like ResNet, ResNeXt or ShuffleNet) bottleneck ratio implies the ratio of bottleneck channels to output channels. For example, bottleneck ratio = 1:4:141:4 means the output feature map is 4 times the width of the bottleneck feature map. is also 1:4:141:4. • Xception-like. 
The original structure proposed in   involves fancy designs or hyper-parameters for different stages, which we find difficult for fair comparison on small models. Instead, we remove the pointwise group convolutions and channel shuffle operation from ShuffleNet (also equivalent to ShuffleNet with g=1𝑔1g=1). The derived structure shares the same idea of “depthwise separable convolution” as in  , which is called an Xception-like structure here. • ResNeXt. We use the settings of cardinality =16absent16=16 and bottleneck ratio =1:2:absent12=1:2 as suggested in  . We also explore other settings, e.g. bottleneck ratio =1:4:absent14=1:4, and get similar results. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_27", "text": " We use exactly the same settings to train these models. Results are shown in Table 4. Our ShuffleNet models outperform most others by a significant margin under different complexities. Interestingly, we find an empirical relationship between feature map channels and classification accuracy. For example, under the complexity of 38 MFLOPs, output channels of Stage 4 (see Table 1) for VGG-like, ResNet, ResNeXt, Xception-like, ShuffleNet models are 50, 192, 192, 288, 576 respectively, which is consistent with the increase of accuracy. Since the efficient design of ShuffleNet, we can use more channels for a given computation budget, thus usually resulting in better performance. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_28", "text": " Note that the above comparisons do not include GoogleNet or Inception series (33, 34, 32). We find it nontrivial to generate such Inception structures to small networks because the original design of Inception module involves too many hyper-parameters. As a reference, the first GoogleNet version  has 31.3% top-1 error at the cost of 1.5 GFLOPs (See Table 6). More sophisticated Inception versions (34, 32) are more accurate, however, involve significantly increased complexity. Recently, Kim et al. propose a lightweight network structure named PVANET  which adopts Inception units. Our reimplemented PVANET (with 224×\\times224 input size) has 29.7% classification error with a computation complexity of 557 MFLOPs, while our ShuffleNet 2x model (g=3𝑔3g=3) gets 26.3% with 524 MFLOPs (see Table 6). ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_29", "text": " Recently Howard et al. have proposed MobileNets  which mainly focus on efficient network architecture for mobile devices. MobileNet takes the idea of depthwise separable convolution from   and achieves state-of-the-art results on small models. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_30", "text": " Table 5 compares classification scores under a variety of complexity levels. It is clear that our ShuffleNet models are superior to MobileNet for all the complexities. Though our ShuffleNet network is specially designed for small models (<150absent150<150 MFLOPs), we find it is still better than MobileNet for higher computation cost, e.g. 3.1% more accurate than MobileNet 1×\\times at the cost of 500 MFLOPs. For smaller networks (∼similar-to\\sim40 MFLOPs) ShuffleNet surpasses MobileNet by 7.8%. Note that our ShuffleNet architecture contains 50 layers while MobileNet only has 28 layers. 
For better understanding, we also try ShuffleNet on a 26-layer architecture by removing half of the blocks in Stage 2-4 (see ”ShuffleNet 0.5×\\times shallow (g=3𝑔3g=3)” in Table 5). Results show that the shallower model is still significantly better than the corresponding MobileNet, which implies that the effectiveness of ShuffleNet mainly results from its efficient structure, not the depth. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_31", "text": " Table 6 compares our ShuffleNet with a few popular models. Results show that with similar accuracy ShuffleNet is much more efficient than others. For example, ShuffleNet 0.5×\\times is theoretically 18×\\times faster than AlexNet  with comparable classification score. We will evaluate the actual running time in Sec 4.5. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_32", "text": " It is also worth noting that the simple architecture design makes it easy to equip ShuffeNets with the latest advances such as (13, 26). For example, in the authors propose Squeeze-and-Excitation (SE) blocks which achieve state-of-the-art results on large ImageNet models. We find SE modules also take effect in combination with the backbone ShuffleNets, for instance, boosting the top-1 error of ShuffleNet 2×\\times to 24.7% (shown in Table 5). Interestingly, though negligible increase of theoretical complexity, we find ShuffleNets with SE modules are usually 25∼40%similar-to25percent4025\\sim 40\\% slower than the “raw” ShuffleNets on mobile devices, which implies that actual speedup evaluation is critical on low-cost architecture design. In Sec 4.5 we will make further discussion. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_33", "text": " To evaluate the generalization ability for transfer learning, we test our ShuffleNet model on the task of MS COCO object detection . We adopt Faster-RCNN  as the detection framework and use the publicly released Caffe code (28, 17) for training with default settings. Similar to  , the models are trained on the COCO train+val dataset excluding 5000 minival images and we conduct testing on the minival set. Table 7 shows the comparison of results trained and evaluated on two input resolutions. Comparing ShuffleNet 2×\\times with MobileNet whose complexity are comparable (524 vs. 569 MFLOPs), our ShuffleNet 2×\\times surpasses MobileNet by a significant margin on both resolutions; our ShuffleNet 1×\\times also achieves comparable results with MobileNet on 600×\\times resolution, but has ∼similar-to\\sim4×\\times complexity reduction. We conjecture that this significant gain is partly due to ShuffleNet’s simple design of architecture without bells and whistles. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_34", "text": " Finally, we evaluate the actual inference speed of ShuffleNet models on a mobile device with an ARM platform. Though ShuffleNets with larger group numbers (e.g. g=4𝑔4g=4 or g=8𝑔8g=8) usually have better performance, we find it less efficient in our current implementation. Empirically g=3𝑔3g=3 usually has a proper trade-off between accuracy and actual inference time. As shown in Table 8, three input resolutions are exploited for the test. 
Due to memory access and other overheads, we find every 4× theoretical complexity reduction usually results in ~2.6× actual speedup in our implementation. Nevertheless, compared with AlexNet our ShuffleNet 0.5× model still achieves ~13× actual speedup under comparable classification accuracy (the theoretical speedup is 18×), which is much faster than previous AlexNet-level models or speedup approaches such as (14, 16, 22, 42, 43, 38). ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" } ]
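The ShuffleNet passages above describe the channel shuffle operation as a reshape of the output channels into (g, n), a transpose, and a flattening back. The following is a minimal sketch of that operation, assuming PyTorch tensors in (N, C, H, W) layout; the function name and the toy dimensions are illustrative and not taken from the authors' code.

```python
import torch

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    """Shuffle channels across groups: reshape C into (g, n), transpose, flatten back."""
    n, c, h, w = x.shape
    assert c % groups == 0, "channel count must be divisible by the group number"
    x = x.view(n, groups, c // groups, h, w)  # split channels into (g, n)
    x = x.transpose(1, 2).contiguous()        # swap the group axis and the per-group axis
    return x.view(n, c, h, w)                 # flatten back to the original channel count

# Toy usage: 1 sample, 6 channels, 4x4 spatial map, 3 groups.
x = torch.arange(6 * 16, dtype=torch.float32).view(1, 6, 4, 4)
y = channel_shuffle(x, groups=3)
print(y.shape)  # torch.Size([1, 6, 4, 4]); channels are now interleaved across groups
```

Because the operation is only a reshape and a transpose, it adds no parameters and, as noted in the passages, remains differentiable, so it can sit between two grouped 1×1 convolutions during end-to-end training.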
How does the effectiveness of the residual representation method used in this paper compare to other methods such as Multigrid and hierarchical basis preconditioning in solving Partial Differential Equations?
The Multigrid method reformulates the system as subproblems at multiple scales, where each subproblem is responsible for the residual solution between a coarser and a finer scale; hierarchical basis preconditioning likewise relies on variables that represent residual vectors between two scales, and both kinds of residual-aware solvers converge much faster than standard solvers that are unaware of the residual nature of the solutions [21].
[ 21 ]
[ { "id": "1512.03385_all_0", "text": " Deep convolutional neural networks (22, 21) have led to a series of breakthroughs for image classification (21, 50, 40). Deep networks naturally integrate low/mid/high-level features and classifiers in an end-to-end multi-layer fashion, and the “levels” of features can be enriched by the number of stacked layers (depth). Recent evidence (41, 44) reveals that network depth is of crucial importance, and the leading results (41, 44, 13, 16) on the challenging ImageNet dataset all exploit “very deep” models, with a depth of sixteen to thirty . Many other nontrivial visual recognition tasks (8, 12, 7, 32, 27) have also greatly benefited from very deep models. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_1", "text": " Driven by the significance of depth, a question arises: Is learning better networks as easy as stacking more layers? An obstacle to answering this question was the notorious problem of vanishing/exploding gradients (1, 9), which hamper convergence from the beginning. This problem, however, has been largely addressed by normalized initialization (23, 9, 37, 13) and intermediate normalization layers , which enable networks with tens of layers to start converging for stochastic gradient descent (SGD) with backpropagation . ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_2", "text": " When deeper networks are able to start converging, a degradation problem has been exposed: with the network depth increasing, accuracy gets saturated (which might be unsurprising) and then degrades rapidly. Unexpectedly, such degradation is not caused by overfitting, and adding more layers to a suitably deep model leads to higher training error, as reported in (11, 42) and thoroughly verified by our experiments. Fig. 1 shows a typical example. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_3", "text": " The degradation (of training accuracy) indicates that not all systems are similarly easy to optimize. Let us consider a shallower architecture and its deeper counterpart that adds more layers onto it. There exists a solution by construction to the deeper model: the added layers are identity mapping, and the other layers are copied from the learned shallower model. The existence of this constructed solution indicates that a deeper model should produce no higher training error than its shallower counterpart. But experiments show that our current solvers on hand are unable to find solutions that are comparably good or better than the constructed solution (or unable to do so in feasible time). ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_4", "text": " In this paper, we address the degradation problem by introducing a deep residual learning framework. Instead of hoping each few stacked layers directly fit a desired underlying mapping, we explicitly let these layers fit a residual mapping. Formally, denoting the desired underlying mapping as ℋ​(𝐱)ℋ𝐱\\mathcal{H}(\\mathbf{x}), we let the stacked nonlinear layers fit another mapping of ℱ​(𝐱):=ℋ​(𝐱)−𝐱assignℱ𝐱ℋ𝐱𝐱\\mathcal{F}(\\mathbf{x}):=\\mathcal{H}(\\mathbf{x})-\\mathbf{x}. The original mapping is recast into ℱ​(𝐱)+𝐱ℱ𝐱𝐱\\mathcal{F}(\\mathbf{x})+\\mathbf{x}. We hypothesize that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. 
To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_5", "text": " The formulation of ℱ​(𝐱)+𝐱ℱ𝐱𝐱\\mathcal{F}(\\mathbf{x})+\\mathbf{x} can be realized by feedforward neural networks with “shortcut connections” (Fig. 2). Shortcut connections (2, 34, 49) are those skipping one or more layers. In our case, the shortcut connections simply perform identity mapping, and their outputs are added to the outputs of the stacked layers (Fig. 2). Identity shortcut connections add neither extra parameter nor computational complexity. The entire network can still be trained end-to-end by SGD with backpropagation, and can be easily implemented using common libraries (e.g., Caffe ) without modifying the solvers. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_6", "text": " We present comprehensive experiments on ImageNet to show the degradation problem and evaluate our method. We show that: 1) Our extremely deep residual nets are easy to optimize, but the counterpart “plain” nets (that simply stack layers) exhibit higher training error when the depth increases; 2) Our deep residual nets can easily enjoy accuracy gains from greatly increased depth, producing results substantially better than previous networks. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_7", "text": " Similar phenomena are also shown on the CIFAR-10 set , suggesting that the optimization difficulties and the effects of our method are not just akin to a particular dataset. We present successfully trained models on this dataset with over 100 layers, and explore models with over 1000 layers. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_8", "text": " On the ImageNet classification dataset , we obtain excellent results by extremely deep residual nets. Our 152-layer residual net is the deepest network ever presented on ImageNet, while still having lower complexity than VGG nets . Our ensemble has 3.57% top-5 error on the ImageNet test set, and won the 1st place in the ILSVRC 2015 classification competition. The extremely deep representations also have excellent generalization performance on other recognition tasks, and lead us to further win the 1st places on: ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation in ILSVRC & COCO 2015 competitions. This strong evidence shows that the residual learning principle is generic, and we expect that it is applicable in other vision and non-vision problems. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_9", "text": " Residual Representations. In image recognition, VLAD is a representation that encodes by the residual vectors with respect to a dictionary, and Fisher Vector can be formulated as a probabilistic version of VLAD. Both of them are powerful shallow representations for image retrieval and classification (4, 48). For vector quantization, encoding residual vectors is shown to be more effective than encoding original vectors. 
", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_10", "text": " In low-level vision and computer graphics, for solving Partial Differential Equations (PDEs), the widely used Multigrid method reformulates the system as subproblems at multiple scales, where each subproblem is responsible for the residual solution between a coarser and a finer scale. An alternative to Multigrid is hierarchical basis preconditioning (45, 46), which relies on variables that represent residual vectors between two scales. It has been shown (3, 45, 46) that these solvers converge much faster than standard solvers that are unaware of the residual nature of the solutions. These methods suggest that a good reformulation or preconditioning can simplify the optimization. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_11", "text": " Shortcut Connections. Practices and theories that lead to shortcut connections (2, 34, 49) have been studied for a long time. An early practice of training multi-layer perceptrons (MLPs) is to add a linear layer connected from the network input to the output (34, 49). In (44, 24), a few intermediate layers are directly connected to auxiliary classifiers for addressing vanishing/exploding gradients. The papers of (39, 38, 31, 47) propose methods for centering layer responses, gradients, and propagated errors, implemented by shortcut connections. In , an “inception” layer is composed of a shortcut branch and a few deeper branches. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_12", "text": " Concurrent with our work, “highway networks” (42, 43) present shortcut connections with gating functions . These gates are data-dependent and have parameters, in contrast to our identity shortcuts that are parameter-free. When a gated shortcut is “closed” (approaching zero), the layers in highway networks represent non-residual functions. On the contrary, our formulation always learns residual functions; our identity shortcuts are never closed, and all information is always passed through, with additional residual functions to be learned. In addition, highway networks have not demonstrated accuracy gains with extremely increased depth (e.g., over 100 layers). ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_13", "text": " Let us consider ℋ​(𝐱)ℋ𝐱\\mathcal{H}(\\mathbf{x}) as an underlying mapping to be fit by a few stacked layers (not necessarily the entire net), with 𝐱𝐱\\mathbf{x} denoting the inputs to the first of these layers. If one hypothesizes that multiple nonlinear layers can asymptotically approximate complicated functions222This hypothesis, however, is still an open question. See ., then it is equivalent to hypothesize that they can asymptotically approximate the residual functions, i.e., ℋ​(𝐱)−𝐱ℋ𝐱𝐱\\mathcal{H}(\\mathbf{x})-\\mathbf{x} (assuming that the input and output are of the same dimensions). So rather than expect stacked layers to approximate ℋ​(𝐱)ℋ𝐱\\mathcal{H}(\\mathbf{x}), we explicitly let these layers approximate a residual function ℱ​(𝐱):=ℋ​(𝐱)−𝐱assignℱ𝐱ℋ𝐱𝐱\\mathcal{F}(\\mathbf{x}):=\\mathcal{H}(\\mathbf{x})-\\mathbf{x}. The original function thus becomes ℱ​(𝐱)+𝐱ℱ𝐱𝐱\\mathcal{F}(\\mathbf{x})+\\mathbf{x}. Although both forms should be able to asymptotically approximate the desired functions (as hypothesized), the ease of learning might be different. 
", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_14", "text": " This reformulation is motivated by the counterintuitive phenomena about the degradation problem (Fig. 1, left). As we discussed in the introduction, if the added layers can be constructed as identity mappings, a deeper model should have training error no greater than its shallower counterpart. The degradation problem suggests that the solvers might have difficulties in approximating identity mappings by multiple nonlinear layers. With the residual learning reformulation, if identity mappings are optimal, the solvers may simply drive the weights of the multiple nonlinear layers toward zero to approach identity mappings. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_15", "text": " In real cases, it is unlikely that identity mappings are optimal, but our reformulation may help to precondition the problem. If the optimal function is closer to an identity mapping than to a zero mapping, it should be easier for the solver to find the perturbations with reference to an identity mapping, than to learn the function as a new one. We show by experiments (Fig. 7) that the learned residual functions in general have small responses, suggesting that identity mappings provide reasonable preconditioning. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_16", "text": " We adopt residual learning to every few stacked layers. A building block is shown in Fig. 2. Formally, in this paper we consider a building block defined as: 𝐲=ℱ​(𝐱,{Wi})+𝐱.𝐲ℱ𝐱subscript𝑊𝑖𝐱\\mathbf{y}=\\mathcal{F}(\\mathbf{x},\\{W_{i}\\})+\\mathbf{x}. (1) Here 𝐱𝐱\\mathbf{x} and 𝐲𝐲\\mathbf{y} are the input and output vectors of the layers considered. The function ℱ​(𝐱,{Wi})ℱ𝐱subscript𝑊𝑖\\mathcal{F}(\\mathbf{x},\\{W_{i}\\}) represents the residual mapping to be learned. For the example in Fig. 2 that has two layers, ℱ=W2​σ​(W1​𝐱)ℱsubscript𝑊2𝜎subscript𝑊1𝐱\\mathcal{F}=W_{2}\\sigma(W_{1}\\mathbf{x}) in which σ𝜎\\sigma denotes ReLU and the biases are omitted for simplifying notations. The operation ℱ+𝐱ℱ𝐱\\mathcal{F}+\\mathbf{x} is performed by a shortcut connection and element-wise addition. We adopt the second nonlinearity after the addition (i.e., σ​(𝐲)𝜎𝐲\\sigma(\\mathbf{y}), see Fig. 2). ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_17", "text": " The shortcut connections in Eqn.(1) introduce neither extra parameter nor computation complexity. This is not only attractive in practice but also important in our comparisons between plain and residual networks. We can fairly compare plain/residual networks that simultaneously have the same number of parameters, depth, width, and computational cost (except for the negligible element-wise addition). ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_18", "text": " The dimensions of 𝐱𝐱\\mathbf{x} and ℱℱ\\mathcal{F} must be equal in Eqn.(1). If this is not the case (e.g., when changing the input/output channels), we can perform a linear projection Wssubscript𝑊𝑠W_{s} by the shortcut connections to match the dimensions: 𝐲=ℱ​(𝐱,{Wi})+Ws​𝐱.𝐲ℱ𝐱subscript𝑊𝑖subscript𝑊𝑠𝐱\\mathbf{y}=\\mathcal{F}(\\mathbf{x},\\{W_{i}\\})+W_{s}\\mathbf{x}. (2) We can also use a square matrix Wssubscript𝑊𝑠W_{s} in Eqn.(1). 
But we will show by experiments that the identity mapping is sufficient for addressing the degradation problem and is economical, and thus W_s is only used when matching dimensions. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_19", "text": " The form of the residual function F is flexible. Experiments in this paper involve a function F that has two or three layers (Fig. 5), while more layers are possible. But if F has only a single layer, Eqn.(1) is similar to a linear layer: y = W_1 x + x, for which we have not observed advantages. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_20", "text": " We also note that although the above notations are about fully-connected layers for simplicity, they are applicable to convolutional layers. The function F(x, {W_i}) can represent multiple convolutional layers. The element-wise addition is performed on two feature maps, channel by channel. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_21", "text": " We have tested various plain/residual nets, and have observed consistent phenomena. To provide instances for discussion, we describe two models for ImageNet as follows. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_22", "text": " Plain Network. Our plain baselines (Fig. 3, middle) are mainly inspired by the philosophy of VGG nets (Fig. 3, left). The convolutional layers mostly have 3×3 filters and follow two simple design rules: (i) for the same output feature map size, the layers have the same number of filters; and (ii) if the feature map size is halved, the number of filters is doubled so as to preserve the time complexity per layer. We perform downsampling directly by convolutional layers that have a stride of 2. The network ends with a global average pooling layer and a 1000-way fully-connected layer with softmax. The total number of weighted layers is 34 in Fig. 3 (middle). ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_23", "text": " It is worth noticing that our model has fewer filters and lower complexity than VGG nets (Fig. 3, left). Our 34-layer baseline has 3.6 billion FLOPs (multiply-adds), which is only 18% of VGG-19 (19.6 billion FLOPs). ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_24", "text": " Residual Network. Based on the above plain network, we insert shortcut connections (Fig. 3, right) which turn the network into its counterpart residual version. The identity shortcuts (Eqn.(1)) can be directly used when the input and output are of the same dimensions (solid line shortcuts in Fig. 3). When the dimensions increase (dotted line shortcuts in Fig. 3), we consider two options: (A) The shortcut still performs identity mapping, with extra zero entries padded for increasing dimensions. This option introduces no extra parameter; (B) The projection shortcut in Eqn.(2) is used to match dimensions (done by 1×1 convolutions). For both options, when the shortcuts go across feature maps of two sizes, they are performed with a stride of 2. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_25", "text": " Our implementation for ImageNet follows the practice in (21, 41). 
The image is resized with its shorter side randomly sampled in (256,480)256480(256,480) for scale augmentation . A 224×\\times224 crop is randomly sampled from an image or its horizontal flip, with the per-pixel mean subtracted . The standard color augmentation in is used. We adopt batch normalization (BN) right after each convolution and before activation, following . We initialize the weights as in and train all plain/residual nets from scratch. We use SGD with a mini-batch size of 256. The learning rate starts from 0.1 and is divided by 10 when the error plateaus, and the models are trained for up to 60×10460superscript10460\\times 10^{4} iterations. We use a weight decay of 0.0001 and a momentum of 0.9. We do not use dropout , following the practice in . ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_26", "text": " In testing, for comparison studies we adopt the standard 10-crop testing . For best results, we adopt the fully-convolutional form as in (41, 13), and average the scores at multiple scales (images are resized such that the shorter side is in {224,256,384,480,640}224256384480640\\{224,256,384,480,640\\}). ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_27", "text": " We evaluate our method on the ImageNet 2012 classification dataset that consists of 1000 classes. The models are trained on the 1.28 million training images, and evaluated on the 50k validation images. We also obtain a final result on the 100k test images, reported by the test server. We evaluate both top-1 and top-5 error rates. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_28", "text": " Plain Networks. We first evaluate 18-layer and 34-layer plain nets. The 34-layer plain net is in Fig. 3 (middle). The 18-layer plain net is of a similar form. See Table 1 for detailed architectures. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_29", "text": " The results in Table 2 show that the deeper 34-layer plain net has higher validation error than the shallower 18-layer plain net. To reveal the reasons, in Fig. 4 (left) we compare their training/validation errors during the training procedure. We have observed the degradation problem - the 34-layer plain net has higher training error throughout the whole training procedure, even though the solution space of the 18-layer plain network is a subspace of that of the 34-layer one. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_30", "text": " We argue that this optimization difficulty is unlikely to be caused by vanishing gradients. These plain networks are trained with BN , which ensures forward propagated signals to have non-zero variances. We also verify that the backward propagated gradients exhibit healthy norms with BN. So neither forward nor backward signals vanish. In fact, the 34-layer plain net is still able to achieve competitive accuracy (Table 3), suggesting that the solver works to some extent. We conjecture that the deep plain nets may have exponentially low convergence rates, which impact the reducing of the training error333We have experimented with more training iterations (3×\\times) and still observed the degradation problem, suggesting that this problem cannot be feasibly addressed by simply using more iterations.. The reason for such optimization difficulties will be studied in the future. 
", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_31", "text": " Residual Networks. Next we evaluate 18-layer and 34-layer residual nets (ResNets). The baseline architectures are the same as the above plain nets, expect that a shortcut connection is added to each pair of 3×\\times3 filters as in Fig. 3 (right). In the first comparison (Table 2 and Fig. 4 right), we use identity mapping for all shortcuts and zero-padding for increasing dimensions (option A). So they have no extra parameter compared to the plain counterparts. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_32", "text": " We have three major observations from Table 2 and Fig. 4. First, the situation is reversed with residual learning – the 34-layer ResNet is better than the 18-layer ResNet (by 2.8%). More importantly, the 34-layer ResNet exhibits considerably lower training error and is generalizable to the validation data. This indicates that the degradation problem is well addressed in this setting and we manage to obtain accuracy gains from increased depth. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_33", "text": " Second, compared to its plain counterpart, the 34-layer ResNet reduces the top-1 error by 3.5% (Table 2), resulting from the successfully reduced training error (Fig. 4 right vs. left). This comparison verifies the effectiveness of residual learning on extremely deep systems. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_34", "text": " Last, we also note that the 18-layer plain/residual nets are comparably accurate (Table 2), but the 18-layer ResNet converges faster (Fig. 4 right vs. left). When the net is “not overly deep” (18 layers here), the current SGD solver is still able to find good solutions to the plain net. In this case, the ResNet eases the optimization by providing faster convergence at the early stage. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_35", "text": " Identity vs. Projection Shortcuts. We have shown that parameter-free, identity shortcuts help with training. Next we investigate projection shortcuts (Eqn.(2)). In Table 3 we compare three options: (A) zero-padding shortcuts are used for increasing dimensions, and all shortcuts are parameter-free (the same as Table 2 and Fig. 4 right); (B) projection shortcuts are used for increasing dimensions, and other shortcuts are identity; and (C) all shortcuts are projections. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_36", "text": " Table 3 shows that all three options are considerably better than the plain counterpart. B is slightly better than A. We argue that this is because the zero-padded dimensions in A indeed have no residual learning. C is marginally better than B, and we attribute this to the extra parameters introduced by many (thirteen) projection shortcuts. But the small differences among A/B/C indicate that projection shortcuts are not essential for addressing the degradation problem. So we do not use option C in the rest of this paper, to reduce memory/time complexity and model sizes. Identity shortcuts are particularly important for not increasing the complexity of the bottleneck architectures that are introduced below. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_37", "text": " Deeper Bottleneck Architectures. Next we describe our deeper nets for ImageNet. 
Because of concerns on the training time that we can afford, we modify the building block as a bottleneck design444Deeper non-bottleneck ResNets (e.g., Fig. 5 left) also gain accuracy from increased depth (as shown on CIFAR-10), but are not as economical as the bottleneck ResNets. So the usage of bottleneck designs is mainly due to practical considerations. We further note that the degradation problem of plain nets is also witnessed for the bottleneck designs.. For each residual function ℱℱ\\mathcal{F}, we use a stack of 3 layers instead of 2 (Fig. 5). The three layers are 1×\\times1, 3×\\times3, and 1×\\times1 convolutions, where the 1×\\times1 layers are responsible for reducing and then increasing (restoring) dimensions, leaving the 3×\\times3 layer a bottleneck with smaller input/output dimensions. Fig. 5 shows an example, where both designs have similar time complexity. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_38", "text": " The parameter-free identity shortcuts are particularly important for the bottleneck architectures. If the identity shortcut in Fig. 5 (right) is replaced with projection, one can show that the time complexity and model size are doubled, as the shortcut is connected to the two high-dimensional ends. So identity shortcuts lead to more efficient models for the bottleneck designs. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_39", "text": " 50-layer ResNet: We replace each 2-layer block in the 34-layer net with this 3-layer bottleneck block, resulting in a 50-layer ResNet (Table 1). We use option B for increasing dimensions. This model has 3.8 billion FLOPs. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_40", "text": " 101-layer and 152-layer ResNets: We construct 101-layer and 152-layer ResNets by using more 3-layer blocks (Table 1). Remarkably, although the depth is significantly increased, the 152-layer ResNet (11.3 billion FLOPs) still has lower complexity than VGG-16/19 nets (15.3/19.6 billion FLOPs). ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_41", "text": " The 50/101/152-layer ResNets are more accurate than the 34-layer ones by considerable margins (Table 3 and 5). We do not observe the degradation problem and thus enjoy significant accuracy gains from considerably increased depth. The benefits of depth are witnessed for all evaluation metrics (Table 3 and 5). ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_42", "text": " Comparisons with State-of-the-art Methods. In Table 5 we compare with the previous best single-model results. Our baseline 34-layer ResNets have achieved very competitive accuracy. Our 152-layer ResNet has a single-model top-5 validation error of 4.49%. This single-model result outperforms all previous ensemble results (Table 5). We combine six models of different depth to form an ensemble (only with two 152-layer ones at the time of submitting). This leads to 3.57% top-5 error on the test set (Table 5). This entry won the 1st place in ILSVRC 2015. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_43", "text": " We conducted more studies on the CIFAR-10 dataset , which consists of 50k training images and 10k testing images in 10 classes. We present experiments trained on the training set and evaluated on the test set. 
Our focus is on the behaviors of extremely deep networks, but not on pushing the state-of-the-art results, so we intentionally use simple architectures as follows. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_44", "text": " The plain/residual architectures follow the form in Fig. 3 (middle/right). The network inputs are 32×32 images, with the per-pixel mean subtracted. The first layer is 3×3 convolutions. Then we use a stack of 6n layers with 3×3 convolutions on the feature maps of sizes {32, 16, 8} respectively, with 2n layers for each feature map size. The numbers of filters are {16, 32, 64} respectively. The subsampling is performed by convolutions with a stride of 2. The network ends with a global average pooling, a 10-way fully-connected layer, and softmax. There are totally 6n+2 stacked weighted layers. The following table summarizes the architecture: output map size 32×32 / 16×16 / 8×8; # layers 1+2n / 2n / 2n; # filters 16 / 32 / 64. When shortcut connections are used, they are connected to the pairs of 3×3 layers (totally 3n shortcuts). On this dataset we use identity shortcuts in all cases (i.e., option A), so our residual models have exactly the same depth, width, and number of parameters as the plain counterparts. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_45", "text": " We use a weight decay of 0.0001 and momentum of 0.9, and adopt the weight initialization in and BN but with no dropout. These models are trained with a mini-batch size of 128 on two GPUs. We start with a learning rate of 0.1, divide it by 10 at 32k and 48k iterations, and terminate training at 64k iterations, which is determined on a 45k/5k train/val split. We follow the simple data augmentation in for training: 4 pixels are padded on each side, and a 32×32 crop is randomly sampled from the padded image or its horizontal flip. For testing, we only evaluate the single view of the original 32×32 image. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_46", "text": " We compare n = {3, 5, 7, 9}, leading to 20, 32, 44, and 56-layer networks. Fig. 6 (left) shows the behaviors of the plain nets. The deep plain nets suffer from increased depth, and exhibit higher training error when going deeper. This phenomenon is similar to that on ImageNet (Fig. 4, left) and on MNIST (see ), suggesting that such an optimization difficulty is a fundamental problem. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_47", "text": " Fig. 6 (middle) shows the behaviors of ResNets. Also similar to the ImageNet cases (Fig. 4, right), our ResNets manage to overcome the optimization difficulty and demonstrate accuracy gains when the depth increases. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_48", "text": " We further explore n = 18 that leads to a 110-layer ResNet. In this case, we find that the initial learning rate of 0.1 is slightly too large to start converging (with an initial learning rate of 0.1, it starts converging (<90% error) after several epochs, but still reaches similar accuracy). So we use 0.01 to warm up the training until the training error is below 80% (about 400 iterations), and then go back to 0.1 and continue training. The rest of the learning schedule is as done previously. 
This 110-layer network converges well (Fig. 6, middle). It has fewer parameters than other deep and thin networks such as FitNet and Highway (Table 6), yet is among the state-of-the-art results (6.43%, Table 6). ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_49", "text": " Analysis of Layer Responses. Fig. 7 shows the standard deviations (std) of the layer responses. The responses are the outputs of each 3×\\times3 layer, after BN and before other nonlinearity (ReLU/addition). For ResNets, this analysis reveals the response strength of the residual functions. Fig. 7 shows that ResNets have generally smaller responses than their plain counterparts. These results support our basic motivation (Sec.3.1) that the residual functions might be generally closer to zero than the non-residual functions. We also notice that the deeper ResNet has smaller magnitudes of responses, as evidenced by the comparisons among ResNet-20, 56, and 110 in Fig. 7. When there are more layers, an individual layer of ResNets tends to modify the signal less. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_50", "text": " Exploring Over 1000 layers. We explore an aggressively deep model of over 1000 layers. We set n=200𝑛200n=200 that leads to a 1202-layer network, which is trained as described above. Our method shows no optimization difficulty, and this 103superscript10310^{3}-layer network is able to achieve training error <<0.1% (Fig. 6, right). Its test error is still fairly good (7.93%, Table 6). ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_51", "text": " But there are still open problems on such aggressively deep models. The testing result of this 1202-layer network is worse than that of our 110-layer network, although both have similar training error. We argue that this is because of overfitting. The 1202-layer network may be unnecessarily large (19.4M) for this small dataset. Strong regularization such as maxout or dropout is applied to obtain the best results ((10, 25, 24, 35)) on this dataset. In this paper, we use no maxout/dropout and just simply impose regularization via deep and thin architectures by design, without distracting from the focus on the difficulties of optimization. But combining with stronger regularization may improve results, which we will study in the future. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_52", "text": " Our method has good generalization performance on other recognition tasks. Table 8 and  8 show the object detection baseline results on PASCAL VOC 2007 and 2012 and COCO . We adopt Faster R-CNN as the detection method. Here we are interested in the improvements of replacing VGG-16 with ResNet-101. The detection implementation (see appendix) of using both models is the same, so the gains can only be attributed to better networks. Most remarkably, on the challenging COCO dataset we obtain a 6.0% increase in COCO’s standard metric (mAP@(.5, .95)), which is a 28% relative improvement. This gain is solely due to the learned representations. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_53", "text": " Based on deep residual nets, we won the 1st places in several tracks in ILSVRC & COCO 2015 competitions: ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. The details are in the appendix. ", "title": "Deep Residual Learning for Image Recognition" } ]
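The ResNet passages above define the building block as y = F(x, {W_i}) + x with an identity shortcut and the second nonlinearity applied after the addition. Below is a minimal sketch of such a two-layer basic block in PyTorch; the fixed channel count and the exact BN/ReLU placement are simplifying assumptions for illustration, not the paper's full ImageNet configuration.

```python
import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """Two 3x3 conv layers as the residual function F, plus an identity shortcut."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.bn1(self.conv1(x)))  # first layer of F
        out = self.bn2(self.conv2(out))           # second layer of F (no ReLU yet)
        return self.relu(out + x)                 # identity shortcut, then the nonlinearity

# Toy usage: a 64-channel feature map keeps its shape through the block.
block = BasicResidualBlock(64)
print(block(torch.randn(2, 64, 8, 8)).shape)  # torch.Size([2, 64, 8, 8])
```

When the dimensions change, the passages describe either zero-padding the shortcut or a 1×1 projection (Eqn.(2)); this sketch covers only the identity case.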
Is the extended search strategy beneficial, or does the gain simply come from the modified search space?
A randomly searched network on the same supernet could not outperform the network found with Single-Path NAS [73]. Thus the gain does not come from the modified search space alone, and the search strategy itself was beneficial [74].
[ 73, 74 ]
[ { "id": "2009.02009_all_0", "text": " As there are growing needs of deep learning applications based on convolutional neural network(CNN) in embedded systems, improving the accuracy of CNNs under a given set of constraints on latency and energy consumption has brought keen interest to researchers as a challenging problem in various areas. A popular hardware solution is to develop a hardware accelerator, called neural processing unit (NPU), that achieves higher performance per watt than CPUs or GPUs. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_1", "text": " For a given hardware platform, several software techniques have been proposed to accelerate CNNs by approximate computing since deep learning applications can tolerate a certain range of computation inaccuracy. Some examples in this software approach are filter pruning (Li et al., 2016), quantization (Park et al., 2017), low-rank approximation (Kim et al., 2015). Accelerating CNNs is helpful to improve the accuracy by running a more compute-intensive CNN with higher accuracy within a given time budget. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_2", "text": " On the other hand, various algorithmic solutions have been proposed to improve the CNN architecture by introducing new operations, optimizing the hyper-parameters, or searching for better network architecture. New operations such as depth-wise convolution(DWConv) (Chollet, 2017) and mobile inverted bottleneck (MBConv) (Sandler et al., 2018) have been developed to replace the regular full convolution. Recently, automated neural architecture search (NAS) emerges as the default technique to find a CNN architecture with higher accuracy than manually-designed architectures, particularly image classification. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_3", "text": " A NAS technique explores a predefined search space and estimates the performance for each candidate architecture to find an optimal one with the highest accuracy under a given latency constraint. Thus there are three factors that affect the performance of NAS, as shown in Figure 1: search space, search strategy, and performance estimation. The search space of a NAS technique is usually restricted by a supernet that defines the topology of the largest network to explore. Since the performance of a network depends on the hardware platform, the NAS technique needs to be customized to a given hardware platform. While numerous NAS techniques have been proposed with various search strategies recently, their assumed hardware platforms are mostly GPUs. In this paper, we present a customized NAS technique for an NPU, which produces a CNN architecture with a better accuracy-latency tradeoff than existing models. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_4", "text": " One of the most closely related work is the recently proposed NAS technique tailored for Google’s Edge-TPU (Gupta and Akin, 2020). While MBConv is widely used for GPU-aware NAS techniques, they prefer to use a single full convolution by fusing expansion layer and DWConv layer in some parts of the network, observing that the Edge-TPU runs the fused full convolution faster even though the required number of MAC (multiply-accumulate) operations is much larger. 
It confirms that the number of MAC operations is not a proper measure of latency, and platform-specific performance estimation is required. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_5", "text": " Since an NPU is much faster than a GPU, it enables us to explore the wider search space for NAS under a given latency constraint. Since there are many factors to define the search space, such as the number of layers, channels, kernel sizes, and so on, the search space grows exponentially as the allowed computation complexity grows. Hence, reducing the search space, as well as the search time, is very challenging for NPU-aware NAS techniques. While the aforementioned work for Google’s Edge TPU trains each architecture candidate from scratch to estimate the performance, it is not computationally efficient. In contrast, we adopt a fast differentiable hardware-aware One-Shot NAS, called Single-Path NAS (Stamoulis et al., 2019), in order to reduce the search time. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_6", "text": " Figure 2 shows an overview of the proposed NAS methodology that consists of three steps. In the first step, we change the supernet structure of the Single-Path NAS, which has a hierarchical structure based on MobileNetV2 (Sandler et al., 2018): A supernet structure consists of a series of stages that contain a series of blocks containing an MBConv micro-architecture inside. Since the network accuracy depends on the supernet structure, we make two extensions on the supernet structure to widen the search space. First, we allow stages to have a different number of blocks, called depth of the stage, considering the effect of stage depth on the accuracy and the latency. Second, we add parallel layers with different kernel sizes in each block, adopting the idea of mixed depthwise convolution (Tan and Le, 2019b) (MixConv). ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_7", "text": " With the extended supernet structure, we apply the Single-Path NAS, which is also extended to support the extended supernet structure. In this step, we assume a shorter latency constraint than the required to reduce the search space and the search time. The last step is to scale up the baseline CNN adopting the compound scaling technique proposed in  (Tan and Le, 2019a) until the latency constraint is met. The proposed NAS methodology is named as S3NAS since it consists of 3 steps: Supernet design, SinglePath NAS, and Scaling and post-processing. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_8", "text": " For accurate latency estimation, an analytical latency estimator is devised, based on a cycle-level NPU simulator that runs an entire CNN considering the memory access overhead accurately. Since the NPU assumed in this paper can execute depth-wise separable convolution (DWConv), squeeze-and-excitation (SE), and h-swish activation function efficiently, the proposed supernet prefers DWConv to regular convolution. Observing that the accuracy is improved by around 1% if SE and h-swish activation function are used, we add a post-processing phase after a CNN network is found by NAS to add SE layers and to replace ReLU to h-swish activation function. 
", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_9", "text": " Experiments show that the proposed NAS technique could improve the accuracy-latency tradeoff over existing SoTA CNN models. Our best model achieves 82.72% top-1 accuracy on ImageNet with 11.66ms latency without any special data augmentation. Note that the latency is estimated by cycle-accurate simulation. For a fair comparison with the related work, the latency of each compared network is also estimated with the same simulator. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_10", "text": " After an automated NAS technique based on reinforcement learning successfully found a better CNN architecture than manually-designed architectures (Zoph and Le, 2016), extensive research has been conducted to develop various NAS techniques based on reinforcement learning (Zoph et al., 2018; Tan et al., 2019). However, these NAS techniques are computationally intensive because they train each candidate architectures from scratch to estimate the goodness of it. Thus, one-shot neural architecture search approach (Pham et al., 2018) was introduced to reduce the search cost. In this approach, an over-parameterized super-model network is defined, and architecture search is performed by parameter optimization to reduce the complexity of the network. Gradient-based differentiable search has gained increasing popularity, and various NAS techniques have been proposed with different super-models and hyper-parameters (Pham et al., 2018; Guo et al., 2019; Chu et al., 2019; Liu et al., 2018; Cai et al., 2018). ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_11", "text": " Among diverse techniques to decrease the search cost, Single-Path NAS (Stamoulis et al., 2019) was recently proposed to find a good architecture faster than the existing differentiable NAS techniques. This technique is extended to broaden the search space by including the squeeze-and-excitation (SE) block in the search space (Stamoulis et al., 2020). Our work is grounded on the original Single-Path NAS technique. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_12", "text": " Finding a hardware-friendly neural architecture has been facilitated as NAS algorithm improved. MNASNet (Tan et al., 2019) added a latency term in the objective function to discover better architectures with a given latency constraint on their target hardware platform. EfficientNet (Tan and Le, 2019a), whose search method is similar to MNASNet, introduced a novel scaling method, called compound scaling, to find more accurate networks as the latency constraint or FLOPS increases. Instead of finding a network directly for a given long latency constraint, they scale up the depth and the width of a small network with shorter latency and the input image size in a balanced way. They could achieve a set of networks with state-of-the-art performance over a range of latency constraints. They removed SE blocks and swish activation function from their search space for hardware platforms that do not support them efficiently to name the resultant network as EfficientNet-lite. 
", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_13", "text": " While EfficientNet searches a set of networks over a range of latency constraints by scaling up, Once-For-All (Cai et al., 2019) network takes an opposite approach, scaling down. They first train a super-graph architecture by a novel method called progressive shrinking and search a sub-graph network that achieves good accuracy for a given latency constraint without re-training but cheap fine-tuning. They claim that a scaled-down network from the super-graph gives better accuracy than a network that is trained from scratch. They could find more accurate networks than EfficientNet for small latency constraints. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_14", "text": " To explore more efficient neural architectures on specific hardware, some NAS methods have proposed to define the design space of architecture exploration, tailored for the hardware platform. Gupta et al. (Gupta and Akin, 2020) devised a building block named fused inverted bottleneck convolution block and showed that this block is often more efficient than MBConv on their target NPU, Edge-TPU. They adopted compound scaling method to find high-performing architectures on Edge-TPU. Our work is closely related to this method. We devise a building block that consists of parallel DWConv layers with different kernel sizes, based on a preliminary experiment to find that it is better than the other alternative building blocks in terms of performance per latency (Tan and Le, 2019b). And we increase the search space by allowing stages to have a different number of blocks in the baseline supernet. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_15", "text": " A neural network typically consists of multiple stages, a sequence of blocks with the same number of output channels (width). There are studies on how to assign the number of blocks (depth) to each stage. Meng et al. (Meng et al., 2020) observed that the way of assigning depth to each stage affects the accuracy. Moreover, they argued that the good depth assignment of each stage could be inherited from the shallow ones as the total depth is increased, and proposed a layer-growing NAS method that could significantly reduce the search space. Furthermore, Radosavovic et al. (Radosavovic et al., 2020) discovered that among neural architectures with similar computational complexity, the ones whose stage width and depth have a quantized linear relationship tend to have higher accuracy. Based on similar observations, we apply this design principle to change the structure of the conventional One-Shot NAS supernet. In addition, we argue that placing more blocks in a stage with a larger width is beneficial. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_16", "text": " While the original DWConv block uses a single kernel size for depthwise convolution, mixing multiple kernel sizes for depthwise convolution was recently proposed, named as MixConv (Tan and Le, 2019b). Mixing multiple kernel sizes can be understood as having parallel branches inside a block. It is shown that MixConv is more efficient than ordinary DWConv (Tan and Le, 2019b). 
There exist some recent NAS methods (Mei et al., 2019; Chu et al., 2020) that also broaden their search space using DWConv with multiple kernel sizes to find better neural architectures. We adopt this approach in the supernet and formulate a differentiable latency model of this operation, enabling a latency-aware differentiable One-Shot NAS with MixConv. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_17", "text": " In this section, we will briefly review the Single-Path NAS technique and our target NPU. Before going further, we define some terminologies used in this paper, as shown in Figure 3. A neural architecture consists of stages at the top level. A stage consists of a sequence of blocks whose output feature maps have the same dimension. In the proposed supernet, a block is defined as MBConv that typically starts with 1×1 conv (expansion layer) and ends with 1×1 conv. Adopting the MixConv approach, the depthwise convolution layer consists of parallel superkernels whose kernel size will be determined during the NAS process. The width of block denotes the number of channels in the final output feature map of the block, and the width of stage is the width of the final block in the stage. We will call the total number of blocks starting from the very first block in the network up to the last block in a specific stage S, as the cumulative depth up to stage S. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_18", "text": " Differentiable NAS methods usually define architecture parameters to choose which convolution layer to use in the block, training each convolution layer independently. Single-Path NAS (Stamoulis et al., 2019) reduce the search cost by decreasing the number of trainable parameters by sharing the kernel weights between convolution layers. The key idea is designing an over-parameterized depthwise convolution kernel named superkernel, and letting each depthwise convolution kernel of candidate MBConvs directly inherit the weights of this superkernel. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_19", "text": " Let 𝐰k,esubscript𝐰𝑘𝑒\\mathbf{w}_{k,e} denote the depthwise convolution kernel of candidate MBConv with kernel size k and expansion ratio e (MBConvk,e). First, they introduce a large 𝐰5,6subscript𝐰56\\mathbf{w}_{5,6}, which is the DWConv kernel of MBConv5,6. Then, the inner core of 𝐰5,6subscript𝐰56\\mathbf{w}_{5,6} can be considered as 𝐰3,6subscript𝐰36\\mathbf{w}_{3,6}, a DWConv kernel of MBConv3,6. A superkernel containing these two kernel size options can be expressed as Figure 4: (1) 𝐰∗,6=𝐰3,6+𝟙​(use​kernel​size​ 5)⋅𝐰5\\3,6subscript𝐰6subscript𝐰36⋅1usekernelsize5subscript𝐰\\536\\mathbf{w}_{*,6}=\\mathbf{w}_{3,6}+\\mathbbm{1}(\\rm{use\\leavevmode\\nobreak\\ kernel\\leavevmode\\nobreak\\ size\\leavevmode\\nobreak\\ 5})\\cdot\\mathbf{w}_{5\\backslash 3,6} where 𝐰5\\3,esubscript𝐰\\53𝑒\\mathbf{w}_{5\\backslash 3,e} means the outer part, 𝐰5,e−𝐰3,esubscript𝐰5𝑒subscript𝐰3𝑒\\mathbf{w}_{5,e}-\\mathbf{w}_{3,e}. Next, they formulate conditions to determine the kernel size. They define a certain threshold value t𝑡t and compare the norm of the kernel weights with the threshold. If the norm of a subset weight is larger than the threshold, it remains in the supernet. To this end, Eq. 
(1) is changed as follows: (2) 𝐰∗,6​(tk=5)=𝐰3,6+𝟙​(∥𝐰5\\3,6∥2>tk=5)⋅𝐰5\\3,6subscript𝐰6subscript𝑡𝑘5subscript𝐰36⋅1superscriptdelimited-∥∥subscript𝐰\\5362subscript𝑡𝑘5subscript𝐰\\536\\mathbf{w}_{*,6}(t_{k=5})=\\mathbf{w}_{3,6}+\\mathbbm{1}(\\lVert\\mathbf{w}_{5\\backslash 3,6}\\rVert^{2}>t_{k=5})\\cdot\\mathbf{w}_{5\\backslash 3,6} ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_20", "text": " The threshold value is also trainable to be automatically chosen during training. To enable back-propagation, they relax 𝟙​(x>t)1𝑥𝑡\\mathbbm{1}(x>t) to σ​(x−t)𝜎𝑥𝑡\\sigma(x-t) when computing gradients. In addition, they optimize kernel weights and threshold values simultaneously. For a given tight search time, this method is shown to be more effective than the other methods (Stamoulis et al., 2020). ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_21", "text": " Moreover, we can vary the number of channels by varying the expansion ratio of each block: we can use only the first half channels of 𝐰5,6subscript𝐰56\\mathbf{w}_{5,6} and 𝐰3,6subscript𝐰36\\mathbf{w}_{3,6} as 𝐰5,3subscript𝐰53\\mathbf{w}_{5,3} and 𝐰3,3subscript𝐰33\\mathbf{w}_{3,3}, respectively. By defining another set of trainable thresholds, the following formula is defined to determine the expansion ratio: (3) 𝐰∗,∗​(te=3,te=6,tk=5)=𝟙​(∥𝐰∗,3​(tk=5)∥2>te=3)⋅𝐰∗,3​(tk=5)+𝟙​(∥𝐰∗,3​(tk=5)∥2>te=3)⋅𝟙​(∥𝐰∗,6\\3​(tk=5)∥2>te=6)⋅𝐰∗,6\\3​(tk=5)subscript𝐰subscript𝑡𝑒3subscript𝑡𝑒6subscript𝑡𝑘5⋅1superscriptdelimited-∥∥subscript𝐰3subscript𝑡𝑘52subscript𝑡𝑒3subscript𝐰3subscript𝑡𝑘5⋅⋅1superscriptdelimited-∥∥subscript𝐰3subscript𝑡𝑘52subscript𝑡𝑒31superscriptdelimited-∥∥subscript𝐰\\63subscript𝑡𝑘52subscript𝑡𝑒6subscript𝐰\\63subscript𝑡𝑘5\\mathbf{w}_{*,*}(t_{e=3},t_{e=6},t_{k=5})=\\mathbbm{1}(\\lVert\\mathbf{w}_{*,3}(t_{k=5})\\rVert^{2}>t_{e=3})\\cdot\\mathbf{w}_{*,3}(t_{k=5})+\\\\ \\mathbbm{1}(\\lVert\\mathbf{w}_{*,3}(t_{k=5})\\rVert^{2}>t_{e=3})\\cdot\\mathbbm{1}(\\lVert\\mathbf{w}_{*,6\\backslash 3}(t_{k=5})\\rVert^{2}>t_{e=6})\\cdot\\mathbf{w}_{*,6\\backslash 3}(t_{k=5}) where 𝐰k,6\\3subscript𝐰𝑘\\63\\mathbf{w}_{k,6\\backslash 3} means the remaining half of channels, 𝐰k,6−𝐰k,3subscript𝐰𝑘6subscript𝐰𝑘3\\mathbf{w}_{k,6}-\\mathbf{w}_{k,3}. Note that if te=3subscript𝑡𝑒3t_{e=3} is sufficiently large, all channels can be removed to make the block a plain skip connection. Thus, they replace the original depthwise convolution kernel of MBConv5,6 with 𝐰∗,∗subscript𝐰\\mathbf{w}_{*,*}, yielding a differentiable and searchable MBConv with respect to the kernel size and expansion ratio. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_22", "text": " They also design a differentiable latency-aware loss function to consider hardware latency in the search algorithm. 
To this end, they define a function to estimate latency as follows: ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_23", "text": " (4) Lel=𝟙(∥𝐰∗,3∥2>te=3)⋅(P5,3l+𝟙(∥𝐰∗,6\\3∥2>te=6)⋅(P5,6l−P5,3l))subscriptsuperscript𝐿𝑙𝑒⋅1superscriptdelimited-∥∥subscript𝐰32subscript𝑡𝑒3subscriptsuperscript𝑃𝑙53⋅1superscriptdelimited-∥∥subscript𝐰\\632subscript𝑡𝑒6subscriptsuperscript𝑃𝑙56subscriptsuperscript𝑃𝑙53\\begin{split}L^{l}_{e}=&\\mathbbm{1}(\\lVert\\mathbf{w}_{*,3}\\rVert^{2}>t_{e=3})\\cdot(P^{l}_{5,3}+\\\\ &\\mathbbm{1}(\\lVert\\mathbf{w}_{*,6\\backslash 3}\\rVert^{2}>t_{e=6})\\cdot(P^{l}_{5,6}-P^{l}_{5,3}))\\end{split} ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_24", "text": " (5) Ll=P3,6l/P5,6l⋅Lel+𝟙​(∥𝐰5\\3,6∥2>tk=5)⋅Lel⋅(1−P3,6l/P5,6l)superscript𝐿𝑙⋅subscriptsuperscript𝑃𝑙36subscriptsuperscript𝑃𝑙56subscriptsuperscript𝐿𝑙𝑒⋅1superscriptdelimited-∥∥subscript𝐰\\5362subscript𝑡𝑘5subscriptsuperscript𝐿𝑙𝑒1subscriptsuperscript𝑃𝑙36subscriptsuperscript𝑃𝑙56\\begin{split}L^{l}=&P^{l}_{3,6}/P^{l}_{5,6}\\cdot L^{l}_{e}+\\\\ &\\mathbbm{1}(\\lVert\\mathbf{w}_{5\\backslash 3,6}\\rVert^{2}>t_{k=5})\\cdot L^{l}_{e}\\cdot(1-P^{l}_{3,6}/P^{l}_{5,6})\\end{split} where Pk,elsubscriptsuperscript𝑃𝑙𝑘𝑒P^{l}_{k,e} is a profiled latency value for MBConvk,e for the l𝑙lth block in the supernet. Note that they used P3,6lsubscriptsuperscript𝑃𝑙36P^{l}_{3,6}, P5,3lsubscriptsuperscript𝑃𝑙53P^{l}_{5,3}, and P5,6lsubscriptsuperscript𝑃𝑙56P^{l}_{5,6} only to formulate Llsuperscript𝐿𝑙L^{l}, and the latency for MBConv3,3 is approximated using these values. Here is the latency-aware loss function designed: ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_25", "text": " (6) C​E+λ⋅l​o​g​(∑lLl)𝐶𝐸⋅𝜆𝑙𝑜𝑔subscript𝑙superscript𝐿𝑙CE+\\lambda\\cdot log(\\sum_{l}L^{l}) Finally, they search for a neural architecture in two phases. First, they train the supernet by randomly choosing one of the candidate subgraphs in each training step. In this phase, they use CrossEntropy loss only. Next, they enable latency-aware loss function and train the supernet with the loss function, to decide the threshold values. By doing this, they could get a high-quality neural architecture with only eight epochs of ImageNet training set.111In our implementation, we changed the probability of selecting each candidate MBConvs to be equal. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_26", "text": " Even though the proposed methodology can be applied to any type of NPU, the current implementation is made for an adder-tree type NPU, called MIDAP (Kang et al., 2019). It has a fully-pipelined micro-architecture that consists of separate hardware modules and memory modules for convolution, activation function, and various reduction operations. Since it enables us to make a fully static schedule of operations without resource contention in the data path, we can estimate the end-to-end latency of a CNN quite accurately analytically. Unexpected delay may incur from off-chip DRAM delay that is not fully hidden by double buffering. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_27", "text": " Another good feature of MIDAP is that it efficiently supports the following operations that would lower the MAC (multiply-accumulate) utilization in other NPUs that have many MAC units: pooling, DWConv, and squeeze-and-excitation (SE). 
For DWConv operation, it does not use an adder tree but an alternative hardware logic that consists of a set of individual accumulators connected to the multiply units. For pooling and SE operations, reduction logic is included in the pipeline. Note that MIDAP has not been implemented as a real hardware chip yet but as a virtual prototype with a cycle-accurate simulator. Thanks to the cycle-accurate simulator that considers the DRAM access contention and parametrized DRAM access delay, we could build an accurate analytical model for end-to-end latency estimation, based on the profiling result with the simulator. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_28", "text": " Inverted bottleneck with depth-wise convolution (MBConv) (Sandler et al., 2018) is a popular building block in recent mobile-friendly networks. However, it is not efficiently supported in existing NPUs that do not have specialized hardware units for DWConv (Gholami et al., 2018; Gupta and Akin, 2020). Thus Gupta et al. (Gupta and Akin, 2020) replaced an MBConv block with a fused building block that fuses an expansion layer and DWConv in MBConv into a single full convolution. Even though the fused block increases the number of multiplications significantly, it improves the MAC utilization larger so that the fused block is observed faster than MBConv on their target NPU, EdgeTPU. By adding this building block to their search space, they could successfully obtain different neural architectures for EdgeTPU from those for GPUs. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_29", "text": " Since DWConv is efficiently supported in MIDAP, however, the improvement of MAC utilization by fusing does not outweigh the increased computation complexity, which is observed in preliminary experiments. The experiment setup is similar to main experiment setup that will be explained in section 5.2. The experimental result is shown in Table 1. The latency constraint for fused block experiment is set to 7.0ms, while others are set to 2.15ms. In the combined experiment, we use the fused block in the 1st and the 2nd stages, and MBConv for the remaining stages since the latency gap between two building blocks is too high. As shown in the table, MBConv block shows the best tradeoff between accuracy and latency. Hence we prefer MBConv to the fused building block as the basic building block in the supernet for MIDAP. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_30", "text": " In this section, we explain the proposed S3NAS methodology that consists of three steps as displayed in Figure 2. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_31", "text": " The number of blocks is one of the key parameters in neural networks. It is observed that the total number of blocks affects the accuracy of neural architecture (He et al., 2016; Tan and Le, 2019a). In conventional One-Shot NAS methods, each stage in the supernet has the same number of blocks (Cai et al., 2018; Stamoulis et al., 2019; Wu et al., 2019). On the other hand, some recent studies (Meng et al., 2020; Radosavovic et al., 2020) report that the way of assigning the number of blocks in each stage has a noticeable impact on the accuracy, even with the same number of blocks in total. Hence we allow stages in the supernet to have a different number of blocks. 
", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_32", "text": " We investigate the impact of assigning the number of blocks in the supernet with another preliminary experiment. We construct a network based on MobileNetV2, which has four blocks in every stage, and observe the change of accuracy as we reduce two blocks in a different stage in each experiment. Figure 5 shows that MBConvs with larger width has more impact on accuracy. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_33", "text": " As the number of multiplications in a DWConv is W×H×C×K2𝑊𝐻𝐶superscript𝐾2W\\times H\\times C\\times K^{2}, the later stage of DWConv tends to have shorter latency since the reduction of H×W𝐻𝑊H\\times W is larger than the increase of C𝐶C. Thus the impact on the latency by increasing the number of blocks in a later stage is not significant as displayed in Figure 5. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_34", "text": " Thus, we place more blocks to stages with larger width in the supernet, making the cumulative depth up to a specific stage is proportional to the width of the stage, which is similar to PyramidNet (Han et al., 2017). A recent study (Radosavovic et al., 2020) also claims that neural architectures with a linear relationship between the cumulative depth and the width tend to have higher accuracy with a similar amount of computation complexity. Our experiment shows that our modification to supernet enhances the efficiency of the search result in terms of accuracy as well as latency (Table 4). ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_35", "text": " Another feature of the proposed supernet is to use mixed convolution (MixConv) that mixes different kernel sizes in the depth-wise convolution layer (Tan and Le, 2019b). Some recent NAS methods (Mei et al., 2019; Chu et al., 2020) also broaden their search space using DWConv with various kernel sizes and could find better neural architectures. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_36", "text": " Figure 6 depicts our building block structure. This block starts and ends with 1×1 convolution, with N𝑁N searchable superkernels in the middle. Each searchable superkernel is designed similarly to Eq. (3), while we may use different threshold values in each superkernel. The kernel sizes and expansion ratios are selected among predetermined values. If the j𝑗j-th searchable superkernel chooses an expansion ratio ejsubscript𝑒𝑗e_{j}, the j𝑗j-th kernel has ejsubscript𝑒𝑗e_{j} times more channels than the first 1×1 convolution. Compared with the original MixConv suggested in (Tan and Le, 2019b), the proposed building block supports more diverse combinations of kernel sizes and expansion ratios. It enhances the efficiency of search results on our target NPU (Table 5). ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_37", "text": " We finish this subsection by highlighting the merit of Single-Path NAS on building a MixConv-based differentiable NAS. Conventional multi-path NAS methods would have difficulties when adding inverted bottleneck convolution with MixConv to their search space. 
Since the number of possible choices of such blocks grows proportionally to the partition number, multi-path NAS methods would introduce a significant increase in memory requirements and the search time. On the contrary, MixConv can be efficiently supported in Single-Path NAS, as explained below. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_38", "text": " We use a different latency estimation model, and a loss formula from the original SinglePath NAS technique explained in section 3.1. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_39", "text": " Suppose we concatenate N𝑁N searchable superkernels to build a MixConv-based building block, and let k→=(k1,⋯,kN),e→=(e1,⋯,eN)formulae-sequence→𝑘subscript𝑘1⋯subscript𝑘𝑁→𝑒subscript𝑒1⋯subscript𝑒𝑁\\vec{k}=(k_{1},\\cdots,k_{N}),\\vec{e}=(e_{1},\\cdots,e_{N}) where kj,ejsubscript𝑘𝑗subscript𝑒𝑗k_{j},e_{j} denote the kernel size and the expansion ratio of the j𝑗jth searchable superkernel. The estimated latency of a DWConv operation depends on the kernel size and the expansion ratio. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_40", "text": " For latency formulation, we first define two condition variables, Fj,kjsubscript𝐹𝑗subscript𝑘𝑗F_{j,k_{j}} and Gj,ejsubscript𝐺𝑗subscript𝑒𝑗G_{j,e_{j}}, that denote whether the j𝑗jth searchable superkernel chooses the kernel size kjsubscript𝑘𝑗k_{j} and the expansion ratio ejsubscript𝑒𝑗e_{j}, respectively; For example, Fj,kjsubscript𝐹𝑗subscript𝑘𝑗F_{j,k_{j}} is 1 if and only if the j𝑗jth searchable superkernel chooses kjsubscript𝑘𝑗k_{j}, and 0 otherwise. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_41", "text": " Let κ1<⋯<κKsubscript𝜅1⋯subscript𝜅𝐾\\kappa_{1}<\\cdots<\\kappa_{K} be the candidate kernel sizes, and 0=ϵ1<⋯<ϵE0subscriptitalic-ϵ1⋯subscriptitalic-ϵ𝐸0=\\epsilon_{1}<\\cdots<\\epsilon_{E} denote the candidate expansion ratios of the j𝑗jth searchable superkernel, respectively. 
Suppose kj=κcsubscript𝑘𝑗subscript𝜅𝑐k_{j}=\\kappa_{c}, then Fj,kjsubscript𝐹𝑗subscript𝑘𝑗F_{j,k_{j}} can be formulated as follows: ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_42", "text": " (7) Fj,kj=(∏2≤i≤c𝟙​(∥𝐰j,κi\\κi−1,ϵE∥2>tj,κi))⋅fj,kj​, wherefj,kj={𝟙​(∥𝐰j,κc+1\\κc,ϵE∥2<tj,κc+1),if ​c<K1,if ​c=Ksubscript𝐹𝑗subscript𝑘𝑗⋅subscriptproduct2𝑖𝑐1superscriptdelimited-∥∥subscript𝐰𝑗\\subscript𝜅𝑖subscript𝜅𝑖1subscriptitalic-ϵ𝐸2subscript𝑡𝑗subscript𝜅𝑖subscript𝑓𝑗subscript𝑘𝑗, wheresubscript𝑓𝑗subscript𝑘𝑗cases1superscriptdelimited-∥∥subscript𝐰𝑗\\subscript𝜅𝑐1subscript𝜅𝑐subscriptitalic-ϵ𝐸2subscript𝑡𝑗subscript𝜅𝑐1if 𝑐𝐾1if 𝑐𝐾\\begin{split}F_{j,k_{j}}&=\\left(\\prod_{2\\leq i\\leq c}\\mathbbm{1}(\\lVert\\mathbf{w}_{j,\\kappa_{i}\\backslash\\kappa_{i-1},\\epsilon_{E}}\\rVert^{2}>t_{j,\\kappa_{i}})\\right)\\cdot f_{j,k_{j}}\\text{, where}\\\\ f_{j,k_{j}}&=\\begin{cases}\\mathbbm{1}(\\lVert\\mathbf{w}_{j,\\kappa_{c+1}\\backslash\\kappa_{c},\\epsilon_{E}}\\rVert^{2}<t_{j,\\kappa_{c+1}}),&\\text{if }c<K\\\\ 1,&\\text{if }c=K\\end{cases}\\end{split} ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_43", "text": " Figure 7 depicts an example of this formula when the j𝑗jth searchable superkernel that has four candidate kernel sizes κ1<⋯<κ4subscript𝜅1⋯subscript𝜅4\\kappa_{1}<\\cdots<\\kappa_{4} chooses κ2subscript𝜅2\\kappa_{2} as the kernel size: kj=κ2subscript𝑘𝑗subscript𝜅2k_{j}=\\kappa_{2}. It means that weight 𝐰j,κ1,ϵEsubscript𝐰𝑗subscript𝜅1subscriptitalic-ϵ𝐸\\mathbf{w}_{j,\\kappa_{1},\\epsilon_{E}} and 𝐰j,κ2\\κ1,ϵEsubscript𝐰𝑗\\subscript𝜅2subscript𝜅1subscriptitalic-ϵ𝐸\\mathbf{w}_{j,\\kappa_{2}\\backslash\\kappa_{1},\\epsilon_{E}} are used, but the remaining weights starting from 𝐰j,κ3\\κ2,ϵEsubscript𝐰𝑗\\subscript𝜅3subscript𝜅2subscriptitalic-ϵ𝐸\\mathbf{w}_{j,\\kappa_{3}\\backslash\\kappa_{2},\\epsilon_{E}} are not used. Since 𝐰j,κ1,ϵEsubscript𝐰𝑗subscript𝜅1subscriptitalic-ϵ𝐸\\mathbf{w}_{j,\\kappa_{1},\\epsilon_{E}} is always used, it is not included in the formula. To use 𝐰j,κ2\\κ1,ϵEsubscript𝐰𝑗\\subscript𝜅2subscript𝜅1subscriptitalic-ϵ𝐸\\mathbf{w}_{j,\\kappa_{2}\\backslash\\kappa_{1},\\epsilon_{E}}, the norm of it has to be larger than tj,κ2subscript𝑡𝑗subscript𝜅2t_{j,\\kappa_{2}} while the norm of 𝐰j,κ3\\κ2,ϵEsubscript𝐰𝑗\\subscript𝜅3subscript𝜅2subscriptitalic-ϵ𝐸\\mathbf{w}_{j,\\kappa_{3}\\backslash\\kappa_{2},\\epsilon_{E}} should not be larger than tj,κ3subscript𝑡𝑗subscript𝜅3t_{j,\\kappa_{3}} to avoid the use of larger kernel sizes. 
", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_44", "text": " We can formulate Gj,ejsubscript𝐺𝑗subscript𝑒𝑗G_{j,e_{j}} similarly: Gj,ejsubscript𝐺𝑗subscript𝑒𝑗\\displaystyle G_{j,e_{j}} =(∏2≤i≤d𝟙​(∥𝐰j,∗,ϵi\\ϵi−1∥2>tj,ϵi))⋅gj,ej​, whereabsent⋅subscriptproduct2𝑖𝑑1superscriptdelimited-∥∥subscript𝐰𝑗\\subscriptitalic-ϵ𝑖subscriptitalic-ϵ𝑖12subscript𝑡𝑗subscriptitalic-ϵ𝑖subscript𝑔𝑗subscript𝑒𝑗, where\\displaystyle=\\left(\\prod_{2\\leq i\\leq d}\\mathbbm{1}(\\lVert\\mathbf{w}_{j,*,\\epsilon_{i}\\backslash\\epsilon_{i-1}}\\rVert^{2}>t_{j,\\epsilon_{i}})\\right)\\cdot g_{j,e_{j}}\\text{, where} gj,ejsubscript𝑔𝑗subscript𝑒𝑗\\displaystyle g_{j,e_{j}} ={𝟙​(∥𝐰j,∗,ϵd+1\\ϵd∥2<tj,ϵd+1),if ​d<E1,if ​d=Eabsentcases1superscriptdelimited-∥∥subscript𝐰𝑗\\subscriptitalic-ϵ𝑑1subscriptitalic-ϵ𝑑2subscript𝑡𝑗subscriptitalic-ϵ𝑑1if 𝑑𝐸1if 𝑑𝐸\\displaystyle=\\begin{cases}\\mathbbm{1}(\\lVert\\mathbf{w}_{j,*,\\epsilon_{d+1}\\backslash\\epsilon_{d}}\\rVert^{2}<t_{j,\\epsilon_{d+1}}),&\\text{if }d<E\\\\ 1,&\\text{if }d=E\\end{cases} when ej=ϵdsubscript𝑒𝑗subscriptitalic-ϵ𝑑e_{j}=\\epsilon_{d}. Then the condition for a MixConv-based building block to choose k→,e→→𝑘→𝑒\\vec{k},\\vec{e} can be expressed as ∏jNFj,kj​Gj,ejsuperscriptsubscriptproduct𝑗𝑁subscript𝐹𝑗subscript𝑘𝑗subscript𝐺𝑗subscript𝑒𝑗\\prod_{j}^{N}F_{j,k_{j}}G_{j,e_{j}}. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_45", "text": " Now, the estimated latency of a single block is formulated as follows: (8) L=∑k→,e→(P​(k→,e→)​∏jNFj,kj​Gj,ej)𝐿subscript→𝑘→𝑒𝑃→𝑘→𝑒superscriptsubscriptproduct𝑗𝑁subscript𝐹𝑗subscript𝑘𝑗subscript𝐺𝑗subscript𝑒𝑗L=\\sum_{\\vec{k},\\vec{e}}(P(\\vec{k},\\vec{e})\\prod_{j}^{N}F_{j,k_{j}}G_{j,e_{j}}) where P​(k→,e→)𝑃→𝑘→𝑒P(\\vec{k},\\vec{e}) denotes the profiled latency value of a MixConv-based building block corresponding to k→,e→→𝑘→𝑒\\vec{k},\\vec{e}. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_46", "text": " Unlike the original Single-Path NAS that approximates the latency in Eq. (5) in some cases, we use the profiled latency value in all cases. Note that an expansion ratio can be zero, and if only one superkernel has a nonzero expansion ratio, the MixConv block is reduced to a plain MBConv block. Finally, we can estimate the latency by summing up these estimated latencies for all superkernels in the block, ∑L𝐿\\sum L. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_47", "text": " Since each superkernel is treated independently, some superkernels may have the same kernel size and expansion ratio. Then, even if two superkernel configurations express an equivalent block, as illustrated in Figure 8, they may have different estimated latency values, which is an artifact of the proposed profiling-based latency estimation method. To avoid this artifact, we enforce that there is only one kernel for each kernel size in the MixConv block. That is, we merge two kernels of the same size into one; For instance, the left MixConv is translated to the right MixConv in Figure 8 before latency estimation. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_48", "text": " Figure 9 shows the estimated latency and simulated latency of randomly generated 100 models on our search space. It validates the accuracy of the proposed latency model, whose mean absolute percentage error(MAPE) is about 0.16%. 
", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_49", "text": " The existing hardware-aware differentiable NAS methods mostly define some hyperparameters to balance between accuracy and latency, including SinglePath NAS, whose loss function is defined as Eq. (6). Since there is no information on the target latency in the loss function, in case there is a strict latency constraint, they have to pay additional search costs for the hyperparameters to let the final architecture have no larger latency than the constraint. In addition, this process needs to be repeated whenever the target latency is changed. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_50", "text": " We propose to modify the loss function to activate the latency-aware loss term only when the estimated latency is larger than the latency constraint as follows: (9) C​E+λ1⋅l​o​g​(1+λ2⋅R​e​L​U​((∑L)−T))𝐶𝐸⋅subscript𝜆1𝑙𝑜𝑔1⋅subscript𝜆2𝑅𝑒𝐿𝑈𝐿𝑇CE+\\lambda_{1}\\cdot log(1+\\lambda_{2}\\cdot ReLU((\\sum L)-T)) Although this is not a panacea, this modification significantly eases the search process, which will be discussed in section 5.2 with various experiments. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_51", "text": " In the second step, we intentionally use shorter latency to reduce the search space for the baseline network. After finding the baseline network with a shorter latency, we apply compound scaling to find an architecture with the final latency constraint. In this step, we conduct post-processing to add SE block and h-swish activation function if beneficial. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_52", "text": " It is well known that increasing depth (He et al., 2016), width (Zagoruyko and Komodakis, 2016), or input image size improves accuracy while it increases latency. However, if only one of these three factors is increased, the accuracy improvement is quickly saturated. Observing this fact, Tan et al. (Tan and Le, 2019a) proposed a compound scaling method that increases all three factors together. A scaling coefficient is defined for each factor. By judiciously assigning the scaling coefficients in a balanced fashion, they could improve the accuracy much larger than scaling a single factor only. Adopting this approach, we apply the compound scaling to the baseline architecture obtained in the previous step. Based on the ratio between the true latency constraint and the assumed latency constraint in the second step, we find the scaling coefficients considering the estimated latency increment. To keep the linear relationship between the width and cumulative depth, we use the same scaling coefficient for width and depth, differently from (Tan and Le, 2019a). Note that how to realize scaling depends on the baseline architecture. While the baseline architecture assumed in (Tan and Le, 2019a) has a series of identical blocks in each stage, a stage consists of heterogeneous blocks in our baseline architecture. Thus depth scaling is not realized by merely adding new blocks in each stage. We need to choose what types of blocks to add in each stage. We increase the number of blocks with more parameters first. To compute how many blocks to add in a stage, we multiply the depth of the stage by depth coefficient and round the multiplication result. Width scaling is applied to all blocks equally. 
Finally, we consider latency when we scale. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_53", "text": " In addition to compound scaling, we add two components in the post-processing step: h-swish activation function and squeeze-and-excitation (SE) block. A recent study (Park and Yoo, 2020) reports that SE and the h-swish activation function are no hurdles for 8-bit quantization. They could quantize a network with SE and h-swish without noticeable accuracy loss. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_54", "text": " Extensive studies have been conducted to find a better activation function than ReLU, and the swish activation function (Ramachandran et al., 2017) was found. Several neural networks (Tan and Le, 2019b; Mei et al., 2019; Tan and Le, 2019a) use swish activation function instead of ReLU to improve accuracy. Howard et al. (Howard et al., 2019) proposed a quantization-friendly version of the swish activation function called h-swish that has a similar impact on accuracy. So, we replace ReLU with h-swish (Howard et al., 2019) activation function. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_55", "text": " Squeeze-and-Excitation(SE) is a lightweight operation which is shown to be beneficial to accuracy (Hu et al., 2018). Figure 10 depicts the structure of a SE block. For a given input feature map, it first computes the importance of the feature channels a representative value for global spatial information of each feature channel by global average pooling. After such squeeze operation generates channel-wise statistics, excitation operation captures channel-wise dependencies by two cascaded fully-connected layers to produce activation values, which represents the importance of each feature channel. Finally, channel-wise multiplication is performed between the activation values induced by the excitation operation and the input feature map for each channel. SE block is used in many recent architectures (Tan and Le, 2019a; Howard et al., 2019; Radosavovic et al., 2020). By adding SE blocks to the baseline network, we also observe the accuracy improvement. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_56", "text": " Figure 11 depicts an example distribution of activation values produced by two different SE blocks for three different images. The authors of the original paper (Hu et al., 2018) conjectured that if such distribution from a SE block does not differ widely between image classes, the SE block is not important. Thus, after training, they obtained averaged activation values of a SE block over multiple images in the same class. They compared the distributions of the averaged values over different image classes. They observed that removing the SE blocks that have similar distributions over different image classes incurs only a marginal loss in accuracy. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_57", "text": " Inspired by this observation, we propose to remove SE blocks selectively to minimize the additional computation cost caused by SE blocks. We obtain activation values from a SE block for each input image and measure how the distribution of activation values varies over different input images. 
For each channel c, we calculate the standard deviation σcsubscript𝜎𝑐\\sigma_{c} of activation values over different images. If σcsubscript𝜎𝑐\\sigma_{c} is small in most channels, the activation values from the SE block does not differ much over images. Conceptually, it implies that the SE block does not help to discriminate further which channel is more influential. From the engineering perspective, it means that channel-wise multiplication of a SE block is similar to constant multiplication, which can be handled by the following convolutional layer. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_58", "text": " We define a metric as the average of standard deviation values σcsubscript𝜎𝑐\\sigma_{c} over all channels that represent the diverseness of the activation distribution over different images. If the metric value is small, we remove the SE block. For example, in Figure 11, our metric of the SE block on the left side has a value of 0.021, while the right side has a value of 0.118, more than 5x larger than the left side; The left side is a better candidate for SE block removal. When we remove SE blocks according to this metric, the accuracy is found to be similar, while the latency got shorter (Table 6). ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_59", "text": " We evaluate the proposed NAS technique for image classification with the ImageNet dataset. The current implementation is made for MIDAP (Kang et al., 2019) that can perform DWConv and SE operations efficiently so that MBConv is preferred to full 3-D convolution as the basic building block, as explained above. Latencies on the target NPU are obtained with the cycle-accurate simulator222https://github.com/cap-lab/MidapSim. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_60", "text": " A superkernel has two parameters to search: expansion ratio and kernel size. To limit the search space, we choose the expansion ratio among 0, 2, 4, and 6, and the kernel size between 3 and 5 when MBConv or full convolution is used as the building block. In the case of the MixConv-based building block, we use N𝑁N=3 superkenels whose expansion ratio is 0 or 2; The sum of the expansion ratio of three superkernels has the same range as the expansion ratio of a single MBConv block. To allow three superkernels to have different kernel sizes, we let one of three superkernels be able to have 7 as the kernel size. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_61", "text": " In the first phase of the neural architecture search, we train the supernet by randomly choosing one of the candidate subgraphs in each training step. We train the supernet for 8 epochs, with λ1=0subscript𝜆10\\lambda_{1}=0 in the loss function of Eq. 9, focusing only on the accuracy. We decrease the learning rate by 0.97 every 2.4 epochs, starting from 0.064. The other setting for network training is displayed in Table 4. Gradient clipping with a value of 10 is used in this phase. In the second phase, we set λ1=15,λ2=100formulae-sequencesubscript𝜆115subscript𝜆2100\\lambda_{1}=15,\\lambda_{2}=100 to consider latency in the loss function, and optimize the weights and threshold values of supernet for 2 epochs. After this second phase finishes, the final architecture topology is decided. 
", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_62", "text": " Next, we train the final architecture again to determine the filter weights for 350 epochs with the ImageNet again, using the same setting described in Table 4. Unlike the search phase, the learning rate is increased from 0 to 0.064 in the first 5 epochs, then decayed by 0.97 every 2.4 epochs. Since we observed that the batch size is critical to accuracy when using the EfficientNet training code, we use a large batch size. Both network architecture search and final training are conducted on Google Cloud TPUs. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_63", "text": " In the proposed NAS technique, two major extensions are made to the supernet, compared with the original SinglePath NAS technique. Table 3 shows the proposed supernet architecture with configuration parameters, block types and depths. It starts with a 7x7 convolution layer, followed by 5 stages that have a different number of blocks for feature extraction and 2 fully-connected networks for classification. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_64", "text": " The first extension is to allow stages to have a different number of blocks. To verify the goodness of this extension, we design two kinds of MBConv-based supernet with 20 blocks in total: a supernet with constant depth(baseline), a supernet with linear depth where the cumulative depth up to a specific stage is proportional to the width of the stage. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_65", "text": " As shown in Table 4, a supernet with linear depth outperforms a supernet with constant depth in terms of accuracy with similar latency. It confirms that this simple change of block assignment in supernet gives notable accuracy boost with the same latency constraint, without any additional optimization techniques. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_66", "text": " The second extension is to use multiple parallel superkernels in an MBConv block. To verify the benefit of it, we compare two different supernets with the same number of blocks in each stage. The accuracy and latency performance of the baseline supernet is the same as the previous experimental result shown in Table 4. Table 5 shows that the extended supernet with MixConv-based building blocks gives a better accuracy-latency tradeoff. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_67", "text": " We apply the proposed NAS method with the supernet architecture described above. The depth of 5 stages is set to 3,4,7,4,113474113,4,7,4,11, respectively. The latency constraint is set to 2.5 ms that corresponds to the latency of EfficientNet-B1 on our target NPU, MIDAP. Table 6 compares our search results with the state-of-the-art models: EdgeTPU (Gupta and Akin, 2020), EfficientNet (Tan and Le, 2019a), Once-For-All (Cai et al., 2019). The latency of the other models is obtained by running the network on the MIDAP cycle-accurate simulator. We compare the accuracy without quantization, assuming that quantization effects will be similar to all models. 
", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_68", "text": " As shown in Table 6, the baseline model, ours-M, found by the proposed NAS technique has higher accuracy than the other models on our target NPU; ours-M achieves more than 1.7% higher top-1 accuracy than EfficientNet-lite2 with similar latency. Moreover, it is 0.5% higher than EfficientNet-B1, even without using SE and h-swish activation function. Note that the number of parameters and the number of FLOPS in ours-M is larger than EfficientNet-B1. It implies that the complexity of the network is not a direct indicator of the end-to-end latency of the network. The end-to-end latency depends on the NPU architecture, and the proposed NAS technique could find a larger network with shorter latency by adding the latency factor to the loss function directly. The main benefit comes from different block assignment to stages. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_69", "text": " We improve the baseline network by adding the h-swish activation function and squeeze-and-excitation(SE) block to get the ours-M+ model. Figure 12 shows the topology of ours-M+ architecture in which the height of each block is proportional to the expansion ratio of the block. Compared with the baseline network, ours-M, we achieve around 1% accuracy boost with ours-M+, paying the cost of 16% latency increase. This model outperforms the other models, 0.5% higher accuracy and 14% faster than EfficientNet-B2. Since EfficientNet-B2 is too large to run with the default configuration on MIDAP, we increase the memory size for filter weights. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_70", "text": " Next, we applied compound scaling (Tan and Le, 2019a) to ours-M+ to obtain ours-L+ and ours-XL+. When we determine scaling coefficients, we keep the linear relationship between the cumulative depth and width of each stage, and scale the input image size more aggressively than (Tan and Le, 2019a). We make the number of filters to be multiples of 16 to maximize the MAC unit utilization on MIDAP. When we train our scaled model, we set the dropout ratio to 0.4, similar to EfficientNet-B4 training. The accuracy of ours-L+ is higher than EfficientNet-B3 and EfficientNet-lite4, while the accuracy of ours-XL+ is similar to EfficientNet-B4. Note that the difference between the searched network and the EfficientNet decreases as the network size increases. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_71", "text": " Finally, we selectively removed SE blocks from ours-XL+, resulting in ours-XL-rmSE+. We collected the activation values using randomly sampled 10K images from the training dataset and calculated the metric explained in Sec. 4.3.3. After removing SE blocks from ours-XL+ based on the metric, only about 60% of the blocks in the network have SE blocks. As a result, we could make the latency shorter, while the accuracy was slightly improved than ours-XL+. This model achieves 82.72% top-1 accuracy with only 11.66ms latency. It is much better than EfficientNet-EdgeTPU-L (Gupta and Akin, 2020) that achieves 80.62% FP32 top-1 accuracy with more than 20ms on EdgeTPU. Our architecture on MIDAP is about 2 times faster with 2.1% higher accuracy. 
", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_72", "text": " Finally, we compare the search time. Since the TPU is faster than GPU, we report the wall clock time and the estimated GPU time (in parenthesis) that is 10 times longer than the wall clock time in the last column of Table 6 Our method takes 3 hours, which is much faster than the other methods. Note that we compare the total time to get one architecture from scratch without trained weights. Once-For-All (Cai et al., 2019) would require only short fine-tuning time after a neural architecture is searched. In contrast, we need to train the network after a network architecture is found. It took 40 hours on TPUv3 to train ours-M+. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_73", "text": " While most NAS techniques are not compared with a random search method, the authors (Li and Talwalkar, 2019) reported that a random search method is highly competitive. So we conducted an experiment to compare the proposed NAS technique with two random search methods, exploring the same search space defined by the supernet structure of ours-M. First, we designed a simple random search method that has the similar time complexity of the proposed technique. In this method, we randomly generate 15 models having a similar latency with ours-M, from the same search space. Then we train each of them for 1 epoch with cosine learning rate decay. After evaluating each of them, we choose the architecture with the topmost top-1 accuracy and fully train it. In the second method, called random selection, we randomly generate 20 models having a similar latency with ours-M and train them fully and take the architecture with the highest top-1 accuracy. Since the random selection method performs search and training simultaneously, it is slower than the proposed technique by the number of randomly generated models. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_74", "text": " Comparison results are reported in Table 6. It is confirmed that both random selection and random search are quite competitive, but noticeably inferior to ours-M in terms of accuracy. In detail, the worst case of random selection showed 0.8% lower accuracy than ours-M. The best performance obtained from 20 randomly generated models is 79.19%, still lower than the accuracy of ours-M. Note that random search and random selection show similar performance that is no smaller than the other networks. It means that the search space defined by the supernet architecture has a more significant effect on the accuracy than the search method. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_75", "text": " There are two methods to find an architecture with a loose latency constraint. One is to use compound scaling that scales a small network with shorter latency, and the other is to search a network directly. To compare these two methods, we first scaled ours-M using the same scaling coefficients that we used to scale ours-M+ to ours-L+ and trained it. When conducting a direct search, we scaled the depth and width of the supernet and the input image size first and applied the proposed NAS technique for the scaled supernet. We used batch size 512 instead of 1024 during the architecture search due to the memory limitation of TPU. 
The comparison result is shown in Table 7 in terms of top-1 accuracy(%) and the latency on the target NPU(ms). Two results were similar while direct search needed 10 hours on TPUv3; It means that compound scaling is an effective method to find a large network fast. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_76", "text": " To examine how SE and h-swish impact accuracy individually, we compare four combinations as displayed in Table 8. The baseline is ours-M that does not use SE and h-swish activation function. Replacing ReLU with h-swish gives a marginal improvement on accuracy while adding SE blocks improves the accuracy noticeably. Adding both SE and h-swish activation function improves the accuracy by around 1%. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_77", "text": " In this work, we propose a fast NPU-aware NAS methodology extending the Single-Path NAS technique (Stamoulis et al., 2019). We modify the supernet architecture by varying the number of blocks in stages and adding mixed depthwise convolution (Tan and Le, 2019b) to the search space. By modifying the loss function to directly include the target latency estimated by a cycle-accurate simulator of the target NPU, we could find a better baseline architecture with a shorter latency than the latency constraint. Using a tight latency constraint, we can reduce the search space to find the baseline network fast. Afterward, we apply compound scaling to find a larger network than the baseline network, and add SE blocks and h-swish activation functions in the post-processing step. Through the proposed NAS methodology, we could obtain a network with 82.72% accuracy with 11.66ms latency on our target NPU, without special data augmentation in training. It dominates the existing network models on the target NPU. It confirms the importance of supernet architecture design for a given NPU and effectiveness of the three-step approach in the proposed NAS methodology: supernet design, SinglePath NAS with a tighter latency constraint, and compound scaling and post-processing. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" } ]
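To make the search mechanics referenced in the answer and contexts above concrete, here is a minimal PyTorch sketch of two ideas described in the retrieved S3NAS passages: the Single-Path NAS superkernel, where a larger kernel's outer ring is kept only if its weight norm exceeds a trainable threshold (the hard indicator is relaxed to a sigmoid so it is differentiable), and the latency-gated loss CE + lambda1 * log(1 + lambda2 * relu(L_est - T)), which only activates when the estimated latency exceeds the target T. This is an illustrative sketch, not the authors' implementation; shapes, the per-block latency profile, and the toy usage are assumptions (lambda1=15, lambda2=100, and the 2.15 ms target are taken from the passages above).

```python
# Minimal sketch (assumed shapes and latency profile) of a Single-Path NAS superkernel
# and a latency-gated loss, as described in the retrieved S3NAS contexts.
import torch
import torch.nn.functional as F


class SuperKernel(torch.nn.Module):
    """Depthwise superkernel with candidate kernel sizes 3x3 and 5x5."""

    def __init__(self, channels: int):
        super().__init__()
        self.w5 = torch.nn.Parameter(torch.randn(channels, 1, 5, 5) * 0.1)  # full 5x5 weights
        self.t_k5 = torch.nn.Parameter(torch.tensor(0.5))                   # trainable threshold

    def effective_kernel(self):
        # Split the 5x5 weights into the inner 3x3 core and the outer 5x5 ring.
        w3 = self.w5[:, :, 1:4, 1:4]
        mask = torch.ones_like(self.w5)
        mask[:, :, 1:4, 1:4] = 0.0
        outer = self.w5 * mask
        # Hard indicator 1(||outer||^2 > t) relaxed to a sigmoid; the paper relaxes it
        # only for the backward pass, here it is used in the forward pass too for simplicity.
        gate = torch.sigmoid(outer.pow(2).sum() - self.t_k5)
        padded_w3 = F.pad(w3, (1, 1, 1, 1))  # embed the 3x3 core in a 5x5 frame
        return padded_w3 + gate * outer, gate

    def forward(self, x):
        w, _ = self.effective_kernel()
        return F.conv2d(x, w, padding=2, groups=x.shape[1])  # depthwise convolution


def latency_gated_loss(logits, labels, est_latency, target_ms, lam1=15.0, lam2=100.0):
    """Cross-entropy plus a penalty that switches on only above the latency target."""
    ce = F.cross_entropy(logits, labels)
    penalty = torch.log1p(lam2 * F.relu(est_latency - target_ms))
    return ce + lam1 * penalty


# Toy usage: one block and an assumed per-block latency profile that scales with the gate.
x = torch.randn(2, 8, 16, 16)
blk = SuperKernel(channels=8)
y = blk(x)
logits = y.mean(dim=(2, 3))                 # pretend these are 8-way class logits
labels = torch.randint(0, 8, (2,))
_, gate = blk.effective_kernel()
est_latency = 1.8 + 0.6 * gate              # assumed profiled latency in milliseconds
loss = latency_gated_loss(logits, labels, est_latency, target_ms=2.15)
loss.backward()                             # gradients flow into both weights and threshold
```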
What significance do the numbers like 51.3 and 30.0 have with respect to the One Billion Word benchmark?
The numbers 51.3 and 30.0 are perplexity values on the One Billion Word benchmark. Perplexity, computed from the average per-word log-probability on the holdout set, measures how confused the language model is when predicting the next word, so the drop from 51.3 to 30.0 indicates a substantially better model [37].
[ 37 ]
[ { "id": "1602.02410_all_0", "text": " Language Modeling (LM) is a task central to Natural Language Processing (NLP) and Language Understanding. Models which can accurately place distributions over sentences not only encode complexities of language such as grammatical structure, but also distill a fair amount of information about the knowledge that a corpora may contain. Indeed, models that are able to assign a low probability to sentences that are grammatically correct but unlikely may help other tasks in fundamental language understanding like question answering, machine translation, or text summarization. ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_1", "text": " LMs have played a key role in traditional NLP tasks such as speech recognition (Mikolov et al., 2010; Arisoy et al., 2012), machine translation (Schwenk et al., 2012; Vaswani et al., ), or text summarization (Rush et al., 2015; Filippova et al., 2015). Often (although not always), training better language models improves the underlying metrics of the downstream task (such as word error rate for speech recognition, or BLEU score for translation), which makes the task of training better LMs valuable by itself. ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_2", "text": " Further, when trained on vast amounts of data, language models compactly extract knowledge encoded in the training data. For example, when trained on movie subtitles (Serban et al., 2015; Vinyals & Le, 2015), these language models are able to generate basic answers to questions about object colors, facts about people, etc. Lastly, recently proposed sequence-to-sequence models employ conditional language models (Mikolov & Zweig, 2012) as their key component to solve diverse tasks like machine translation (Sutskever et al., 2014; Cho et al., 2014; Kalchbrenner et al., 2014) or video generation (Srivastava et al., 2015a). ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_3", "text": " Deep Learning and Recurrent Neural Networks (RNNs) have fueled language modeling research in the past years as it allowed researchers to explore many tasks for which the strong conditional independence assumptions are unrealistic. Despite the fact that simpler models, such as N-grams, only use a short history of previous words to predict the next word, they are still a key component to high quality, low perplexity LMs. Indeed, most recent work on large scale LM has shown that RNNs are great in combination with N-grams, as they may have different strengths that complement N-gram models, but worse when considered in isolation (Mikolov et al., 2011; Mikolov, 2012; Chelba et al., 2013; Williams et al., 2015; Ji et al., 2015a; Shazeer et al., 2015). ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_4", "text": " We believe that, despite much work being devoted to small data sets like the Penn Tree Bank (PTB) (Marcus et al., 1993), research on larger tasks is very relevant as overfitting is not the main limitation in current language modeling, but is the main characteristic of the PTB task. Results on larger corpora usually show better what matters as many ideas work well on small data sets but fail to improve on larger data sets. Further, given current hardware trends and vast amounts of text available on the Web, it is much more straightforward to tackle large scale modeling than it used to be. 
Thus, we hope that our work will help and motivate researchers to work on traditional LM beyond PTB – for this purpose, we will open-source our models and training recipes. ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_5", "text": " We focused on a well known, large scale LM benchmark: the One Billion Word Benchmark data set (Chelba et al., 2013). This data set is much larger than PTB (one thousand fold,  800k word vocabulary and  1B words training data) and far more challenging. Similar to Imagenet (Deng et al., 2009), which helped advance computer vision, we believe that releasing and working on large data sets and models with clear benchmarks will help advance Language Modeling. ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_6", "text": " The contributions of our work are as follows: ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_7", "text": " • We explored, extended and tried to unify some of the current research on large scale LM. • Specifically, we designed a Softmax loss which is based on character level CNNs, is efficient to train, and is as precise as a full Softmax which has orders of magnitude more parameters. • Our study yielded significant improvements to the state-of-the-art on a well known, large scale LM task: from 51.3 down to 30.0 perplexity for single models whilst reducing the number of parameters by a factor of 20. • We show that an ensemble of a number of different models can bring down perplexity on this task to 23.7, a large improvement compared to current state-of-art. • We share the model and recipes in order to help and motivate further research in this area. ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_8", "text": " In Section 2 we review important concepts and previous work on language modeling. Section 3 presents our contributions to the field of neural language modeling, emphasizing large scale recurrent neural network training. Sections 4 and 5 aim at exhaustively describing our experience and understanding throughout the project, as well as emplacing our work relative to other known approaches. ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_9", "text": " In this section we describe previous work relevant to the approaches discussed in this paper. A more detailed discussion on language modeling research is provided in (Mikolov, 2012). ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_10", "text": " Language Modeling (LM) has been a central task in NLP. The goal of LM is to learn a probability distribution over sequences of symbols pertaining to a language. Much work has been done on both parametric (e.g., log-linear models) and non-parametric approaches (e.g., count-based LMs). Count-based approaches (based on statistics of N-grams) typically add smoothing which account for unseen (yet possible) sequences, and have been quite successful. To this extent, Kneser-Ney smoothed 5-gram models (Kneser & Ney, 1995) are a fairly strong baseline which, for large amounts of training data, have challenged other parametric approaches based on Neural Networks (Bengio et al., 2006). ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_11", "text": " Most of our work is based on Recurrent Neural Networks (RNN) models which retain long term dependencies. 
To this extent, we used the Long-Short Term Memory model (Hochreiter & Schmidhuber, 1997) which uses a gating mechanism (Gers et al., 2000) to ensure proper propagation of information through many time steps. Much work has been done on small and large scale RNN-based LMs (Mikolov et al., 2010; Mikolov, 2012; Chelba et al., 2013; Zaremba et al., 2014; Williams et al., 2015; Ji et al., 2015a; Wang & Cho, 2015; Ji et al., 2015b). The architectures that we considered in this paper are represented in Figure 1. ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_12", "text": " In our work, we train models on the popular One Billion Word Benchmark, which can be considered to be a medium-sized data set for count-based LMs but a very large data set for NN-based LMs. This regime is most interesting to us as we believe learning a very good model of human language is a complex task which will require large models, and thus large amounts of data. Further advances in data availability and computational resources helped our study. We argue this leap in scale enabled tremendous advances in deep learning. A clear example found in computer vision is Imagenet (Deng et al., 2009), which enabled learning complex vision models from large amounts of data (Krizhevsky et al., 2012). ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_13", "text": " A crucial aspect which we discuss in detail in later sections is the size of our models. Despite the large number of parameters, we try to minimize computation as much as possible by adopting a strategy proposed in (Sak et al., 2014) of projecting a relatively big recurrent state space down so that the matrices involved remain relatively small, yet the model has large memory capacity. ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_14", "text": " There is an increased interest in incorporating character-level inputs to build word embeddings for various NLP problems, including part-of-speech tagging, parsing and language modeling (Ling et al., 2015; Kim et al., 2015; Ballesteros et al., 2015). The additional character information has been shown useful on relatively small benchmark data sets. ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_15", "text": " The approach proposed in (Ling et al., 2015) builds word embeddings using bidirectional LSTMs (Schuster & Paliwal, 1997; Graves & Schmidhuber, 2005) over the characters. The recurrent networks process sequences of characters from both sides and their final state vectors are concatenated. The resulting representation is then fed to a Neural Network. This model achieved very good results on a part-of-speech tagging task. ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_16", "text": " In (Kim et al., 2015), the words characters are processed by a 1-d CNN (Le Cun et al., 1990) with max-pooling across the sequence for each convolutional feature. The resulting features are fed to a 2-layer highway network (Srivastava et al., 2015b), which allows the embedding to learn semantic representations. The model was evaluated on small-scale language modeling experiments for various languages and matched the best results on the PTB data set despite having 60% fewer parameters. ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_17", "text": " Assigning probability distributions over large vocabularies is computationally challenging. 
For modeling language, maximizing log-likelihood of a given word sequence leads to optimizing cross-entropy between the target probability distribution (e.g., the target word we should be predicting), and our model predictions p𝑝p. Generally, predictions come from a linear layer followed by a Softmax non-linearity: p​(w)=exp⁡(zw)∑w′∈Vexp⁡(zw′)𝑝𝑤subscript𝑧𝑤subscriptsuperscript𝑤′𝑉subscript𝑧superscript𝑤′p(w)=\\frac{\\exp(z_{w})}{\\sum_{w^{\\prime}\\in V}\\exp(z_{w^{\\prime}})} where zwsubscript𝑧𝑤z_{w} is the logit corresponding to a word w𝑤w. The logit is generally computed as an inner product zw=hT​ewsubscript𝑧𝑤superscriptℎ𝑇subscript𝑒𝑤z_{w}=h^{T}e_{w} where hℎh is a context vector and ewsubscript𝑒𝑤e_{w} is a “word embedding” for w𝑤w. ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_18", "text": " The main challenge when |V|𝑉|V| is very large (in the order of one million in this paper) is the fact that computing all inner products between hℎh and all embeddings becomes prohibitively slow during training (even when exploiting matrix-matrix multiplications and modern GPUs). Several approaches have been proposed to cope with the scaling issue: importance sampling (Bengio et al., 2003; Bengio & Senécal, 2008), Noise Contrastive Estimation (NCE) (Gutmann & Hyvärinen, 2010; Mnih & Kavukcuoglu, 2013), self normalizing partition functions (Vincent et al., 2015) or Hierarchical Softmax (Morin & Bengio, 2005; Mnih & Hinton, 2009) – they all offer good solutions to this problem. We found importance sampling to be quite effective on this task, and explain the connection between it and NCE in the following section, as they are closely related. ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_19", "text": " Recurrent Neural Networks based LMs employ the chain rule to model joint probabilities over word sequences: ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_20", "text": " p​(w1,…,wN)=∏i=1Np​(wi|w1,…,wi−1)𝑝subscript𝑤1…subscript𝑤𝑁superscriptsubscriptproduct𝑖1𝑁𝑝conditionalsubscript𝑤𝑖subscript𝑤1…subscript𝑤𝑖1p(w_{1},\\ldots,w_{N})=\\prod_{i=1}^{N}p(w_{i}|w_{1},\\ldots,w_{i-1}) where the context of all previous words is encoded with an LSTM, and the probability over words uses a Softmax (see Figure 1(a)). ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_21", "text": " As discussed in Section 2.3, a large scale Softmax is necessary for training good LMs because of the vocabulary size. A Hierarchical Softmax (Mnih & Hinton, 2009) employs a tree in which the probability distribution over words is decomposed into a product of two probabilities for each word, greatly reducing training and inference time as only the path specified by the hierarchy needs to be computed and updated. Choosing a good hierarchy is important for obtaining good results and we did not explore this approach further for this paper as sampling methods worked well for our setup. ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_22", "text": " Sampling approaches are only useful during training, as they propose an approximation to the loss which is cheap to compute (also in a distributed setting) – however, at inference time one still has to compute the normalization term over all words. 
Noise Contrastive Estimation (NCE) proposes to consider a surrogate binary classification task in which a classifier is trained to discriminate between true data, or samples coming from some arbitrary distribution. If both the noise and data distributions were known, the optimal classifier would be: ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_23", "text": " p​(Y=t​r​u​e|w)=pd​(w)pd​(w)+k​pn​(w)𝑝𝑌conditional𝑡𝑟𝑢𝑒𝑤subscript𝑝𝑑𝑤subscript𝑝𝑑𝑤𝑘subscript𝑝𝑛𝑤p(Y=true|w)=\\frac{p_{d}(w)}{p_{d}(w)+kp_{n}(w)} where Y𝑌Y is the binary random variable indicating whether w𝑤w comes from the true data distribution, k𝑘k is the number of negative samples per positive word, and pdsubscript𝑝𝑑p_{d} and pnsubscript𝑝𝑛p_{n} are the data and noise distribution respectively (we dropped any dependency on previous words for notational simplicity). ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_24", "text": " It is easy to show that if we train a logistic classifier pθ​(Y=t​r​u​e|w)=σ​(sθ​(w,h)−log⁡k​pn​(w))subscript𝑝𝜃𝑌conditional𝑡𝑟𝑢𝑒𝑤𝜎subscript𝑠𝜃𝑤ℎ𝑘subscript𝑝𝑛𝑤p_{\\theta}(Y=true|w)=\\sigma(s_{\\theta}(w,h)-\\log kp_{n}(w)) where σ𝜎\\sigma is the logistic function, then, p′​(w)=s​o​f​t​m​a​x​(sθ​(w,h))superscript𝑝′𝑤𝑠𝑜𝑓𝑡𝑚𝑎𝑥subscript𝑠𝜃𝑤ℎp^{\\prime}(w)=softmax(s_{\\theta}(w,h)) is a good approximation of pd​(w)subscript𝑝𝑑𝑤p_{d}(w) (sθsubscript𝑠𝜃s_{\\theta} is a logit which e.g. an LSTM LM computes). ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_25", "text": " The other technique, which is based on importance sampling (IS), proposes to directly approximate the partition function (which comprises a sum over all words) with an estimate of it through importance sampling. Though the methods look superficially similar, we will derive a similar surrogate classification task akin to NCE which arrives at IS, showing a strong connection between the two. ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_26", "text": " Suppose that, instead of having a binary task to decide if a word comes from the data or from the noise distribution, we want to identify the words coming from the true data distribution in a set W={w1,…,wk+1}𝑊subscript𝑤1…subscript𝑤𝑘1W=\\{w_{1},\\ldots,w_{k+1}\\}, comprised of k𝑘k noise samples and one data distribution sample. Thus, we can train a multiclass loss over a multinomial random variable Y𝑌Y which maximizes log⁡p​(Y=1|W)𝑝𝑌conditional1𝑊\\log p(Y=1|W), assuming w.l.o.g. that w1∈Wsubscript𝑤1𝑊w_{1}\\in W is always the word coming from true data. By Bayes rule, and ignoring terms that are constant with respect to Y𝑌Y, we can write: ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_27", "text": " p​(Y=k|W)∝Ypd​(wk)pn​(wk)subscriptproportional-to𝑌𝑝𝑌conditional𝑘𝑊subscript𝑝𝑑subscript𝑤𝑘subscript𝑝𝑛subscript𝑤𝑘p(Y=k|W)\\propto_{Y}\\frac{p_{d}(w_{k})}{p_{n}(w_{k})} and, following a similar argument than for NCE, if we define p​(Y=k|W)=s​o​f​t​m​a​x​(sθ​(wk)−log⁡pn​(wk))𝑝𝑌conditional𝑘𝑊𝑠𝑜𝑓𝑡𝑚𝑎𝑥subscript𝑠𝜃subscript𝑤𝑘subscript𝑝𝑛subscript𝑤𝑘p(Y=k|W)=softmax(s_{\\theta}(w_{k})-\\log p_{n}(w_{k})) then p′​(w)=s​o​f​t​m​a​x​(sθ​(w,h))superscript𝑝′𝑤𝑠𝑜𝑓𝑡𝑚𝑎𝑥subscript𝑠𝜃𝑤ℎp^{\\prime}(w)=softmax(s_{\\theta}(w,h)) is a good approximation of pd​(w​o​r​d)subscript𝑝𝑑𝑤𝑜𝑟𝑑p_{d}(word). 
Note that the only difference between NCE and IS is that, in NCE, we define a binary classification task between true or noise words with a logistic loss, whereas in IS we define a multiclass classification problem with a Softmax and cross entropy loss. We hope that our derivation helps clarify the similarities and differences between the two. In particular, we observe that IS, as it optimizes a multiclass classification task (in contrast to solving a binary task), may be a better choice. Indeed, the updates to the logits with IS are tied whereas in NCE they are independent. ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_28", "text": " The character-level features allow for a smoother and compact parametrization of the word embeddings. Recent efforts on small scale language modeling have used CNN character embeddings for the input embeddings (Kim et al., 2015). Although not as straightforward, we propose an extension to this idea to also reduce the number of parameters of the Softmax layer. Recall from Section 2.3 that the Softmax computes a logit as zw=hT​ewsubscript𝑧𝑤superscriptℎ𝑇subscript𝑒𝑤z_{w}=h^{T}e_{w} where hℎh is a context vector and ewsubscript𝑒𝑤e_{w} the word embedding. Instead of building a matrix of |V|×|h|𝑉ℎ|V|\\times|h| (whose rows correspond to ewsubscript𝑒𝑤e_{w}), we produce ewsubscript𝑒𝑤e_{w} with a CNN over the characters of w𝑤w as ew=C​N​N​(c​h​a​r​sw)subscript𝑒𝑤𝐶𝑁𝑁𝑐ℎ𝑎𝑟subscript𝑠𝑤e_{w}=CNN(chars_{w}) – we call this a CNN Softmax. We used the same network architecture to dynamically generate the Softmax word embeddings without sharing the parameters with the input word-embedding sub-network. For inference, the vectors ewsubscript𝑒𝑤e_{w} can be precomputed, so there is no computational complexity increase w.r.t. the regular Softmax. ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_29", "text": " We note that, when using an importance sampling loss such as the one described in Section 3.1, only a few logits have non-zero gradient (those corresponding to the true and sampled words). With a Softmax where ewsubscript𝑒𝑤e_{w} are independently learned word embeddings, this is not a problem. But we observed that, when using a CNN, all the logits become tied as the function mapping from w𝑤w to ewsubscript𝑒𝑤e_{w} is quite smooth. As a result, a much smaller learning rate had to be used. Even with this, the model lacks capacity to differentiate between words that have very different meanings but that are spelled similarly. Thus, a reasonable compromise was to add a small correction factor which is learned per word, such that: ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_30", "text": " zw=hT​C​N​N​(c​h​a​r​sw)+hT​M​c​o​r​rwsubscript𝑧𝑤superscriptℎ𝑇𝐶𝑁𝑁𝑐ℎ𝑎𝑟subscript𝑠𝑤superscriptℎ𝑇𝑀𝑐𝑜𝑟subscript𝑟𝑤z_{w}=h^{T}CNN(chars_{w})+h^{T}Mcorr_{w} where M𝑀M is a matrix projecting a low-dimensional embedding vector c​o​r​rw𝑐𝑜𝑟subscript𝑟𝑤corr_{w} back up to the dimensionality of the projected LSTM hidden state of hℎh. This amounts to adding a bottleneck linear layer, and brings the CNN Softmax much closer to our best result, as can be seen in Table 1, where adding a 128-dim correction halves the gap between regular and the CNN Softmax. 
", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_31", "text": " Aside from a big reduction in the number of parameters and incorporating morphological knowledge from words, the other benefit of this approach is that out-of-vocabulary (OOV) words can easily be scored. This may be useful for other problems such as Machine Translation where handling out-of-vocabulary words is very important (Luong et al., 2014). This approach also allows parallel training over various data sets since the model is no longer explicitly parametrized by the vocabulary size – or the language. This has shown to help when using byte-level input embeddings for named entity recognition (Gillick et al., 2015), and we hope it will enable similar gains when used to map onto words. ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_32", "text": " The CNN Softmax layer can handle arbitrary words and is much more efficient in terms of number of parameters than the full Softmax matrix. It is, though, still considerably slow, as to evaluate perplexities we need to compute the partition function. A class of models that solve this problem more efficiently are character-level LSTMs (Sutskever et al., 2011; Graves, 2013). They make predictions one character at a time, thus allowing to compute probabilities over a much smaller vocabulary. On the other hand, these models are more difficult to train and seem to perform worse even in small tasks like PTB (Graves, 2013). Most likely this is due to the sequences becoming much longer on average as the LSTM reads the input character by character instead of word by word. ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_33", "text": " Thus, we combine the word and character-level models by feeding a word-level LSTM hidden state hℎh into a small LSTM that predicts the target word one character at a time (see Figure 1(c)). In order to make the whole process reasonably efficient, we train the standard LSTM model until convergence, freeze its weights, and replace the standard word-level Softmax layer with the aforementioned character-level LSTM. ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_34", "text": " The resulting model scales independently of vocabulary size – both for training and inference. However, it does seem to be worse than regular and CNN Softmax – we are hopeful that further research will enable these models to replace fixed vocabulary models whilst being computationally attractive. ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_35", "text": " All experiments were run using the TensorFlow system (Abadi et al., 2015), with the exception of some older models which were used in the ensemble. ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_36", "text": " The experiments are performed on the 1B Word Benchmark data set introduced by (Chelba et al., 2013), which is a publicly available benchmark for measuring progress of statistical language modeling. The data set contains about 0.8B words with a vocabulary of 793471 words, including sentence boundary markers. All the sentences are shuffled and the duplicates are removed. The words that are out of vocabulary (OOV) are marked with a special UNK token (there are approximately 0.3% such words). 
", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_37", "text": " The typical measure used for reporting progress in language modeling is perplexity, which is the average per-word log-probability on the holdout data set: e−1N​∑iln⁡pwisuperscript𝑒1𝑁subscript𝑖subscript𝑝subscript𝑤𝑖e^{-\\frac{1}{N}\\sum_{i}\\ln{p_{w_{i}}}}. We follow the standard procedure and sum over all the words (including the end of sentence symbol). ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_38", "text": " We used the 1B Word Benchmark data set without any pre-processing. Given the shuffled sentences, they are input to the network as a batch of independent streams of words. Whenever a sentence ends, a new one starts without any padding (thus maximizing the occupancy per batch). ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_39", "text": " For the models that consume characters as inputs or as targets, each word is fed to the model as a sequence of character IDs of preespecified length (see Figure 1(b)). The words were processed to include special begin and end of word tokens and were padded to reach the expected length. I.e. if the maximum word length was 10, the word “cat” would be transformed to “$cat^ ” due to the CNN model. ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_40", "text": " In our experiments we found that limiting the maximum word length in training to 50 was sufficient to reach very good results while 32 was clearly insufficient. We used 256 characters in our vocabulary and the non-ascii symbols were represented as a sequence of bytes. ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_41", "text": " We evaluated many variations of RNN LM architectures. These include the dimensionalities of the embedding layers, the state, projection sizes, and number of LSTM layers to use. Exhaustively trying all combinations would be extremely time consuming for such a large data set, but our findings suggest that LSTMs with a projection layer (i.e., a bottleneck between hidden states as in (Sak et al., 2014)) trained with truncated BPTT (Williams & Peng, 1990) for 20 steps performed well. ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_42", "text": " Following (Zaremba et al., 2014) we use dropout (Srivastava, 2013) before and after every LSTM layer. The biases of LSTM forget gate were initialized to 1.0 (Jozefowicz et al., 2015). The size of the models will be described in more detail in the following sections, and the choices of hyper-parameters will be released as open source upon publication. ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_43", "text": " For any model using character embedding CNNs, we closely follow the architecture from (Kim et al., 2015). The only important difference is that we use a larger number of convolutional features of 4096 to give enough capacity to the model. The resulting embedding is then linearly transformed to match the LSTM projection sizes. This allows it to match the performance of regular word embeddings but only uses a small fraction of parameters. ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_44", "text": " The models were trained until convergence with an AdaGrad optimizer using a learning rate of 0.2. 
In all the experiments the RNNs were unrolled for 20 steps without ever resetting the LSTM states. We used a batch size of 128. We clip the gradients of the LSTM weights such that their norm is bounded by 1.0 (Pascanu et al., 2012). ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_45", "text": " Using these hyper-parameters we found large LSTMs to be relatively easy to train. The same learning rate was used in almost all of the experiments. In a few cases we had to reduce it by an order of magnitude. Unless otherwise stated, the experiments were performed with 32 GPU workers and asynchronous gradient updates. Further details will be fully specified with the code upon publication. ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_46", "text": " Training a model for such large target vocabulary (793471 words) required to be careful with some details about the approximation to full Softmax using importance sampling. We used a large number of negative (or noise) samples: 8192 such samples were drawn per step, but were shared across all the target words in the batch (2560 total, i.e. 128 times 20 unrolled steps). This results in multiplying (2560 x 1024) times (1024 x (8192+1)) (instead of (2560 x 1024) times (1024 x 793471)), i.e. about 100-fold less computation. ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_47", "text": " In this section we summarize the results of our experiments and do an in-depth analysis. Table 1 contains all results for our models compared to previously published work. Table 2 shows previous and our own work on ensembles of models. We hope that our encouraging results, which improved the best perplexity of a single model from 51.3 to 30.0 (whilst reducing the model size considerably), and set a new record with ensembles at 23.7, will enable rapid research and progress to advance Language Modeling. For this purpose, we will release the model weights and recipes upon publication. ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_48", "text": " Unsurprisingly, size matters: when training on a very large and complex data set, fitting the training data with an LSTM is fairly challenging. Thus, the size of the LSTM layer is a very important factor that influences the results, as seen in Table 1. The best models are the largest we were able to fit into a GPU memory. Our largest model was a 2-layer LSTM with 8192+1024 dimensional recurrent state in each of the layers. Increasing the embedding and projection size also helps but causes a large increase in the number of parameters, which is less desirable. Lastly, training an RNN instead of an LSTM yields poorer results (about 5 perplexity worse) for a comparable model size. ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_49", "text": " As shown in Table 1, using dropout improves the results. To our surprise, even relatively small models (e.g., single layer LSTM with 2048 units projected to 512 dimensional outputs) can over-fit the training set if trained long enough, eventually yielding holdout set degradation. ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_50", "text": " Using dropout on non-recurrent connections largely mitigates these issues. While over-fitting still occurs, there is no more need for early stopping. For models that had 4096 or less units in the LSTM layer, we used 10% dropout probability. 
For larger models, 25% was significantly better. Even with such regularization, perplexities on the training set can be as much as 6 points below test. ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_51", "text": " In one experiment we tried to use a smaller vocabulary comprising of the 100,000 most frequent words and found the difference between train and test to be smaller – which suggests that too much capacity is given to rare words. This is less of an issue with character CNN embedding models as the embeddings are shared across all words. ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_52", "text": " Table 3 shows the test perplexities of NCE vs IS loss after a few epochs of 2048 unit LSTM with 512 projection. The IS objective significantly improves the speed and the overall performance of the model when compared to NCE. ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_53", "text": " Replacing the embedding layer with a parametrized neural network that process characters of a given word allows the model to consume arbitrary words and is not restricted to a fixed vocabulary. This property is useful for data sets with conversational or informal text as well as for morphologically rich languages. Our experiments show that using character-level embeddings is feasible and does not degrade performance – in fact, our best single model uses a Character CNN embedding. ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_54", "text": " An additional advantage is that the number of parameters of the input layer is reduced by a factor of 11 (though training speed is slightly worse). For inference, the embeddings can be precomputed so there is no speed penalty. Overall, the embedding of the best model is parametrized by 72M weights (down from 820M weights). ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_55", "text": " Table 4 shows a few examples of nearest neighbor embeddings for some out-of-vocabulary words when character CNNs are used. ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_56", "text": " Even with character-level embeddings, the model is still fairly large (though much smaller than the best competing models from previous work). Most of the parameters are in the linear layer before the Softmax: 820M versus a total of 1.04B parameters. ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_57", "text": " In one of the experiments we froze the word-LSTM after convergence and replaced the Softmax layer with the CNN Softmax sub-network. Without any fine-tuning that model was able to reach 39.8 perplexity with only 293M weights (as seen in Table 1). ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_58", "text": " As described in Section 3.2, adding a “correction” word embedding term alleviates the gap between regular and CNN Softmax. Indeed, we can trade-off model size versus perplexity. For instance, by adding 100M weights (through a 128 dimensional bottleneck embedding) we achieve 35.8 perplexity (see Table 1). ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_59", "text": " To contrast with the CNN Softmax, we also evaluated a model that replaces the Softmax layer with a smaller LSTM that predicts one character at a time (see Section 3.3). 
Such a model does not have to learn long dependencies because the base LSTM still operates at the word-level (see Figure 1(c)). With a single-layer LSTM of 1024 units we reached 49.0 test perplexity, far below the best model. In order to make the comparisons more fair, we performed a very expensive marginalization over the words in the vocabulary (to rule out words not in the dictionary which the character LSTM would assign some probability). When doing this marginalization, the perplexity improved a bit down to 47.9. ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_60", "text": " We used 32 Tesla K40 GPUs to train our models. The smaller version of the LSTM model with 2048 units and 512 projections needs less than 10 hours to reach below 45 perplexity and after only 2 hours of training the model beats previous state-of-the art on this data set. The best model needs about 5 days to get to 35 perplexity and 10 days to 32.5. The best results were achieved after 3 weeks of training. See Table 3 for more details. ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_61", "text": " We averaged several of our best models and we were able to reach 23.7 test perplexity (more details and results can be seen in Table 2), which is more than 40% improvement over previous work. Interestingly, including the best N-gram model reduces the perplexity by 1.2 point even though the model is rather weak on its own (67.6 perplexity). Most previous work had to either ensemble with the best N-gram model (as their RNN only used a limited output vocabulary of a few thousand words), or use N-gram features as additional input to the RNN. Our results, on the contrary, suggest that N-grams are of limited benefit, and suggest that a carefully trained LSTM LM is the most competitive model. ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_62", "text": " Figure 2 shows the difference in log probabilities between our best model (at 30.0 perplexity) and the KN-5. As can be seen from the plot, the LSTM is better across all the buckets and significantly outperforms KN-5 on the rare words. This is encouraging as it seems to suggest that LSTM LMs may fare even better for languages or data sets where the number of rare words is larger than traditional N-gram models. ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_63", "text": " To qualitatively evaluate the model, we sampled many sentences. We discarded short and politically incorrect ones, but the sample shown below is otherwise “raw” (i.e., not hand picked). The samples are of high quality – which is not a surprise, given the perplexities attained – but there are still some occasional mistakes. ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_64", "text": " Sentences generated by the ensemble (about 26 perplexity): <S>expectation𝑆<S> With even more new technologies coming onto the market quickly during the past three years , an increasing number of companies now must tackle the ever-changing and ever-changing environmental challenges online . <S>expectation𝑆<S> Check back for updates on this breaking news story . <S>expectation𝑆<S> About 800 people gathered at Hever Castle on Long Beach from noon to 2pm , three to four times that of the funeral cortège . 
<S>expectation𝑆<S> We are aware of written instructions from the copyright holder not to , in any way , mention Rosenberg ’s negative comments if they are relevant as indicated in the documents , ” eBay said in a statement . <S>expectation𝑆<S> It is now known that coffee and cacao products can do no harm on the body . <S>expectation𝑆<S> Yuri Zhirkov was in attendance at the Stamford Bridge at the start of the second half but neither Drogba nor Malouda was able to push on through the Barcelona defence . ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_65", "text": " In this paper we have shown that RNN LMs can be trained on large amounts of data, and outperform competing models including carefully tuned N-grams. The reduction in perplexity from 51.3 to 30.0 is due to several key components which we studied in this paper. Thus, a large, regularized LSTM LM, with projection layers and trained with an approximation to the true Softmax with importance sampling performs much better than N-grams. Unlike previous work, we do not require to interpolate both the RNN LM and the N-gram, and the gains of doing so are rather marginal. ", "title": "Exploring the Limits of Language Modeling" }, { "id": "1602.02410_all_66", "text": " By exploring recent advances in model architectures (e.g. LSTMs), exploiting small character CNNs, and by sharing our findings in this paper and accompanying code and models (to be released upon publication), we hope to inspire research on large scale Language Modeling, a problem we consider crucial towards language understanding. We hope for future research to focus on reasonably sized datasets taking inspiration from recent advances seen in the computer vision community thanks to efforts such as Imagenet (Deng et al., 2009). ", "title": "Exploring the Limits of Language Modeling" } ]
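For reference, the perplexity figures discussed in the answer above (51.3 and 30.0) are values of exp(-1/N · Σ_i ln p_{w_i}), the metric defined in context 37. The short Python sketch below shows that computation on made-up per-word probabilities; it is only an illustration, not code from the paper.

```python
import math

def perplexity(word_probs):
    """exp(-1/N * sum_i ln p_{w_i}) over the holdout words (lower is better)."""
    n = len(word_probs)
    return math.exp(-sum(math.log(p) for p in word_probs) / n)

# Toy example with four made-up per-word probabilities.
print(perplexity([0.1, 0.02, 0.3, 0.05]))  # ~13.5
```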
How does ShuffleNet allow more feature maps for a given computational complexity?
ShuffleNet uses pointwise group convolution together with channel shuffling, so by design its unit has lower complexity (it requires only hw(2cm/g + 9m) FLOPs) [1]. This means it allows wider feature maps for a given computational budget [13], and the performance benefit appears to grow as the model gets smaller [21].
[ 1, 13, 21 ]
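To make the FLOP comparison behind this answer concrete, the sketch below evaluates the per-unit formulas quoted in the contexts — hw(2cm + 9m^2) for a ResNet bottleneck unit, hw(2cm + 9m^2/g) for ResNeXt, and hw(2cm/g + 9m) for a ShuffleNet unit — on one made-up setting; the values of h, w, c, m, g below are illustrative examples, not figures from the paper.

```python
def resnet_flops(h, w, c, m):
    # ResNet bottleneck unit: hw(2cm + 9m^2)
    return h * w * (2 * c * m + 9 * m ** 2)

def resnext_flops(h, w, c, m, g):
    # ResNeXt unit with g groups on the 3x3 layers: hw(2cm + 9m^2/g)
    return h * w * (2 * c * m + 9 * m ** 2 / g)

def shufflenet_flops(h, w, c, m, g):
    # ShuffleNet unit with grouped 1x1 convs and depthwise 3x3: hw(2cm/g + 9m)
    return h * w * (2 * c * m / g + 9 * m)

# Illustrative setting: 28x28 feature map, c = 240 input channels,
# m = 60 bottleneck channels, g = 3 groups.
h, w, c, m, g = 28, 28, 240, 60, 3
print(resnet_flops(h, w, c, m))         # ~48.0 MFLOPs
print(resnext_flops(h, w, c, m, g))     # ~31.0 MFLOPs
print(shufflenet_flops(h, w, c, m, g))  # ~7.9 MFLOPs
```

At the same feature-map size the ShuffleNet unit spends only a small fraction of the budget, which is why its channel count (and hence feature-map width) can be increased until the budget is met.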
[ { "id": "1707.01083_all_0", "text": " Building deeper and larger convolutional neural networks (CNNs) is a primary trend for solving major visual recognition tasks (21, 9, 33, 5, 28, 24). The most accurate CNNs usually have hundreds of layers and thousands of channels (9, 34, 32, 40), thus requiring computation at billions of FLOPs. This report examines the opposite extreme: pursuing the best accuracy in very limited computational budgets at tens or hundreds of MFLOPs, focusing on common mobile platforms such as drones, robots, and smartphones. Note that many existing works (16, 22, 43, 42, 38, 27) focus on pruning, compressing, or low-bit representing a “basic” network architecture. Here we aim to explore a highly efficient basic architecture specially designed for our desired computing ranges. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_1", "text": " We notice that state-of-the-art basic architectures such as Xception  and ResNeXt  become less efficient in extremely small networks because of the costly dense 1×1111\\times 1 convolutions. We propose using pointwise group convolutions to reduce computation complexity of 1×1111\\times 1 convolutions. To overcome the side effects brought by group convolutions, we come up with a novel channel shuffle operation to help the information flowing across feature channels. Based on the two techniques, we build a highly efficient architecture called ShuffleNet. Compared with popular structures like  (30, 9, 40), for a given computation complexity budget, our ShuffleNet allows more feature map channels, which helps to encode more information and is especially critical to the performance of very small networks. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_2", "text": " We evaluate our models on the challenging ImageNet classification (4, 29) and MS COCO object detection  tasks. A series of controlled experiments shows the effectiveness of our design principles and the better performance over other structures. Compared with the state-of-the-art architecture MobileNet , ShuffleNet achieves superior performance by a significant margin, e.g. absolute 7.8% lower ImageNet top-1 error at level of 40 MFLOPs. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_3", "text": " We also examine the speedup on real hardware, i.e. an off-the-shelf ARM-based computing core. The ShuffleNet model achieves ∼similar-to\\sim13×\\times actual speedup (theoretical speedup is 18×\\times) over AlexNet  while maintaining comparable accuracy. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_4", "text": " The last few years have seen the success of deep neural networks in computer vision tasks (21, 36, 28), in which model designs play an important role. The increasing needs of running high quality deep neural networks on embedded devices encourage the study on efficient model designs . For example, GoogLeNet  increases the depth of networks with much lower complexity compared to simply stacking convolution layers. SqueezeNet  reduces parameters and computation significantly while maintaining accuracy. ResNet (9, 10) utilizes the efficient bottleneck structure to achieve impressive performance. 
SENet  introduces an architectural unit that boosts performance at slight computation cost. Concurrent with us, a very recent work  employs reinforcement learning and model search to explore efficient model designs. The proposed mobile NASNet model achieves comparable performance with our counterpart ShuffleNet model (26.0% @ 564 MFLOPs vs. 26.3% @ 524 MFLOPs for ImageNet classification error). But  do not report results on extremely tiny models (e.g. complexity less than 150 MFLOPs), nor evaluate the actual inference time on mobile devices. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_5", "text": " The concept of group convolution, which was first introduced in AlexNet  for distributing the model over two GPUs, has been well demonstrated its effectiveness in ResNeXt . Depthwise separable convolution proposed in Xception  generalizes the ideas of separable convolutions in Inception series (34, 32). Recently, MobileNet  utilizes the depthwise separable convolutions and gains state-of-the-art results among lightweight models. Our work generalizes group convolution and depthwise separable convolution in a novel form. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_6", "text": " To the best of our knowledge, the idea of channel shuffle operation is rarely mentioned in previous work on efficient model design, although CNN library cuda-convnet  supports “random sparse convolution” layer, which is equivalent to random channel shuffle followed by a group convolutional layer. Such “random shuffle” operation has different purpose and been seldom exploited later. Very recently, another concurrent work   also adopt this idea for a two-stage convolution. However,   did not specially investigate the effectiveness of channel shuffle itself and its usage in tiny model design. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_7", "text": " This direction aims to accelerate inference while preserving accuracy of a pre-trained model. Pruning network connections (6, 7) or channels  reduces redundant connections in a pre-trained model while maintaining performance. Quantization (31, 27, 39, 45, 44) and factorization (22, 16, 18, 37) are proposed in literature to reduce redundancy in calculations to speed up inference. Without modifying the parameters, optimized convolution algorithms implemented by FFT (25, 35) and other methods  decrease time consumption in practice. Distilling  transfers knowledge from large models into small ones, which makes training small models easier. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_8", "text": " Modern convolutional neural networks (30, 33, 34, 32, 9, 10) usually consist of repeated building blocks with the same structure. Among them, state-of-the-art networks such as Xception  and ResNeXt  introduce efficient depthwise separable convolutions or group convolutions into the building blocks to strike an excellent trade-off between representation capability and computational cost. However, we notice that both designs do not fully take the 1×1111\\times 1 convolutions (also called pointwise convolutions in  ) into account, which require considerable complexity. For example, in ResNeXt  only 3×3333\\times 3 layers are equipped with group convolutions. 
As a result, for each residual unit in ResNeXt the pointwise convolutions occupy 93.4% multiplication-adds (cardinality = 32 as suggested in  ). In tiny networks, expensive pointwise convolutions result in limited number of channels to meet the complexity constraint, which might significantly damage the accuracy. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_9", "text": " To address the issue, a straightforward solution is to apply channel sparse connections, for example group convolutions, also on 1×1111\\times 1 layers. By ensuring that each convolution operates only on the corresponding input channel group, group convolution significantly reduces computation cost. However, if multiple group convolutions stack together, there is one side effect: outputs from a certain channel are only derived from a small fraction of input channels. Fig 1 (a) illustrates a situation of two stacked group convolution layers. It is clear that outputs from a certain group only relate to the inputs within the group. This property blocks information flow between channel groups and weakens representation. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_10", "text": " If we allow group convolution to obtain input data from different groups (as shown in Fig 1 (b)), the input and output channels will be fully related. Specifically, for the feature map generated from the previous group layer, we can first divide the channels in each group into several subgroups, then feed each group in the next layer with different subgroups. This can be efficiently and elegantly implemented by a channel shuffle operation (Fig 1 (c)): suppose a convolutional layer with g𝑔g groups whose output has g×n𝑔𝑛g\\times n channels; we first reshape the output channel dimension into (g,n)𝑔𝑛(g,n), transposing and then flattening it back as the input of next layer. Note that the operation still takes effect even if the two convolutions have different numbers of groups. Moreover, channel shuffle is also differentiable, which means it can be embedded into network structures for end-to-end training. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_11", "text": " Channel shuffle operation makes it possible to build more powerful structures with multiple group convolutional layers. In the next subsection we will introduce an efficient network unit with channel shuffle and group convolution. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_12", "text": " Taking advantage of the channel shuffle operation, we propose a novel ShuffleNet unit specially designed for small networks. We start from the design principle of bottleneck unit  in Fig 2 (a). It is a residual block. In its residual branch, for the 3×3333\\times 3 layer, we apply a computational economical 3×3333\\times 3 depthwise convolution  on the bottleneck feature map. Then, we replace the first 1×1111\\times 1 layer with pointwise group convolution followed by a channel shuffle operation, to form a ShuffleNet unit, as shown in Fig 2 (b). The purpose of the second pointwise group convolution is to recover the channel dimension to match the shortcut path. For simplicity, we do not apply an extra channel shuffle operation after the second pointwise layer as it results in comparable scores. 
The usage of batch normalization (BN)  and nonlinearity is similar to  (9, 40), except that we do not use ReLU after depthwise convolution as suggested by  . As for the case where ShuffleNet is applied with stride, we simply make two modifications (see Fig 2 (c)): (i) add a 3×3 average pooling on the shortcut path; (ii) replace the element-wise addition with channel concatenation, which makes it easy to enlarge channel dimension with little extra computation cost. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_13", "text": " Thanks to pointwise group convolution with channel shuffle, all components in ShuffleNet unit can be computed efficiently. Compared with ResNet  (bottleneck design) and ResNeXt , our structure has less complexity under the same settings. For example, given the input size c×h×w and the bottleneck channels m, ResNet unit requires hw(2cm + 9m^2) FLOPs and ResNeXt has hw(2cm + 9m^2/g) FLOPs, while our ShuffleNet unit requires only hw(2cm/g + 9m) FLOPs, where g means the number of groups for convolutions. In other words, given a computational budget, ShuffleNet can use wider feature maps. We find this is critical for small networks, as tiny networks usually have an insufficient number of channels to process the information. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_14", "text": " In addition, in ShuffleNet depthwise convolution only performs on bottleneck feature maps. Even though depthwise convolution usually has very low theoretical complexity, we find it difficult to efficiently implement on low-power mobile devices, which may result from a worse computation/memory access ratio compared with other dense operations. Such drawback is also referred in  , which has a runtime library based on TensorFlow . In ShuffleNet units, we intentionally use depthwise convolution only on bottleneck in order to prevent overhead as much as possible. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_15", "text": " Built on ShuffleNet units, we present the overall ShuffleNet architecture in Table 1. The proposed network is mainly composed of a stack of ShuffleNet units grouped into three stages. The first building block in each stage is applied with stride = 2. Other hyper-parameters within a stage stay the same, and for the next stage the output channels are doubled. Similar to  , we set the number of bottleneck channels to 1/4 of the output channels for each ShuffleNet unit. Our intent is to provide a reference design as simple as possible, although we find that further hyper-parameter tuning might generate better results. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_16", "text": " In ShuffleNet units, group number g controls the connection sparsity of pointwise convolutions. Table 1 explores different group numbers and we adapt the output channels to ensure overall computation cost roughly unchanged (~140 MFLOPs). 
Obviously, larger group numbers result in more output channels (thus more convolutional filters) for a given complexity constraint, which helps to encode more information, though it might also lead to degradation for an individual convolutional filter due to limited corresponding input channels. In Sec 4.1.1 we will study the impact of this number subject to different computational constrains. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_17", "text": " To customize the network to a desired complexity, we can simply apply a scale factor s𝑠s on the number of channels. For example, we denote the networks in Table 1 as ”ShuffleNet 1×\\times”, then ”ShuffleNet s×s\\times” means scaling the number of filters in ShuffleNet 1×\\times by s𝑠s times thus overall complexity will be roughly s2superscript𝑠2s^{2} times of ShuffleNet 1×\\times. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_18", "text": " We mainly evaluate our models on the ImageNet 2012 classification dataset (29, 4). We follow most of the training settings and hyper-parameters used in  , with two exceptions: (i) we set the weight decay to 4e-5 instead of 1e-4 and use linear-decay learning rate policy (decreased from 0.5 to 0); (ii) we use slightly less aggressive scale augmentation for data preprocessing. Similar modifications are also referenced in   because such small networks usually suffer from underfitting rather than overfitting. It takes 1 or 2 days to train a model for 3×1053superscript1053\\times 10^{5} iterations on 4 GPUs, whose batch size is set to 1024. To benchmark, we compare single crop top-1 performance on ImageNet validation set, i.e. cropping 224×224224224224\\times 224 center view from 256×256\\times input image and evaluating classification accuracy. We use exactly the same settings for all models to ensure fair comparisons. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_19", "text": " The core idea of ShuffleNet lies in pointwise group convolution and channel shuffle operation. In this subsection we evaluate them respectively. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_20", "text": " To evaluate the importance of pointwise group convolutions, we compare ShuffleNet models of the same complexity whose numbers of groups range from 1 to 8. If the group number equals 1, no pointwise group convolution is involved and then the ShuffleNet unit becomes an ”Xception-like”  structure. For better understanding, we also scale the width of the networks to 3 different complexities and compare their classification performance respectively. Results are shown in Table 2. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_21", "text": " From the results, we see that models with group convolutions (g>1𝑔1g>1) consistently perform better than the counterparts without pointwise group convolutions (g=1𝑔1g=1). Smaller models tend to benefit more from groups. For example, for ShuffleNet 1×\\times the best entry (g=8𝑔8g=8) is 1.2% better than the counterpart, while for ShuffleNet 0.5×\\times and 0.25×\\times the gaps become 3.5% and 4.4% respectively. 
Note that group convolution allows more feature map channels for a given complexity constraint, so we hypothesize that the performance gain comes from wider feature maps which help to encode more information. In addition, a smaller network involves thinner feature maps, meaning it benefits more from enlarged feature maps. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_22", "text": " Table 2 also shows that for some models (e.g. ShuffleNet 0.5×\\times) when group numbers become relatively large (e.g. g=8𝑔8g=8), the classification score saturates or even drops. With an increase in group number (thus wider feature maps), input channels for each convolutional filter become fewer, which may harm representation capability. Interestingly, we also notice that for smaller models such as ShuffleNet 0.25×\\times larger group numbers tend to better results consistently, which suggests wider feature maps bring more benefits for smaller models. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_23", "text": " The purpose of shuffle operation is to enable cross-group information flow for multiple group convolution layers. Table 3 compares the performance of ShuffleNet structures (group number is set to 3 or 8 for instance) with/without channel shuffle. The evaluations are performed under three different scales of complexity. It is clear that channel shuffle consistently boosts classification scores for different settings. Especially, when group number is relatively large (e.g. g=8𝑔8g=8), models with channel shuffle outperform the counterparts by a significant margin, which shows the importance of cross-group information interchange. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_24", "text": " Recent leading convolutional units in VGG , ResNet , GoogleNet , ResNeXt  and Xception  have pursued state-of-the-art results with large models (e.g. ≥1absent1\\geq 1GFLOPs), but do not fully explore low-complexity conditions. In this section we survey a variety of building blocks and make comparisons with ShuffleNet under the same complexity constraint. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_25", "text": " For fair comparison, we use the overall network architecture as shown in Table 1. We replace the ShuffleNet units in Stage 2-4 with other structures, then adapt the number of channels to ensure the complexity remains unchanged. The structures we explored include: ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_26", "text": " • VGG-like. Following the design principle of VGG net , we use a two-layer 3×\\times3 convolutions as the basic building block. Different from  , we add a Batch Normalization layer  after each of the convolutions to make end-to-end training easier. • ResNet. We adopt the ”bottleneck” design in our experiment, which has been demonstrated more efficient in   . Same as  , the bottleneck ratio111In the bottleneck-like units (like ResNet, ResNeXt or ShuffleNet) bottleneck ratio implies the ratio of bottleneck channels to output channels. For example, bottleneck ratio = 1:4:141:4 means the output feature map is 4 times the width of the bottleneck feature map. is also 1:4:141:4. • Xception-like. 
The original structure proposed in   involves fancy designs or hyper-parameters for different stages, which we find difficult for fair comparison on small models. Instead, we remove the pointwise group convolutions and channel shuffle operation from ShuffleNet (also equivalent to ShuffleNet with g=1𝑔1g=1). The derived structure shares the same idea of “depthwise separable convolution” as in  , which is called an Xception-like structure here. • ResNeXt. We use the settings of cardinality =16absent16=16 and bottleneck ratio =1:2:absent12=1:2 as suggested in  . We also explore other settings, e.g. bottleneck ratio =1:4:absent14=1:4, and get similar results. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_27", "text": " We use exactly the same settings to train these models. Results are shown in Table 4. Our ShuffleNet models outperform most others by a significant margin under different complexities. Interestingly, we find an empirical relationship between feature map channels and classification accuracy. For example, under the complexity of 38 MFLOPs, output channels of Stage 4 (see Table 1) for VGG-like, ResNet, ResNeXt, Xception-like, ShuffleNet models are 50, 192, 192, 288, 576 respectively, which is consistent with the increase of accuracy. Since the efficient design of ShuffleNet, we can use more channels for a given computation budget, thus usually resulting in better performance. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_28", "text": " Note that the above comparisons do not include GoogleNet or Inception series (33, 34, 32). We find it nontrivial to generate such Inception structures to small networks because the original design of Inception module involves too many hyper-parameters. As a reference, the first GoogleNet version  has 31.3% top-1 error at the cost of 1.5 GFLOPs (See Table 6). More sophisticated Inception versions (34, 32) are more accurate, however, involve significantly increased complexity. Recently, Kim et al. propose a lightweight network structure named PVANET  which adopts Inception units. Our reimplemented PVANET (with 224×\\times224 input size) has 29.7% classification error with a computation complexity of 557 MFLOPs, while our ShuffleNet 2x model (g=3𝑔3g=3) gets 26.3% with 524 MFLOPs (see Table 6). ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_29", "text": " Recently Howard et al. have proposed MobileNets  which mainly focus on efficient network architecture for mobile devices. MobileNet takes the idea of depthwise separable convolution from   and achieves state-of-the-art results on small models. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_30", "text": " Table 5 compares classification scores under a variety of complexity levels. It is clear that our ShuffleNet models are superior to MobileNet for all the complexities. Though our ShuffleNet network is specially designed for small models (<150absent150<150 MFLOPs), we find it is still better than MobileNet for higher computation cost, e.g. 3.1% more accurate than MobileNet 1×\\times at the cost of 500 MFLOPs. For smaller networks (∼similar-to\\sim40 MFLOPs) ShuffleNet surpasses MobileNet by 7.8%. Note that our ShuffleNet architecture contains 50 layers while MobileNet only has 28 layers. 
For better understanding, we also try ShuffleNet on a 26-layer architecture by removing half of the blocks in Stage 2-4 (see ”ShuffleNet 0.5×\\times shallow (g=3𝑔3g=3)” in Table 5). Results show that the shallower model is still significantly better than the corresponding MobileNet, which implies that the effectiveness of ShuffleNet mainly results from its efficient structure, not the depth. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_31", "text": " Table 6 compares our ShuffleNet with a few popular models. Results show that with similar accuracy ShuffleNet is much more efficient than others. For example, ShuffleNet 0.5×\\times is theoretically 18×\\times faster than AlexNet  with comparable classification score. We will evaluate the actual running time in Sec 4.5. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_32", "text": " It is also worth noting that the simple architecture design makes it easy to equip ShuffeNets with the latest advances such as (13, 26). For example, in the authors propose Squeeze-and-Excitation (SE) blocks which achieve state-of-the-art results on large ImageNet models. We find SE modules also take effect in combination with the backbone ShuffleNets, for instance, boosting the top-1 error of ShuffleNet 2×\\times to 24.7% (shown in Table 5). Interestingly, though negligible increase of theoretical complexity, we find ShuffleNets with SE modules are usually 25∼40%similar-to25percent4025\\sim 40\\% slower than the “raw” ShuffleNets on mobile devices, which implies that actual speedup evaluation is critical on low-cost architecture design. In Sec 4.5 we will make further discussion. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_33", "text": " To evaluate the generalization ability for transfer learning, we test our ShuffleNet model on the task of MS COCO object detection . We adopt Faster-RCNN  as the detection framework and use the publicly released Caffe code (28, 17) for training with default settings. Similar to  , the models are trained on the COCO train+val dataset excluding 5000 minival images and we conduct testing on the minival set. Table 7 shows the comparison of results trained and evaluated on two input resolutions. Comparing ShuffleNet 2×\\times with MobileNet whose complexity are comparable (524 vs. 569 MFLOPs), our ShuffleNet 2×\\times surpasses MobileNet by a significant margin on both resolutions; our ShuffleNet 1×\\times also achieves comparable results with MobileNet on 600×\\times resolution, but has ∼similar-to\\sim4×\\times complexity reduction. We conjecture that this significant gain is partly due to ShuffleNet’s simple design of architecture without bells and whistles. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_34", "text": " Finally, we evaluate the actual inference speed of ShuffleNet models on a mobile device with an ARM platform. Though ShuffleNets with larger group numbers (e.g. g=4𝑔4g=4 or g=8𝑔8g=8) usually have better performance, we find it less efficient in our current implementation. Empirically g=3𝑔3g=3 usually has a proper trade-off between accuracy and actual inference time. As shown in Table 8, three input resolutions are exploited for the test. 
Due to memory access and other overheads, we find every 4× theoretical complexity reduction usually results in ∼2.6× actual speedup in our implementation. Nevertheless, compared with AlexNet our ShuffleNet 0.5× model still achieves ∼13× actual speedup under comparable classification accuracy (the theoretical speedup is 18×), which is much faster than previous AlexNet-level models or speedup approaches such as (14, 16, 22, 42, 43, 38). ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" } ]
What is the total number of encoders and decoders used in SegNet?
4 encoders and 4 decoders are used in SegNet [1].
[ 1 ]
[ { "id": "1505.07293_all_0", "text": " Semantic segmentation is an important step towards understanding and inferring different objects and their arrangements observed in a scene. This has wide array of applications ranging from estimating scene geometry, inferring support-relationships among objects to autonomous vehicle driving. Early methods that relied on low-level vision cues have fast been superseded by popular machine learning algorithms. In particular, deep learning has seen huge success lately in handwritten digit recognition, speech, categorising whole images and detecting objects in images (37, 34) also seen growing interest in semantic pixel-wise labelling problems (7, 14, 35). However, these recent approaches have tried to directly adopt deep architectures designed for category prediction to pixel-wise labelling. The results, although very encouraging, have not been quite satisfactory. Primarily, the deepest layer representations/feature maps are of a small resolution as compared to input image dimensions due to several pooling layers e.g. if 2×2222\\times 2 non-overlapping max-pooling-subsampling layers are used three times, the resulting feature map is 1/8t​h1superscript8𝑡ℎ1/8^{th} of the input dimension. Therefore, an ad hoc technique is used to upsample the deepest layer feature map to match the input image dimensions by replicating features within a block i.e. all pixels within a block (8×8888\\times 8 in our example) have the same features. This often results in predictions that appear blocky222see http://david.grangier.info/scene_parsing/. This is exactly what we improve using our proposed SegNet architecture, wherein the decoders learn to map the deepest layer features to full image dimensions. Learning to decode has two other advantages. First, deeper layers each with pooling-subsampling can be introduced which increases the spatial context for pixel labelling. This results in smooth predictions unlike patch based classifiers (36, 2). Second, ablation studies to understand the effects of features such as in can be performed using the decoder stack. ", "title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation" }, { "id": "1505.07293_all_1", "text": " We draw inspiration of our encoder-decoder type architectures from probabilistic auto-encoders used to build generative models and unsupervised learning of feature hierarchies . Our main contribution is to learn an encoder-decoder stack trained in a modular and fully supervised manner for pixel-wise labelling. The addition of each deeper encoder-decoder pair results in an increased spatial context i.e., a 444 layer SegNet with 7×7777\\times 7 kernels and 2×2222\\times 2 non-overlapping max pooling in each layer has a spatial context of 106×106106106106\\times 106 pixels when a feature-map is backtracked to the input image. The SegNet predictions get smoother as more layers are added and demonstrate high accuracy, comparable to or even exceeding methods which use CRFs . SegNet maintains a constant number of features per layer which is typically set to 646464. This has a practical advantage that the computational cost successively decreases for each additional/deeper encoder-decoder pair. ", "title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation" }, { "id": "1505.07293_all_2", "text": " In Sec. 2 we review related recent literature. We describe in detail the SegNet architecture in Sec. 3 along with its qualitative analysis. 
Our quantitative experiments with SegNet on several well known benchmark datasets are described in Sec. 4. We also discuss the advantages and drawbacks of our approach including computational times. We conclude with pointers to future work in Sec. 5. For most of our experiments, we use outdoor RGB road scene analysis (1, 9) and indoor RGBD scene analysis datasets to measure the quantitative performance. ", "title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation" }, { "id": "1505.07293_all_3", "text": " Semantic pixel-wise segmentation is an ongoing topic of research, fuelled by challenging datasets (1, 33, 9). Current best performing methods all mostly rely on hand engineered features generally used for per-pixel independent classification. Typically, a patch is fed into a classifier e.g. Random Forest (32, 2) or Boosting (36, 20) to predict the class probabilities of the center pixel. Features based on appearance , SfM and appearance (2, 36, 20) have been explored for the CamVid test. These per-pixel noisy predictions (often called unary terms) from the classifiers are then smoothed by using a pair-wise or higher order CRF (36, 20) to improve the accuracy. More recent approaches have aimed to produce high quality unaries by trying to predict the labels for all the pixels in a patch as opposed to only the center pixel. This improves the results of Random Forest based unaries but thin structured classes are classfied poorly. Dense depth maps computed from the CamVid video have also been used as input for classification using Random Forests . Another approach argues for the use of a combination of popular hand designed features and spatio temporal super-pixelization to obtain higher accuracy . Recent top performing technique on the CamVid test addresses the imbalance among label frequencies by using additional training data from the PASCAL VOC dataset to learn object detectors. The result of all these techniques indicates the need for improved classification as increases in accuracy have mostly come from adding new features or modalities to the classifier. Post-processing using CRF models of various orders has mainly resulted in improving the accuracy of dominant classes such as sky, road, buildings with little effect on the accuracy of thin structured but equally important classes such as signs, poles, pedestrians. This highlights the need for better pixel-wise classification when imbalanced label frequencies exist. Meanwhile, indoor RGBD pixel-wise semantic segmentation has also gained popularity since the release of the NYU dataset which showed the usefulness of the depth channel to improve segmentation. Their approach used features such as RGB-SIFT, depth-SIFT, location as input to a neural network classifier to predict pixel unaries. The noisy unaries are then smoothed using a CRF. Improvements were made using a richer feature set including LBP and region segmentation to obtain higher accuracy followed by a CRF. In more recent work , both class segmentation and support relationships are inferred together using a combination of RGB and depth based cues. Another approach focusses on real-time joint reconstruction and semantic segmentation, where Random Forests are used as the classifier . Gupta et al. use boundary detection and hierarchical grouping before performing category segmentation. The common attribute along all these approaches is the use of hand engineered features for pixel-wise classifiction of either RGB or RGBD images. 
The application of deep learning for scene segmentation has only just begun. There have also been a few attempts to apply networks designed for categorization to segmentation, particularly by replicating the deepest layer features in blocks to match image dimensions (7, 6, 11, 8). However, the resulting classification is blocky . Another approach using recurrent neural networks merges several low resolution predictions to create input image resolution predictions. On the whole, although some of these techniques already present improvements over hand engineered features . ", "title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation" }, { "id": "1505.07293_all_4", "text": " Our work is inspired by the unsupervised feature learning architecture proposed by Ranzato et. al . The key learning module is an encoder-decoder network where the encoder consists of a filter bank convolution, tanh squashing function, max pooling followed by sub-sampling to obtain the feature maps. For each sample, the indices of the max locations computed during pooling are stored and passed to the decoder. The decoder upsamples the feature maps by using the already stored pooled indices, also called switches, and learns a decoder filter bank to reconstruct the input image. This architecture was used for unsupervised pre-training of feature hierarchies. A similar decoding technique is used for visualizing trained convolutional networks for object classification; the transposed encoder kernels are set as the decoder kernels which are followed by a non-linearity and the pooling indices are used for upsampling. The architecture of Ranzato mainly concentrated on layer wise feature learning using small input patches although during test time a full sized image was the input. This discrepancy was corrected for by Kavukcuoglu et. al. by using test size images/feature maps to learn hierarchical encoders. Both these approaches however did not attempt to use deep encoder-decoder networks for unsupervised feature training as they discarded the decoders after each encoder training. Here, the SegNet architecture differs from these approaches as the objective used for training all the encoder-decoder pairs is the same, i.e., to minimise the cross-entropy label loss. ", "title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation" }, { "id": "1505.07293_all_5", "text": " Other applications where pixel wise predictions are made using deep networks are image super-resolution and depth map prediction from a single image . The authors in discuss the need for learning to upsample from low resolution feature maps which is the central topic of this paper. ", "title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation" }, { "id": "1505.07293_all_6", "text": " A four layer SegNet architecture used in our experiments is illustrated in Fig. 1. Each encoder performs dense convolutions, ReLU non-linearity, a non-overlapping max pooling with a 2×2222\\times 2 window and finally down-sampling. Each decoder upsamples its input using the memorized pooled indices and convolves it with a trainable filter bank. No ReLU non-linearity is used in the decoder unlike the deconvolution network (41, 42). This makes it easier to optimize the filters in each pair. The encoder and decoder filters are also untied to provide additional degrees of freedom to minimize the objective. 
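A minimal PyTorch-style sketch of one encoder–decoder pair as described in passage 1505.07293_all_6 (conv + ReLU + 2×2 non-overlapping max pooling with stored indices in the encoder; index-based upsampling + conv, with no ReLU, in the decoder). Channel counts and names are illustrative; this is not the authors' Matlab implementation.

```python
import torch
import torch.nn as nn

class EncoderDecoderPair(nn.Module):
    """One SegNet-style encoder/decoder pair with memorized max-pooling indices."""
    def __init__(self, in_ch=64, feat=64, k=7):
        super().__init__()
        self.enc_conv = nn.Conv2d(in_ch, feat, kernel_size=k, padding=k // 2)
        self.pool = nn.MaxPool2d(2, stride=2, return_indices=True)   # store argmax locations
        self.unpool = nn.MaxUnpool2d(2, stride=2)                    # upsample with those locations
        self.dec_conv = nn.Conv2d(feat, feat, kernel_size=k, padding=k // 2)  # no ReLU in decoder

    def forward(self, x):
        z, idx = self.pool(torch.relu(self.enc_conv(x)))
        return self.dec_conv(self.unpool(z, idx, output_size=x.shape[-2:]))
```

The untied encoder and decoder filters mentioned above correspond here to `enc_conv` and `dec_conv` being separate, independently trained layers.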
The final layer is a soft-max classifier (with no bias term) which classifies each pixel independently. The output of the soft-max is a K channel image where K is the number of classes. ", "title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation" }, { "id": "1505.07293_all_7", "text": " SegNet uses a “flat” architecture, i.e, the number of features in each layer remains the same (646464 in our case) but with full connectivity. This choice is motivated by two reasons. First, it avoids parameter explosion, unlike an expanding deep encoder network with full feature connectivity (same for decoder). Second, the training time remains the same (in our experiments it slightly decreases) for each additional/deeper encoder-decoder pair as the feature map resolution is smaller which makes convolutions faster. Note that the decoder corresponding to the first encoder (closest to the input image) produces a multi-channel feature map although the encoder input is either 3 or 4 channels (RGB or RGBD) (see Fig. 1). This high dimensional feature representation is fed to the soft-max classifier. This is unlike the other decoders which produce feature maps the same size as their encoder inputs. A fixed pooling window of 2×2222\\times 2 with a stride of non-overlapping 222 pixels is used. This small size preserves thin structures in the scene. Further, a constant kernel size of 7×7777\\times 7 over all the layers was chosen to provide a wide context for smooth labelling i.e. a pixel in the deepest layer feature map can be traced back to a context window in the input image of 106×106106106106\\times 106 pixels. The trade-off here is between the size of the context window and retaining thin structures. Smaller kernels decrease context and larger ones potentially destroy thin structures. ", "title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation" }, { "id": "1505.07293_all_8", "text": " The input to the SegNet can be any arbitrary multi-channel image or feature map(s), e.g., RGB, RGBD, map of normals, depth etc. We perform local contrast normalization (LCN) as a pre-processing step to the input (23, 15). The advantage of this step are many, (i) to correct for non-uniform scene illumination thus reducing the dynamic range (increases contrast in shadowed parts). (ii) highlighting edges which leads the network to learn category shape, (iii) improves convergence as it decorrelates the input dimensions . LCN is performed independently for each modality, i.e., RGB is contrast normalized as a three channel input and depth as a single channel for RGBD inputs. This avoids highlighting pseudo depth edges due to RGB edges and vice-versa. ", "title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation" }, { "id": "1505.07293_all_9", "text": " Most deep learning methods use stochastic gradient descent (SGD) for training . SGD needs sufficient expertise to initialize weights with appropriate magnitudes, adapting appropriately learning rates and momentum parameters which both control the step sizes. Therefore, we adopt L-BFGS based on the comparative study by Ngiam et. al who advocate the use of L-BFGS particularly for auto-encoders. L-BFGS has faster and more stable convergence than SGD. It also works well in large batches which is useful to maximize the throughput of powerful GPUs. 
We initialize the weights in all the layers and the soft-max weights from a zero mean unit variance Gaussian 𝒩​(0,1)𝒩01\\mathcal{N}(0,1) and normalized the kernels to unit L2 norm. We obtained good predictive performance from the network without the need for special layer-wise weight initialization or any learning rate tuning. We also use inverse frequency weighting for the classes to correct for any label imbalances in the training set . ", "title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation" }, { "id": "1505.07293_all_10", "text": " We use mini-batches that maximize GPU usage and avoid GPU-CPU memory transfers. Typically, 25−50255025-50 randomly chosen images (with replacement) per mini-batch. The optimizer is run for 202020 iterations per mini-batch and 101010 epochs for each layer. We empirically observe that the objective plateaus after 5−6565-6 epochs and so we run another 444 epochs as a margin. Note that, after 101010 epochs, each input sample approximately “influences” the optimizer 200200200 times. We train the encoder-decoder pair weights closest to the input layer. The soft-max layer can be trained first or randomly initialised. It then remains fixed throughout the experiment. Next, we introduce a deeper layer of encoder-decoder (see Fig. 2) and train their weights while holding the shallower layer encoder-decoder weights fixed. Note that the objective remains the same, i.e., to minimize label cross-entropy loss over the mini-batch. This is unlike unsupervised feature learning approaches which reconstruct the input of the layer in question (27, 16), thus varying the objective with each layer. The deconvolution network on the other hand optimizes the same reconstruction objective with each deeper layer. The difference to our approach is (i) the objective is unsupervised, (ii) there is no encoder to learn a feed-forward representation thus requiring an optimisation step during test time to produce features for recognition. We successively add deeper encoder-decoder pairs and train them while holding the preceeding pair’s weights fixed. In total, we use 4 layer networks, i.e., 4 encoders and 4 decoders in our experiments. Once the encoder-decoder stack is trained, we find that there is no advantage to training the soft-max layer as it only relies on a linear discriminant function. We wrote our own Matlab GPU compatible implementation of SegNet that uses the minFunc optimization library . Our code has been tested on NVIDIA Tesla K40, GTX GeForce 880M and GTXGeForce780 GPUs. We will make our light-weight Matlab code available publicly soon. With the current state of code optimisation, training a 4 layer deep SegNet on the CamVid dataset (367 training images of 360×480360480360\\times 480) takes about a week. The unoptimized test time is in the order of 222secs/frame: bulk of the computation time is spent performing tensor convolutions in the feedforward path and FFT based convolutions during backpropagation 333more speedup can be gained https://developer.nvidia.com/cuDNN. ", "title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation" }, { "id": "1505.07293_all_11", "text": " We perform an ablation study to gain some insight into about the SegNet features. The work of Zeiler et al. study the effects of feature activations in each layer of a trained network . The feature activations are mapped back to image pixel space using a deconvolutional network. 
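As a sketch of the inverse-frequency class weighting mentioned in passage 1505.07293_all_9, assuming plain 1/frequency weights (the exact normalization is not specified in the excerpt):

```python
import numpy as np

def inverse_frequency_weights(label_maps, num_classes):
    """Per-class weights ~ 1 / pixel frequency, to counter label imbalance."""
    counts = np.zeros(num_classes, dtype=np.float64)
    for y in label_maps:                        # y: HxW array of integer class ids
        counts += np.bincount(y.ravel(), minlength=num_classes)
    freq = counts / counts.sum()
    weights = 1.0 / np.maximum(freq, 1e-12)     # guard against classes absent from the set
    return weights / weights.mean()             # scale so the average weight is 1 (a choice)
```

These weights would then scale the per-class terms of the cross-entropy objective during training.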
The SegNet architecture by construction is trained to decode the encoder activations and we use this to visualize the effect of feature activations (which layer) in the pixel label space. A recent study has shown that in each layer of a deep network it is the “direction” or “space” (ensemble of feature activations) which encodes useful class information rather than individual units (feature activations). We therefore focus our study on the predictive effect of a subset of feature activations at each layer. For a given layer, we compute the feature activations/maps for each sample in the training set. We then compute the root mean square value of each map i.e. ∀j∈{1..64}for-all𝑗1..64\\forall j\\in\\{1..64\\} 1N​∑i∈ℐ(fji)21𝑁subscript𝑖ℐsuperscriptsuperscriptsubscript𝑓𝑗𝑖2\\sqrt{\\frac{1}{N}\\sum_{i\\in\\mathcal{I}}(f_{j}^{i})^{2}} where fjisuperscriptsubscript𝑓𝑗𝑖f_{j}^{i} is jt​hsuperscript𝑗𝑡ℎj^{th} feature map value at pixel i𝑖i at a given layer. This assigns each map a single value e.g., the CamVid training set would have a 646464 dimensional vector for each training sample for layer 4 of the SegNet. We now compute a histogram of the top ‘N’ elements of each such vector over all the samples. This histogram shows the most activated features in that layer over the training set. For any ‘N’, we set the remainder of feature maps to zero (ablation) and decode the pixel-wise labelling for a given input sample. Note that since our training is modular, this can be done after each deeper layer has been added. Some results of the top ’N’ feature activations based labelling across all the layers are shown in Fig. 3. We observe firstly that the predictions get smoother as depth is increased which is a consequence of larger spatial context in the input space. More interestingly, the top-1 4th layer features predict almost entirely the static scene classes and “fill in” the missing cars e.g. with sidewalk. Given the feature(s) which get activated for cars are zeroed out, this prediction is reasonable and indicates the network is able to learn spatial context/class location information. Similarly, trees are filled in with buildings and bollards are extended to poles. In contrast, this effect is less clear and gets worse for shallower layers. This suggests subsets of features in the deeper layers are more “tuned” to certain scene categories in agreement with earlier work . We would like to add here that our efforts to perform an ablation study by choosing each feature map in turn and setting the remaining to zero produced results which were not clearly interpretable. It is also interesting to note that for shallower layers to produce qualitatively better predictions ’N’ has to be set to about 5 or 10. The corresponding histogram has atleast 50%percent5050\\% of the features activated as opposed to about 15%percent1515\\% for the top-1 in layer 4, indicating deeper features are tuned to groups of related categories. ", "title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation" }, { "id": "1505.07293_all_12", "text": " A number of outdoor scene datasets are available for semantic parsing (10, 30, 1, 9). Out of these, we chose the CamVid and KITTI datasets which contains 11 semantic classes such as road, building, cars, pedestrians etc.. There is a large imbalance in their frequencies . 
Road, Sky, Building pixels are approximately 40−50405040-50 times more than pedestrian, poles, sign-symbols, cars, bicyclists in the dataset making it very challenging to label smaller categories. This dataset contains video sequences, thus we are able to benchmark our approach with those which use motion and structure (20, 36, 2) and video segments . Other datasets have more balanced label frequencies and are still image datasets. Another reason for choosing CamVid as compared to SIFT-flow, LabelMe is that the size of the training set is small (367367367) making it feasible to train the SegNet given a standard GPU in reasonable time. The CamVid dataset also contains train and test images (233233233) in day and dusk (poor lighting) conditions. The qualitative comparisons of SegNet predictions with several well known algorithms (unaries, unaries+CRF) are shown in Fig. 4. The qualitative results show the ability of the SegNet to segment small (cars, pedestrians, bicyclist) classes while producing a smooth segmentation of the overall scene. The other methods shown in Fig. 4 use structure from motion based cues. Lacking this cue, the SegNet misses some labels (cars) but fills it in with other reasonable context related classes. The CRF based results are smooth but do not retain small classes. More dense models can be better but with additional cost of inference. Table 1 compares the algorithms numerically and demonstrates its superiority over recent competing methods. The KITTI dataset is the largest publicly available road scene dataset. Recently, some images from this dataset have been hand-labelled (888 classes) for inferring dense 3D semantic maps . Note that the image sizes are approximately, 376×12413761241376\\times 1241, and so we cropped the centre 360×480360480360\\times 480 to make it compatible with the CamVid dataset. We use this dataset to analyse the effect of supervised pre-training using the CamVid data on the KITTI test set. First, we add here that testing on the KITTI samples with only the pre-trained SegNet (using CamVid data) resulted in poor performance. This is because of illumination related differences between the datasets. Therefore, we experimented with three other training variants for the KITTI dataset; (i) training all the layers of the SegNet from a random initialization, denoted SegNet(R), (ii) initializing the parameters with CamVid trained values and training only a soft-max classifier with a hidden layer, denoted SegNet(SM), and (iii) initializing the parameters with CamVid trained values and training only the 4th layer of the SegNet for just 222 epochs, denoted SegNet(L4). High quality predictions are obtained in scenario SegNet(R) as expected (Fig. 5). The good performance with CamVid pre-training and layer 4 training shows that, (i) useful semantic cues can be transferred across datasets using the shallower layers, and (ii) it is beneficial to train the deepest layer of the SegNet first given a small computational budget. Table 3 shows the SegNet(R) is competitive even when temporal cues are not used. For indoor RGBD scenes, the NYU dataset (version 2) is the largest benchmark dataset containing 795795795 training and 654654654 testing images with 141414 class (objects, furniture, wall, ceiling etc.) labelling comparison. The NYU dataset has been used to benchmark Farabet et. al’s multi-scale deep learning approach to scene parsing. 
This benchmark is therefore useful to compare their method, which uses ad hoc feature upsampling, with our learning to upsample based approach. We also note that they learn approximately 1.2​M1.2𝑀1.2M parameters as compared to SegNet’s 1.4​M1.4𝑀1.4M parameters. Other methods either use the smaller NYU dataset , different performance measures or test on a small set of classes citeraey. The quantitative analysis shown in Table 2 show that the SegNet predictions are better the multi-scale convnet (2 pooling layers only) in 9 out of 13 classes. This suggests the SegNet can deal with scale changes by increasing context using deeper layers. The overall results are still far from satisfactory and the lack of cues such as height from ground, depth normalization (used in ) are needed to achieve better performance. The qualitative results in Fig. 6 show that the predictions are largely correct but lack sharp edges. This is due to low input resolution of 320×240320240320\\times 240, lack of ground truth around class edges,and errors in depth interpolation. Another reason is that over the different datasets we tested on, the parameters of the SegNet remained the same. We plan to study the NYU dataset in more detail in the future. Additional results can be viewed in the supplementary material. ", "title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation" }, { "id": "1505.07293_all_13", "text": " We presented SegNet, a fully trainable deep architecture for joint feature learning and mapping an input image in a feed-forward manner to its pixel-wise semantic labels. A highlight of the proposed architecture is its ability to produce smooth segment labels when compared with local patch based classifiers. This is due to deep layers of feature encoding that employ a large spatial context for pixel-wise labelling. To the best of our knowledge this is the first deep learning method to learn to map low resolution encoder feature maps to semantic labels. Both qualitative and numerical accuracy of the SegNet for outdoor and indoor scenes is very competitive, even without use of any CRF post-processing. We have also demonstrated the use of pre-trained SegNet for obtaining good performance on other datasets with a small extra computational effort. The encoder-decoder architecture of the SegNet can also be trained unsupervised and to handle missing data in the input during test time. ", "title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation" } ]
Why does the network take a list of RoIs as input in addition to the images?
The network takes a list of RoIs because, for each RoI (object proposal), the RoI pooling layer extracts a fixed-length feature vector from the shared convolutional feature map, which is then fed to the classification and bounding-box regression layers [15].
[ 15 ]
[ { "id": "1504.08083_all_0", "text": " Recently, deep ConvNets (14, 16) have significantly improved image classification and object detection (9, 19) accuracy. Compared to image classification, object detection is a more challenging task that requires more complex methods to solve. Due to this complexity, current approaches (e.g., (9, 11, 19, 25)) train models in multi-stage pipelines that are slow and inelegant. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_1", "text": " Complexity arises because detection requires the accurate localization of objects, creating two primary challenges. First, numerous candidate object locations (often called “proposals”) must be processed. Second, these candidates provide only rough localization that must be refined to achieve precise localization. Solutions to these problems often compromise speed, accuracy, or simplicity. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_2", "text": " In this paper, we streamline the training process for state-of-the-art ConvNet-based object detectors (9, 11). We propose a single-stage training algorithm that jointly learns to classify object proposals and refine their spatial locations. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_3", "text": " The resulting method can train a very deep detection network (VGG16 ) 9×\\times faster than R-CNN and 3×\\times faster than SPPnet . At runtime, the detection network processes images in 0.3s (excluding object proposal time) while achieving top accuracy on PASCAL VOC 2012 with a mAP of 66% (vs. 62% for R-CNN).111All timings use one Nvidia K40 GPU overclocked to 875 MHz. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_4", "text": " The Region-based Convolutional Network method (R-CNN) achieves excellent object detection accuracy by using a deep ConvNet to classify object proposals. R-CNN, however, has notable drawbacks: 1. Training is a multi-stage pipeline. R-CNN first fine-tunes a ConvNet on object proposals using log loss. Then, it fits SVMs to ConvNet features. These SVMs act as object detectors, replacing the softmax classifier learnt by fine-tuning. In the third training stage, bounding-box regressors are learned. 2. Training is expensive in space and time. For SVM and bounding-box regressor training, features are extracted from each object proposal in each image and written to disk. With very deep networks, such as VGG16, this process takes 2.5 GPU-days for the 5k images of the VOC07 trainval set. These features require hundreds of gigabytes of storage. 3. Object detection is slow. At test-time, features are extracted from each object proposal in each test image. Detection with VGG16 takes 47s / image (on a GPU). ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_5", "text": " R-CNN is slow because it performs a ConvNet forward pass for each object proposal, without sharing computation. Spatial pyramid pooling networks (SPPnets) were proposed to speed up R-CNN by sharing computation. The SPPnet method computes a convolutional feature map for the entire input image and then classifies each object proposal using a feature vector extracted from the shared feature map. Features are extracted for a proposal by max-pooling the portion of the feature map inside the proposal into a fixed-size output (e.g., 6×6666\\times 6). Multiple output sizes are pooled and then concatenated as in spatial pyramid pooling . SPPnet accelerates R-CNN by 10 to 100×\\times at test time. Training time is also reduced by 3×\\times due to faster proposal feature extraction. 
", "title": "Fast R-CNN" }, { "id": "1504.08083_all_6", "text": " SPPnet also has notable drawbacks. Like R-CNN, training is a multi-stage pipeline that involves extracting features, fine-tuning a network with log loss, training SVMs, and finally fitting bounding-box regressors. Features are also written to disk. But unlike R-CNN, the fine-tuning algorithm proposed in cannot update the convolutional layers that precede the spatial pyramid pooling. Unsurprisingly, this limitation (fixed convolutional layers) limits the accuracy of very deep networks. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_7", "text": " We propose a new training algorithm that fixes the disadvantages of R-CNN and SPPnet, while improving on their speed and accuracy. We call this method Fast R-CNN because it’s comparatively fast to train and test. The Fast R-CNN method has several advantages: 1. Higher detection quality (mAP) than R-CNN, SPPnet 2. Training is single-stage, using a multi-task loss 3. Training can update all network layers 4. No disk storage is required for feature caching ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_8", "text": " Fast R-CNN is written in Python and C++ (Caffe ) and is available under the open-source MIT License at https://github.com/rbgirshick/fast-rcnn. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_9", "text": " Fig. 1 illustrates the Fast R-CNN architecture. A Fast R-CNN network takes as input an entire image and a set of object proposals. The network first processes the whole image with several convolutional (conv) and max pooling layers to produce a conv feature map. Then, for each object proposal a region of interest (RoI) pooling layer extracts a fixed-length feature vector from the feature map. Each feature vector is fed into a sequence of fully connected (fc) layers that finally branch into two sibling output layers: one that produces softmax probability estimates over K𝐾K object classes plus a catch-all “background” class and another layer that outputs four real-valued numbers for each of the K𝐾K object classes. Each set of 444 values encodes refined bounding-box positions for one of the K𝐾K classes. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_10", "text": " The RoI pooling layer uses max pooling to convert the features inside any valid region of interest into a small feature map with a fixed spatial extent of H×W𝐻𝑊H\\times W (e.g., 7×7777\\times 7), where H𝐻H and W𝑊W are layer hyper-parameters that are independent of any particular RoI. In this paper, an RoI is a rectangular window into a conv feature map. Each RoI is defined by a four-tuple (r,c,h,w)𝑟𝑐ℎ𝑤(r,c,h,w) that specifies its top-left corner (r,c)𝑟𝑐(r,c) and its height and width (h,w)ℎ𝑤(h,w). ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_11", "text": " RoI max pooling works by dividing the h×wℎ𝑤h\\times w RoI window into an H×W𝐻𝑊H\\times W grid of sub-windows of approximate size h/H×w/Wℎ𝐻𝑤𝑊h/H\\times w/W and then max-pooling the values in each sub-window into the corresponding output grid cell. Pooling is applied independently to each feature map channel, as in standard max pooling. The RoI layer is simply the special-case of the spatial pyramid pooling layer used in SPPnets in which there is only one pyramid level. We use the pooling sub-window calculation given in . 
", "title": "Fast R-CNN" }, { "id": "1504.08083_all_12", "text": " We experiment with three pre-trained ImageNet networks, each with five max pooling layers and between five and thirteen conv layers (see Section 4.1 for network details). When a pre-trained network initializes a Fast R-CNN network, it undergoes three transformations. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_13", "text": " First, the last max pooling layer is replaced by a RoI pooling layer that is configured by setting H𝐻H and W𝑊W to be compatible with the net’s first fully connected layer (e.g., H=W=7𝐻𝑊7H=W=7 for VGG16). ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_14", "text": " Second, the network’s last fully connected layer and softmax (which were trained for 1000-way ImageNet classification) are replaced with the two sibling layers described earlier (a fully connected layer and softmax over K+1𝐾1K+1 categories and category-specific bounding-box regressors). ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_15", "text": " Third, the network is modified to take two data inputs: a list of images and a list of RoIs in those images. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_16", "text": " Training all network weights with back-propagation is an important capability of Fast R-CNN. First, let’s elucidate why SPPnet is unable to update weights below the spatial pyramid pooling layer. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_17", "text": " The root cause is that back-propagation through the SPP layer is highly inefficient when each training sample (i.e. RoI) comes from a different image, which is exactly how R-CNN and SPPnet networks are trained. The inefficiency stems from the fact that each RoI may have a very large receptive field, often spanning the entire input image. Since the forward pass must process the entire receptive field, the training inputs are large (often the entire image). ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_18", "text": " We propose a more efficient training method that takes advantage of feature sharing during training. In Fast R-CNN training, stochastic gradient descent (SGD) mini-batches are sampled hierarchically, first by sampling N𝑁N images and then by sampling R/N𝑅𝑁R/N RoIs from each image. Critically, RoIs from the same image share computation and memory in the forward and backward passes. Making N𝑁N small decreases mini-batch computation. For example, when using N=2𝑁2N=2 and R=128𝑅128R=128, the proposed training scheme is roughly 64×\\times faster than sampling one RoI from 128128128 different images (i.e., the R-CNN and SPPnet strategy). ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_19", "text": " One concern over this strategy is it may cause slow training convergence because RoIs from the same image are correlated. This concern does not appear to be a practical issue and we achieve good results with N=2𝑁2N=2 and R=128𝑅128R=128 using fewer SGD iterations than R-CNN. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_20", "text": " In addition to hierarchical sampling, Fast R-CNN uses a streamlined training process with one fine-tuning stage that jointly optimizes a softmax classifier and bounding-box regressors, rather than training a softmax classifier, SVMs, and regressors in three separate stages (9, 11). The components of this procedure (the loss, mini-batch sampling strategy, back-propagation through RoI pooling layers, and SGD hyper-parameters) are described below. 
", "title": "Fast R-CNN" }, { "id": "1504.08083_all_21", "text": " A Fast R-CNN network has two sibling output layers. The first outputs a discrete probability distribution (per RoI), p=(p0,…,pK)𝑝subscript𝑝0…subscript𝑝𝐾p=(p_{0},\\ldots,p_{K}), over K+1𝐾1K+1 categories. As usual, p𝑝p is computed by a softmax over the K+1𝐾1K+1 outputs of a fully connected layer. The second sibling layer outputs bounding-box regression offsets, tk=(txk,tyk,twk,thk)superscript𝑡𝑘subscriptsuperscript𝑡𝑘xsubscriptsuperscript𝑡𝑘ysubscriptsuperscript𝑡𝑘wsubscriptsuperscript𝑡𝑘ht^{k}=\\left(t^{k}_{\\textrm{x}},t^{k}_{\\textrm{y}},t^{k}_{\\textrm{w}},t^{k}_{\\textrm{h}}\\right), for each of the K𝐾K object classes, indexed by k𝑘k. We use the parameterization for tksuperscript𝑡𝑘t^{k} given in , in which tksuperscript𝑡𝑘t^{k} specifies a scale-invariant translation and log-space height/width shift relative to an object proposal. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_22", "text": " Each training RoI is labeled with a ground-truth class u𝑢u and a ground-truth bounding-box regression target v𝑣v. We use a multi-task loss L𝐿L on each labeled RoI to jointly train for classification and bounding-box regression: L​(p,u,tu,v)=Lcls​(p,u)+λ​(u≥1)​Lloc​(tu,v),𝐿𝑝𝑢superscript𝑡𝑢𝑣subscript𝐿cls𝑝𝑢𝜆delimited-()𝑢1subscript𝐿locsuperscript𝑡𝑢𝑣L(p,u,t^{u},v)=L_{\\textrm{cls}}(p,u)+\\lambda(u\\geq 1)L_{\\textrm{loc}}(t^{u},v), (1) in which Lcls​(p,u)=−log⁡pusubscript𝐿cls𝑝𝑢subscript𝑝𝑢L_{\\textrm{cls}}(p,u)=-\\log p_{u} is log loss for true class u𝑢u. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_23", "text": " The second task loss, Llocsubscript𝐿locL_{\\textrm{loc}}, is defined over a tuple of true bounding-box regression targets for class u𝑢u, v=(vx,vy,vw,vh)𝑣subscript𝑣xsubscript𝑣ysubscript𝑣wsubscript𝑣hv=(v_{\\textrm{x}},v_{\\textrm{y}},v_{\\textrm{w}},v_{\\textrm{h}}), and a predicted tuple tu=(txu,tyu,twu,thu)superscript𝑡𝑢subscriptsuperscript𝑡𝑢xsubscriptsuperscript𝑡𝑢ysubscriptsuperscript𝑡𝑢wsubscriptsuperscript𝑡𝑢ht^{u}=(t^{u}_{\\textrm{x}},t^{u}_{\\textrm{y}},t^{u}_{\\textrm{w}},t^{u}_{\\textrm{h}}), again for class u𝑢u. The Iverson bracket indicator function (u≥1)delimited-()𝑢1(u\\geq 1) evaluates to 1 when u≥1𝑢1u\\geq 1 and 0 otherwise. By convention the catch-all background class is labeled u=0𝑢0u=0. For background RoIs there is no notion of a ground-truth bounding box and hence Llocsubscript𝐿locL_{\\textrm{loc}} is ignored. For bounding-box regression, we use the loss Lloc​(tu,v)=∑i∈{x,y,w,h}smoothL1​(tiu−vi),subscript𝐿locsuperscript𝑡𝑢𝑣subscript𝑖xywhsubscriptsmoothsubscript𝐿1subscriptsuperscript𝑡𝑢𝑖subscript𝑣𝑖L_{\\textrm{loc}}(t^{u},v)=\\sum_{i\\in\\{\\textrm{x},\\textrm{y},\\textrm{w},\\textrm{h}\\}}\\textrm{smooth}_{L_{1}}(t^{u}_{i}-v_{i}), (2) in which smoothL1​(x)={0.5​x2if ​|x|<1|x|−0.5otherwise,subscriptsmoothsubscript𝐿1𝑥cases0.5superscript𝑥2if 𝑥1𝑥0.5otherwise\\textrm{smooth}_{L_{1}}(x)=\\begin{cases}0.5x^{2}&\\text{if }|x|<1\\\\ |x|-0.5&\\text{otherwise},\\end{cases} (3) is a robust L1subscript𝐿1L_{1} loss that is less sensitive to outliers than the L2subscript𝐿2L_{2} loss used in R-CNN and SPPnet. When the regression targets are unbounded, training with L2subscript𝐿2L_{2} loss can require careful tuning of learning rates in order to prevent exploding gradients. Eq. 3 eliminates this sensitivity. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_24", "text": " The hyper-parameter λ𝜆\\lambda in Eq. 1 controls the balance between the two task losses. 
We normalize the ground-truth regression targets visubscript𝑣𝑖v_{i} to have zero mean and unit variance. All experiments use λ=1𝜆1\\lambda=1. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_25", "text": " We note that uses a related loss to train a class-agnostic object proposal network. Different from our approach, advocates for a two-network system that separates localization and classification. OverFeat , R-CNN , and SPPnet also train classifiers and bounding-box localizers, however these methods use stage-wise training, which we show is suboptimal for Fast R-CNN (Section 5.1). ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_26", "text": " During fine-tuning, each SGD mini-batch is constructed from N=2𝑁2N=2 images, chosen uniformly at random (as is common practice, we actually iterate over permutations of the dataset). We use mini-batches of size R=128𝑅128R=128, sampling 646464 RoIs from each image. As in , we take 25% of the RoIs from object proposals that have intersection over union (IoU) overlap with a ground-truth bounding box of at least 0.50.50.5. These RoIs comprise the examples labeled with a foreground object class, i.e. u≥1𝑢1u\\geq 1. The remaining RoIs are sampled from object proposals that have a maximum IoU with ground truth in the interval (0.1,0.5)0.10.5(0.1,0.5), following . These are the background examples and are labeled with u=0𝑢0u=0. The lower threshold of 0.10.10.1 appears to act as a heuristic for hard example mining . During training, images are horizontally flipped with probability 0.50.50.5. No other data augmentation is used. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_27", "text": " Back-propagation routes derivatives through the RoI pooling layer. For clarity, we assume only one image per mini-batch (N=1𝑁1N=1), though the extension to N>1𝑁1N>1 is straightforward because the forward pass treats all images independently. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_28", "text": " Let xi∈ℝsubscript𝑥𝑖ℝx_{i}\\in\\mathbb{R} be the i𝑖i-th activation input into the RoI pooling layer and let yr​jsubscript𝑦𝑟𝑗y_{rj} be the layer’s j𝑗j-th output from the r𝑟r-th RoI. The RoI pooling layer computes yr​j=xi∗​(r,j)subscript𝑦𝑟𝑗subscript𝑥superscript𝑖𝑟𝑗y_{rj}=x_{i^{*}(r,j)}, in which i∗​(r,j)=argmaxi′∈ℛ​(r,j)xi′superscript𝑖𝑟𝑗subscriptargmaxsuperscript𝑖′ℛ𝑟𝑗subscript𝑥superscript𝑖′i^{*}(r,j)=\\operatorname*{argmax}_{i^{\\prime}\\in\\mathcal{R}(r,j)}x_{i^{\\prime}}. ℛ​(r,j)ℛ𝑟𝑗\\mathcal{R}(r,j) is the index set of inputs in the sub-window over which the output unit yr​jsubscript𝑦𝑟𝑗y_{rj} max pools. A single xisubscript𝑥𝑖x_{i} may be assigned to several different outputs yr​jsubscript𝑦𝑟𝑗y_{rj}. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_29", "text": " The RoI pooling layer’s backwards function computes partial derivative of the loss function with respect to each input variable xisubscript𝑥𝑖x_{i} by following the argmax switches: ∂L∂xi=∑r∑j(i=i∗​(r,j))​∂L∂yr​j.𝐿subscript𝑥𝑖subscript𝑟subscript𝑗delimited-()𝑖superscript𝑖𝑟𝑗𝐿subscript𝑦𝑟𝑗\\frac{\\partial L}{\\partial x_{i}}=\\sum_{r}\\sum_{j}\\left(i=i^{*}(r,j)\\right)\\frac{\\partial L}{\\partial y_{rj}}. (4) In words, for each mini-batch RoI r𝑟r and for each pooling output unit yr​jsubscript𝑦𝑟𝑗y_{rj}, the partial derivative ∂L/∂yr​j𝐿subscript𝑦𝑟𝑗\\partial L/\\partial y_{rj} is accumulated if i𝑖i is the argmax selected for yr​jsubscript𝑦𝑟𝑗y_{rj} by max pooling. 
In back-propagation, the partial derivatives ∂L/∂yr​j𝐿subscript𝑦𝑟𝑗\\partial L/\\partial y_{rj} are already computed by the backwards function of the layer on top of the RoI pooling layer. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_30", "text": " The fully connected layers used for softmax classification and bounding-box regression are initialized from zero-mean Gaussian distributions with standard deviations 0.010.010.01 and 0.0010.0010.001, respectively. Biases are initialized to 00. All layers use a per-layer learning rate of 1 for weights and 2 for biases and a global learning rate of 0.0010.0010.001. When training on VOC07 or VOC12 trainval we run SGD for 30k mini-batch iterations, and then lower the learning rate to 0.00010.00010.0001 and train for another 10k iterations. When we train on larger datasets, we run SGD for more iterations, as described later. A momentum of 0.90.90.9 and parameter decay of 0.00050.00050.0005 (on weights and biases) are used. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_31", "text": " We explore two ways of achieving scale invariant object detection: (1) via “brute force” learning and (2) by using image pyramids. These strategies follow the two approaches in . In the brute-force approach, each image is processed at a pre-defined pixel size during both training and testing. The network must directly learn scale-invariant object detection from the training data. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_32", "text": " The multi-scale approach, in contrast, provides approximate scale-invariance to the network through an image pyramid. At test-time, the image pyramid is used to approximately scale-normalize each object proposal. During multi-scale training, we randomly sample a pyramid scale each time an image is sampled, following , as a form of data augmentation. We experiment with multi-scale training for smaller networks only, due to GPU memory limits. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_33", "text": " Once a Fast R-CNN network is fine-tuned, detection amounts to little more than running a forward pass (assuming object proposals are pre-computed). The network takes as input an image (or an image pyramid, encoded as a list of images) and a list of R𝑅R object proposals to score. At test-time, R𝑅R is typically around 200020002000, although we will consider cases in which it is larger (≈\\approx 454545k). When using an image pyramid, each RoI is assigned to the scale such that the scaled RoI is closest to 2242superscript2242224^{2} pixels in area . ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_34", "text": " For each test RoI r𝑟r, the forward pass outputs a class posterior probability distribution p𝑝p and a set of predicted bounding-box offsets relative to r𝑟r (each of the K𝐾K classes gets its own refined bounding-box prediction). We assign a detection confidence to r𝑟r for each object class k𝑘k using the estimated probability Pr​(class=k|r)=ΔpksuperscriptΔPrclassconditional𝑘𝑟subscript𝑝𝑘\\textrm{Pr}(\\textrm{class}=k~{}|~{}r)\\stackrel{{\\scriptstyle\\Delta}}{{=}}p_{k}. We then perform non-maximum suppression independently for each class using the algorithm and settings from R-CNN . ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_35", "text": " For whole-image classification, the time spent computing the fully connected layers is small compared to the conv layers. 
On the contrary, for detection the number of RoIs to process is large and nearly half of the forward pass time is spent computing the fully connected layers (see Fig. 2). Large fully connected layers are easily accelerated by compressing them with truncated SVD (5, 23). ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_36", "text": " In this technique, a layer parameterized by the u \\times v weight matrix W is approximately factorized as W \\approx U \\Sigma_{t} V^{T} (5) using SVD. In this factorization, U is a u \\times t matrix comprising the first t left-singular vectors of W, \\Sigma_{t} is a t \\times t diagonal matrix containing the top t singular values of W, and V is a v \\times t matrix comprising the first t right-singular vectors of W. Truncated SVD reduces the parameter count from uv to t(u+v), which can be significant if t is much smaller than \\min(u,v). To compress a network, the single fully connected layer corresponding to W is replaced by two fully connected layers, without a non-linearity between them. The first of these layers uses the weight matrix \\Sigma_{t} V^{T} (and no biases) and the second uses U (with the original biases associated with W). This simple compression method gives good speedups when the number of RoIs is large. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_37", "text": " Three main results support this paper’s contributions: 1. State-of-the-art mAP on VOC07, 2010, and 2012 2. Fast training and testing compared to R-CNN, SPPnet 3. Fine-tuning conv layers in VGG16 improves mAP ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_38", "text": " Our experiments use three pre-trained ImageNet models that are available online (https://github.com/BVLC/caffe/wiki/Model-Zoo). The first is the CaffeNet (essentially AlexNet ) from R-CNN . We alternatively refer to this CaffeNet as model S, for “small.” The second network is VGG_CNN_M_1024 from , which has the same depth as S, but is wider. We call this network model M, for “medium.” The final network is the very deep VGG16 model from . Since this model is the largest, we call it model L. In this section, all experiments use single-scale training and testing (s=600; see Section 5.2 for details). ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_39", "text": " On these datasets, we compare Fast R-CNN (FRCN, for short) against the top methods on the comp4 (outside data) track from the public leaderboard (Table 2, Table 3; http://host.robots.ox.ac.uk:8080/leaderboard, accessed April 18, 2015). For the NUS_NIN_c2000 and BabyLearning methods, there are no associated publications at this time and we could not find exact information on the ConvNet architectures used; they are variants of the Network-in-Network design . All other methods are initialized from the same pre-trained VGG16 network. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_40", "text": " Fast R-CNN achieves the top result on VOC12 with a mAP of 65.7% (and 68.4% with extra data). It is also two orders of magnitude faster than the other methods, which are all based on the “slow” R-CNN pipeline. On VOC10, SegDeepM achieves a higher mAP than Fast R-CNN (67.2% vs. 66.1%). 
SegDeepM is trained on VOC12 trainval plus segmentation annotations; it is designed to boost R-CNN accuracy by using a Markov random field to reason over R-CNN detections and segmentations from the O2P semantic-segmentation method. Fast R-CNN can be swapped into SegDeepM in place of R-CNN, which may lead to better results. When using the enlarged 07++12 training set (see Table 2 caption), Fast R-CNN’s mAP increases to 68.8%, surpassing SegDeepM. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_41", "text": " On VOC07, we compare Fast R-CNN to R-CNN and SPPnet. All methods start from the same pre-trained VGG16 network and use bounding-box regression. The VGG16 SPPnet results were computed by the authors of . SPPnet uses five scales during both training and testing. The improvement of Fast R-CNN over SPPnet illustrates that even though Fast R-CNN uses single-scale training and testing, fine-tuning the conv layers provides a large improvement in mAP (from 63.1% to 66.9%). R-CNN achieves a mAP of 66.0%. As a minor point, SPPnet was trained without examples marked as “difficult” in PASCAL. Removing these examples improves Fast R-CNN mAP to 68.1%. All other experiments use “difficult” examples. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_42", "text": " Fast training and testing times are our second main result. Table 4 compares training time (hours), testing rate (seconds per image), and mAP on VOC07 between Fast R-CNN, R-CNN, and SPPnet. For VGG16, Fast R-CNN processes images 146×\\times faster than R-CNN without truncated SVD and 213×\\times faster with it. Training time is reduced by 9×\\times, from 84 hours to 9.5. Compared to SPPnet, Fast R-CNN trains VGG16 2.7×\\times faster (in 9.5 vs. 25.5 hours) and tests 7×\\times faster without truncated SVD or 10×\\times faster with it. Fast R-CNN also eliminates hundreds of gigabytes of disk storage, because it does not cache features. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_43", "text": " Truncated SVD can reduce detection time by more than 30% with only a small (0.3 percentage point) drop in mAP and without needing to perform additional fine-tuning after model compression. Fig. 2 illustrates how using the top 102410241024 singular values from the 25088×409625088409625088\\times 4096 matrix in VGG16’s fc6 layer and the top 256256256 singular values from the 4096×4096409640964096\\times 4096 fc7 layer reduces runtime with little loss in mAP. Further speed-ups are possible with smaller drops in mAP if one fine-tunes again after compression. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_44", "text": " For the less deep networks considered in the SPPnet paper , fine-tuning only the fully connected layers appeared to be sufficient for good accuracy. We hypothesized that this result would not hold for very deep networks. To validate that fine-tuning the conv layers is important for VGG16, we use Fast R-CNN to fine-tune, but freeze the thirteen conv layers so that only the fully connected layers learn. This ablation emulates single-scale SPPnet training and decreases mAP from 66.9% to 61.4% (Table 5). This experiment verifies our hypothesis: training through the RoI pooling layer is important for very deep nets. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_45", "text": " Does this mean that all conv layers should be fine-tuned? In short, no. In the smaller networks (S and M) we find that conv1 is generic and task independent (a well-known fact ). Allowing conv1 to learn, or not, has no meaningful effect on mAP. 
For VGG16, we found it only necessary to update layers from conv3_1 and up (9 of the 13 conv layers). This observation is pragmatic: (1) updating from conv2_1 slows training by 1.3×\\times (12.5 vs. 9.5 hours) compared to learning from conv3_1; and (2) updating from conv1_1 over-runs GPU memory. The difference in mAP when learning from conv2_1 up was only +0.30.3+0.3 points (Table 5, last column). All Fast R-CNN results in this paper using VGG16 fine-tune layers conv3_1 and up; all experiments with models S and M fine-tune layers conv2 and up. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_46", "text": " We conducted experiments to understand how Fast R-CNN compares to R-CNN and SPPnet, as well as to evaluate design decisions. Following best practices, we performed these experiments on the PASCAL VOC07 dataset. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_47", "text": " Multi-task training is convenient because it avoids managing a pipeline of sequentially-trained tasks. But it also has the potential to improve results because the tasks influence each other through a shared representation (the ConvNet) . Does multi-task training improve object detection accuracy in Fast R-CNN? ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_48", "text": " To test this question, we train baseline networks that use only the classification loss, Lclssubscript𝐿clsL_{\\textrm{cls}}, in Eq. 1 (i.e., setting λ=0𝜆0\\lambda=0). These baselines are printed for models S, M, and L in the first column of each group in Table 6. Note that these models do not have bounding-box regressors. Next (second column per group), we take networks that were trained with the multi-task loss (Eq. 1, λ=1𝜆1\\lambda=1), but we disable bounding-box regression at test time. This isolates the networks’ classification accuracy and allows an apples-to-apples comparison with the baseline networks. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_49", "text": " Across all three networks we observe that multi-task training improves pure classification accuracy relative to training for classification alone. The improvement ranges from +0.80.8+0.8 to +1.11.1+1.1 mAP points, showing a consistent positive effect from multi-task learning. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_50", "text": " Finally, we take the baseline models (trained with only the classification loss), tack on the bounding-box regression layer, and train them with Ll​o​csubscript𝐿𝑙𝑜𝑐L_{loc} while keeping all other network parameters frozen. The third column in each group shows the results of this stage-wise training scheme: mAP improves over column one, but stage-wise training underperforms multi-task training (forth column per group). ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_51", "text": " We compare two strategies for achieving scale-invariant object detection: brute-force learning (single scale) and image pyramids (multi-scale). In either case, we define the scale s𝑠s of an image to be the length of its shortest side. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_52", "text": " All single-scale experiments use s=600𝑠600s=600 pixels; s𝑠s may be less than 600600600 for some images as we cap the longest image side at 100010001000 pixels and maintain the image’s aspect ratio. These values were selected so that VGG16 fits in GPU memory during fine-tuning. The smaller models are not memory bound and can benefit from larger values of s𝑠s; however, optimizing s𝑠s for each model is not our main concern. 
We note that PASCAL images are 384×473384473384\\times 473 pixels on average and thus the single-scale setting typically upsamples images by a factor of 1.6. The average effective stride at the RoI pooling layer is thus ≈10absent10\\approx 10 pixels. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_53", "text": " In the multi-scale setting, we use the same five scales specified in (s∈{480,576,688,864,1200}𝑠4805766888641200s\\in\\{480,576,688,864,1200\\}) to facilitate comparison with SPPnet. However, we cap the longest side at 200020002000 pixels to avoid exceeding GPU memory. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_54", "text": " Table 7 shows models S and M when trained and tested with either one or five scales. Perhaps the most surprising result in was that single-scale detection performs almost as well as multi-scale detection. Our findings confirm their result: deep ConvNets are adept at directly learning scale invariance. The multi-scale approach offers only a small increase in mAP at a large cost in compute time (Table 7). In the case of VGG16 (model L), we are limited to using a single scale by implementation details. Yet it achieves a mAP of 66.9%, which is slightly higher than the 66.0% reported for R-CNN , even though R-CNN uses “infinite” scales in the sense that each proposal is warped to a canonical size. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_55", "text": " Since single-scale processing offers the best tradeoff between speed and accuracy, especially for very deep models, all experiments outside of this sub-section use single-scale training and testing with s=600𝑠600s=600 pixels. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_56", "text": " A good object detector should improve when supplied with more training data. Zhu et al. found that DPM mAP saturates after only a few hundred to thousand training examples. Here we augment the VOC07 trainval set with the VOC12 trainval set, roughly tripling the number of images to 16.5k, to evaluate Fast R-CNN. Enlarging the training set improves mAP on VOC07 test from 66.9% to 70.0% (Table 1). When training on this dataset we use 60k mini-batch iterations instead of 40k. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_57", "text": " We perform similar experiments for VOC10 and 2012, for which we construct a dataset of 21.5k images from the union of VOC07 trainval, test, and VOC12 trainval. When training on this dataset, we use 100k SGD iterations and lower the learning rate by 0.1×0.1\\times each 40k iterations (instead of each 30k). For VOC10 and 2012, mAP improves from 66.1% to 68.8% and from 65.7% to 68.4%, respectively. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_58", "text": " Fast R-CNN uses the softmax classifier learnt during fine-tuning instead of training one-vs-rest linear SVMs post-hoc, as was done in R-CNN and SPPnet. To understand the impact of this choice, we implemented post-hoc SVM training with hard negative mining in Fast R-CNN. We use the same training algorithm and hyper-parameters as in R-CNN. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_59", "text": " Table 8 shows softmax slightly outperforming SVM for all three networks, by +0.10.1+0.1 to +0.80.8+0.8 mAP points. This effect is small, but it demonstrates that “one-shot” fine-tuning is sufficient compared to previous multi-stage training approaches. We note that softmax, unlike one-vs-rest SVMs, introduces competition between classes when scoring a RoI. 
", "title": "Fast R-CNN" }, { "id": "1504.08083_all_60", "text": " There are (broadly) two types of object detectors: those that use a sparse set of object proposals (e.g., selective search ) and those that use a dense set (e.g., DPM ). Classifying sparse proposals is a type of cascade in which the proposal mechanism first rejects a vast number of candidates leaving the classifier with a small set to evaluate. This cascade improves detection accuracy when applied to DPM detections . We find evidence that the proposal-classifier cascade also improves Fast R-CNN accuracy. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_61", "text": " Using selective search’s quality mode, we sweep from 1k to 10k proposals per image, each time re-training and re-testing model M. If proposals serve a purely computational role, increasing the number of proposals per image should not harm mAP. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_62", "text": " We find that mAP rises and then falls slightly as the proposal count increases (Fig. 3, solid blue line). This experiment shows that swamping the deep classifier with more proposals does not help, and even slightly hurts, accuracy. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_63", "text": " This result is difficult to predict without actually running the experiment. The state-of-the-art for measuring object proposal quality is Average Recall (AR) . AR correlates well with mAP for several proposal methods using R-CNN, when using a fixed number of proposals per image. Fig. 3 shows that AR (solid red line) does not correlate well with mAP as the number of proposals per image is varied. AR must be used with care; higher AR due to more proposals does not imply that mAP will increase. Fortunately, training and testing with model M takes less than 2.5 hours. Fast R-CNN thus enables efficient, direct evaluation of object proposal mAP, which is preferable to proxy metrics. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_64", "text": " We also investigate Fast R-CNN when using densely generated boxes (over scale, position, and aspect ratio), at a rate of about 45k boxes / image. This dense set is rich enough that when each selective search box is replaced by its closest (in IoU) dense box, mAP drops only 1 point (to 57.7%, Fig. 3, blue triangle). ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_65", "text": " The statistics of the dense boxes differ from those of selective search boxes. Starting with 2k selective search boxes, we test mAP when adding a random sample of 1000×{2,4,6,8,10,32,45}100024681032451000\\times\\{2,4,6,8,10,32,45\\} dense boxes. For each experiment we re-train and re-test model M. When these dense boxes are added, mAP falls more strongly than when adding more selective search boxes, eventually reaching 53.0%. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_66", "text": " We also train and test Fast R-CNN using only dense boxes (45k / image). This setting yields a mAP of 52.9% (blue diamond). Finally, we check if SVMs with hard negative mining are needed to cope with the dense box distribution. SVMs do even worse: 49.3% (blue circle). ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_67", "text": " We applied Fast R-CNN (with VGG16) to the MS COCO dataset to establish a preliminary baseline. We trained on the 80k image training set for 240k iterations and evaluated on the “test-dev” set using the evaluation server. The PASCAL-style mAP is 35.9%; the new COCO-style AP, which also averages over IoU thresholds, is 19.7%. 
", "title": "Fast R-CNN" }, { "id": "1504.08083_all_68", "text": " This paper proposes Fast R-CNN, a clean and fast update to R-CNN and SPPnet. In addition to reporting state-of-the-art detection results, we present detailed experiments that we hope provide new insights. Of particular note, sparse object proposals appear to improve detector quality. This issue was too costly (in time) to probe in the past, but becomes practical with Fast R-CNN. Of course, there may exist yet undiscovered techniques that allow dense boxes to perform as well as sparse proposals. Such methods, if developed, may help further accelerate object detection. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_69", "text": " I thank Kaiming He, Larry Zitnick, and Piotr Dollár for helpful discussions and encouragement. ", "title": "Fast R-CNN" } ]
Unsupervised learning has long had great success in advancing the field of natural language processing (NLP) (Liu et al., 2019a; Brown et al., 2020).
Unsupervised learning enables networks to learn from orders of magnitude more data [0]. This large quantity of data is important to learn representations of more subtle, less common concepts in the world [1]. Unsupervised learning has long had great success in advancing the field of natural language processing (NLP) [2].
[ 0, 1, 2 ]
[ { "id": "2209.14792_all_0", "text": " The Internet has fueled collecting billions of (alt-text, image) pairs from HTML pages (Schuhmann et al., 2022), enabling the recent breakthroughs in Text-to-Image (T2I) modeling. However, replicating this success for videos is limited since a similarly sized (text, video) dataset cannot be easily collected. It would be wasteful to train Text-to-Video (T2V) models from scratch when there already exist models that can generate images. Moreover, unsupervised learning enables networks to learn from orders of magnitude more data. This large quantity of data is important to learn representations of more subtle, less common concepts in the world. Unsupervised learning has long had great success in advancing the field of natural language processing (NLP) (Liu et al., 2019a; Brown et al., 2020). Models pre-trained this way yield considerably higher performance than when solely trained in a supervised manner. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_1", "text": " Inspired by these motivations, we propose Make-A-Video. Make-A-Video leverages T2I models to learn the correspondence between text and the visual world, and uses unsupervised learning on unlabeled (unpaired) video data, to learn realistic motion. Together, Make-A-Video generates videos from text without leveraging paired text-video data. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_2", "text": " Clearly, text describing images does not capture the entirety of phenomena observed in videos. That said, one can often infer actions and events from static images (e.g. a woman drinking coffee, or an elephant kicking a football) as done in image-based action recognition systems (Girish et al., 2020). Moreover, even without text descriptions, unsupervised videos are sufficient to learn how different entities in the world move and interact (e.g. the motion of waves at the beach, or of an elephant’s trunk). As a result, a model that has only seen text describing images is surprisingly effective at generating short videos, as demonstrated by our temporal diffusion-based method. Make-A-Video sets the new state-of-the-art in T2V generation. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_3", "text": " Using function-preserving transformations, we extend the spatial layers at the model initialization stage, to include temporal information. The extended spatial-temporal network includes new attention modules that learn temporal world dynamics from a collection of videos. This procedure significantly accelerates the T2V training process by instantaneously transferring the knowledge from a previously trained T2I network to a new T2V one. To enhance the visual quality, we train spatial super-resolution models as well as frame interpolation models. This increases the resolution of the generated videos, as well as enables a higher (controllable) frame rate. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_4", "text": " Our main contributions are: • We present Make-A-Video – an effective method that extends a diffusion-based T2I model to T2V through a spatiotemporally factorized diffusion model. 
• We leverage joint text-image priors to bypass the need for paired text-video data, which in turn allows us to potentially scale to larger quantities of video data. • We present super-resolution strategies in space and time that, for the first time, generate high-definition, high frame-rate videos given a user-provided textual input. • We evaluate Make-A-Video against existing T2V systems and present: (a) State-of-the-art results in quantitative as well as qualitative measures, and (b) A more thorough evaluation than existing literature in T2V. We also collect a test set of 300 prompts for zero-shot T2V human evaluation which we plan to release. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_5", "text": " Text-to-Image Generation.  (Reed et al., 2016) is among the first methods to extend unconditional Generative Adversairal Network (GAN) (Goodfellow et al., 2014) to T2I generation. Later GAN variants have focused on progressive generation (Zhang et al., 2017; Hong et al., 2018), or better text-image alignment (Xu et al., 2018; Zhang et al., 2021). The pioneering work of DALL-E (Ramesh et al., 2021) considers T2I generation as a sequence-to-sequence translation problem using a discrete variational auto-encoder (VQVAE) and Transformer (Vaswani et al., 2017). Additional variants (Ding et al., 2022) have been proposed since then. For example, Make-A-Scene (Gafni et al., 2022) explores controllable T2I generation using semantic maps. Parti (Yu et al., 2022a) aims for more diverse content generation through an encoder-decoder architecture and an improved image tokenizer (Yu et al., 2021). On the other hand, Denoising Diffusion Probabilistic Models (DDPMs) (Ho et al., 2020) are successfully leveraged for T2I generation. GLIDE (Nichol et al., 2021) trained a T2I and an upsampling diffusion model for cascade generation. GLIDE’s proposed classifier-free guidance has been widely adopted in T2I generation to improve image quality and text faithfulness. DALLE-2 (Ramesh et al., 2022) leverages the CLIP (Radford et al., 2021) latent space and a prior model. VQ-diffusion (Gu et al., 2022) and stable diffusion (Rombach et al., 2022) performs T2I generation in the latent space instead of pixel space to improve efficiency. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_6", "text": " Text-to-Video Generation. While there is remarkable progress in T2I generation, the progress of T2V generation lags behind largely due to two main reasons: the lack of large-scale datasets with high-quality text-video pairs, and the complexity of modeling higher-dimensional video data. Early works (Mittal et al., 2017; Pan et al., 2017; Marwah et al., 2017; Li et al., 2018; Gupta et al., 2018; Liu et al., 2019b) are mainly focused on video generation in simple domains, such as moving digits or specific human actions. To our knowledge, Sync-DRAW (Mittal et al., 2017) is the first T2V generation approach that leverages a VAE with recurrent attention. (Pan et al., 2017) and (Li et al., 2018) extend GANs from image generation to T2V generation. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_7", "text": " More recently, GODIVA (Wu et al., 2021a) is the first to use 2D VQVAE and sparse attention for T2V generation supporting more realistic scenes. 
NÜWA (Wu et al., 2021b) extends GODIVA, and presents a unified representation for various generation tasks in a multitask learning scheme. To further improve the performance of T2V generation, CogVideo (Hong et al., 2022) is built on top of a frozen CogView-2 (Ding et al., 2022) T2I model by adding additional temporal attention modules. Video Diffusion Models (VDM) (Ho et al., 2022) uses a space-time factorized U-Net with joint image and video data training. While both CogVideo and VDM collected 10M private text-video pairs for training, our work uses solely open-source datasets, making it easier to reproduce. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_8", "text": " Leveraging Image Priors for Video Generation. Due to the complexity of modeling videos and the challenges in high-quality video data collection, it is natural to consider leveraging image priors for videos to simplifying the learning process. After all, an image is a video with a single frame (Bain et al., 2021). In unconditional video generation, MoCoGAN-HD (Tian et al., 2021) formulates video generation as the task of finding a trajectory in the latent space of a pre-trained and fixed image generation model. In T2V generation, NÜWA (Wu et al., 2021b) combines image and video datasets in a multitask pre-training stage to improve model generalization for fine-tuning. CogVideo (Hong et al., 2022) uses a pre-trained and fixed T2I model for T2V generation with only a small number of trainable parameters to reduce memory usage during training. But the fixed autoencoder and T2I models can be restrictive for T2V generation. The architecture of VDM (Ho et al., 2022) can enable joint image and video generation. However, they sample random independent images from random videos as their source of images, and do not leverage the massive text-image datasets. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_9", "text": " Make-A-Video differs from previous works in several aspects. First, our architecture breaks the dependency on text-video pairs for T2V generation. This is a significant advantage compared to prior work, that has to be restricted to narrow domains (Mittal et al., 2017; Gupta et al., 2018; Ge et al., 2022; Hayes et al., 2022), or require large-scale paired text-video data (Hong et al., 2022; Ho et al., 2022). Second, we fine-tune the T2I model for video generation, gaining the advantage of adapting the model weights effectively, compared to freezing the weights as in CogVideo (Hong et al., 2022). Third, motivated from prior work on efficient architectures for video and 3D vision tasks (Ye et al., 2019; Qiu et al., 2017; Xie et al., 2018), our use of pseudo-3D convolution (Qiu et al., 2017) and temporal attention layers not only better leverage a T2I architecture, it also allows for better temporal information fusion compared to VDM (Ho et al., 2022). ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_10", "text": " Make-A-Video consists of three main components: (i) A base T2I model trained on text-image pairs (Sec. 3.1), (ii) spatiotemporal convolution and attention layers that extend the networks’ building blocks to the temporal dimension (Sec. 
3.2), and (iii) spatiotemporal networks that consist of both spatiotemporal layers, as well as another crucial element needed for T2V generation - a frame interpolation network for high frame rate generation (Sec. 3.3). ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_11", "text": " Make-A-Video’s final T2V inference scheme (depicted in Fig. 2) can be formulated as: \\hat{y}_{t} = \\operatorname{SR}_{h} \\circ \\operatorname{SR}_{l}^{t} \\circ \\uparrow_{F} \\circ \\operatorname{D}^{t} \\circ \\operatorname{P} \\circ (\\hat{x}, \\operatorname{C}_{x}(x)), (1) where \\hat{y}_{t} is the generated video, \\operatorname{SR}_{h}, \\operatorname{SR}_{l} are the spatial and spatiotemporal super-resolution networks (Sec. 3.2), \\uparrow_{F} is a frame interpolation network (Sec. 3.3), \\operatorname{D}^{t} is the spatiotemporal decoder (Sec. 3.2), \\operatorname{P} is the prior (Sec. 3.1), \\hat{x} is the BPE-encoded text, \\operatorname{C}_{x} is the CLIP text encoder (Radford et al., 2021), and x is the input text. The three main components are described in detail in the following sections. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_12", "text": " Prior to the addition of the temporal components, we train the backbone of our method: a T2I model trained on text-image pairs, sharing the core components with the work of (Ramesh et al., 2022). ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_13", "text": " We use the following networks to produce high-resolution images from text: (i) A prior network \\operatorname{P}, that during inference generates image embeddings y_{e} given text embeddings x_{e} and BPE encoded text tokens \\hat{x}, (ii) a decoder network \\operatorname{D} that generates a low-resolution 64 \\times 64 RGB image \\hat{y}_{l}, conditioned on the image embeddings y_{e}, and (iii) two super-resolution networks \\operatorname{SR}_{l}, \\operatorname{SR}_{h} that increase the generated image \\hat{y}_{l} resolution to 256 \\times 256 and 768 \\times 768 pixels respectively, resulting in the final generated image \\hat{y} (we then downsample to 512 using bicubic interpolation for a cleaner aesthetic; maintaining a clean aesthetic for high definition videos is part of future work). ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_14", "text": " In order to expand the two-dimensional (2D) conditional network into the temporal dimension, we modify the two key building blocks that now require not just spatial but also temporal dimensions in order to generate videos: (i) Convolutional layers (Sec. 3.2.1), and (ii) attention layers (Sec. 3.2.2), discussed in the following two subsections. Other layers, such as fully-connected layers, do not require specific handling when adding an additional dimension, as they are agnostic to structured spatial and temporal information. 
Temporal modifications are made in most U-Net-based diffusion networks: the spatiotemporal decoder \\operatorname{D}^{t} now generating 16 RGB frames, each of size 64 \\times 64, the newly added frame interpolation network \\uparrow_{F}, increasing the effective frame rate by interpolating between the 16 generated frames (as depicted in Fig. 2), and the super-resolution networks \\operatorname{SR}_{l}^{t}. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_15", "text": " Note that super resolution involves hallucinating information. In order to not have flickering artifacts, the hallucination must be consistent across frames. As a result, our \\operatorname{SR}_{l}^{t} module operates across spatial and temporal dimensions. In qualitative inspection we found this to significantly outperform per-frame super resolution. It is challenging to extend \\operatorname{SR}_{h} to the temporal dimension due to memory and compute constraints, as well as a scarcity of high resolution video data. So \\operatorname{SR}_{h} operates only along the spatial dimensions. But to encourage consistent detail hallucination across frames, we use the same noise initialization for each frame. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_16", "text": " Motivated by separable convolutions (Chollet, 2017), we stack a 1D convolution following each 2D convolutional (conv) layer, as shown in Fig. 3. This facilitates information sharing between the spatial and temporal axes, without succumbing to the heavy computational load of 3D conv layers. In addition, it creates a concrete partition between the pre-trained 2D conv layers and the newly initialized 1D conv layers, allowing us to train the temporal convolutions from scratch, while retaining the previously learned spatial knowledge in the spatial convolutions’ weights. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_17", "text": " Given an input tensor h \\in \\mathbb{R}^{B \\times C \\times F \\times H \\times W}, where B, C, F, H, W are the batch, channels, frames, height, and width dimensions respectively, the Pseudo-3D convolutional layer is defined as: ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_18", "text": " Conv_{P3D}(h) := Conv_{1D}(Conv_{2D}(h) \\circ T) \\circ T, (2) ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_19", "text": " where the transpose operator \\circ T swaps between the spatial and temporal dimensions. For smooth initialization, while the Conv_{2D} layer is initialized from the pre-trained T2I model, the Conv_{1D} layer is initialized as the identity function, enabling a seamless transition from training spatial-only layers, to spatiotemporal layers. Note that at initialization, the network will generate K different images (due to random noise), each faithful to the input text but lacking temporal coherence. 
", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_20", "text": " A crucial component of T2I networks is the attention layer, where in addition to self-attending to extracted features, text information is injected to several network hierarchies, alongside other relevant information, such as the diffusion time-step. While using 3D convolutional layers is computationally heavy, adding the temporal dimension to attention layers is outright infeasible in terms of memory consumption. Inspired by the work of (Ho et al., 2022), we extend our dimension decomposition strategy to attention layers as well. Following each (pre-trained) spatial attention layer, we stack a temporal attention layer, which as with the convolutional layers, approximates a full spatiotemporal attention layer. Specifically, given an input tensor hℎh, we define f​l​a​t​t​e​n𝑓𝑙𝑎𝑡𝑡𝑒𝑛flatten as a matrix operator that flattens the spatial dimension into h′∈RB×C×F×H​Wsuperscriptℎ′superscript𝑅𝐵𝐶𝐹𝐻𝑊h^{\\prime}\\in R^{B\\times C\\times F\\times HW}. u​n​f​l​a​t​t​e​n𝑢𝑛𝑓𝑙𝑎𝑡𝑡𝑒𝑛unflatten is defined as the inverse matrix operator. The Pseudo-3D attention layer therefore is therefore defined as: ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_21", "text": " A​T​T​NP​3​D​(h)=u​n​f​l​a​t​t​e​n​(A​T​T​N1​D​(A​T​T​N2​D​(f​l​a​t​t​e​n​(h))∘T)∘T).𝐴𝑇𝑇subscript𝑁𝑃3𝐷ℎ𝑢𝑛𝑓𝑙𝑎𝑡𝑡𝑒𝑛𝐴𝑇𝑇subscript𝑁1𝐷𝐴𝑇𝑇subscript𝑁2𝐷𝑓𝑙𝑎𝑡𝑡𝑒𝑛ℎ𝑇𝑇ATTN_{P3D}(h)=unflatten(ATTN_{1D}(ATTN_{2D}(flatten(h))\\circ T)\\circ T). (3) ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_22", "text": " Similarly to C​o​n​vP​3​D𝐶𝑜𝑛subscript𝑣𝑃3𝐷Conv_{P3D}, to allow for smooth spatiotemporal initialization, the A​T​T​N2​D𝐴𝑇𝑇subscript𝑁2𝐷ATTN_{2D} layer is initialized from the pre-trained T2I model and the A​T​T​N1​D𝐴𝑇𝑇subscript𝑁1𝐷ATTN_{1D} layer is initialized as the identity function. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_23", "text": " Factorized space-time attention layers have also been used in VDM (Ho et al., 2022) and CogVideo (Hong et al., 2022). CogVideo has added temporal layers to each (frozen) spatial layers whereas we train them jointly. In order to force their network to train for images and videos interchangeably, VDM has extended their 2D U-Net to 3D through unflattened 1x3x3 convolution filters, such that the subsequent spatial attention remains 2D, and added 1D temporal attention through relative position embeddings. In contrast, we apply an additional 3x1x1 convolution projection (after each 1x3x3) such that the temporal information will also be passed through each convolution layer. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_24", "text": " Frame rate conditioning. In addition to the T2I conditionings, similar to CogVideo (Hong et al., 2022), we add an additional conditioning parameter f​p​s𝑓𝑝𝑠fps, representing the number of frames-per-second in a generated video. Conditioning on a varying number of frames-per-second, enables an additional augmentation method to tackle the limited volume of available videos at training time, and provides additional control on the generated video at inference time. 
", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_25", "text": " In addition to the spatiotemporal modifications discussed in Sec. 3.2, we train a new masked frame interpolation and extrapolation network ↑Fsubscript↑𝐹\\uparrow_{F}, capable of increasing the number of frames of the generated video either by frame interpolation for a smoother generated video, or by pre/post frame extrapolation for extending the video length. In order to increase the frame rate within memory and compute constraints, we fine-tune a spatiotemporal decoder DtsuperscriptDt\\operatorname{D^{t}} on the task of masked frame interpolation, by zero-padding the masked input frames, enabling video upsampling. When fine-tuning on masked frame interpolation, we add an additional 4 channels to the input of the U-Net: 3 channels for the RGB masked video input and an additional binary channel indicating which frames are masked. We fine-tune with variable frame-skips and f​p​s𝑓𝑝𝑠fps conditioning to enable multiple temporal upsample rates at inference time. We denote ↑Fsubscript↑𝐹\\uparrow_{F} as the operator that expands the given video tensor through masked frame interpolation. For all of our experiments we applied ↑Fsubscript↑𝐹\\uparrow_{F} with frame skip 5 to upsample a 16 frame video to 76 frames ((16-1)×\\times5+1). Note that we can use the same architecture for video extrapolation or image animation by masking frames at the beginning or end of a video. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_26", "text": " The different components of Make-A-Video described above are trained independently. The only component that receives text as input is the prior PP\\operatorname{P}. We train it on paired text-image data and do not fine-tune it on videos. The decoder, prior, and two super-resolution components are first trained on images alone (no aligned text). Recall that the decoder receives CLIP image embedding as input, and the super-resolution components receive downsampled images as input during training. After training on images, we add and initialize the new temporal layers and fine-tune them over unlabeled video data. 16 frames are sampled from the original video with random f​p​s𝑓𝑝𝑠fps ranging from 111 to 303030. We use the beta function for sampling and while training the decoder, start from higher FPS ranges (less motion) and then transition to lower FPS ranges (more motion). The masked-frame-interpolation component is fine-tuned from the temporal decoder. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_27", "text": " Datasets. To train the image models, we use a 2.32.32.3B subset of the dataset from  (Schuhmann et al., ) where the text is English. We filter out sample pairs with NSFW images 333We used this model: https://github.com/GantMan/nsfw_model, toxic words in the text, or images with a watermark probability larger than 0.50.50.5. We use WebVid-10M (Bain et al., 2021) and a 101010M subset from HD-VILA-100M (Xue et al., 2022) 444These 100100100M clips are sourced from 3.13.13.1M videos. We randomly downloaded 333 clips per video to form our HD-VILA-10M subset. to train our video generation models. Note that only the videos (no aligned text) are used. The decoder DtsuperscriptD𝑡\\operatorname{D}^{t} and the interpolation model is trained on WebVid-10M. 
SRltsuperscriptsubscriptSR𝑙𝑡\\operatorname{SR}_{l}^{t} is trained on both WebVid-10M and HD-VILA-10M. While prior work (Hong et al., 2022; Ho et al., 2022) have collected private text-video pairs for T2V generation, we use only public datasets (and no paired text for videos). We conduct automatic evaluation on UCF-101 (Soomro et al., 2012) and MSR-VTT (Xu et al., 2016) in a zero-shot setting. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_28", "text": " Automatic Metrics. For UCF-101, we write one template sentence for each class (without generating any video) and fix it for evaluation. We report Frechet Video Distance (FVD) and Inception Score (IS) on 101010K samples following (Ho et al., 2022). We generate samples that follow the same class distribution as the training set. For MSR-VTT, we report Frechet Inception Distance (FID) (Parmar et al., 2022) and CLIPSIM (average CLIP similarity between video frames and text) (Wu et al., 2021a), where all 59,7945979459,794 captions from the test set are used, following (Wu et al., 2021b). ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_29", "text": " Human Evaluation Set and Metrics. We collect an evaluation set from Amazon Mechanical Turk (AMT) that consists of 300300300 prompts. We asked annotators what they would be interested in generating if there were a T2V system. We filtered out prompts that were incomplete (e.g., “jump into water”), too abstract (e.g., “climate change”), or offensive. We then identified 555 categories (animals, fantasy, people, nature and scenes, food and beverage) and selected prompts for these categories. These prompts were selected without generating any videos for them, and were kept fixed. In addition, we also used the DrawBench prompts from Imagen (Saharia et al., 2022) for human evaluation. We evaluate video quality and text-video faithfulness. For video quality, we show two videos in random order and ask annotators which one is of higher quality. For faithfulness, we additionally show the text and ask annotators which video has a better correspondence with the text (we suggest them to ignore quality issues). In addition, we also conducted human evaluation to compare video motion realism of our interpolation model and FILM (Reda et al., 2022). For each comparison, we use the majority vote from 555 different annotators as the final result. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_30", "text": " Automatic Evaluation on MSR-VTT. In addition to GODIVA and NÜWA that report on MSR-VTT, we also perform inference on the officially released CogVideo model with both Chinese and English inputs for comparison. For CogVideo and Make-A-Video, we only generate one sample for each prompt in a zero-shot setting. We only generate videos that are at 16×256×2561625625616\\times 256\\times 256 as the evaluation models do not expect higher resolutions and frame rate. The results are shown in Table 1. Make-A-Video’s zero-shot performance is much better than GODIVA and NÜWA which are trained on MSR-VTT. We also outperform CogVideo in both Chinese and English settings. Thus, Make-A-Video has significantly better generalization capabilities than prior work. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_31", "text": " Automatic Evaluation on UCF-101. 
UCF-101 is a popular benchmark to evaluate video generation and has been recently used in T2V models. CogVideo performed finetuning of their pretrained model for class-conditional video generation. VDM (Ho et al., 2022) performed unconditional video generation and trained from scratch on UCF-101. We argue that both settings are not ideal and is not a direct evaluation of the T2V generation capabilities. Moreover, the FVD evaluation model expects the videos to be 0.50.50.5 second (161616 frames), which is too short to be used for video generation in practice. Nevertheless, in order to compare to prior work, we conducted evaluation on UCF-101 in both zero-shot and finetuning settings. As shown in Table 2, Make-A-Video’s zero-shot performance is already competitive than other approaches that are trained on UCF-101, and is much better than CogVideo, which indicates that Make-A-Video can generalize better even to such a specific domain. Our finetuning setting achieves state-of-the-art results with a significant reduction in FVD, which suggests that Make-A-Video can generate more coherent videos than prior work. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_32", "text": " Human Evaluation. We compare to CogVideo (the only public zero-shot T2V generation model) on DrawBench and our test set. We also evaluate on the 282828 videos shown on the webpage of VDM (Ho et al., 2022) (which may be biased towards showcasing the model’s strengths). Since this is a very small test set, we randomly generate 888 videos for each input and perform evaluation 888 times and report the average results. We generate videos at 76×256×2567625625676\\times 256\\times 256 resolution for human evaluation. The results are shown in Table 3. Make-A-Video achieves much better performance in both video quality and text-video faithfulness in all benchmarks and comparisons. For CogVideo, the results are similar on DrawBench and our evaluation set. For VDM, it is worth noting that we have achieved significantly better results without any cherry-picking. We also evaluate our frame interpolation network in comparison to FILM (Reda et al., 2022). We first generate low frame rate videos (1 FPS) from text prompts in DrawBench and our evaluation set, then use each method to upsample to 4 FPS. Raters choose our method for more realistic motion 62% of the time on our evaluation set and 54% of the time on DrawBench. We observe that our method excels when there are large differences between frames where having real-world knowledge of how objects move is crucial. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_33", "text": " Examples of Make-A-Video’s generations are shown in Figure 1. In this section, we will show T2V generation comparison to CogVideo (Hong et al., 2022) and VDM (Ho et al., 2022), and video interpolation comparison to FILM (Reda et al., 2022). In addition, our models can be used for a variety of other tasks such as image animation, video variation, etc. Due to space constraint, we only show a single example of each. Figure 4 (a) shows the comparison of Make-A-Video to CogVideo and VDM. Make-A-Video can generate richer content with motion consistency and text correspondence. 
Figure 4 (b) shows an example of image animation where we condition the masked frame interpolation and extrapolation network ↑Fsubscript↑𝐹\\uparrow_{F} on the image and CLIP image embedding to extrapolate the rest of the video. This allows a user to generate a video using their own image – giving them the opportunity to personalize and directly control the generated video. Figure 4 (c) shows a comparison of our approach to FILM (Reda et al., 2022) on the task of interpolation between two images. We achieve this by using the interpolation model that takes the two images as the beginning and end frames and masks 141414 frames in between for generation. Our model generates more semantically meaningful interpolation while FILM seems to primarily smoothly transition between frames without semantic real-world understanding of what is moving. Figure 4 (d) shows an example for video variation. We take the average CLIP embedding of all frames from a video as the condition to generate a semantically similar video. More video generation examples and applications can be found here: make-a-video.github.io. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_34", "text": " Learning from the world around us is one of the greatest strengths of human intelligence. Just as we quickly learn to recognize people, places, things, and actions through observation, generative systems will be more creative and useful if they can mimic the way humans learn. Learning world dynamics from orders of magnitude more videos using unsupervised learning helps researchers break away from the reliance on labeled data. The presented work has shown how labeled images combined effectively with unlabeled video footage can achieve that. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_35", "text": " As a next step we plan to address several of the technical limitations. As discussed earlier, our approach can not learn associations between text and phenomenon that can only be inferred in videos. How to incorporate these (e.g., generating a video of a person waving their hand left-to-right or right-to-left), along with generating longer videos, with multiple scenes and events, depicting more detailed stories, is left for future work. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_36", "text": " As with all large-scale models trained on data from the web, our models have learnt and likely exaggerated social biases, including harmful ones. Our T2I generation model was trained on data that removed NSFW content and toxic words. All our data (image as well as videos) is publicly available, adding a layer of transparency to our models, and making it possible for the community to reproduce our work. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_37", "text": " Mustafa Said Mehmetoglu, Jacob Xu, Katayoun Zand, Jia-Bin-Huang, Jiebo Luo, Shelly Sheynin, Angela Fan, Kelly Freed. Thank you for your contributions! ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" } ]
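The Pseudo-3D convolution of Eq. (2) in the Make-A-Video contexts above (a pre-trained 2D spatial conv followed by an identity-initialized 1D temporal conv) can be sketched in code. The snippet below is a hedged PyTorch illustration written for this dump, not the authors' implementation; the class name, the reshape strategy, and the kernel sizes are assumptions that merely realize the factorization described in the text.

```python
import torch
import torch.nn as nn

class PseudoConv3d(nn.Module):
    """Sketch of a factorized spatiotemporal convolution: a 2D (spatial) conv,
    applied per frame, followed by a 1D (temporal) conv initialized to the
    identity, so at initialization the layer behaves like the original
    text-to-image conv frame by frame. Input shape: (B, C, F, H, W)."""
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.spatial = nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2)
        self.temporal = nn.Conv1d(out_ch, out_ch, kernel_size, padding=kernel_size // 2)
        nn.init.dirac_(self.temporal.weight)   # identity init of the temporal conv
        nn.init.zeros_(self.temporal.bias)

    def forward(self, h):                                        # h: (B, C, F, H, W)
        b, c, f, hh, ww = h.shape
        x = h.permute(0, 2, 1, 3, 4).reshape(b * f, c, hh, ww)   # fold frames into batch
        x = self.spatial(x)                                      # 2D conv per frame
        c2 = x.shape[1]
        x = x.reshape(b, f, c2, hh, ww).permute(0, 3, 4, 2, 1)   # -> (B, H, W, C, F)
        x = x.reshape(b * hh * ww, c2, f)
        x = self.temporal(x)                                     # 1D conv over frames
        x = x.reshape(b, hh, ww, c2, f).permute(0, 3, 4, 1, 2)   # back to (B, C, F, H, W)
        return x
```

Because the temporal conv starts as the identity, the stacked layer initially reproduces the pre-trained spatial behaviour, matching the "seamless transition" described in the passage.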
What other differences between CEM and CMA-ES exist that might affect performance?
We also observe that CEM outperforms CMA-ES, which is remarkable as CMA-ES estimates the full covariance matrix [28]. For higher-dimensional policy parameterizations, the computational complexity and memory requirement for CMA-ES become noticeable [44].
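To make the memory argument in this answer concrete, the sketch below shows a cross-entropy method (CEM) with a diagonal (factored) Gaussian search distribution, which keeps only O(d) distribution parameters, whereas CMA-ES adapts a full d x d covariance matrix, i.e. O(d^2) state and at least O(d^2) work per update. This is a generic, hypothetical sketch, not the benchmark's implementation; all names and hyperparameters are illustrative.

```python
import numpy as np

def cem(objective, dim, n_iters=50, pop_size=64, elite_frac=0.2, init_std=1.0):
    """Minimal diagonal-Gaussian CEM: sample candidates, keep the elites,
    refit mean and per-dimension std. Only 2*dim distribution parameters
    are stored, in contrast to a full covariance matrix."""
    mean = np.zeros(dim)
    std = np.full(dim, init_std)
    n_elite = max(1, int(pop_size * elite_frac))
    for _ in range(n_iters):
        samples = mean + std * np.random.randn(pop_size, dim)    # sample candidates
        returns = np.array([objective(s) for s in samples])
        elite = samples[np.argsort(returns)[-n_elite:]]          # keep top performers
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6 # refit the Gaussian
    return mean

# Toy usage: maximize a simple concave function of the parameter vector.
best = cem(lambda w: -np.sum((w - 1.0) ** 2), dim=10)
```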
[ 28, 44 ]
[ { "id": "1604.06778_all_0", "text": " Reinforcement learning addresses the problem of how agents should learn to take actions to maximize cumulative reward through interactions with the environment. The traditional approach for reinforcement learning algorithms requires carefully chosen feature representations, which are usually hand-engineered. Recently, significant progress has been made by combining advances in deep learning for learning feature representations (Krizhevsky et al., 2012; Hinton et al., 2012) with reinforcement learning, tracing back to much earlier work of Tesauro (1995) and Bertsekas & Tsitsiklis (1995). Notable examples are training agents to play Atari games based on raw pixels (Guo et al., 2014; Mnih et al., 2015; Schulman et al., 2015a) and to acquire advanced manipulation skills using raw sensory inputs (Levine et al., 2015; Lillicrap et al., 2015; Watter et al., 2015). Impressive results have also been obtained in training deep neural network policies for 3D locomotion and manipulation tasks (Schulman et al., 2015a, b; Heess et al., 2015b). ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_1", "text": " Along with this recent progress, the Arcade Learning Environment (ALE) (Bellemare et al., 2013) has become a popular benchmark for evaluating algorithms designed for tasks with high-dimensional state inputs and discrete actions. However, these algorithms do not always generalize straightforwardly to tasks with continuous actions, leading to a gap in our understanding. For instance, algorithms based on Q-learning quickly become infeasible when naive discretization of the action space is performed, due to the curse of dimensionality (Bellman, 1957; Lillicrap et al., 2015). In the continuous control domain, where actions are continuous and often high-dimensional, we argue that the existing control benchmarks fail to provide a comprehensive set of challenging problems (see Section 7 for a review of existing benchmarks). Benchmarks have played a significant role in other areas such as computer vision and speech recognition. Examples include MNIST (LeCun et al., 1998), Caltech101 (Fei-Fei et al., 2006), CIFAR (Krizhevsky & Hinton, 2009), ImageNet (Deng et al., 2009), PASCAL VOC (Everingham et al., 2010), BSDS500 (Martin et al., 2001), SWITCHBOARD (Godfrey et al., 1992), TIMIT (Garofolo et al., 1993), Aurora (Hirsch & Pearce, 2000), and VoiceSearch (Yu et al., 2007). The lack of a standardized and challenging testbed for reinforcement learning and continuous control makes it difficult to quantify scientific progress. Systematic evaluation and comparison will not only further our understanding of the strengths of existing algorithms, but also reveal their limitations and suggest directions for future research. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_2", "text": " We attempt to address this problem and present a benchmark consisting of 31 continuous control tasks. These tasks range from simple tasks, such as cart-pole balancing, to challenging tasks such as high-DOF locomotion, tasks with partial observations, and hierarchically structured tasks. Furthermore, a range of reinforcement learning algorithms are implemented on which we report novel findings based on a systematic evaluation of their effectiveness in training deep neural network policies. 
The benchmark and reference implementations are available at https://github.com/rllab/rllab, allowing for the development, implementation, and evaluation of new algorithms and tasks. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_3", "text": " In this section, we define the notation used in subsequent sections. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_4", "text": " The implemented tasks conform to the standard interface of a finite-horizon discounted Markov decision process (MDP), defined by the tuple (\\mathcal{S}, \\mathcal{A}, P, r, \\rho_{0}, \\gamma, T), where \\mathcal{S} is a (possibly infinite) set of states, \\mathcal{A} is a set of actions, P: \\mathcal{S} \\times \\mathcal{A} \\times \\mathcal{S} \\rightarrow \\mathbb{R}_{\\geq 0} is the transition probability distribution, r: \\mathcal{S} \\times \\mathcal{A} \\rightarrow \\mathbb{R} is the reward function, \\rho_{0}: \\mathcal{S} \\to \\mathbb{R}_{\\geq 0} is the initial state distribution, \\gamma \\in (0,1) is the discount factor, and T is the horizon. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_5", "text": " For partially observable tasks, which conform to the interface of a partially observable Markov decision process (POMDP), two more components are required, namely \\Omega, a set of observations, and \\mathcal{O}: \\mathcal{S} \\times \\Omega \\to \\mathbb{R}_{\\geq 0}, the observation probability distribution. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_6", "text": " Most of our implemented algorithms optimize a stochastic policy \\pi_{\\theta}: \\mathcal{S} \\times \\mathcal{A} \\rightarrow \\mathbb{R}_{\\geq 0}. Let \\eta(\\pi) denote its expected discounted reward: \\eta(\\pi) = \\mathbb{E}_{\\tau}\\left(\\sum_{t=0}^{T} \\gamma^{t} r(s_{t}, a_{t})\\right), where \\tau = (s_{0}, a_{0}, \\ldots) denotes the whole trajectory, s_{0} \\sim \\rho_{0}(s_{0}), a_{t} \\sim \\pi(a_{t}|s_{t}), and s_{t+1} \\sim P(s_{t+1}|s_{t}, a_{t}). ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_7", "text": " For deterministic policies, we use the notation \\mu_{\\theta}: \\mathcal{S} \\rightarrow \\mathcal{A} to denote the policy instead. The objective for it has the same form as above, except that now we have a_{t} = \\mu(s_{t}). ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_8", "text": " The tasks in the presented benchmark can be divided into four categories: basic tasks, locomotion tasks, partially observable tasks, and hierarchical tasks. We briefly describe them in this section. More detailed specifications are given in the supplementary materials and in the source code. 
", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_9", "text": " We choose to implement all tasks using physics simulators rather than symbolic equations, since the former approach is less error-prone and permits easy modification of each task. Tasks with simple dynamics are implemented using Box2D (Catto, 2011), an open-source, freely available 2D physics simulator. Tasks with more complicated dynamics, such as locomotion, are implemented using MuJoCo (Todorov et al., 2012), a 3D physics simulator with better modeling of contacts. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_10", "text": " We implement five basic tasks that have been widely analyzed in reinforcement learning and control literature: Cart-Pole Balancing (Stephenson, 1908; Donaldson, 1960; Widrow, 1964; Michie & Chambers, 1968), Cart-Pole Swing Up (Kimura & Kobayashi, 1999; Doya, 2000), Mountain Car (Moore, 1990), Acrobot Swing Up (DeJong & Spong, 1994; Murray & Hauser, 1991; Doya, 2000), and Double Inverted Pendulum Balancing (Furuta et al., 1978). These relatively low-dimensional tasks provide quick evaluations and comparisons of RL algorithms. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_11", "text": " In this category, we implement six locomotion tasks of varying dynamics and difficulty: Swimmer (Purcell, 1977; Coulom, 2002; Levine & Koltun, 2013; Schulman et al., 2015a), Hopper (Murthy & Raibert, 1984; Erez et al., 2011; Levine & Koltun, 2013; Schulman et al., 2015a), Walker (Raibert & Hodgins, 1991; Erez et al., 2011; Levine & Koltun, 2013; Schulman et al., 2015a), Half-Cheetah (Wawrzyński, 2007; Heess et al., 2015b), Ant (Schulman et al., 2015b), Simple Humanoid (Tassa et al., 2012; Schulman et al., 2015b), and Full Humanoid (Tassa et al., 2012). The goal for all the tasks is to move forward as quickly as possible. These tasks are more challenging than the basic tasks due to high degrees of freedom. In addition, a great amount of exploration is needed to learn to move forward without getting stuck at local optima. Since we penalize for excessive controls as well as falling over, during the initial stage of learning, when the robot is not yet able to move forward for a sufficient distance without falling, apparent local optima exist including staying at the origin or diving forward slowly. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_12", "text": " In real-life situations, agents are often not endowed with perfect state information. This can be due to sensor noise, sensor occlusions, or even sensor limitations that result in partial observations. To evaluate algorithms in more realistic settings, we implement three variations of partially observable tasks for each of the five basic tasks described in Section 3.1, leading to a total of 151515 additional tasks. These variations are described below. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_13", "text": " Limited Sensors: For this variation, we restrict the observations to only provide positional information (including joint angles), excluding velocities. An agent now has to learn to infer velocity information in order to recover the full state. Similar tasks have been explored in Gomez & Miikkulainen (1998); Schäfer & Udluft (2005); Heess et al. (2015a); Wierstra et al. (2007). 
", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_14", "text": " Noisy Observations and Delayed Actions: In this case, sensor noise is simulated through the addition of Gaussian noise to the observations. We also introduce a time delay between taking an action and the action being in effect, accounting for physical latencies (Hester & Stone, 2013). Agents now need to learn to integrate both past observations and past actions to infer the current state. Similar tasks have been proposed in Bakker (2001). ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_15", "text": " System Identification: For this category, the underlying physical model parameters are varied across different episodes (Szita et al., 2003). The agents must learn to generalize across different models, as well as to infer the model parameters from its observation and action history. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_16", "text": " Many real-world tasks exhibit hierarchical structure, where higher level decisions can reuse lower level skills (Parr & Russell, 1998; Sutton et al., 1999; Dietterich, 2000). For instance, robots can reuse locomotion skills when exploring the environment. We propose several tasks where both low-level motor controls and high-level decisions are needed. These two components each operates on a different time scale and calls for a natural hierarchy in order to efficiently learn the task. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_17", "text": " Locomotion + Food Collection: For this task, the agent needs to learn to control either the swimmer or the ant robot to collect food and avoid bombs in a finite region. The agent receives range sensor readings about nearby food and bomb units. It is given a positive reward when it reaches a food unit, or a negative reward when it reaches a bomb. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_18", "text": " Locomotion + Maze: For this task, the agent needs to learn to control either the swimmer or the ant robot to reach a goal position in a fixed maze. The agent receives range sensor readings about nearby obstacles as well as its goal (when visible). A positive reward is given only when the robot reaches the goal region. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_19", "text": " In this section, we briefly summarize the algorithms implemented in our benchmark, and note any modifications made to apply them to general parametrized policies. We implement a range of gradient-based policy search methods, as well as two gradient-free methods for comparison with the gradient-based approaches. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_20", "text": " Most of the implemented algorithms are batch algorithms. At each iteration, N𝑁N trajectories {τi}i=1Nsuperscriptsubscriptsubscript𝜏𝑖𝑖1𝑁\\{\\tau_{i}\\}_{i=1}^{N} are generated, where τi={(sti,ati,rti)}t=0Tsubscript𝜏𝑖superscriptsubscriptsuperscriptsubscript𝑠𝑡𝑖superscriptsubscript𝑎𝑡𝑖superscriptsubscript𝑟𝑡𝑖𝑡0𝑇\\tau_{i}=\\{(s_{t}^{i},a_{t}^{i},r_{t}^{i})\\}_{t=0}^{T} contains data collected along the i𝑖ith trajectory. For on-policy gradient-based methods, all the trajectories are sampled under the current policy. 
For gradient-free methods, they are sampled under perturbed versions of the current policy. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_21", "text": " REINFORCE (Williams, 1992): This algorithm estimates the gradient of expected return $\\nabla_{\\theta}\\eta(\\pi_{\\theta})$ using the likelihood ratio trick: $\\widehat{\\nabla_{\\theta}\\eta(\\pi_{\\theta})}=\\frac{1}{NT}\\sum_{i=1}^{N}\\sum_{t=0}^{T}\\nabla_{\\theta}\\log\\pi(a_{t}^{i}|s_{t}^{i};\\theta)(R_{t}^{i}-b_{t}^{i})$, where $R_{t}^{i}=\\sum_{t^{\\prime}=t}^{T}\\gamma^{t^{\\prime}-t}r_{t^{\\prime}}^{i}$ and $b_{t}^{i}$ is a baseline that only depends on the state $s_{t}^{i}$ to reduce variance. Hereafter, an ascent step is taken in the direction of the estimated gradient. This process continues until $\\theta_{k}$ converges. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_22", "text": " Truncated Natural Policy Gradient (TNPG) (Kakade, 2002; Peters et al., 2003; Bagnell & Schneider, 2003; Schulman et al., 2015a): Natural Policy Gradient improves upon REINFORCE by computing an ascent direction that approximately ensures a small change in the policy distribution. This direction is derived to be $I(\\theta)^{-1}\\nabla_{\\theta}\\eta(\\pi_{\\theta})$, where $I(\\theta)$ is the Fisher information matrix (FIM). We use the step size suggested by Peters & Schaal (2008): $\\alpha=\\sqrt{\\delta_{\\text{KL}}\\left(\\nabla_{\\theta}\\eta(\\pi_{\\theta})^{T}I(\\theta)^{-1}\\nabla_{\\theta}\\eta(\\pi_{\\theta})\\right)^{-1}}$. Finally, we replace $\\nabla_{\\theta}\\eta(\\pi_{\\theta})$ and $I(\\theta)$ by their empirical estimates. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_23", "text": " For neural network policies with tens of thousands of parameters or more, generic Natural Policy Gradient incurs prohibitive computation cost by forming and inverting the empirical FIM. Instead, we study Truncated Natural Policy Gradient (TNPG) in this paper, which computes the natural gradient direction without explicitly forming the matrix inverse, using a conjugate gradient algorithm that only requires computing $I(\\theta)v$ for arbitrary vector $v$. TNPG makes it practical to apply natural gradient in policy search setting with high-dimensional parameters, and we refer the reader to Schulman et al. (2015a) for more details.
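To make the REINFORCE estimator quoted above concrete, here is a minimal PyTorch sketch. It is not the rllab implementation; the `policy` callable (returning a state-dependent mean and a log-standard-deviation for a Gaussian policy) and the zero baseline are assumptions made purely for illustration.

```python
# Hedged sketch of the likelihood-ratio (REINFORCE) gradient estimate above.
# `policy` is a hypothetical module: policy(states) -> (mean, log_std).
import torch

def reinforce_surrogate(policy, trajectories, gamma=0.99):
    """trajectories: list of (states, actions, rewards) tensors; minimizing the
    returned value with autograd follows the estimated ascent direction."""
    total, steps = 0.0, 0
    for states, actions, rewards in trajectories:
        T = rewards.shape[0]
        returns = torch.zeros(T)
        running = 0.0
        for t in reversed(range(T)):            # R_t = sum_{t'>=t} gamma^(t'-t) r_t'
            running = rewards[t] + gamma * running
            returns[t] = running
        mean, log_std = policy(states)
        dist = torch.distributions.Normal(mean, log_std.exp())
        logp = dist.log_prob(actions).sum(-1)   # log pi(a_t | s_t; theta)
        total = total - (logp * returns).sum()  # baseline b_t taken as 0 here
        steps += T
    return total / steps
```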
", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_24", "text": " Reward-Weighted Regression (RWR) (Peters & Schaal, 2007; Kober & Peters, 2009): This algorithm formulates the policy optimization as an Expectation-Maximization problem to avoid the need to manually choose learning rate, and the method is guaranteed to converge to a locally optimal solution. At each iteration, this algorithm optimizes a lower bound of the log-expected return: θ=arg⁡maxθ′⁡ℒ​(θ′)𝜃subscriptsuperscript𝜃′ℒsuperscript𝜃′\\theta=\\arg\\max_{\\theta^{\\prime}}\\mathcal{L}(\\theta^{\\prime}), where ℒ​(θ)=1N​T​∑i=1N∑t=0Tlog⁡π​(ati|sti;θ)​ρ​(Rti−bti)ℒ𝜃1𝑁𝑇superscriptsubscript𝑖1𝑁superscriptsubscript𝑡0𝑇𝜋conditionalsuperscriptsubscript𝑎𝑡𝑖superscriptsubscript𝑠𝑡𝑖𝜃𝜌superscriptsubscript𝑅𝑡𝑖superscriptsubscript𝑏𝑡𝑖\\mathcal{L}(\\theta)=\\frac{1}{NT}\\sum_{i=1}^{N}\\sum_{t=0}^{T}\\log\\pi(a_{t}^{i}|s_{t}^{i};\\theta)\\rho(R_{t}^{i}-b_{t}^{i}) Here, ρ:ℝ→ℝ≥0:𝜌→ℝsubscriptℝabsent0\\rho:\\mathbb{R}\\rightarrow\\mathbb{R}_{\\geq 0} is a function that transforms raw returns to nonnegative values. Following Deisenroth et al. (2013), we choose ρ𝜌\\rho to be ρ​(R)=R−Rmin𝜌𝑅𝑅subscript𝑅min\\rho(R)=R-R_{\\text{min}}, where Rminsubscript𝑅minR_{\\text{min}} is the minimum return among all trajectories collected in the current iteration. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_25", "text": " Relative Entropy Policy Search (REPS) (Peters et al., 2010): This algorithm limits the loss of information per iteration and aims to ensure a smooth learning progress (Deisenroth et al., 2013). At each iteration, we collect all trajectories into a dataset 𝒟={(si,ai,ri,si′)}i=1M𝒟superscriptsubscriptsubscript𝑠𝑖subscript𝑎𝑖subscript𝑟𝑖superscriptsubscript𝑠𝑖′𝑖1𝑀\\mathcal{D}=\\{(s_{i},a_{i},r_{i},s_{i}^{\\prime})\\}_{i=1}^{M}, where M𝑀M is the total number of samples. Then, we first solve for the dual parameters (η∗,ν∗)=arg⁡minη′,ν′⁡g​(η′,ν′)superscript𝜂superscript𝜈subscriptsuperscript𝜂′superscript𝜈′𝑔superscript𝜂′superscript𝜈′(\\eta^{*},\\nu^{*})=\\arg\\min_{\\eta^{\\prime},\\nu^{\\prime}}g(\\eta^{\\prime},\\nu^{\\prime}) s.t. η>0𝜂0\\eta>0, where g​(η,ν)=η​δKL+η​log⁡(1M​∑i=1Meδi​(ν)/η).𝑔𝜂𝜈𝜂subscript𝛿KL𝜂1𝑀superscriptsubscript𝑖1𝑀superscript𝑒subscript𝛿𝑖𝜈𝜂g(\\eta,\\nu)=\\eta\\delta_{\\text{KL}}+\\eta\\log\\left(\\frac{1}{M}\\sum_{i=1}^{M}e^{\\delta_{i}(\\nu)/\\eta}\\right). Here δKL>0subscript𝛿KL0\\delta_{\\text{KL}}>0 controls the step size of the policy, and δi​(ν)=ri+νT​(ϕ​(si′)−ϕ​(si))subscript𝛿𝑖𝜈subscript𝑟𝑖superscript𝜈𝑇italic-ϕsuperscriptsubscript𝑠𝑖′italic-ϕsubscript𝑠𝑖\\delta_{i}(\\nu)=r_{i}+\\nu^{T}(\\phi(s_{i}^{\\prime})-\\phi(s_{i})) is the sample Bellman error. We then solve for the new policy parameters: θk+1=arg⁡maxθ1M​∑i=1Meδi​(ν∗)/η∗​log⁡π​(ai|si;θ).subscript𝜃𝑘1subscript𝜃1𝑀superscriptsubscript𝑖1𝑀superscript𝑒subscript𝛿𝑖superscript𝜈superscript𝜂𝜋conditionalsubscript𝑎𝑖subscript𝑠𝑖𝜃\\theta_{k+1}=\\mathop{\\arg\\max}_{\\theta}\\frac{1}{M}\\sum_{i=1}^{M}e^{\\delta_{i}(\\nu^{*})/\\eta^{*}}\\log\\pi(a_{i}|s_{i};\\theta). ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_26", "text": " Trust Region Policy Optimization (TRPO) (Schulman et al., 2015a): This algorithm allows more precise control on the expected policy improvement than TNPG through the introduction of a surrogate loss. 
At each iteration, we solve the following constrained optimization problem (replacing expectations with samples): maximizeθsubscriptmaximize𝜃\\displaystyle\\mathop{\\textrm{maximize}}_{\\theta} 𝔼s∼ρθk,a∼πθk​(πθ​(a|s)πθk​(a|s)​Aθk​(s,a))subscript𝔼formulae-sequencesimilar-to𝑠subscript𝜌subscript𝜃𝑘similar-to𝑎subscript𝜋subscript𝜃𝑘delimited-()subscript𝜋𝜃conditional𝑎𝑠subscript𝜋subscript𝜃𝑘conditional𝑎𝑠subscript𝐴subscript𝜃𝑘𝑠𝑎\\displaystyle\\mathbb{E}_{s\\sim\\rho_{\\theta_{k}},a\\sim\\pi_{\\theta_{k}}}\\left(\\frac{\\pi_{\\theta}(a|s)}{\\pi_{\\theta_{k}}(a|s)}A_{\\theta_{k}}(s,a)\\right) s.t. Es∼ρθk(DKL(πθk(⋅|s)∥πθ(⋅|s)))≤δKL\\displaystyle E_{s\\sim\\rho_{{\\theta_{k}}}}(D_{\\text{KL}}(\\pi_{\\theta_{k}}(\\cdot|s)\\|\\pi_{\\theta}(\\cdot|s)))\\leq\\delta_{\\text{KL}} where ρθ=ρπθsubscript𝜌𝜃subscript𝜌subscript𝜋𝜃\\rho_{\\theta}=\\rho_{\\pi_{\\theta}} is the discounted state-visitation frequencies induced by πθsubscript𝜋𝜃\\pi_{\\theta}, Aθk​(s,a)subscript𝐴subscript𝜃𝑘𝑠𝑎A_{\\theta_{k}}(s,a), known as the advantage function, is estimated by the empirical return minus the baseline, and δKLsubscript𝛿KL\\delta_{\\text{KL}} is a step size parameter which controls how much the policy is allowed to change per iteration. We follow the procedure described in the original paper for solving the optimization, which results in the same descent direction as TNPG with an extra line search in the objective and KL constraint. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_27", "text": " Cross Entropy Method (CEM) (Rubinstein, 1999; Szita & Lőrincz, 2006): Unlike previously mentioned methods, which perform exploration through stochastic actions, CEM performs exploration directly in the policy parameter space. At each iteration, we produce N𝑁N perturbations of the policy parameter: θi∼𝒩​(μk,Σk)similar-tosubscript𝜃𝑖𝒩subscript𝜇𝑘subscriptΣ𝑘\\theta_{i}\\sim\\mathcal{N}(\\mu_{k},\\Sigma_{k}), and perform a rollout for each sampled parameter. Then, we compute the new mean and diagonal covariance using the parameters that correspond to the top q𝑞q-quantile returns. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_28", "text": " Covariance Matrix Adaption Evolution Strategy (CMA-ES) (Hansen & Ostermeier, 2001): Similar to CEM, CMA-ES is a gradient-free evolutionary approach for optimizing nonconvex objective functions. In our case, this objective function equals the average sampled return. In contrast to CEM, CMA-ES estimates the covariance matrix of a multivariate normal distribution through incremental adaption along evolution paths, which contain information about the correlation between consecutive updates. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_29", "text": " Deep Deterministic Policy Gradient (DDPG) (Lillicrap et al., 2015): Compared to batch algorithms, the DDPG algorithm continuously improves the policy as it explores the environment. 
It applies gradient descent to the policy with minibatch data sampled from a replay pool, where the gradient is computed via ∇θη​(μθ)^=∑i=1B∇aQϕ​(si,a)|a=μθ​(si)​∇θμθ​(si)^subscript∇𝜃𝜂subscript𝜇𝜃evaluated-atsuperscriptsubscript𝑖1𝐵subscript∇𝑎subscript𝑄italic-ϕsubscript𝑠𝑖𝑎𝑎subscript𝜇𝜃subscript𝑠𝑖subscript∇𝜃subscript𝜇𝜃subscript𝑠𝑖\\widehat{\\nabla_{\\theta}\\eta(\\mu_{\\theta})}=\\sum_{i=1}^{B}\\left.\\nabla_{a}Q_{\\phi}(s_{i},a)\\right|_{a=\\mu_{\\theta}(s_{i})}\\nabla_{\\theta}\\mu_{\\theta}(s_{i}) where B𝐵B is the batch size. The critic Q𝑄Q is trained via gradient descent on the ℓ2superscriptℓ2\\ell^{2} loss of the Bellman error L=1B​∑i=1B(yi−Qϕ​(si,ai))2𝐿1𝐵superscriptsubscript𝑖1𝐵superscriptsubscript𝑦𝑖subscript𝑄italic-ϕsubscript𝑠𝑖subscript𝑎𝑖2L=\\frac{1}{B}\\sum_{i=1}^{B}(y_{i}-Q_{\\phi}(s_{i},a_{i}))^{2}, where yi=ri+γ​Qϕ′′​(si′,μθ′′​(si′))subscript𝑦𝑖subscript𝑟𝑖𝛾subscriptsuperscript𝑄′superscriptitalic-ϕ′subscriptsuperscript𝑠′𝑖subscriptsuperscript𝜇′superscript𝜃′subscriptsuperscript𝑠′𝑖y_{i}=r_{i}+\\gamma Q^{\\prime}_{\\phi^{\\prime}}(s^{\\prime}_{i},\\mu^{\\prime}_{\\theta^{\\prime}}(s^{\\prime}_{i})). To improve stability of the algorithm, we use target networks for both the critic and the policy when forming the regression target yisubscript𝑦𝑖y_{i}. We refer the reader to Lillicrap et al. (2015) for a more detailed description of the algorithm. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_30", "text": " We implement direct applications of the aforementioned batch-based algorithms to recurrent policies. The only modification required is to replace π​(ati|sti)𝜋conditionalsuperscriptsubscript𝑎𝑡𝑖superscriptsubscript𝑠𝑡𝑖\\pi(a_{t}^{i}|s_{t}^{i}) by π​(ati|o1:ti,a1:t−1i)𝜋conditionalsuperscriptsubscript𝑎𝑡𝑖superscriptsubscript𝑜:1𝑡𝑖superscriptsubscript𝑎:1𝑡1𝑖\\pi(a_{t}^{i}|o_{1:t}^{i},a_{1:t-1}^{i}), where o1:tisuperscriptsubscript𝑜:1𝑡𝑖o_{1:t}^{i} and a1:t−1subscript𝑎:1𝑡1a_{1:t-1} are the histories of past and current observations and past actions. Recurrent versions of reinforcement learning algorithms have been studied in many existing works, such as Bakker (2001), Schäfer & Udluft (2005), Wierstra et al. (2007), and Heess et al. (2015a). ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_31", "text": " In this section, we elaborate on the experimental setup used to generate the results. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_32", "text": " Performance Metrics: For each report unit (a particular algorithm running on a particular task), we define its performance as 1∑i=1INi​∑i=1I∑n=1NiRi​n1superscriptsubscript𝑖1𝐼subscript𝑁𝑖superscriptsubscript𝑖1𝐼superscriptsubscript𝑛1subscript𝑁𝑖subscript𝑅𝑖𝑛\\frac{1}{\\sum_{i=1}^{I}N_{i}}\\sum_{i=1}^{I}\\sum_{n=1}^{N_{i}}R_{in}, where I𝐼I is the number of training iterations, Nisubscript𝑁𝑖N_{i} is the number of trajectories collected in the i𝑖ith iteration, and Ri​nsubscript𝑅𝑖𝑛R_{in} is the undiscounted return for the n𝑛nth trajectory of the i𝑖ith iteration, ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_33", "text": " Hyperparameter Tuning: For the DDPG algorithm, we used the hyperparametes reported in Lillicrap et al. (2015). For the other algorithms, we follow the approach in (Mnih et al., 2015), and we select two tasks in each category, on which a grid search of hyperparameters is performed. 
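The DDPG update quoted just above can be sketched as follows. This is an illustrative PyTorch fragment rather than the benchmark's code; `q`, `q_target`, `mu`, and `mu_target` are assumed user-supplied critic and policy networks with their target copies, and only the relevant parameters would be stepped for each loss.

```python
# Hedged sketch of the DDPG losses described above: an l2 Bellman-error loss for
# the critic and the deterministic policy gradient objective for the actor.
import torch

def ddpg_losses(q, q_target, mu, mu_target, batch, gamma=0.99):
    s, a, r, s_next = batch                      # minibatch from the replay pool
    with torch.no_grad():                        # targets use the target networks
        y = r + gamma * q_target(s_next, mu_target(s_next))   # y_i
    critic_loss = ((y - q(s, a)) ** 2).mean()    # L = mean (y_i - Q(s_i, a_i))^2
    # Descending this w.r.t. the policy parameters follows grad_a Q * grad_theta mu.
    actor_loss = -q(s, mu(s)).mean()
    return critic_loss, actor_loss
```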
Each choice of hyperparameters is executed under five random seeds. The criterion for the best hyperparameters is defined as mean​(returns)−std​(returns)meanreturnsstdreturns\\mathrm{mean}(\\mathrm{returns})-\\mathrm{std}(\\mathrm{returns}). This metric selects against large fluctuations of performance due to overly large step sizes. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_34", "text": " For the other tasks, we try both of the best hyperparameters found in the same category, and report the better performance of the two. This gives us insights into both the maximum possible performance when extensive hyperparameter tuning is performed, and the robustness of the best hyperparameters across different tasks. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_35", "text": " Policy Representation: For basic, locomotion, and hierarchical tasks and for batch algorithms, we use a feed-forward neural network policy with 3 hidden layers, consisting of 100100100, 505050, and 252525 hidden units with tanh nonlinearity at the first two hidden layers, which map each state to the mean of a Gaussian distribution. The log-standard deviation is parameterized by a global vector independent of the state, as done in Schulman et al. (2015a). For all partially observable tasks, we use a recurrent neural network with a single hidden layer consisting of 323232 LSTM hidden units (Hochreiter & Schmidhuber, 1997). ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_36", "text": " For the DDPG algorithm which trains a deterministic policy, we follow Lillicrap et al. (2015). For both the policy and the Q𝑄Q function, we use the same architecture of a feed-forward neural network with 2 hidden layers, consisting of 400400400 and 300300300 hidden units with relu activations. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_37", "text": " Baseline: For all gradient-based algorithms except REPS, we can subtract a baseline from the empirical return to reduce variance of the optimization. We use a linear function as the baseline with a time-varying feature vector. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_38", "text": " The main evaluation results are presented in Table 1. The tasks on which the grid search is performed are marked with (*). In each entry, the pair of numbers shows the mean and standard deviation of the normalized cumulative return using the best possible hyperparameters. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_39", "text": " REINFORCE: Despite its simplicity, REINFORCE is an effective algorithm in optimizing deep neural network policies in most basic and locomotion tasks. Even for high-DOF tasks like Ant, REINFORCE can achieve competitive results. However we observe that REINFORCE sometimes suffers from premature convergence to local optima as noted by Peters & Schaal (2008), which explains the performance gaps between REINFORCE and TNPG on tasks such as Walker (Figure 3). By visualizing the final policies, we can see that REINFORCE results in policies that tend to jump forward and fall over to maximize short-term return instead of acquiring a stable walking gait to maximize long-term return. 
In Figure 3, we can observe that even with a small learning rate, steps taken by REINFORCE can sometimes result in large changes to policy distribution, which may explain the fast convergence to local optima. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_40", "text": " TNPG and TRPO: Both TNPG and TRPO outperform other batch algorithms by a large margin on most tasks, confirming that constraining the change in the policy distribution results in more stable learning (Peters & Schaal, 2008). ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_41", "text": " Compared to TNPG, TRPO offers better control over each policy update by performing a line search in the natural gradient direction to ensure an improvement in the surrogate loss function. We observe that hyperparameter grid search tends to select conservative step sizes (δKLsubscript𝛿KL\\delta_{\\text{KL}}) for TNPG, which alleviates the issue of performance collapse caused by a large update to the policy. By contrast, TRPO can robustly enforce constraints with larger a δKLsubscript𝛿KL\\delta_{\\text{KL}} value and hence speeds up learning in some cases. For instance, grid search on the Swimmer task reveals that the best step size for TNPG is δKL=0.05subscript𝛿KL0.05\\delta_{\\text{KL}}=0.05, whereas TRPO’s best step-size is larger: δKL=0.1subscript𝛿KL0.1\\delta_{\\text{KL}}=0.1. As shown in Figure 3, this larger step size enables slightly faster learning. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_42", "text": " RWR: RWR is the only gradient-based algorithm we implemented that does not require any hyperparameter tuning. It can solve some basic tasks to a satisfactory degree, but fails to solve more challenging tasks such as locomotion. We observe empirically that RWR shows fast initial improvement followed by significant slow-down, as shown in Figure 3. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_43", "text": " REPS: Our main observation is that REPS is especially prone to early convergence to local optima in case of continuous states and actions. Its final outcome is greatly affected by the performance of the initial policy, an observation that is consistent with the original work of Peters et al. (2010). This leads to a bad performance on average, although under particular initial settings the algorithm can perform on par with others. Moreover, the tasks presented here do not assume the existence of a stationary distribution, which is assumed in Peters et al. (2010). In particular, for many of our tasks, transient behavior is of much greater interest than steady-state behavior, which agrees with previous observation by van Hoof et al. (2015), ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_44", "text": " Gradient-free methods: Surprisingly, even when training deep neural network policies with thousands of parameters, CEM achieves very good performance on certain basic tasks such as Cart-Pole Balancing and Mountain Car, suggesting that the dimension of the searching parameter is not always the limiting factor of the method. However, the performance degrades quickly as the system dynamics becomes more complicated. We also observe that CEM outperforms CMA-ES, which is remarkable as CMA-ES estimates the full covariance matrix. 
For higher-dimensional policy parameterizations, the computational complexity and memory requirement for CMA-ES become noticeable. On tasks with high-dimensional observations, such as the Full Humanoid, the CMA-ES algorithm runs out of memory and fails to yield any results, denoted as N/A in Table 1. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_45", "text": " DDPG: Compared to batch algorithms, we found that DDPG was able to converge significantly faster on certain tasks like Half-Cheetah due to its greater sample efficiency. However, it was less stable than batch algorithms, and the performance of the policy can degrade significantly during training. We also found it to be more susceptible to scaling of the reward. In our experiment for DDPG, we rescaled the reward of all tasks by a factor of 0.10.10.1, which seems to improve the stability. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_46", "text": " Partially Observable Tasks: We experimentally verify that recurrent policies can find better solutions than feed-forward policies in Partially Observable Tasks but recurrent policies are also more difficult to train. As shown in Table 1, derivative-free algorithms like CEM and CMA-ES work considerably worse with recurrent policies. Also we note that the performance gap between REINFORCE and TNPG widens when they are applied to optimize recurrent policies, which can be explained by the fact that a small change in parameter space can result in a bigger change in policy distribution with recurrent policies than with feedforward policies. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_47", "text": " Hierarchical Tasks: We observe that all of our implemented algorithms achieve poor performance on the hierarchical tasks, even with extensive hyperparameter search and 500500500 iterations of training. It is an interesting direction to develop algorithms that can automatically discover and exploit the hierarchical structure in these tasks. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_48", "text": " In this section, we review existing benchmarks of continuous control tasks. The earliest efforts of evaluating reinforcement learning algorithms started in the form of individual control problems described in symbolic form. Some widely adopted tasks include the inverted pendulum (Stephenson, 1908; Donaldson, 1960; Widrow, 1964), mountain car (Moore, 1990), and Acrobot (DeJong & Spong, 1994). These problems are frequently incorporated into more comprehensive benchmarks. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_49", "text": " Some reinforcement learning benchmarks contain low-dimensional continuous control tasks, such as the ones introduced above, including RLLib (Abeyruwan, 2013), MMLF (Metzen & Edgington, 2011), RL-Toolbox (Neumann, 2006), JRLF (Kochenderfer, 2006), Beliefbox (Dimitrakakis et al., 2007), Policy Gradient Toolbox (Peters, 2002), and ApproxRL (Busoniu, 2010). A series of RL competitions has also been held in recent years (Dutech et al., 2005; Dimitrakakis et al., 2014), again with relatively low-dimensional actions. In contrast, our benchmark contains a wider range of tasks with high-dimensional continuous state and action spaces. 
", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_50", "text": " Previously, other benchmarks have been proposed for high-dimensional control tasks. Tdlearn (Dann et al., 2014) includes a 20-link pole balancing task, DotRL (Papis & Wawrzyński, 2013) includes a variable-DOF octopus arm and a 6-DOF planar cheetah model, PyBrain (Schaul et al., 2010) includes a 16-DOF humanoid robot with standing and jumping tasks, RoboCup Keepaway (Stone et al., 2005) is a multi-agent game which can have a flexible dimension of actions by varying the number of agents, and SkyAI (Yamaguchi & Ogasawara, 2010) includes a 17-DOF humanoid robot with crawling and turning tasks. Other libraries such as CL-Square (Riedmiller et al., 2012) and RLPark (Degris et al., 2013) provide interfaces to actual hardware, e.g., Bioloid and iRobot Create. In contrast to these aforementioned testbeds, our benchmark makes use of simulated environments to reduce computation time and to encourage experimental reproducibility. Furthermore, it provides a much larger collection of tasks of varying difficulty. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_51", "text": " In this work, a benchmark of continuous control problems for reinforcement learning is presented, covering a wide variety of challenging tasks. We implemented several reinforcement learning algorithms, and presented them in the context of general policy parameterizations. Results show that among the implemented algorithms, TNPG, TRPO, and DDPG are effective methods for training deep neural network policies. Still, the poor performance on the proposed hierarchical tasks calls for new algorithms to be developed. Implementing and evaluating existing and newly proposed algorithms will be our continued effort. By providing an open-source release of the benchmark, we encourage other researchers to evaluate their algorithms on the proposed tasks. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_52", "text": " Cart-Pole Balancing: In this task, an inverted pendulum is mounted on a pivot point on a cart. The cart itself is restricted to linear movement, achieved by applying horizontal forces. Due to the system’s inherent instability, continuous cart movement is needed to keep the pendulum upright. The observation consists of the cart position x𝑥x, pole angle θ𝜃\\theta, the cart velocity x˙˙𝑥\\dot{x}, and the pole velocity θ˙˙𝜃\\dot{\\theta}. The 1D action consists of the horizontal force applied to the cart body. The reward function is given by r​(s,a):=10−(1−cos⁡(θ))−10−5​‖a‖22assign𝑟𝑠𝑎101𝜃superscript105superscriptsubscriptnorm𝑎22r(s,a):=10-(1-\\cos(\\theta))-10^{-5}\\|a\\|_{2}^{2}. The episode terminates when |x|>2.4𝑥2.4|x|>2.4 or |θ|>0.2𝜃0.2|\\theta|>0.2. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_53", "text": " Cart-Pole Swing Up: This is a more complicated version of the previous task, in which the system should not only be able to balance the pole, but first succeed in swinging it up into an upright position. This task extends the working range of the inverted pendulum to 360 °times360degree360\\text{\\,}\\mathrm{\\SIUnitSymbolDegree}. This is a nonlinear extension of the previous task. It has the same observation and action as in balancing. The reward function is given by r​(s,a):=cos⁡(θ)assign𝑟𝑠𝑎𝜃r(s,a):=\\cos(\\theta). 
The episode terminates when |x|>3𝑥3|x|>3, with a penalty of −100100-100. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_54", "text": " Mountain Car: In this task, a car has to escape a valley by repetitive application of tangential forces. Because the maximal tangential force is limited, the car has to alternately drive up along the two slopes of the valley in order to build up enough inertia to overcome gravity. This brings a challenge of exploration, since before first reaching the goal among all trials, a locally optimal solution exists, which is to drive to the point closest to the target and stay there for the rest of the episode. The observation is given by the horizontal position x𝑥x and the horizontal velocity x˙˙𝑥\\dot{x} of the car. The reward is given by r​(s,a):=−1+heightassign𝑟𝑠𝑎1heightr(s,a):=-1+\\textrm{height}, with height the car’s vertical offset. The episode terminates when the car reaches a target height of 0.60.60.6. Hence the goal is to reach the target as soon as possible. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_55", "text": " Acrobot Swing Up: In this task, an under-actuated, two-link robot has to swing itself into an upright position. It consists of two joints of which the first one has a fixed position and only the second one can exert torque. The goal is to swing the robot into an upright position and stabilize around that position. The controller not only has to swing the pendulum in order to build up inertia, similar to the Mountain Car task, but also has to decelerate it in order to prevent it from tipping over. The observation includes the two joint angles, θ1subscript𝜃1\\theta_{1} and θ2subscript𝜃2\\theta_{2}, and their velocities, θ˙1subscript˙𝜃1\\dot{\\theta}_{1} and θ˙2subscript˙𝜃2\\dot{\\theta}_{2}. The action is the torque applied at the second joint. The reward is defined as r​(s,a):=−‖tip​(s)−tiptarget‖2assign𝑟𝑠𝑎subscriptnormtip𝑠subscripttiptarget2r(s,a):=-\\|\\mathrm{tip}(s)-\\mathrm{tip}_{\\mathrm{target}}\\|_{2}, where tip​(s)tip𝑠\\mathrm{tip}(s) computes the Cartesian position of the tip of the robot given the joint angles. No termination condition is applied. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_56", "text": " Double Inverted Pendulum Balancing: This task extends the Cart-Pole Balancing task by replacing the single-link pole by a two-link rigid structure. As in the former task, the goal is to stabilize the two-link pole near the upright position. This task is more difficult than single-pole balancing, since the system is even more unstable and requires the controller to actively maintain balance. The observation includes the cart position x𝑥x, joint angles (θ1subscript𝜃1\\theta_{1} and θ2subscript𝜃2\\theta_{2}), and joint velocities (θ˙1subscript˙𝜃1\\dot{\\theta}_{1} and θ˙2subscript˙𝜃2\\dot{\\theta}_{2}). We encode each joint angle as its sine and cosine values. The action is the same as in cart-pole tasks. 
The reward is given by r​(s,a)=10−0.01​xtip2−(ytip−2)2−10−3⋅θ˙12−5⋅10−3⋅θ˙22𝑟𝑠𝑎100.01superscriptsubscript𝑥tip2superscriptsubscript𝑦tip22⋅superscript103superscriptsubscript˙𝜃12⋅5superscript103superscriptsubscript˙𝜃22r(s,a)=10-0.01x_{\\mathrm{tip}}^{2}-(y_{\\mathrm{tip}}-2)^{2}-10^{-3}\\cdot\\dot{\\theta}_{1}^{2}-5\\cdot 10^{-3}\\cdot\\dot{\\theta}_{2}^{2}, where xtip,ytipsubscript𝑥tipsubscript𝑦tipx_{\\mathrm{tip}},y_{\\mathrm{tip}} are the coordinates of the tip of the pole. No termination condition is applied. The episode is terminated when ytip≤1subscript𝑦tip1y_{\\text{tip}}\\leq 1. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_57", "text": " Swimmer: The swimmer is a planar robot with 3 links and 2 actuated joints. Fluid is simulated through viscosity forces, which apply drag on each link, allowing the swimmer to move forward. This task is the simplest of all locomotion tasks, since there are no irrecoverable states in which the swimmer can get stuck, unlike other robots which may fall down or flip over. This places less burden on exploration. The 131313-dim observation includes the joint angles, joint velocities, as well as the coordinates of the center of mass. The reward is given by r​(s,a)=vx−0.005​‖a‖22𝑟𝑠𝑎subscript𝑣𝑥0.005superscriptsubscriptnorm𝑎22r(s,a)=v_{x}-0.005\\|a\\|_{2}^{2}, where vxsubscript𝑣𝑥v_{x} is the forward velocity. No termination condition is applied. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_58", "text": " Hopper: The hopper is a planar monopod robot with 4 rigid links, corresponding to the torso, upper leg, lower leg, and foot, along with 3 actuated joints. More exploration is needed than the swimmer task, since a stable hopping gait has to be learned without falling. Otherwise, it may get stuck in a local optimum of diving forward. The 202020-dim observation includes joint angles, joint velocities, the coordinates of center of mass, and constraint forces. The reward is given by r​(s,a):=vx−0.005⋅‖a‖22+1assign𝑟𝑠𝑎subscript𝑣𝑥⋅0.005superscriptsubscriptnorm𝑎221r(s,a):=v_{x}-0.005\\cdot\\|a\\|_{2}^{2}+1, where the last term is a bonus for being “alive.” The episode is terminated when zb​o​d​y<0.7subscript𝑧𝑏𝑜𝑑𝑦0.7z_{body}<0.7 where zb​o​d​ysubscript𝑧𝑏𝑜𝑑𝑦z_{body} is the z𝑧z-coordinate of the body, or when |θy|<0.2subscript𝜃𝑦0.2|\\theta_{y}|<0.2, where θysubscript𝜃𝑦\\theta_{y} is the forward pitch of the body. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_59", "text": " Walker: The walker is a planar biped robot consisting of 7 links, corresponding to two legs and a torso, along with 6 actuated joints. This task is more challenging than hopper, since it has more degrees of freedom, and is also prone to falling. The 212121-dim observation includes joint angles, joint velocities, and the coordinates of center of mass. The reward is given by r​(s,a):=vx−0.005⋅‖a‖22assign𝑟𝑠𝑎subscript𝑣𝑥⋅0.005superscriptsubscriptnorm𝑎22r(s,a):=v_{x}-0.005\\cdot\\|a\\|_{2}^{2}. The episode is terminated when zb​o​d​y<0.8subscript𝑧𝑏𝑜𝑑𝑦0.8z_{body}<0.8, zb​o​d​y>2.0subscript𝑧𝑏𝑜𝑑𝑦2.0z_{body}>2.0, or when |θy|>1.0subscript𝜃𝑦1.0|\\theta_{y}|>1.0. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_60", "text": " Half-Cheetah: The half-cheetah is a planar biped robot with 9 rigid links, including two legs and a torso, along with 6 actuated joints. 
The 202020-dim observation includes joint angles, joint velocities, and the coordinates of the center of mass. The reward is given by r​(s,a)=vx−0.05⋅‖a‖22𝑟𝑠𝑎subscript𝑣𝑥⋅0.05superscriptsubscriptnorm𝑎22r(s,a)=v_{x}-0.05\\cdot\\|a\\|_{2}^{2}. No termination condition is applied. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_61", "text": " Ant: The ant is a quadruped with 13 rigid links, including four legs and a torso, along with 8 actuated joints. This task is more challenging than the previous tasks due to the higher degrees of freedom. The 125125125-dim observation includes joint angles, joint velocities, coordinates of the center of mass, a (usually sparse) vector of contact forces, as well as the rotation matrix for the body. The reward is given by r​(s,a)=vx−0.005⋅‖a‖22−Ccontact+0.05𝑟𝑠𝑎subscript𝑣𝑥⋅0.005superscriptsubscriptnorm𝑎22subscript𝐶contact0.05r(s,a)=v_{x}-0.005\\cdot\\|a\\|_{2}^{2}-C_{\\mathrm{contact}}+0.05, where Ccontactsubscript𝐶contactC_{\\mathrm{contact}} penalizes contacts to the ground, and is given by 5⋅10−4⋅‖Fcontact‖22⋅5superscript104superscriptsubscriptnormsubscript𝐹contact225\\cdot 10^{-4}\\cdot\\|F_{\\mathrm{contact}}\\|_{2}^{2}, where Fcontactsubscript𝐹contactF_{\\mathrm{contact}} is the contact force vector clipped to values between −11-1 and 111. The episode is terminated when zb​o​d​y<0.2subscript𝑧𝑏𝑜𝑑𝑦0.2z_{body}<0.2 or when zb​o​d​y>1.0subscript𝑧𝑏𝑜𝑑𝑦1.0z_{body}>1.0. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_62", "text": " Simple Humanoid: This is a simplified humanoid model with 13 rigid links, including the head, body, arms, and legs, along with 10 actuated joints. The increased difficulty comes from the increased degrees of freedom as well as the need to maintain balance. The 102102102-dim observation includes the joint angles, joint velocities, vector of contact forces, and the coordinates of the center of mass. The reward is given by r​(s,a)=vx−5⋅10−4​‖a‖22−Ccontact−Cdeviation+0.2𝑟𝑠𝑎subscript𝑣𝑥⋅5superscript104superscriptsubscriptnorm𝑎22subscript𝐶contactsubscript𝐶deviation0.2r(s,a)=v_{x}-5\\cdot 10^{-4}\\|a\\|_{2}^{2}-C_{\\mathrm{contact}}-C_{\\mathrm{deviation}}+0.2, where Ccontact=5⋅10−6⋅‖Fcontact‖subscript𝐶contact⋅5superscript106normsubscript𝐹contactC_{\\mathrm{contact}}=5\\cdot 10^{-6}\\cdot\\|F_{\\mathrm{contact}}\\|, and Cdeviation=5⋅10−3⋅(vy2+vz2)subscript𝐶deviation⋅5superscript103superscriptsubscript𝑣𝑦2superscriptsubscript𝑣𝑧2C_{\\mathrm{deviation}}=5\\cdot 10^{-3}\\cdot(v_{y}^{2}+v_{z}^{2}) to penalize deviation from the forward direction. The episode is terminated when zb​o​d​y<0.8subscript𝑧𝑏𝑜𝑑𝑦0.8z_{body}<0.8 or when zb​o​d​y>2.0subscript𝑧𝑏𝑜𝑑𝑦2.0z_{body}>2.0. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_63", "text": " Full Humanoid: This is a humanoid model with 19 rigid links and 28 actuated joints. It has more degrees of freedom below the knees and elbows, which makes the system higher-dimensional and harder for learning. The 142142142-dim observation includes the joint angles, joint velocities, vector of contact forces, and the coordinates of the center of mass. The reward and termination condition is the same as in the Simple Humanoid model. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_64", "text": " Limited Sensors: The full description is included in the main text. 
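As a second illustration, here is a hedged sketch of the Ant reward and termination condition quoted above; the argument names and the way the observation is decomposed are assumptions for the example, not the benchmark's API.

```python
import numpy as np

# Sketch of the Ant reward quoted above:
#   r = v_x - 0.005 * ||a||^2 - C_contact + 0.05,
#   C_contact = 5e-4 * ||clip(F_contact, -1, 1)||^2,
# with termination when the body height leaves [0.2, 1.0].
def ant_reward(forward_velocity, action, contact_forces, z_body):
    clipped = np.clip(contact_forces, -1.0, 1.0)
    c_contact = 5e-4 * float(np.sum(np.square(clipped)))
    reward = forward_velocity - 0.005 * float(np.sum(np.square(action))) - c_contact + 0.05
    done = z_body < 0.2 or z_body > 1.0
    return reward, done
```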
", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_65", "text": " Noisy Observations and Delayed Actions: For all tasks, we use a Gaussan noise with σ=0.1𝜎0.1\\sigma=0.1. The time delay is as follows: Cart-Pole Balancing 0.15 sec, Cart-Pole Swing Up 0.15 sec, Mountain Car 0.15 sec, Acrobot Swing Up 0.06 sec, and Double Inverted Pendulum Balancing 0.06 sec. This corresponds to 333 discretization frames for each task. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_66", "text": " System Identifications: For Cart-Pole Balancing and Cart-Pole Swing Up, the pole length is varied uniformly between, 50% and 150%. For Mountain Car, the width of the valley varies uniformly between 75% and 125%. For Acrobot Swing Up, each of the pole length varies uniformly between 50% and 150%. For Double Inverted Pendulum Balancing, each of the pole length varies uniformly between 83% and 167%. Please refer to the benchmark source code for reference values. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_67", "text": " Locomotion + Food Collection: During each episode, 888 food units and 888 bombs are placed in the environment. Collecting a food unit gives +11+1 reward, and collecting a bomb gives −11-1 reward. Hence the best cumulative reward for a given episode is 888. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" }, { "id": "1604.06778_all_68", "text": " Locomotion + Maze: During each episode, a +11+1 reward is given when the robot reaches the goal. Otherwise, the robot receives a zero reward throughout the episode. ", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control" } ]
Wasn't initial training done with 416 x 416 images?
Yes, the initial training was done with 416 x 416 images [31].
[ 31 ]
[ { "id": "1612.08242_all_0", "text": " General purpose object detection should be fast, accurate, and able to recognize a wide variety of objects. Since the introduction of neural networks, detection frameworks have become increasingly fast and accurate. However, most detection methods are still constrained to a small set of objects. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_1", "text": " Current object detection datasets are limited compared to datasets for other tasks like classification and tagging. The most common detection datasets contain thousands to hundreds of thousands of images with dozens to hundreds of tags . Classification datasets have millions of images with tens or hundreds of thousands of categories . ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_2", "text": " We would like detection to scale to level of object classification. However, labelling images for detection is far more expensive than labelling for classification or tagging (tags are often user-supplied for free). Thus we are unlikely to see detection datasets on the same scale as classification datasets in the near future. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_3", "text": " We propose a new method to harness the large amount of classification data we already have and use it to expand the scope of current detection systems. Our method uses a hierarchical view of object classification that allows us to combine distinct datasets together. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_4", "text": " We also propose a joint training algorithm that allows us to train object detectors on both detection and classification data. Our method leverages labeled detection images to learn to precisely localize objects while it uses classification images to increase its vocabulary and robustness. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_5", "text": " Using this method we train YOLO9000, a real-time object detector that can detect over 9000 different object categories. First we improve upon the base YOLO detection system to produce YOLOv2, a state-of-the-art, real-time detector. Then we use our dataset combination method and joint training algorithm to train a model on more than 9000 classes from ImageNet as well as detection data from COCO. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_6", "text": " All of our code and pre-trained models are available online at http://pjreddie.com/yolo9000/. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_7", "text": " YOLO suffers from a variety of shortcomings relative to state-of-the-art detection systems. Error analysis of YOLO compared to Fast R-CNN shows that YOLO makes a significant number of localization errors. Furthermore, YOLO has relatively low recall compared to region proposal-based methods. Thus we focus mainly on improving recall and localization while maintaining classification accuracy. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_8", "text": " Computer vision generally trends towards larger, deeper networks . Better performance often hinges on training larger networks or ensembling multiple models together. However, with YOLOv2 we want a more accurate detector that is still fast. Instead of scaling up our network, we simplify the network and then make the representation easier to learn. 
We pool a variety of ideas from past work with our own novel concepts to improve YOLO’s performance. A summary of results can be found in Table 2. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_9", "text": " Batch Normalization. Batch normalization leads to significant improvements in convergence while eliminating the need for other forms of regularization . By adding batch normalization on all of the convolutional layers in YOLO we get more than 2% improvement in mAP. Batch normalization also helps regularize the model. With batch normalization we can remove dropout from the model without overfitting. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_10", "text": " High Resolution Classifier. All state-of-the-art detection methods use classifier pre-trained on ImageNet . Starting with AlexNet most classifiers operate on input images smaller than 256×256256256256\\times 256 . The original YOLO trains the classifier network at 224×224224224224\\times 224 and increases the resolution to 448448448 for detection. This means the network has to simultaneously switch to learning object detection and adjust to the new input resolution. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_11", "text": " For YOLOv2 we first fine tune the classification network at the full 448×448448448448\\times 448 resolution for 10 epochs on ImageNet. This gives the network time to adjust its filters to work better on higher resolution input. We then fine tune the resulting network on detection. This high resolution classification network gives us an increase of almost 4% mAP. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_12", "text": " Convolutional With Anchor Boxes. YOLO predicts the coordinates of bounding boxes directly using fully connected layers on top of the convolutional feature extractor. Instead of predicting coordinates directly Faster R-CNN predicts bounding boxes using hand-picked priors . Using only convolutional layers the region proposal network (RPN) in Faster R-CNN predicts offsets and confidences for anchor boxes. Since the prediction layer is convolutional, the RPN predicts these offsets at every location in a feature map. Predicting offsets instead of coordinates simplifies the problem and makes it easier for the network to learn. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_13", "text": " We remove the fully connected layers from YOLO and use anchor boxes to predict bounding boxes. First we eliminate one pooling layer to make the output of the network’s convolutional layers higher resolution. We also shrink the network to operate on 416416416 input images instead of 448×448448448448\\times 448. We do this because we want an odd number of locations in our feature map so there is a single center cell. Objects, especially large objects, tend to occupy the center of the image so it’s good to have a single location right at the center to predict these objects instead of four locations that are all nearby. YOLO’s convolutional layers downsample the image by a factor of 32 so by using an input image of 416416416 we get an output feature map of 13×13131313\\times 13. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_14", "text": " When we move to anchor boxes we also decouple the class prediction mechanism from the spatial location and instead predict class and objectness for every anchor box. 
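To make the 416-input arithmetic above concrete (and to connect it to the question answered earlier in this record): the detector downsamples by a factor of 32, so a 416 x 416 input yields an odd-sized 13 x 13 grid with a single center cell. A small illustrative check, not part of the paper's code:

```python
# The detector downsamples by a factor of 32, so the output grid is input // 32.
# 416 is chosen so the grid (13 x 13) has an odd size and hence a single center cell.
def output_grid(input_size, stride=32):
    assert input_size % stride == 0, "input must be a multiple of the total stride"
    return input_size // stride

for size in (288, 320, 416, 448, 608):
    g = output_grid(size)
    print(size, "->", g, "x", g, "grid", "(single center cell)" if g % 2 == 1 else "")
```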
Following YOLO, the objectness prediction still predicts the IOU of the ground truth and the proposed box and the class predictions predict the conditional probability of that class given that there is an object. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_15", "text": " Using anchor boxes we get a small decrease in accuracy. YOLO only predicts 98 boxes per image but with anchor boxes our model predicts more than a thousand. Without anchor boxes our intermediate model gets 69.569.569.5 mAP with a recall of 81%percent8181\\%. With anchor boxes our model gets 69.269.269.2 mAP with a recall of 88%percent8888\\%. Even though the mAP decreases, the increase in recall means that our model has more room to improve. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_16", "text": " Dimension Clusters. We encounter two issues with anchor boxes when using them with YOLO. The first is that the box dimensions are hand picked. The network can learn to adjust the boxes appropriately but if we pick better priors for the network to start with we can make it easier for the network to learn to predict good detections. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_17", "text": " Instead of choosing priors by hand, we run k-means clustering on the training set bounding boxes to automatically find good priors. If we use standard k-means with Euclidean distance larger boxes generate more error than smaller boxes. However, what we really want are priors that lead to good IOU scores, which is independent of the size of the box. Thus for our distance metric we use: ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_18", "text": " d​(box,centroid)=1−IOU​(box,centroid)𝑑boxcentroid1IOUboxcentroidd(\\text{box},\\text{centroid})=1-\\text{IOU}(\\text{box},\\text{centroid}) ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_19", "text": " We run k-means for various values of k𝑘k and plot the average IOU with closest centroid, see Figure 2. We choose k=5𝑘5k=5 as a good tradeoff between model complexity and high recall. The cluster centroids are significantly different than hand-picked anchor boxes. There are fewer short, wide boxes and more tall, thin boxes. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_20", "text": " We compare the average IOU to closest prior of our clustering strategy and the hand-picked anchor boxes in Table 1. At only 5 priors the centroids perform similarly to 9 anchor boxes with an average IOU of 61.0 compared to 60.9. If we use 9 centroids we see a much higher average IOU. This indicates that using k-means to generate our bounding box starts the model off with a better representation and makes the task easier to learn. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_21", "text": " Direct location prediction. When using anchor boxes with YOLO we encounter a second issue: model instability, especially during early iterations. Most of the instability comes from predicting the (x,y)𝑥𝑦(x,y) locations for the box. 
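The dimension-cluster distance d(box, centroid) = 1 - IOU(box, centroid) quoted above only involves box widths and heights, with boxes compared as if they shared a center. A hedged sketch of that distance, which could be plugged into a standard k-means loop:

```python
# 1 - IOU distance between (w, h) boxes, treating boxes as co-centered,
# as used for the dimension clusters described above.
def iou_wh(box, centroid):
    w1, h1 = box
    w2, h2 = centroid
    inter = min(w1, w2) * min(h1, h2)
    union = w1 * h1 + w2 * h2 - inter
    return inter / union

def kmeans_distance(box, centroid):
    return 1.0 - iou_wh(box, centroid)

# Example: a tall thin box is much closer (in 1 - IOU) to a tall thin centroid.
print(kmeans_distance((30, 90), (35, 100)))   # small distance
print(kmeans_distance((30, 90), (100, 30)))   # large distance
```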
In region proposal networks the network predicts values $t_{x}$ and $t_{y}$ and the $(x,y)$ center coordinates are calculated as: ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_22", "text": " $x=(t_{x}*w_{a})-x_{a}$, $y=(t_{y}*h_{a})-y_{a}$ ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_23", "text": " For example, a prediction of $t_{x}=1$ would shift the box to the right by the width of the anchor box, a prediction of $t_{x}=-1$ would shift it to the left by the same amount. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_24", "text": " This formulation is unconstrained so any anchor box can end up at any point in the image, regardless of what location predicted the box. With random initialization the model takes a long time to stabilize to predicting sensible offsets. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_25", "text": " Instead of predicting offsets we follow the approach of YOLO and predict location coordinates relative to the location of the grid cell. This bounds the ground truth to fall between 0 and 1. We use a logistic activation to constrain the network’s predictions to fall in this range. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_26", "text": " The network predicts 5 bounding boxes at each cell in the output feature map. The network predicts 5 coordinates for each bounding box, $t_{x}$, $t_{y}$, $t_{w}$, $t_{h}$, and $t_{o}$. If the cell is offset from the top left corner of the image by $(c_{x},c_{y})$ and the bounding box prior has width and height $p_{w}$, $p_{h}$, then the predictions correspond to: ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_27", "text": " $b_{x}=\\sigma(t_{x})+c_{x}$, $b_{y}=\\sigma(t_{y})+c_{y}$, $b_{w}=p_{w}e^{t_{w}}$, $b_{h}=p_{h}e^{t_{h}}$, $Pr(\\text{object})*IOU(b,\\text{object})=\\sigma(t_{o})$ ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_28", "text": " Since we constrain the location prediction the parametrization is easier to learn, making the network more stable. Using dimension clusters along with directly predicting the bounding box center location improves YOLO by almost 5% over the version with anchor boxes. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_29", "text": " Fine-Grained Features. This modified YOLO predicts detections on a $13\\times 13$ feature map. While this is sufficient for large objects, it may benefit from finer grained features for localizing smaller objects. Faster R-CNN and SSD both run their proposal networks at various feature maps in the network to get a range of resolutions.
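A hedged sketch of the box-decoding equations quoted above (b_x = sigma(t_x) + c_x, and so on); the helper below is illustrative rather than the Darknet implementation, and all quantities are taken to be in grid-cell units.

```python
import math

# Decode one predicted box from (t_x, t_y, t_w, t_h, t_o) following the equations above.
# (c_x, c_y) is the cell's offset from the top-left corner, (p_w, p_h) the prior's size.
def decode_box(t, cell, prior):
    t_x, t_y, t_w, t_h, t_o = t
    c_x, c_y = cell
    p_w, p_h = prior
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    b_x = sigmoid(t_x) + c_x
    b_y = sigmoid(t_y) + c_y
    b_w = p_w * math.exp(t_w)
    b_h = p_h * math.exp(t_h)
    confidence = sigmoid(t_o)        # estimates Pr(object) * IOU(b, object)
    return (b_x, b_y, b_w, b_h), confidence
```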
We take a different approach, simply adding a passthrough layer that brings features from an earlier layer at 26×26262626\\times 26 resolution. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_30", "text": " The passthrough layer concatenates the higher resolution features with the low resolution features by stacking adjacent features into different channels instead of spatial locations, similar to the identity mappings in ResNet. This turns the 26×26×512262651226\\times 26\\times 512 feature map into a 13×13×20481313204813\\times 13\\times 2048 feature map, which can be concatenated with the original features. Our detector runs on top of this expanded feature map so that it has access to fine grained features. This gives a modest 1% performance increase. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_31", "text": " Multi-Scale Training. The original YOLO uses an input resolution of 448×448448448448\\times 448. With the addition of anchor boxes we changed the resolution to 416×416416416416\\times 416. However, since our model only uses convolutional and pooling layers it can be resized on the fly. We want YOLOv2 to be robust to running on images of different sizes so we train this into the model. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_32", "text": " Instead of fixing the input image size we change the network every few iterations. Every 10 batches our network randomly chooses a new image dimension size. Since our model downsamples by a factor of 32, we pull from the following multiples of 32: {320,352,…,608}320352…608\\{320,352,...,608\\}. Thus the smallest option is 320×320320320320\\times 320 and the largest is 608×608608608608\\times 608. We resize the network to that dimension and continue training. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_33", "text": " This regime forces the network to learn to predict well across a variety of input dimensions. This means the same network can predict detections at different resolutions. The network runs faster at smaller sizes so YOLOv2 offers an easy tradeoff between speed and accuracy. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_34", "text": " At low resolutions YOLOv2 operates as a cheap, fairly accurate detector. At 288×288288288288\\times 288 it runs at more than 90 FPS with mAP almost as good as Fast R-CNN. This makes it ideal for smaller GPUs, high framerate video, or multiple video streams. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_35", "text": " At high resolution YOLOv2 is a state-of-the-art detector with 78.6 mAP on VOC 2007 while still operating above real-time speeds. See Table 3 for a comparison of YOLOv2 with other frameworks on VOC 2007. Figure 4 ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_36", "text": " Further Experiments. We train YOLOv2 for detection on VOC 2012. Table 4 shows the comparative performance of YOLOv2 versus other state-of-the-art detection systems. YOLOv2 achieves 73.4 mAP while running far faster than competing methods. We also train on COCO and compare to other methods in Table 5. On the VOC metric (IOU = .5) YOLOv2 gets 44.0 mAP, comparable to SSD and Faster R-CNN. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_37", "text": " We want detection to be accurate but we also want it to be fast. 
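A minimal sketch of the passthrough (reorg) layer described above: each 2×2 spatial block of the 26×26×512 map is stacked into channels, giving 13×13×2048, which can then be concatenated with the coarse features. The (C, H, W) layout and the assumption that the coarse map has 1024 channels are illustrative, not taken from the excerpt.

```python
import numpy as np

def passthrough(x, stride=2):
    """Reorganize a (C, H, W) feature map by stacking each stride x stride spatial
    block into the channel dimension: (C, H, W) -> (C*stride**2, H/stride, W/stride)."""
    c, h, w = x.shape
    x = x.reshape(c, h // stride, stride, w // stride, stride)
    x = x.transpose(0, 2, 4, 1, 3)          # (C, s, s, H/s, W/s)
    return x.reshape(c * stride * stride, h // stride, w // stride)

fine = np.zeros((512, 26, 26))              # higher-resolution features
coarse = np.zeros((1024, 13, 13))           # assumed channel count for the coarse map
stacked = np.concatenate([passthrough(fine), coarse], axis=0)
print(stacked.shape)                        # (3072, 13, 13): 2048 fine + 1024 coarse channels
```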
Most applications for detection, like robotics or self-driving cars, rely on low latency predictions. In order to maximize performance we design YOLOv2 to be fast from the ground up. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_38", "text": " Most detection frameworks rely on VGG-16 as the base feature extractor . VGG-16 is a powerful, accurate classification network but it is needlessly complex. The convolutional layers of VGG-16 require 30.69 billion floating point operations for a single pass over a single image at 224×224224224224\\times 224 resolution. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_39", "text": " The YOLO framework uses a custom network based on the Googlenet architecture . This network is faster than VGG-16, only using 8.52 billion operations for a forward pass. However, it’s accuracy is slightly worse than VGG-16. For single-crop, top-5 accuracy at 224×224224224224\\times 224, YOLO’s custom model gets 88.0% ImageNet compared to 90.0% for VGG-16. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_40", "text": " Darknet-19. We propose a new classification model to be used as the base of YOLOv2. Our model builds off of prior work on network design as well as common knowledge in the field. Similar to the VGG models we use mostly 3×3333\\times 3 filters and double the number of channels after every pooling step . Following the work on Network in Network (NIN) we use global average pooling to make predictions as well as 1×1111\\times 1 filters to compress the feature representation between 3×3333\\times 3 convolutions . We use batch normalization to stabilize training, speed up convergence, and regularize the model . ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_41", "text": " Our final model, called Darknet-19, has 19 convolutional layers and 5 maxpooling layers. For a full description see Table 6. Darknet-19 only requires 5.58 billion operations to process an image yet achieves 72.9%percent72.972.9\\% top-1 accuracy and 91.2%percent91.291.2\\% top-5 accuracy on ImageNet. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_42", "text": " Training for classification. We train the network on the standard ImageNet 1000 class classification dataset for 160 epochs using stochastic gradient descent with a starting learning rate of 0.10.10.1, polynomial rate decay with a power of 444, weight decay of 0.00050.00050.0005 and momentum of 0.90.90.9 using the Darknet neural network framework . During training we use standard data augmentation tricks including random crops, rotations, and hue, saturation, and exposure shifts. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_43", "text": " As discussed above, after our initial training on images at 224×224224224224\\times 224 we fine tune our network at a larger size, 448448448. For this fine tuning we train with the above parameters but for only 10 epochs and starting at a learning rate of 10−3superscript10310^{-3}. At this higher resolution our network achieves a top-1 accuracy of 76.5%percent76.576.5\\% and a top-5 accuracy of 93.3%percent93.393.3\\%. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_44", "text": " Training for detection. 
We modify this network for detection by removing the last convolutional layer and instead adding on three 3×3333\\times 3 convolutional layers with 102410241024 filters each followed by a final 1×1111\\times 1 convolutional layer with the number of outputs we need for detection. For VOC we predict 5 boxes with 5 coordinates each and 20 classes per box so 125 filters. We also add a passthrough layer from the final 3×3×512335123\\times 3\\times 512 layer to the second to last convolutional layer so that our model can use fine grain features. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_45", "text": " We train the network for 160 epochs with a starting learning rate of 10−3superscript10310^{-3}, dividing it by 10 at 60 and 90 epochs. We use a weight decay of 0.00050.00050.0005 and momentum of 0.90.90.9. We use a similar data augmentation to YOLO and SSD with random crops, color shifting, etc. We use the same training strategy on COCO and VOC. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_46", "text": " We propose a mechanism for jointly training on classification and detection data. Our method uses images labelled for detection to learn detection-specific information like bounding box coordinate prediction and objectness as well as how to classify common objects. It uses images with only class labels to expand the number of categories it can detect. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_47", "text": " During training we mix images from both detection and classification datasets. When our network sees an image labelled for detection we can backpropagate based on the full YOLOv2 loss function. When it sees a classification image we only backpropagate loss from the classification-specific parts of the architecture. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_48", "text": " This approach presents a few challenges. Detection datasets have only common objects and general labels, like “dog” or “boat”. Classification datasets have a much wider and deeper range of labels. ImageNet has more than a hundred breeds of dog, including “Norfolk terrier”, “Yorkshire terrier”, and “Bedlington terrier”. If we want to train on both datasets we need a coherent way to merge these labels. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_49", "text": " Most approaches to classification use a softmax layer across all the possible categories to compute the final probability distribution. Using a softmax assumes the classes are mutually exclusive. This presents problems for combining datasets, for example you would not want to combine ImageNet and COCO using this model because the classes “Norfolk terrier” and “dog” are not mutually exclusive. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_50", "text": " We could instead use a multi-label model to combine the datasets which does not assume mutual exclusion. This approach ignores all the structure we do know about the data, for example that all of the COCO classes are mutually exclusive. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_51", "text": " Hierarchical classification. ImageNet labels are pulled from WordNet, a language database that structures concepts and how they relate . In WordNet, “Norfolk terrier” and “Yorkshire terrier” are both hyponyms of “terrier” which is a type of “hunting dog”, which is a type of “dog”, which is a “canine”, etc. 
Most approaches to classification assume a flat structure to the labels however for combining datasets, structure is exactly what we need. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_52", "text": " WordNet is structured as a directed graph, not a tree, because language is complex. For example a “dog” is both a type of “canine” and a type of “domestic animal” which are both synsets in WordNet. Instead of using the full graph structure, we simplify the problem by building a hierarchical tree from the concepts in ImageNet. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_53", "text": " To build this tree we examine the visual nouns in ImageNet and look at their paths through the WordNet graph to the root node, in this case “physical object”. Many synsets only have one path through the graph so first we add all of those paths to our tree. Then we iteratively examine the concepts we have left and add the paths that grow the tree by as little as possible. So if a concept has two paths to the root and one path would add three edges to our tree and the other would only add one edge, we choose the shorter path. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_54", "text": " The final result is WordTree, a hierarchical model of visual concepts. To perform classification with WordTree we predict conditional probabilities at every node for the probability of each hyponym of that synset given that synset. For example, at the “terrier” node we predict: ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_55", "text": " Pr(Norfolk terrier\\displaystyle Pr(\\text{Norfolk terrier} |terrier)\\displaystyle|\\text{terrier}) Pr(Yorkshire terrier\\displaystyle Pr(\\text{Yorkshire terrier} |terrier)\\displaystyle|\\text{terrier}) Pr(Bedlington terrier\\displaystyle Pr(\\text{Bedlington terrier} |terrier)\\displaystyle|\\text{terrier}) ……\\displaystyle... ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_56", "text": " If we want to compute the absolute probability for a particular node we simply follow the path through the tree to the root node and multiply to conditional probabilities. So if we want to know if a picture is of a Norfolk terrier we compute: ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_57", "text": " P​r​(Norfolk terrier)𝑃𝑟Norfolk terrier\\displaystyle Pr(\\text{Norfolk terrier}) =P​r​(Norfolk terrier|terrier)absent𝑃𝑟conditionalNorfolk terrierterrier\\displaystyle=Pr(\\text{Norfolk terrier}|\\text{terrier}) ∗Pr(terrier\\displaystyle*Pr(\\text{terrier} |hunting dog)\\displaystyle|\\text{hunting dog}) ∗…absent…\\displaystyle*\\ldots ∗\\displaystyle* ∗Pr(mammal\\displaystyle*Pr(\\text{mammal} |Pr(animal)\\displaystyle|Pr(\\text{animal}) ∗Pr(animal\\displaystyle*Pr(\\text{animal} |physical object)\\displaystyle|\\text{physical object}) ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_58", "text": " For classification purposes we assume that the the image contains an object: P​r​(physical object)=1𝑃𝑟physical object1Pr(\\text{physical object})=1. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_59", "text": " To validate this approach we train the Darknet-19 model on WordTree built using the 1000 class ImageNet. To build WordTree1k we add in all of the intermediate nodes which expands the label space from 1000 to 1369. 
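The absolute-probability computation above is simply a product of conditional probabilities along the path to the root. A small sketch follows, with a made-up slice of WordTree expressed as parent pointers (the real tree is built from WordNet paths as described); the numbers in the usage comment are placeholders.

```python
# Hypothetical parent pointers for a tiny slice of WordTree.
parent = {
    "Norfolk terrier": "terrier",
    "terrier": "hunting dog",
    "hunting dog": "dog",
    "dog": "canine",
    "canine": "mammal",
    "mammal": "animal",
    "animal": "physical object",
    "physical object": None,
}

def absolute_prob(node, cond_prob):
    """Multiply conditional probabilities along the path from `node` to the root.
    `cond_prob[n]` is Pr(n | parent(n)); Pr(physical object) is taken as 1."""
    p = 1.0
    while parent[node] is not None:
        p *= cond_prob[node]
        node = parent[node]
    return p

# e.g. cond_prob = {"Norfolk terrier": 0.9, "terrier": 0.8, "hunting dog": 0.9,
#                   "dog": 0.95, "canine": 0.9, "mammal": 0.9, "animal": 0.99}
# absolute_prob("Norfolk terrier", cond_prob)   # product of the chain of conditionals
```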
During training we propagate ground truth labels up the tree so that if an image is labelled as a “Norfolk terrier” it also gets labelled as a “dog” and a “mammal”, etc. To compute the conditional probabilities our model predicts a vector of 1369 values and we compute the softmax over all sysnsets that are hyponyms of the same concept, see Figure 5. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_60", "text": " Using the same training parameters as before, our hierarchical Darknet-19 achieves 71.9%percent71.971.9\\% top-1 accuracy and 90.4%percent90.490.4\\% top-5 accuracy. Despite adding 369 additional concepts and having our network predict a tree structure our accuracy only drops marginally. Performing classification in this manner also has some benefits. Performance degrades gracefully on new or unknown object categories. For example, if the network sees a picture of a dog but is uncertain what type of dog it is, it will still predict “dog” with high confidence but have lower confidences spread out among the hyponyms. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_61", "text": " This formulation also works for detection. Now, instead of assuming every image has an object, we use YOLOv2’s objectness predictor to give us the value of P​r​(physical object)𝑃𝑟physical objectPr(\\text{physical object}). The detector predicts a bounding box and the tree of probabilities. We traverse the tree down, taking the highest confidence path at every split until we reach some threshold and we predict that object class. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_62", "text": " Dataset combination with WordTree. We can use WordTree to combine multiple datasets together in a sensible fashion. We simply map the categories in the datasets to synsets in the tree. Figure 6 shows an example of using WordTree to combine the labels from ImageNet and COCO. WordNet is extremely diverse so we can use this technique with most datasets. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_63", "text": " Joint classification and detection. Now that we can combine datasets using WordTree we can train our joint model on classification and detection. We want to train an extremely large scale detector so we create our combined dataset using the COCO detection dataset and the top 9000 classes from the full ImageNet release. We also need to evaluate our method so we add in any classes from the ImageNet detection challenge that were not already included. The corresponding WordTree for this dataset has 9418 classes. ImageNet is a much larger dataset so we balance the dataset by oversampling COCO so that ImageNet is only larger by a factor of 4:1. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_64", "text": " Using this dataset we train YOLO9000. We use the base YOLOv2 architecture but only 3 priors instead of 5 to limit the output size. When our network sees a detection image we backpropagate loss as normal. For classification loss, we only backpropagate loss at or above the corresponding level of the label. For example, if the label is “dog” we do assign any error to predictions further down in the tree, “German Shepherd” versus “Golden Retriever”, because we do not have that information. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_65", "text": " When it sees a classification image we only backpropagate classification loss. 
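Two pieces of the WordTree training described above can be sketched directly: propagating a ground-truth label up the tree, and computing softmaxes separately over each group of co-hyponyms rather than over all 1369 synsets at once. The grouping indices and names below are illustrative assumptions.

```python
import numpy as np

def propagate_labels(label, parent):
    """Turn a leaf label into the set of labels on its path to the root
    (e.g. "Norfolk terrier" also becomes "dog", "mammal", ...)."""
    labels = set()
    while label is not None:
        labels.add(label)
        label = parent[label]
    return labels

def grouped_softmax(logits, sibling_groups):
    """Softmax applied separately over each group of co-hyponyms (siblings).
    Assumes every index belongs to exactly one group."""
    probs = np.empty_like(logits, dtype=float)
    for idx in sibling_groups:                  # idx: indices sharing the same parent
        z = logits[idx] - logits[idx].max()
        e = np.exp(z)
        probs[idx] = e / e.sum()
    return probs

# e.g. sibling_groups = [np.array([0, 1, 2]), np.array([3, 4])]
# grouped_softmax(np.random.randn(5), sibling_groups)
```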
To do this we simply find the bounding box that predicts the highest probability for that class and we compute the loss on just its predicted tree. We also assume that the predicted box overlaps what would be the ground truth label by at least .3.3.3 IOU and we backpropagate objectness loss based on this assumption. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_66", "text": " Using this joint training, YOLO9000 learns to find objects in images using the detection data in COCO and it learns to classify a wide variety of these objects using data from ImageNet. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_67", "text": " We evaluate YOLO9000 on the ImageNet detection task. The detection task for ImageNet shares on 44 object categories with COCO which means that YOLO9000 has only seen classification data for the majority of the test images, not detection data. YOLO9000 gets 19.7 mAP overall with 16.0 mAP on the disjoint 156 object classes that it has never seen any labelled detection data for. This mAP is higher than results achieved by DPM but YOLO9000 is trained on different datasets with only partial supervision . It also is simultaneously detecting 9000 other object categories, all in real-time. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_68", "text": " When we analyze YOLO9000’s performance on ImageNet we see it learns new species of animals well but struggles with learning categories like clothing and equipment. New animals are easier to learn because the objectness predictions generalize well from the animals in COCO. Conversely, COCO does not have bounding box label for any type of clothing, only for person, so YOLO9000 struggles to model categories like “sunglasses” or “swimming trunks”. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_69", "text": " We introduce YOLOv2 and YOLO9000, real-time detection systems. YOLOv2 is state-of-the-art and faster than other detection systems across a variety of detection datasets. Furthermore, it can be run at a variety of image sizes to provide a smooth tradeoff between speed and accuracy. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_70", "text": " YOLO9000 is a real-time framework for detection more than 9000 object categories by jointly optimizing detection and classification. We use WordTree to combine data from various sources and our joint optimization technique to train simultaneously on ImageNet and COCO. YOLO9000 is a strong step towards closing the dataset size gap between detection and classification. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_71", "text": " Many of our techniques generalize outside of object detection. Our WordTree representation of ImageNet offers a richer, more detailed output space for image classification. Dataset combination using hierarchical classification would be useful in the classification and segmentation domains. Training techniques like multi-scale training could provide benefit across a variety of visual tasks. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_72", "text": " For future work we hope to use similar techniques for weakly supervised image segmentation. We also plan to improve our detection results using more powerful matching strategies for assigning weak labels to classification data during training. Computer vision is blessed with an enormous amount of labelled data. 
We will continue looking for ways to bring different sources and structures of data together to make stronger models of the visual world. ", "title": "YOLO9000: Better, Faster, Stronger" } ]
What is the kernel size used in each layer of SegNet?
The kernel size used in each layer of SegNet is 7×7 [1].
[ 1 ]
[ { "id": "1505.07293_all_0", "text": " Semantic segmentation is an important step towards understanding and inferring different objects and their arrangements observed in a scene. This has wide array of applications ranging from estimating scene geometry, inferring support-relationships among objects to autonomous vehicle driving. Early methods that relied on low-level vision cues have fast been superseded by popular machine learning algorithms. In particular, deep learning has seen huge success lately in handwritten digit recognition, speech, categorising whole images and detecting objects in images (37, 34) also seen growing interest in semantic pixel-wise labelling problems (7, 14, 35). However, these recent approaches have tried to directly adopt deep architectures designed for category prediction to pixel-wise labelling. The results, although very encouraging, have not been quite satisfactory. Primarily, the deepest layer representations/feature maps are of a small resolution as compared to input image dimensions due to several pooling layers e.g. if 2×2222\\times 2 non-overlapping max-pooling-subsampling layers are used three times, the resulting feature map is 1/8t​h1superscript8𝑡ℎ1/8^{th} of the input dimension. Therefore, an ad hoc technique is used to upsample the deepest layer feature map to match the input image dimensions by replicating features within a block i.e. all pixels within a block (8×8888\\times 8 in our example) have the same features. This often results in predictions that appear blocky222see http://david.grangier.info/scene_parsing/. This is exactly what we improve using our proposed SegNet architecture, wherein the decoders learn to map the deepest layer features to full image dimensions. Learning to decode has two other advantages. First, deeper layers each with pooling-subsampling can be introduced which increases the spatial context for pixel labelling. This results in smooth predictions unlike patch based classifiers (36, 2). Second, ablation studies to understand the effects of features such as in can be performed using the decoder stack. ", "title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation" }, { "id": "1505.07293_all_1", "text": " We draw inspiration of our encoder-decoder type architectures from probabilistic auto-encoders used to build generative models and unsupervised learning of feature hierarchies . Our main contribution is to learn an encoder-decoder stack trained in a modular and fully supervised manner for pixel-wise labelling. The addition of each deeper encoder-decoder pair results in an increased spatial context i.e., a 444 layer SegNet with 7×7777\\times 7 kernels and 2×2222\\times 2 non-overlapping max pooling in each layer has a spatial context of 106×106106106106\\times 106 pixels when a feature-map is backtracked to the input image. The SegNet predictions get smoother as more layers are added and demonstrate high accuracy, comparable to or even exceeding methods which use CRFs . SegNet maintains a constant number of features per layer which is typically set to 646464. This has a practical advantage that the computational cost successively decreases for each additional/deeper encoder-decoder pair. ", "title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation" }, { "id": "1505.07293_all_2", "text": " In Sec. 2 we review related recent literature. We describe in detail the SegNet architecture in Sec. 3 along with its qualitative analysis. 
Our quantitative experiments with SegNet on several well known benchmark datasets are described in Sec. 4. We also discuss the advantages and drawbacks of our approach including computational times. We conclude with pointers to future work in Sec. 5. For most of our experiments, we use outdoor RGB road scene analysis (1, 9) and indoor RGBD scene analysis datasets to measure the quantitative performance. ", "title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation" }, { "id": "1505.07293_all_3", "text": " Semantic pixel-wise segmentation is an ongoing topic of research, fuelled by challenging datasets (1, 33, 9). Current best performing methods all mostly rely on hand engineered features generally used for per-pixel independent classification. Typically, a patch is fed into a classifier e.g. Random Forest (32, 2) or Boosting (36, 20) to predict the class probabilities of the center pixel. Features based on appearance , SfM and appearance (2, 36, 20) have been explored for the CamVid test. These per-pixel noisy predictions (often called unary terms) from the classifiers are then smoothed by using a pair-wise or higher order CRF (36, 20) to improve the accuracy. More recent approaches have aimed to produce high quality unaries by trying to predict the labels for all the pixels in a patch as opposed to only the center pixel. This improves the results of Random Forest based unaries but thin structured classes are classfied poorly. Dense depth maps computed from the CamVid video have also been used as input for classification using Random Forests . Another approach argues for the use of a combination of popular hand designed features and spatio temporal super-pixelization to obtain higher accuracy . Recent top performing technique on the CamVid test addresses the imbalance among label frequencies by using additional training data from the PASCAL VOC dataset to learn object detectors. The result of all these techniques indicates the need for improved classification as increases in accuracy have mostly come from adding new features or modalities to the classifier. Post-processing using CRF models of various orders has mainly resulted in improving the accuracy of dominant classes such as sky, road, buildings with little effect on the accuracy of thin structured but equally important classes such as signs, poles, pedestrians. This highlights the need for better pixel-wise classification when imbalanced label frequencies exist. Meanwhile, indoor RGBD pixel-wise semantic segmentation has also gained popularity since the release of the NYU dataset which showed the usefulness of the depth channel to improve segmentation. Their approach used features such as RGB-SIFT, depth-SIFT, location as input to a neural network classifier to predict pixel unaries. The noisy unaries are then smoothed using a CRF. Improvements were made using a richer feature set including LBP and region segmentation to obtain higher accuracy followed by a CRF. In more recent work , both class segmentation and support relationships are inferred together using a combination of RGB and depth based cues. Another approach focusses on real-time joint reconstruction and semantic segmentation, where Random Forests are used as the classifier . Gupta et al. use boundary detection and hierarchical grouping before performing category segmentation. The common attribute along all these approaches is the use of hand engineered features for pixel-wise classifiction of either RGB or RGBD images. 
The application of deep learning for scene segmentation has only just begun. There have also been a few attempts to apply networks designed for categorization to segmentation, particularly by replicating the deepest layer features in blocks to match image dimensions (7, 6, 11, 8). However, the resulting classification is blocky . Another approach using recurrent neural networks merges several low resolution predictions to create input image resolution predictions. On the whole, although some of these techniques already present improvements over hand engineered features . ", "title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation" }, { "id": "1505.07293_all_4", "text": " Our work is inspired by the unsupervised feature learning architecture proposed by Ranzato et. al . The key learning module is an encoder-decoder network where the encoder consists of a filter bank convolution, tanh squashing function, max pooling followed by sub-sampling to obtain the feature maps. For each sample, the indices of the max locations computed during pooling are stored and passed to the decoder. The decoder upsamples the feature maps by using the already stored pooled indices, also called switches, and learns a decoder filter bank to reconstruct the input image. This architecture was used for unsupervised pre-training of feature hierarchies. A similar decoding technique is used for visualizing trained convolutional networks for object classification; the transposed encoder kernels are set as the decoder kernels which are followed by a non-linearity and the pooling indices are used for upsampling. The architecture of Ranzato mainly concentrated on layer wise feature learning using small input patches although during test time a full sized image was the input. This discrepancy was corrected for by Kavukcuoglu et. al. by using test size images/feature maps to learn hierarchical encoders. Both these approaches however did not attempt to use deep encoder-decoder networks for unsupervised feature training as they discarded the decoders after each encoder training. Here, the SegNet architecture differs from these approaches as the objective used for training all the encoder-decoder pairs is the same, i.e., to minimise the cross-entropy label loss. ", "title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation" }, { "id": "1505.07293_all_5", "text": " Other applications where pixel wise predictions are made using deep networks are image super-resolution and depth map prediction from a single image . The authors in discuss the need for learning to upsample from low resolution feature maps which is the central topic of this paper. ", "title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation" }, { "id": "1505.07293_all_6", "text": " A four layer SegNet architecture used in our experiments is illustrated in Fig. 1. Each encoder performs dense convolutions, ReLU non-linearity, a non-overlapping max pooling with a 2×2222\\times 2 window and finally down-sampling. Each decoder upsamples its input using the memorized pooled indices and convolves it with a trainable filter bank. No ReLU non-linearity is used in the decoder unlike the deconvolution network (41, 42). This makes it easier to optimize the filters in each pair. The encoder and decoder filters are also untied to provide additional degrees of freedom to minimize the objective. 
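The encoder/decoder pairing described above — max pooling that memorizes the pooled indices, and a decoder that upsamples with those indices before a trainable filter bank, with no ReLU in the decoder — can be sketched with standard PyTorch modules. Shapes are illustrative; only the 64 features, 7×7 kernels, and 2×2 non-overlapping pooling follow the text.

```python
import torch
import torch.nn as nn

pool = nn.MaxPool2d(kernel_size=2, stride=2, return_indices=True)
unpool = nn.MaxUnpool2d(kernel_size=2, stride=2)
decoder_conv = nn.Conv2d(64, 64, kernel_size=7, padding=3)   # trainable decoder filter bank

x = torch.randn(1, 64, 90, 120)          # an encoder feature map (shape is illustrative)
pooled, indices = pool(x)                # encoder: downsample, remember the max locations
upsampled = unpool(pooled, indices)      # decoder: place values back at the stored indices
out = decoder_conv(upsampled)            # dense convolution; no ReLU in the decoder
print(pooled.shape, out.shape)           # (1, 64, 45, 60) and (1, 64, 90, 120)
```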
The final layer is a soft-max classifier (with no bias term) which classifies each pixel independently. The output of the soft-max is a K channel image where K is the number of classes. ", "title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation" }, { "id": "1505.07293_all_7", "text": " SegNet uses a “flat” architecture, i.e, the number of features in each layer remains the same (646464 in our case) but with full connectivity. This choice is motivated by two reasons. First, it avoids parameter explosion, unlike an expanding deep encoder network with full feature connectivity (same for decoder). Second, the training time remains the same (in our experiments it slightly decreases) for each additional/deeper encoder-decoder pair as the feature map resolution is smaller which makes convolutions faster. Note that the decoder corresponding to the first encoder (closest to the input image) produces a multi-channel feature map although the encoder input is either 3 or 4 channels (RGB or RGBD) (see Fig. 1). This high dimensional feature representation is fed to the soft-max classifier. This is unlike the other decoders which produce feature maps the same size as their encoder inputs. A fixed pooling window of 2×2222\\times 2 with a stride of non-overlapping 222 pixels is used. This small size preserves thin structures in the scene. Further, a constant kernel size of 7×7777\\times 7 over all the layers was chosen to provide a wide context for smooth labelling i.e. a pixel in the deepest layer feature map can be traced back to a context window in the input image of 106×106106106106\\times 106 pixels. The trade-off here is between the size of the context window and retaining thin structures. Smaller kernels decrease context and larger ones potentially destroy thin structures. ", "title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation" }, { "id": "1505.07293_all_8", "text": " The input to the SegNet can be any arbitrary multi-channel image or feature map(s), e.g., RGB, RGBD, map of normals, depth etc. We perform local contrast normalization (LCN) as a pre-processing step to the input (23, 15). The advantage of this step are many, (i) to correct for non-uniform scene illumination thus reducing the dynamic range (increases contrast in shadowed parts). (ii) highlighting edges which leads the network to learn category shape, (iii) improves convergence as it decorrelates the input dimensions . LCN is performed independently for each modality, i.e., RGB is contrast normalized as a three channel input and depth as a single channel for RGBD inputs. This avoids highlighting pseudo depth edges due to RGB edges and vice-versa. ", "title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation" }, { "id": "1505.07293_all_9", "text": " Most deep learning methods use stochastic gradient descent (SGD) for training . SGD needs sufficient expertise to initialize weights with appropriate magnitudes, adapting appropriately learning rates and momentum parameters which both control the step sizes. Therefore, we adopt L-BFGS based on the comparative study by Ngiam et. al who advocate the use of L-BFGS particularly for auto-encoders. L-BFGS has faster and more stable convergence than SGD. It also works well in large batches which is useful to maximize the throughput of powerful GPUs. 
We initialize the weights in all the layers and the soft-max weights from a zero mean unit variance Gaussian 𝒩​(0,1)𝒩01\\mathcal{N}(0,1) and normalized the kernels to unit L2 norm. We obtained good predictive performance from the network without the need for special layer-wise weight initialization or any learning rate tuning. We also use inverse frequency weighting for the classes to correct for any label imbalances in the training set . ", "title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation" }, { "id": "1505.07293_all_10", "text": " We use mini-batches that maximize GPU usage and avoid GPU-CPU memory transfers. Typically, 25−50255025-50 randomly chosen images (with replacement) per mini-batch. The optimizer is run for 202020 iterations per mini-batch and 101010 epochs for each layer. We empirically observe that the objective plateaus after 5−6565-6 epochs and so we run another 444 epochs as a margin. Note that, after 101010 epochs, each input sample approximately “influences” the optimizer 200200200 times. We train the encoder-decoder pair weights closest to the input layer. The soft-max layer can be trained first or randomly initialised. It then remains fixed throughout the experiment. Next, we introduce a deeper layer of encoder-decoder (see Fig. 2) and train their weights while holding the shallower layer encoder-decoder weights fixed. Note that the objective remains the same, i.e., to minimize label cross-entropy loss over the mini-batch. This is unlike unsupervised feature learning approaches which reconstruct the input of the layer in question (27, 16), thus varying the objective with each layer. The deconvolution network on the other hand optimizes the same reconstruction objective with each deeper layer. The difference to our approach is (i) the objective is unsupervised, (ii) there is no encoder to learn a feed-forward representation thus requiring an optimisation step during test time to produce features for recognition. We successively add deeper encoder-decoder pairs and train them while holding the preceeding pair’s weights fixed. In total, we use 4 layer networks, i.e., 4 encoders and 4 decoders in our experiments. Once the encoder-decoder stack is trained, we find that there is no advantage to training the soft-max layer as it only relies on a linear discriminant function. We wrote our own Matlab GPU compatible implementation of SegNet that uses the minFunc optimization library . Our code has been tested on NVIDIA Tesla K40, GTX GeForce 880M and GTXGeForce780 GPUs. We will make our light-weight Matlab code available publicly soon. With the current state of code optimisation, training a 4 layer deep SegNet on the CamVid dataset (367 training images of 360×480360480360\\times 480) takes about a week. The unoptimized test time is in the order of 222secs/frame: bulk of the computation time is spent performing tensor convolutions in the feedforward path and FFT based convolutions during backpropagation 333more speedup can be gained https://developer.nvidia.com/cuDNN. ", "title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation" }, { "id": "1505.07293_all_11", "text": " We perform an ablation study to gain some insight into about the SegNet features. The work of Zeiler et al. study the effects of feature activations in each layer of a trained network . The feature activations are mapped back to image pixel space using a deconvolutional network. 
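The inverse class-frequency weighting mentioned above can be sketched as follows; the exact normalization is an assumption, since the excerpt only cites the weighting scheme it follows.

```python
import numpy as np

def inverse_frequency_weights(label_maps, num_classes):
    """Weight each class by the inverse of its pixel frequency in the training set,
    so rare classes (poles, signs, pedestrians) contribute more to the loss."""
    counts = np.zeros(num_classes)
    for y in label_maps:                        # y: (H, W) integer label image
        counts += np.bincount(y.ravel(), minlength=num_classes)
    freq = counts / counts.sum()
    weights = 1.0 / np.maximum(freq, 1e-12)     # guard against absent classes
    return weights / weights.sum()              # normalization is an illustrative choice

# e.g. inverse_frequency_weights([np.random.randint(0, 11, (360, 480))], num_classes=11)
```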
The SegNet architecture by construction is trained to decode the encoder activations and we use this to visualize the effect of feature activations (which layer) in the pixel label space. A recent study has shown that in each layer of a deep network it is the “direction” or “space” (ensemble of feature activations) which encodes useful class information rather than individual units (feature activations). We therefore focus our study on the predictive effect of a subset of feature activations at each layer. For a given layer, we compute the feature activations/maps for each sample in the training set. We then compute the root mean square value of each map i.e. ∀j∈{1..64}for-all𝑗1..64\\forall j\\in\\{1..64\\} 1N​∑i∈ℐ(fji)21𝑁subscript𝑖ℐsuperscriptsuperscriptsubscript𝑓𝑗𝑖2\\sqrt{\\frac{1}{N}\\sum_{i\\in\\mathcal{I}}(f_{j}^{i})^{2}} where fjisuperscriptsubscript𝑓𝑗𝑖f_{j}^{i} is jt​hsuperscript𝑗𝑡ℎj^{th} feature map value at pixel i𝑖i at a given layer. This assigns each map a single value e.g., the CamVid training set would have a 646464 dimensional vector for each training sample for layer 4 of the SegNet. We now compute a histogram of the top ‘N’ elements of each such vector over all the samples. This histogram shows the most activated features in that layer over the training set. For any ‘N’, we set the remainder of feature maps to zero (ablation) and decode the pixel-wise labelling for a given input sample. Note that since our training is modular, this can be done after each deeper layer has been added. Some results of the top ’N’ feature activations based labelling across all the layers are shown in Fig. 3. We observe firstly that the predictions get smoother as depth is increased which is a consequence of larger spatial context in the input space. More interestingly, the top-1 4th layer features predict almost entirely the static scene classes and “fill in” the missing cars e.g. with sidewalk. Given the feature(s) which get activated for cars are zeroed out, this prediction is reasonable and indicates the network is able to learn spatial context/class location information. Similarly, trees are filled in with buildings and bollards are extended to poles. In contrast, this effect is less clear and gets worse for shallower layers. This suggests subsets of features in the deeper layers are more “tuned” to certain scene categories in agreement with earlier work . We would like to add here that our efforts to perform an ablation study by choosing each feature map in turn and setting the remaining to zero produced results which were not clearly interpretable. It is also interesting to note that for shallower layers to produce qualitatively better predictions ’N’ has to be set to about 5 or 10. The corresponding histogram has atleast 50%percent5050\\% of the features activated as opposed to about 15%percent1515\\% for the top-1 in layer 4, indicating deeper features are tuned to groups of related categories. ", "title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation" }, { "id": "1505.07293_all_12", "text": " A number of outdoor scene datasets are available for semantic parsing (10, 30, 1, 9). Out of these, we chose the CamVid and KITTI datasets which contains 11 semantic classes such as road, building, cars, pedestrians etc.. There is a large imbalance in their frequencies . 
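The ablation procedure described above — rank feature maps by their root-mean-square activation, keep the top N, and zero the rest before decoding — is a few lines of NumPy; the names are illustrative.

```python
import numpy as np

def ablate_to_top_n(feature_maps, n):
    """Keep the n feature maps with the largest root-mean-square activation
    and zero out the remainder, as in the ablation study described above."""
    rms = np.sqrt((feature_maps ** 2).mean(axis=(1, 2)))   # one RMS value per map
    keep = np.argsort(rms)[-n:]                            # indices of the top-n maps
    out = np.zeros_like(feature_maps)
    out[keep] = feature_maps[keep]
    return out, keep

# e.g. maps = np.random.randn(64, 45, 60); ablated, kept = ablate_to_top_n(maps, n=1)
```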
Road, Sky, Building pixels are approximately 40−50405040-50 times more than pedestrian, poles, sign-symbols, cars, bicyclists in the dataset making it very challenging to label smaller categories. This dataset contains video sequences, thus we are able to benchmark our approach with those which use motion and structure (20, 36, 2) and video segments . Other datasets have more balanced label frequencies and are still image datasets. Another reason for choosing CamVid as compared to SIFT-flow, LabelMe is that the size of the training set is small (367367367) making it feasible to train the SegNet given a standard GPU in reasonable time. The CamVid dataset also contains train and test images (233233233) in day and dusk (poor lighting) conditions. The qualitative comparisons of SegNet predictions with several well known algorithms (unaries, unaries+CRF) are shown in Fig. 4. The qualitative results show the ability of the SegNet to segment small (cars, pedestrians, bicyclist) classes while producing a smooth segmentation of the overall scene. The other methods shown in Fig. 4 use structure from motion based cues. Lacking this cue, the SegNet misses some labels (cars) but fills it in with other reasonable context related classes. The CRF based results are smooth but do not retain small classes. More dense models can be better but with additional cost of inference. Table 1 compares the algorithms numerically and demonstrates its superiority over recent competing methods. The KITTI dataset is the largest publicly available road scene dataset. Recently, some images from this dataset have been hand-labelled (888 classes) for inferring dense 3D semantic maps . Note that the image sizes are approximately, 376×12413761241376\\times 1241, and so we cropped the centre 360×480360480360\\times 480 to make it compatible with the CamVid dataset. We use this dataset to analyse the effect of supervised pre-training using the CamVid data on the KITTI test set. First, we add here that testing on the KITTI samples with only the pre-trained SegNet (using CamVid data) resulted in poor performance. This is because of illumination related differences between the datasets. Therefore, we experimented with three other training variants for the KITTI dataset; (i) training all the layers of the SegNet from a random initialization, denoted SegNet(R), (ii) initializing the parameters with CamVid trained values and training only a soft-max classifier with a hidden layer, denoted SegNet(SM), and (iii) initializing the parameters with CamVid trained values and training only the 4th layer of the SegNet for just 222 epochs, denoted SegNet(L4). High quality predictions are obtained in scenario SegNet(R) as expected (Fig. 5). The good performance with CamVid pre-training and layer 4 training shows that, (i) useful semantic cues can be transferred across datasets using the shallower layers, and (ii) it is beneficial to train the deepest layer of the SegNet first given a small computational budget. Table 3 shows the SegNet(R) is competitive even when temporal cues are not used. For indoor RGBD scenes, the NYU dataset (version 2) is the largest benchmark dataset containing 795795795 training and 654654654 testing images with 141414 class (objects, furniture, wall, ceiling etc.) labelling comparison. The NYU dataset has been used to benchmark Farabet et. al’s multi-scale deep learning approach to scene parsing. 
This benchmark is therefore useful to compare their method, which uses ad hoc feature upsampling, with our learning to upsample based approach. We also note that they learn approximately 1.2​M1.2𝑀1.2M parameters as compared to SegNet’s 1.4​M1.4𝑀1.4M parameters. Other methods either use the smaller NYU dataset , different performance measures or test on a small set of classes citeraey. The quantitative analysis shown in Table 2 show that the SegNet predictions are better the multi-scale convnet (2 pooling layers only) in 9 out of 13 classes. This suggests the SegNet can deal with scale changes by increasing context using deeper layers. The overall results are still far from satisfactory and the lack of cues such as height from ground, depth normalization (used in ) are needed to achieve better performance. The qualitative results in Fig. 6 show that the predictions are largely correct but lack sharp edges. This is due to low input resolution of 320×240320240320\\times 240, lack of ground truth around class edges,and errors in depth interpolation. Another reason is that over the different datasets we tested on, the parameters of the SegNet remained the same. We plan to study the NYU dataset in more detail in the future. Additional results can be viewed in the supplementary material. ", "title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation" }, { "id": "1505.07293_all_13", "text": " We presented SegNet, a fully trainable deep architecture for joint feature learning and mapping an input image in a feed-forward manner to its pixel-wise semantic labels. A highlight of the proposed architecture is its ability to produce smooth segment labels when compared with local patch based classifiers. This is due to deep layers of feature encoding that employ a large spatial context for pixel-wise labelling. To the best of our knowledge this is the first deep learning method to learn to map low resolution encoder feature maps to semantic labels. Both qualitative and numerical accuracy of the SegNet for outdoor and indoor scenes is very competitive, even without use of any CRF post-processing. We have also demonstrated the use of pre-trained SegNet for obtaining good performance on other datasets with a small extra computational effort. The encoder-decoder architecture of the SegNet can also be trained unsupervised and to handle missing data in the input during test time. ", "title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation" } ]
How can applying a self-attention mechanism to a graph be useful for node classification?
By using a self-attention mechanism, the model computes hidden representations of each node by attending over its neighbors, which helps with node classification [7].
[ 7 ]
[ { "id": "1710.10903_all_0", "text": " Convolutional Neural Networks (CNNs) have been successfully applied to tackle problems such as image classification (He et al., 2016), semantic segmentation (Jégou et al., 2017) or machine translation (Gehring et al., 2016), where the underlying data representation has a grid-like structure. These architectures efficiently reuse their local filters, with learnable parameters, by applying them to all the input positions. ", "title": "Graph Attention Networks" }, { "id": "1710.10903_all_1", "text": " However, many interesting tasks involve data that can not be represented in a grid-like structure and that instead lies in an irregular domain. This is the case of 3D meshes, social networks, telecommunication networks, biological networks or brain connectomes. Such data can usually be represented in the form of graphs. ", "title": "Graph Attention Networks" }, { "id": "1710.10903_all_2", "text": " There have been several attempts in the literature to extend neural networks to deal with arbitrarily structured graphs. Early work used recursive neural networks to process data represented in graph domains as directed acyclic graphs (Frasconi et al., 1998; Sperduti & Starita, 1997). Graph Neural Networks (GNNs) were introduced in Gori et al. (2005) and Scarselli et al. (2009) as a generalization of recursive neural networks that can directly deal with a more general class of graphs, e.g. cyclic, directed and undirected graphs. GNNs consist of an iterative process, which propagates the node states until equilibrium; followed by a neural network, which produces an output for each node based on its state. This idea was adopted and improved by Li et al. (2016), which propose to use gated recurrent units (Cho et al., 2014) in the propagation step. ", "title": "Graph Attention Networks" }, { "id": "1710.10903_all_3", "text": " Nevertheless, there is an increasing interest in generalizing convolutions to the graph domain. Advances in this direction are often categorized as spectral approaches and non-spectral approaches. ", "title": "Graph Attention Networks" }, { "id": "1710.10903_all_4", "text": " On one hand, spectral approaches work with a spectral representation of the graphs and have been successfully applied in the context of node classification. In Bruna et al. (2014), the convolution operation is defined in the Fourier domain by computing the eigendecomposition of the graph Laplacian, resulting in potentially intense computations and non-spatially localized filters. These issues were addressed by subsequent works. Henaff et al. (2015) introduced a parameterization of the spectral filters with smooth coefficients in order to make them spatially localized. Later, Defferrard et al. (2016) proposed to approximate the filters by means of a Chebyshev expansion of the graph Laplacian, removing the need to compute the eigenvectors of the Laplacian and yielding spatially localized filters. Finally, Kipf & Welling (2017) simplified the previous method by restricting the filters to operate in a 1-step neighborhood around each node. However, in all of the aforementioned spectral approaches, the learned filters depend on the Laplacian eigenbasis, which depends on the graph structure. Thus, a model trained on a specific structure can not be directly applied to a graph with a different structure. 
", "title": "Graph Attention Networks" }, { "id": "1710.10903_all_5", "text": " On the other hand, we have non-spectral approaches (Duvenaud et al., 2015; Atwood & Towsley, 2016; Hamilton et al., 2017), which define convolutions directly on the graph, operating on groups of spatially close neighbors. One of the challenges of these approaches is to define an operator which works with different sized neighborhoods and maintains the weight sharing property of CNNs. In some cases, this requires learning a specific weight matrix for each node degree (Duvenaud et al., 2015), using the powers of a transition matrix to define the neighborhood while learning weights for each input channel and neighborhood degree (Atwood & Towsley, 2016), or extracting and normalizing neighborhoods containing a fixed number of nodes (Niepert et al., 2016). Monti et al. (2016) presented mixture model CNNs (MoNet), a spatial approach which provides a unified generalization of CNN architectures to graphs. More recently, Hamilton et al. (2017) introduced GraphSAGE, a method for computing node representations in an inductive manner. This technique operates by sampling a fixed-size neighborhood of each node, and then performing a specific aggregator over it (such as the mean over all the sampled neighbors’ feature vectors, or the result of feeding them through a recurrent neural network). This approach has yielded impressive performance across several large-scale inductive benchmarks. ", "title": "Graph Attention Networks" }, { "id": "1710.10903_all_6", "text": " Attention mechanisms have become almost a de facto standard in many sequence-based tasks (Bahdanau et al., 2015; Gehring et al., 2016). One of the benefits of attention mechanisms is that they allow for dealing with variable sized inputs, focusing on the most relevant parts of the input to make decisions. When an attention mechanism is used to compute a representation of a single sequence, it is commonly referred to as self-attention or intra-attention. Together with Recurrent Neural Networks (RNNs) or convolutions, self-attention has proven to be useful for tasks such as machine reading (Cheng et al., 2016) and learning sentence representations (Lin et al., 2017). However, Vaswani et al. (2017) showed that not only self-attention can improve a method based on RNNs or convolutions, but also that it is sufficient for constructing a powerful model obtaining state-of-the-art performance on the machine translation task. ", "title": "Graph Attention Networks" }, { "id": "1710.10903_all_7", "text": " Inspired by this recent work, we introduce an attention-based architecture to perform node classification of graph-structured data. The idea is to compute the hidden representations of each node in the graph, by attending over its neighbors, following a self-attention strategy. The attention architecture has several interesting properties: (1) the operation is efficient, since it is parallelizable across node-neighbor pairs; (2) it can be applied to graph nodes having different degrees by specifying arbitrary weights to the neighbors; and (3) the model is directly applicable to inductive learning problems, including tasks where the model has to generalize to completely unseen graphs. 
We validate the proposed approach on four challenging benchmarks: Cora, Citeseer and Pubmed citation networks as well as an inductive protein-protein interaction dataset, achieving or matching state-of-the-art results that highlight the potential of attention-based models when dealing with arbitrarily structured graphs. ", "title": "Graph Attention Networks" }, { "id": "1710.10903_all_8", "text": " It is worth noting that, as Kipf & Welling (2017) and Atwood & Towsley (2016), our work can also be reformulated as a particular instance of MoNet (Monti et al., 2016). Moreover, our approach of sharing a neural network computation across edges is reminiscent of the formulation of relational networks (Santoro et al., 2017) and VAIN (Hoshen, 2017), wherein relations between objects or agents are aggregated pair-wise, by employing a shared mechanism. Similarly, our proposed attention model can be connected to the works by Duan et al. (2017) and Denil et al. (2017), which use a neighborhood attention operation to compute attention coefficients between different objects in an environment. Other related approaches include locally linear embedding (LLE) (Roweis & Saul, 2000) and memory networks (Weston et al., 2014). LLE selects a fixed number of neighbors around each data point, and learns a weight coefficient for each neighbor to reconstruct each point as a weighted sum of its neighbors. A second optimization step extracts the point’s feature embedding. Memory networks also share some connections with our work, in particular, if we interpret the neighborhood of a node as the memory, which is used to compute the node features by attending over its values, and then is updated by storing the new features in the same position. ", "title": "Graph Attention Networks" }, { "id": "1710.10903_all_9", "text": " In this section, we will present the building block layer used to construct arbitrary graph attention networks (through stacking this layer), and directly outline its theoretical and practical benefits and limitations compared to prior work in the domain of neural graph processing. ", "title": "Graph Attention Networks" }, { "id": "1710.10903_all_10", "text": " We will start by describing a single graph attentional layer, as the sole layer utilized throughout all of the GAT architectures used in our experiments. The particular attentional setup utilized by us closely follows the work of Bahdanau et al. (2015)—but the framework is agnostic to the particular choice of attention mechanism. ", "title": "Graph Attention Networks" }, { "id": "1710.10903_all_11", "text": " The input to our layer is a set of node features, 𝐡={h→1,h→2,…,h→N},h→i∈ℝFformulae-sequence𝐡subscript→ℎ1subscript→ℎ2…subscript→ℎ𝑁subscript→ℎ𝑖superscriptℝ𝐹{\\bf h}=\\{\\vec{h}_{1},\\vec{h}_{2},\\dots,\\vec{h}_{N}\\},\\vec{h}_{i}\\in\\mathbb{R}^{F}, where N𝑁N is the number of nodes, and F𝐹F is the number of features in each node. The layer produces a new set of node features (of potentially different cardinality F′superscript𝐹′F^{\\prime}), 𝐡′={h→1′,h→2′,…,h→N′},h→i′∈ℝF′formulae-sequencesuperscript𝐡′superscriptsubscript→ℎ1′superscriptsubscript→ℎ2′…superscriptsubscript→ℎ𝑁′superscriptsubscript→ℎ𝑖′superscriptℝsuperscript𝐹′{\\bf h}^{\\prime}=\\{\\vec{h}_{1}^{\\prime},\\vec{h}_{2}^{\\prime},\\dots,\\vec{h}_{N}^{\\prime}\\},\\vec{h}_{i}^{\\prime}\\in\\mathbb{R}^{F^{\\prime}}, as its output. 
", "title": "Graph Attention Networks" }, { "id": "1710.10903_all_12", "text": " In order to obtain sufficient expressive power to transform the input features into higher-level features, at least one learnable linear transformation is required. To that end, as an initial step, a shared linear transformation, parametrized by a weight matrix, 𝐖∈ℝF′×F𝐖superscriptℝsuperscript𝐹′𝐹{\\bf W}\\in\\mathbb{R}^{F^{\\prime}\\times F}, is applied to every node. We then perform self-attention on the nodes—a shared attentional mechanism a:ℝF′×ℝF′→ℝ:𝑎→superscriptℝsuperscript𝐹′superscriptℝsuperscript𝐹′ℝa:\\mathbb{R}^{F^{\\prime}}\\times\\mathbb{R}^{F^{\\prime}}\\rightarrow\\mathbb{R} computes attention coefficients ei​j=a​(𝐖​h→i,𝐖​h→j)subscript𝑒𝑖𝑗𝑎𝐖subscript→ℎ𝑖𝐖subscript→ℎ𝑗e_{ij}=a({\\bf W}\\vec{h}_{i},{\\bf W}\\vec{h}_{j})\\vspace{0.15cm} (1) that indicate the importance of node j𝑗j’s features to node i𝑖i. In its most general formulation, the model allows every node to attend on every other node, dropping all structural information. We inject the graph structure into the mechanism by performing masked attention—we only compute ei​jsubscript𝑒𝑖𝑗e_{ij} for nodes j∈𝒩i𝑗subscript𝒩𝑖j\\in\\mathcal{N}_{i}, where 𝒩isubscript𝒩𝑖\\mathcal{N}_{i} is some neighborhood of node i𝑖i in the graph. In all our experiments, these will be exactly the first-order neighbors of i𝑖i (including i𝑖i). To make coefficients easily comparable across different nodes, we normalize them across all choices of j𝑗j using the softmax function: αi​j=softmaxj​(ei​j)=exp⁡(ei​j)∑k∈𝒩iexp⁡(ei​k).subscript𝛼𝑖𝑗subscriptsoftmax𝑗subscript𝑒𝑖𝑗subscript𝑒𝑖𝑗subscript𝑘subscript𝒩𝑖subscript𝑒𝑖𝑘\\vspace{0.1cm}\\alpha_{ij}=\\mathrm{softmax}_{j}(e_{ij})=\\frac{\\exp(e_{ij})}{\\sum_{k\\in\\mathcal{N}_{i}}\\exp(e_{ik})}. (2) In our experiments, the attention mechanism a𝑎a is a single-layer feedforward neural network, parametrized by a weight vector 𝐚→∈ℝ2​F′→𝐚superscriptℝ2superscript𝐹′\\vec{\\bf a}\\in\\mathbb{R}^{2F^{\\prime}}, and applying the LeakyReLU nonlinearity (with negative input slope α=0.2𝛼0.2\\alpha=0.2). Fully expanded out, the coefficients computed by the attention mechanism (illustrated by Figure 1 (left)) may then be expressed as: αi​j=exp⁡(LeakyReLU​(𝐚→T​(𝐖​h→i∥𝐖​h→j)))∑k∈𝒩iexp⁡(LeakyReLU​(𝐚→T​(𝐖​h→i∥𝐖​h→k)))subscript𝛼𝑖𝑗LeakyReLUsuperscript→𝐚𝑇delimited-()conditional𝐖subscript→ℎ𝑖𝐖subscript→ℎ𝑗subscript𝑘subscript𝒩𝑖LeakyReLUsuperscript→𝐚𝑇delimited-()conditional𝐖subscript→ℎ𝑖𝐖subscript→ℎ𝑘\\alpha_{ij}=\\frac{\\exp\\left(\\text{LeakyReLU}\\left(\\vec{\\bf a}^{T}({\\bf W}\\vec{h}_{i}\\|{\\bf W}\\vec{h}_{j})\\right)\\right)}{\\sum_{k\\in\\mathcal{N}_{i}}\\exp\\left(\\text{LeakyReLU}\\left(\\vec{\\bf a}^{T}({\\bf W}\\vec{h}_{i}\\|{\\bf W}\\vec{h}_{k})\\right)\\right)} (3) where ⋅Tsuperscript⋅𝑇\\cdot^{T} represents transposition and ∥∥\\| is the concatenation operation. ", "title": "Graph Attention Networks" }, { "id": "1710.10903_all_13", "text": " Once obtained, the normalized attention coefficients are used to compute a linear combination of the features corresponding to them, to serve as the final output features for every node (after potentially applying a nonlinearity, σ𝜎\\sigma): h→i′=σ​(∑j∈𝒩iαi​j​𝐖​h→j).subscriptsuperscript→ℎ′𝑖𝜎subscript𝑗subscript𝒩𝑖subscript𝛼𝑖𝑗𝐖subscript→ℎ𝑗\\vec{h}^{\\prime}_{i}=\\sigma\\left(\\sum_{j\\in\\mathcal{N}_{i}}\\alpha_{ij}{\\bf W}\\vec{h}_{j}\\right). 
(4) ", "title": "Graph Attention Networks" }, { "id": "1710.10903_all_14", "text": " To stabilize the learning process of self-attention, we have found extending our mechanism to employ multi-head attention to be beneficial, similarly to Vaswani et al. (2017). Specifically, K𝐾K independent attention mechanisms execute the transformation of Equation 4, and then their features are concatenated, resulting in the following output feature representation: h→i′=∥k=1K⁡σ​(∑j∈𝒩iαi​jk​𝐖k​h→j)subscriptsuperscript→ℎ′𝑖superscriptsubscriptparallel-to𝑘1𝐾𝜎subscript𝑗subscript𝒩𝑖superscriptsubscript𝛼𝑖𝑗𝑘superscript𝐖𝑘subscript→ℎ𝑗\\vec{h}^{\\prime}_{i}=\\operatorname*{\\scalebox{1.0}(1.5){$\\parallel$}}_{k=1}^{K}\\sigma\\left(\\sum_{j\\in\\mathcal{N}_{i}}\\alpha_{ij}^{k}{\\bf W}^{k}\\vec{h}_{j}\\right) (5) where ∥parallel-to\\parallel represents concatenation, αi​jksuperscriptsubscript𝛼𝑖𝑗𝑘\\alpha_{ij}^{k} are normalized attention coefficients computed by the k𝑘k-th attention mechanism (aksuperscript𝑎𝑘a^{k}), and 𝐖ksuperscript𝐖𝑘{\\bf W}^{k} is the corresponding input linear transformation’s weight matrix. Note that, in this setting, the final returned output, 𝐡′superscript𝐡′{\\bf h}^{\\prime}, will consist of K​F′𝐾superscript𝐹′KF^{\\prime} features (rather than F′superscript𝐹′F^{\\prime}) for each node. ", "title": "Graph Attention Networks" }, { "id": "1710.10903_all_15", "text": " Specially, if we perform multi-head attention on the final (prediction) layer of the network, concatenation is no longer sensible—instead, we employ averaging, and delay applying the final nonlinearity (usually a softmax or logistic sigmoid for classification problems) until then: h→i′=σ​(1K​∑k=1K∑j∈𝒩iαi​jk​𝐖k​h→j)subscriptsuperscript→ℎ′𝑖𝜎1𝐾superscriptsubscript𝑘1𝐾subscript𝑗subscript𝒩𝑖superscriptsubscript𝛼𝑖𝑗𝑘superscript𝐖𝑘subscript→ℎ𝑗\\vec{h}^{\\prime}_{i}=\\sigma\\left(\\frac{1}{K}\\sum_{k=1}^{K}\\sum_{j\\in\\mathcal{N}_{i}}\\alpha_{ij}^{k}{\\bf W}^{k}\\vec{h}_{j}\\right) (6) The aggregation process of a multi-head graph attentional layer is illustrated by Figure 1 (right). ", "title": "Graph Attention Networks" }, { "id": "1710.10903_all_16", "text": " The graph attentional layer described in subsection 2.1 directly addresses several issues that were present in prior approaches to modelling graph-structured data with neural networks: • Computationally, it is highly efficient: the operation of the self-attentional layer can be parallelized across all edges, and the computation of output features can be parallelized across all nodes. No eigendecompositions or similar costly matrix operations are required. The time complexity of a single GAT attention head computing F′superscript𝐹′F^{\\prime} features may be expressed as O​(|V|​F​F′+|E|​F′)𝑂𝑉𝐹superscript𝐹′𝐸superscript𝐹′O(|V|FF^{\\prime}+|E|F^{\\prime}), where F𝐹F is the number of input features, and |V|𝑉|V| and |E|𝐸|E| are the numbers of nodes and edges in the graph, respectively. This complexity is on par with the baseline methods such as Graph Convolutional Networks (GCNs) (Kipf & Welling, 2017). Applying multi-head attention multiplies the storage and parameter requirements by a factor of K𝐾K, while the individual heads’ computations are fully independent and can be parallelized. • As opposed to GCNs, our model allows for (implicitly) assigning different importances to nodes of a same neighborhood, enabling a leap in model capacity. Furthermore, analyzing the learned attentional weights may lead to benefits in interpretability, as was the case in the machine translation domain (e.g. 
the qualitative analysis of Bahdanau et al. (2015)). • The attention mechanism is applied in a shared manner to all edges in the graph, and therefore it does not depend on upfront access to the global graph structure or (features of) all of its nodes (a limitation of many prior techniques). This has several desirable implications: – The graph is not required to be undirected (we may simply leave out computing αi​jsubscript𝛼𝑖𝑗\\alpha_{ij} if edge j→i→𝑗𝑖j\\rightarrow i is not present). – It makes our technique directly applicable to inductive learning—including tasks where the model is evaluated on graphs that are completely unseen during training. • The recently published inductive method of Hamilton et al. (2017) samples a fixed-size neighborhood of each node, in order to keep its computational footprint consistent; this does not allow it access to the entirety of the neighborhood while performing inference. Moreover, this technique achieved some of its strongest results when an LSTM (Hochreiter & Schmidhuber, 1997)-based neighborhood aggregator is used. This assumes the existence of a consistent sequential node ordering across neighborhoods, and the authors have rectified it by consistently feeding randomly-ordered sequences to the LSTM. Our technique does not suffer from either of these issues—it works with the entirety of the neighborhood (at the expense of a variable computational footprint, which is still on-par with methods like the GCN), and does not assume any ordering within it. • As mentioned in Section 1, GAT can be reformulated as a particular instance of MoNet (Monti et al., 2016). More specifically, setting the pseudo-coordinate function to be u​(x,y)=f​(x)∥f​(y)𝑢𝑥𝑦conditional𝑓𝑥𝑓𝑦u(x,y)=f(x)\\|f(y), where f​(x)𝑓𝑥f(x) represent (potentially MLP-transformed) features of node x𝑥x and ∥∥\\| is concatenation; and the weight function to be wj​(u)=softmax​(MLP​(u))subscript𝑤𝑗𝑢softmaxMLP𝑢w_{j}(u)=\\mathrm{softmax}(\\mathrm{MLP}(u)) (with the softmax performed over the entire neighborhood of a node) would make MoNet’s patch operator similar to ours. Nevertheless, one should note that, in comparison to previously considered MoNet instances, our model uses node features for similarity computations, rather than the node’s structural properties (which would assume knowing the graph structure upfront). We were able to produce a version of the GAT layer that leverages sparse matrix operations, reducing the storage complexity to linear in the number of nodes and edges and enabling the execution of GAT models on larger graph datasets. However, the tensor manipulation framework we used only supports sparse matrix multiplication for rank-2 tensors, which limits the batching capabilities of the layer as it is currently implemented (especially for datasets with multiple graphs). Appropriately addressing this constraint is an important direction for future work. Depending on the regularity of the graph structure in place, GPUs may not be able to offer major performance benefits compared to CPUs in these sparse scenarios. It should also be noted that the size of the “receptive field” of our model is upper-bounded by the depth of the network (similarly as for GCN and similar models). Techniques such as skip connections (He et al., 2016) could be readily applied for appropriately extending the depth, however. 
Lastly, parallelization across all the graph edges, especially in a distributed manner, may involve a lot of redundant computation, as the neighborhoods will often highly overlap in graphs of interest. ", "title": "Graph Attention Networks" }, { "id": "1710.10903_all_17", "text": " We have performed comparative evaluation of GAT models against a wide variety of strong baselines and previous approaches, on four established graph-based benchmark tasks (transductive as well as inductive), achieving or matching state-of-the-art performance across all of them. This section summarizes our experimental setup, results, and a brief qualitative analysis of a GAT model’s extracted feature representations. ", "title": "Graph Attention Networks" }, { "id": "1710.10903_all_18", "text": " We utilize three standard citation network benchmark datasets—Cora, Citeseer and Pubmed (Sen et al., 2008)—and closely follow the transductive experimental setup of Yang et al. (2016). In all of these datasets, nodes correspond to documents and edges to (undirected) citations. Node features correspond to elements of a bag-of-words representation of a document. Each node has a class label. We allow for only 20 nodes per class to be used for training—however, honoring the transductive setup, the training algorithm has access to all of the nodes’ feature vectors. The predictive power of the trained models is evaluated on 1000 test nodes, and we use 500 additional nodes for validation purposes (the same ones as used by Kipf & Welling (2017)). The Cora dataset contains 2708 nodes, 5429 edges, 7 classes and 1433 features per node. The Citeseer dataset contains 3327 nodes, 4732 edges, 6 classes and 3703 features per node. The Pubmed dataset contains 19717 nodes, 44338 edges, 3 classes and 500 features per node. ", "title": "Graph Attention Networks" }, { "id": "1710.10903_all_19", "text": " We make use of a protein-protein interaction (PPI) dataset that consists of graphs corresponding to different human tissues (Zitnik & Leskovec, 2017). The dataset contains 20 graphs for training, 2 for validation and 2 for testing. Critically, testing graphs remain completely unobserved during training. To construct the graphs, we used the preprocessed data provided by Hamilton et al. (2017). The average number of nodes per graph is 2372. Each node has 50 features that are composed of positional gene sets, motif gene sets and immunological signatures. There are 121 labels for each node set from gene ontology, collected from the Molecular Signatures Database (Subramanian et al., 2005), and a node can possess several labels simultaneously. ", "title": "Graph Attention Networks" }, { "id": "1710.10903_all_20", "text": " An overview of the interesting characteristics of the datasets is given in Table 1. ", "title": "Graph Attention Networks" }, { "id": "1710.10903_all_21", "text": " For transductive learning tasks, we compare against the same strong baselines and state-of-the-art approaches as specified in Kipf & Welling (2017). This includes label propagation (LP) (Zhu et al., 2003), semi-supervised embedding (SemiEmb) (Weston et al., 2012), manifold regularization (ManiReg) (Belkin et al., 2006), skip-gram based graph embeddings (DeepWalk) (Perozzi et al., 2014), the iterative classification algorithm (ICA) (Lu & Getoor, 2003) and Planetoid (Yang et al., 2016). 
We also directly compare our model against GCNs (Kipf & Welling, 2017), as well as graph convolutional models utilising higher-order Chebyshev filters (Defferrard et al., 2016), and the MoNet model presented in Monti et al. (2016). ", "title": "Graph Attention Networks" }, { "id": "1710.10903_all_22", "text": " For the inductive learning task, we compare against the four different supervised GraphSAGE inductive methods presented in Hamilton et al. (2017). These provide a variety of approaches to aggregating features within a sampled neighborhood: GraphSAGE-GCN (which extends a graph convolution-style operation to the inductive setting), GraphSAGE-mean (taking the elementwise mean value of feature vectors), GraphSAGE-LSTM (aggregating by feeding the neighborhood features into an LSTM) and GraphSAGE-pool (taking the elementwise maximization operation of feature vectors transformed by a shared nonlinear multilayer perceptron). The other transductive approaches are either completely inappropriate in an inductive setting or assume that nodes are incrementally added to a single graph, making them unusable for the setup where test graphs are completely unseen during training (such as the PPI dataset). ", "title": "Graph Attention Networks" }, { "id": "1710.10903_all_23", "text": " Additionally, for both tasks we provide the performance of a per-node shared multilayer perceptron (MLP) classifier (that does not incorporate graph structure at all). ", "title": "Graph Attention Networks" }, { "id": "1710.10903_all_24", "text": " For the transductive learning tasks, we apply a two-layer GAT model. Its architectural hyperparameters have been optimized on the Cora dataset and are then reused for Citeseer. The first layer consists of K=8𝐾8K=8 attention heads computing F′=8superscript𝐹′8F^{\\prime}=8 features each (for a total of 64 features), followed by an exponential linear unit (ELU) (Clevert et al., 2016) nonlinearity. The second layer is used for classification: a single attention head that computes C𝐶C features (where C𝐶C is the number of classes), followed by a softmax activation. For coping with the small training set sizes, regularization is liberally applied within the model. During training, we apply L2subscript𝐿2L_{2} regularization with λ=0.0005𝜆0.0005\\lambda=0.0005. Furthermore, dropout (Srivastava et al., 2014) with p=0.6𝑝0.6p=0.6 is applied to both layers’ inputs, as well as to the normalized attention coefficients (critically, this means that at each training iteration, each node is exposed to a stochastically sampled neighborhood). Similarly as observed by Monti et al. (2016), we found that Pubmed’s training set size (60 examples) required slight changes to the GAT architecture: we have applied K=8𝐾8K=8 output attention heads (instead of one), and strengthened the L2subscript𝐿2L_{2} regularization to λ=0.001𝜆0.001\\lambda=0.001. Otherwise, the architecture matches the one used for Cora and Citeseer. ", "title": "Graph Attention Networks" }, { "id": "1710.10903_all_25", "text": " For the inductive learning task, we apply a three-layer GAT model. Both of the first two layers consist of K=4𝐾4K=4 attention heads computing F′=256superscript𝐹′256F^{\\prime}=256 features (for a total of 1024 features), followed by an ELU nonlinearity. The final layer is used for (multi-label) classification: K=6𝐾6K=6 attention heads computing 121 features each, that are averaged and followed by a logistic sigmoid activation. 
The training sets for this task are sufficiently large and we found no need to apply L2subscript𝐿2L_{2} regularization or dropout—we have, however, successfully employed skip connections (He et al., 2016) across the intermediate attentional layer. We utilize a batch size of 2 graphs during training. To strictly evaluate the benefits of applying an attention mechanism in this setting (i.e. comparing with a near GCN-equivalent model), we also provide the results when a constant attention mechanism, a​(x,y)=1𝑎𝑥𝑦1a(x,y)=1, is used, with the same architecture—this will assign the same weight to every neighbor. ", "title": "Graph Attention Networks" }, { "id": "1710.10903_all_26", "text": " Both models are initialized using Glorot initialization (Glorot & Bengio, 2010) and trained to minimize cross-entropy on the training nodes using the Adam SGD optimizer (Kingma & Ba, 2014) with an initial learning rate of 0.01 for Pubmed, and 0.005 for all other datasets. In both cases we use an early stopping strategy on both the cross-entropy loss and accuracy (transductive) or micro-F1 (inductive) score on the validation nodes, with a patience of 100 epochs111Our implementation of the GAT layer may be found at: https://github.com/PetarV-/GAT.. ", "title": "Graph Attention Networks" }, { "id": "1710.10903_all_27", "text": " The results of our comparative evaluation experiments are summarized in Tables 2 and 3. ", "title": "Graph Attention Networks" }, { "id": "1710.10903_all_28", "text": " For the transductive tasks, we report the mean classification accuracy (with standard deviation) on the test nodes of our method after 100 runs, and reuse the metrics already reported in Kipf & Welling (2017) and Monti et al. (2016) for state-of-the-art techniques. Specifically, for the Chebyshev filter-based approach (Defferrard et al., 2016), we provide the maximum reported performance for filters of orders K=2𝐾2K=2 and K=3𝐾3K=3. In order to fairly assess the benefits of the attention mechanism, we further evaluate a GCN model that computes 64 hidden features, attempting both the ReLU and ELU activation, and reporting (as GCN-64∗) the better result after 100 runs (which was the ReLU in all three cases). ", "title": "Graph Attention Networks" }, { "id": "1710.10903_all_29", "text": " For the inductive task, we report the micro-averaged F1 score on the nodes of the two unseen test graphs, averaged after 10 runs, and reuse the metrics already reported in Hamilton et al. (2017) for the other techniques. Specifically, as our setup is supervised, we compare against the supervised GraphSAGE approaches. To evaluate the benefits of aggregating across the entire neighborhood, we further provide (as GraphSAGE∗) the best result we were able to achieve with GraphSAGE by just modifying its architecture (this was with a three-layer GraphSAGE-LSTM with (512, 512, 726) features computed in each layer and 128 features used for aggregating neighborhoods). Finally, we report the 10-run result of our constant attention GAT model (as Const-GAT), to fairly evaluate the benefits of the attention mechanism against a GCN-like aggregation scheme (with the same architecture). ", "title": "Graph Attention Networks" }, { "id": "1710.10903_all_30", "text": " Our results successfully demonstrate state-of-the-art performance being achieved or matched across all four datasets—in concordance with our expectations, as per the discussion in Section 2.2. 
More specifically, we are able to improve upon GCNs by a margin of 1.5% and 1.6% on Cora and Citeseer, respectively, suggesting that assigning different weights to nodes of a same neighborhood may be beneficial. It is worth noting the improvements achieved on the PPI dataset: Our GAT model improves by 20.5% w.r.t. the best GraphSAGE result we were able to obtain, demonstrating that our model has the potential to be applied in inductive settings, and that larger predictive power can be leveraged by observing the entire neighborhood. Furthermore, it improves by 3.9% w.r.t. Const-GAT (the identical architecture with constant attention mechanism), once again directly demonstrating the significance of being able to assign different weights to different neighbors. ", "title": "Graph Attention Networks" }, { "id": "1710.10903_all_31", "text": " The effectiveness of the learned feature representations may also be investigated qualitatively—and for this purpose we provide a visualization of the t-SNE (Maaten & Hinton, 2008)-transformed feature representations extracted by the first layer of a GAT model pre-trained on the Cora dataset (Figure 2). The representation exhibits discernible clustering in the projected 2D space. Note that these clusters correspond to the seven labels of the dataset, verifying the model’s discriminative power across the seven topic classes of Cora. Additionally, we visualize the relative strengths of the normalized attention coefficients (averaged across all eight attention heads). Properly interpreting these coefficients (as performed by e.g. Bahdanau et al. (2015)) will require further domain knowledge about the dataset under study, and is left for future work. ", "title": "Graph Attention Networks" }, { "id": "1710.10903_all_32", "text": " We have presented graph attention networks (GATs), novel convolution-style neural networks that operate on graph-structured data, leveraging masked self-attentional layers. The graph attentional layer utilized throughout these networks is computationally efficient (does not require costly matrix operations, and is parallelizable across all nodes in the graph), allows for (implicitly) assigning different importances to different nodes within a neighborhood while dealing with different sized neighborhoods, and does not depend on knowing the entire graph structure upfront—thus addressing many of the theoretical issues with previous spectral-based approaches. Our models leveraging attention have successfully achieved or matched state-of-the-art performance across four well-established node classification benchmarks, both transductive and inductive (especially, with completely unseen graphs used for testing). ", "title": "Graph Attention Networks" }, { "id": "1710.10903_all_33", "text": " There are several potential improvements and extensions to graph attention networks that could be addressed as future work, such as overcoming the practical problems described in subsection 2.2 to be able to handle larger batch sizes. A particularly interesting research direction would be taking advantage of the attention mechanism to perform a thorough analysis on the model interpretability. Moreover, extending the method to perform graph classification instead of node classification would also be relevant from the application perspective. Finally, extending the model to incorporate edge features (possibly indicating relationship among nodes) would allow us to tackle a larger variety of problems. 
", "title": "Graph Attention Networks" }, { "id": "1710.10903_all_34", "text": " The authors would like to thank the developers of TensorFlow (Abadi et al., 2015). PV and PL have received funding from the European Union’s Horizon 2020 research and innovation programme PROPAG-AGEING under grant agreement No 634821. We further acknowledge the support of the following agencies for research funding and computing support: CIFAR, Canada Research Chairs, Compute Canada and Calcul Québec, as well as NVIDIA for the generous GPU support. Special thanks to: Benjamin Day and Fabian Jansen for kindly pointing out issues in a previous iteration of the paper; Michał Drożdżal for useful discussions, feedback and support; and Gaétan Marceau for reviewing the paper prior to submission. ", "title": "Graph Attention Networks" } ]
Is hyperparameter optimization performed independently for the two corpora?
Yes. Hyperparameter optimization is performed independently for each dataset: all model hyperparameters were tuned on the respective validation sets of the two corpora [27].
[ 27 ]
[ { "id": "1506.03340_all_0", "text": " Progress on the path from shallow bag-of-words information retrieval algorithms to machines capable of reading and understanding documents has been slow. Traditional approaches to machine reading and comprehension have been based on either hand engineered grammars , or information extraction methods of detecting predicate argument triples that can later be queried as a relational database . Supervised machine learning approaches have largely been absent from this space due to both the lack of large scale training datasets, and the difficulty in structuring statistical models flexible enough to learn to exploit document structure. ", "title": "Teaching Machines to Read and Comprehend" }, { "id": "1506.03340_all_1", "text": " While obtaining supervised natural language reading comprehension data has proved difficult, some researchers have explored generating synthetic narratives and queries (3, 4). Such approaches allow the generation of almost unlimited amounts of supervised data and enable researchers to isolate the performance of their algorithms on individual simulated phenomena. Work on such data has shown that neural network based models hold promise for modelling reading comprehension, something that we will build upon here. Historically, however, many similar approaches in Computational Linguistics have failed to manage the transition from synthetic data to real environments, as such closed worlds inevitably fail to capture the complexity, richness, and noise of natural language . ", "title": "Teaching Machines to Read and Comprehend" }, { "id": "1506.03340_all_2", "text": " In this work we seek to directly address the lack of real natural language training data by introducing a novel approach to building a supervised reading comprehension data set. We observe that summary and paraphrase sentences, with their associated documents, can be readily converted to context–query–answer triples using simple entity detection and anonymisation algorithms. Using this approach we have collected two new corpora of roughly a million news stories with associated queries from the CNN and Daily Mail websites. ", "title": "Teaching Machines to Read and Comprehend" }, { "id": "1506.03340_all_3", "text": " We demonstrate the efficacy of our new corpora by building novel deep learning models for reading comprehension. These models draw on recent developments for incorporating attention mechanisms into recurrent neural network architectures (6, 7, 8, 4). This allows a model to focus on the aspects of a document that it believes will help it answer a question, and also allows us to visualises its inference process. We compare these neural models to a range of baselines and heuristic benchmarks based upon a traditional frame semantic analysis provided by a state-of-the-art natural language processing (NLP) pipeline. Our results indicate that the neural models achieve a higher accuracy, and do so without any specific encoding of the document or query structure. ", "title": "Teaching Machines to Read and Comprehend" }, { "id": "1506.03340_all_4", "text": " The reading comprehension task naturally lends itself to a formulation as a supervised learning problem. Specifically we seek to estimate the conditional probability p​(a|c,q)𝑝conditional𝑎𝑐𝑞p(a|c,q), where c𝑐c is a context document, q𝑞q a query relating to that document, and a𝑎a the answer to that query. 
For a focused evaluation we wish to be able to exclude additional information, such as world knowledge gained from co-occurrence statistics, in order to test a model’s core capability to detect and understand the linguistic relationships between entities in the context document. ", "title": "Teaching Machines to Read and Comprehend" }, { "id": "1506.03340_all_5", "text": " Such an approach requires a large training corpus of document–query–answer triples and until now such corpora have been limited to hundreds of examples and thus mostly of use only for testing . This limitation has meant that most work in this area has taken the form of unsupervised approaches which use templates or syntactic/semantic analysers to extract relation tuples from the document to form a knowledge graph that can be queried. ", "title": "Teaching Machines to Read and Comprehend" }, { "id": "1506.03340_all_6", "text": " Here we propose a methodology for creating real-world, large scale supervised training data for learning reading comprehension models. Inspired by work in summarisation (10, 11), we create two machine reading corpora by exploiting online newspaper articles and their matching summaries. We have collected 93k articles from the CNN111www.cnn.com and 220k articles from the Daily Mail222www.dailymail.co.uk websites. Both news providers supplement their articles with a number of bullet points, summarising aspects of the information contained in the article. Of key importance is that these summary points are abstractive and do not simply copy sentences from the documents. We construct a corpus of document–query–answer triples by turning these bullet points into Cloze style questions by replacing one entity at a time with a placeholder. This results in a combined corpus of roughly 1M data points (Table 1). Code to replicate our datasets—and to apply this method to other sources—is available online333http://www.github.com/deepmind/rc-data/. ", "title": "Teaching Machines to Read and Comprehend" }, { "id": "1506.03340_all_7", "text": " Note that the focus of this paper is to provide a corpus for evaluating a model’s ability to read and comprehend a single document, not world knowledge or co-occurrence. To understand that distinction consider for instance the following Cloze form queries (created from headlines in the Daily Mail validation set): a) The hi-tech bra that helps you beat breast X; b) Could Saccharin help beat X ?; c) Can fish oils help fight prostate X ? An ngram language model trained on the Daily Mail would easily correctly predict that (X = cancer), regardless of the contents of the context document, simply because this is a very frequently cured entity in the Daily Mail corpus. ", "title": "Teaching Machines to Read and Comprehend" }, { "id": "1506.03340_all_8", "text": " To prevent such degenerate solutions and create a focused task we anonymise and randomise our corpora with the following procedure, a) use a coreference system to establish coreferents in each data point; b) replace all entities with abstract entity markers according to coreference; c) randomly permute these entity markers whenever a data point is loaded. ", "title": "Teaching Machines to Read and Comprehend" }, { "id": "1506.03340_all_9", "text": " Compare the original and anonymised version of the example in Table 3. Clearly a human reader can answer both queries correctly. 
However in the anonymised setup the context document is required for answering the query, whereas the original version could also be answered by someone with the requisite background knowledge. Therefore, following this procedure, the only remaining strategy for answering questions is to do so by exploiting the context presented with each question. Thus performance on our two corpora truly measures reading comprehension capability. Naturally a production system would benefit from using all available information sources, such as clues through language and co-occurrence statistics. ", "title": "Teaching Machines to Read and Comprehend" }, { "id": "1506.03340_all_10", "text": " Table 2 gives an indication of the difficulty of the task, showing how frequent the correct answer is contained in the top N𝑁N entity markers in a given document. Note that our models don’t distinguish between entity markers and regular words. This makes the task harder and the models more general. ", "title": "Teaching Machines to Read and Comprehend" }, { "id": "1506.03340_all_11", "text": " So far we have motivated the need for better datasets and tasks to evaluate the capabilities of machine reading models. We proceed by describing a number of baselines, benchmarks and new models to evaluate against this paradigm. We define two simple baselines, the majority baseline (maximum frequency) picks the entity most frequently observed in the context document, whereas the exclusive majority (exclusive frequency) chooses the entity most frequently observed in the context but not observed in the query. The idea behind this exclusion is that the placeholder is unlikely to be mentioned twice in a single Cloze form query. ", "title": "Teaching Machines to Read and Comprehend" }, { "id": "1506.03340_all_12", "text": " Traditionally, a pipeline of NLP models has been used for attempting question answering, that is models that make heavy use of linguistic annotation, structured world knowledge and semantic parsing and similar NLP pipeline outputs. Building on these approaches, we define a number of NLP-centric models for our machine reading task. ", "title": "Teaching Machines to Read and Comprehend" }, { "id": "1506.03340_all_13", "text": " Frame-semantic parsing attempts to identify predicates and their arguments, allowing models access to information about “who did what to whom”. Naturally this kind of annotation lends itself to being exploited for question answering. We develop a benchmark that makes use of frame-semantic annotations which we obtained by parsing our model with a state-of-the-art frame-semantic parser (13, 14). As the parser makes extensive use of linguistic information we run these benchmarks on the unanonymised version of our corpora. There is no significant advantage in this as the frame-semantic approach used here does not possess the capability to generalise through a language model beyond exploiting one during the parsing phase. Thus, the key objective of evaluating machine comprehension abilities is maintained. Extracting entity-predicate triples—denoted as (e1,V,e2)subscript𝑒1𝑉subscript𝑒2(e_{1},V,e_{2})—from both the query q𝑞q and context document d𝑑d, we attempt to resolve queries using a number of rules with an increasing recall/precision trade-off as follows (Table 4). ", "title": "Teaching Machines to Read and Comprehend" }, { "id": "1506.03340_all_14", "text": " For reasons of clarity, we pretend that all PropBank triples are of the form (e1,V,e2)subscript𝑒1𝑉subscript𝑒2(e_{1},V,e_{2}). 
In practice, we take the argument numberings of the parser into account and only compare like with like, except in cases such as the permuted frame rule, where ordering is relaxed. In the case of multiple possible answers from a single rule, we randomly choose one. ", "title": "Teaching Machines to Read and Comprehend" }, { "id": "1506.03340_all_15", "text": " We consider another baseline that relies on word distance measurements. Here, we align the placeholder of the Cloze form question with each possible entity in the context document and calculate a distance measure between the question and the context around the aligned entity. This score is calculated by summing the distances of every word in q𝑞q to their nearest aligned word in d𝑑d, where alignment is defined by matching words either directly or as aligned by the coreference system. We tune the maximum penalty per word (m=8𝑚8m=8) on the validation data. ", "title": "Teaching Machines to Read and Comprehend" }, { "id": "1506.03340_all_16", "text": " Neural networks have successfully been applied to a range of tasks in NLP. This includes classification tasks such as sentiment analysis or POS tagging , as well as generative problems such as language modelling or machine translation . We propose three neural models for estimating the probability of word type a𝑎a from document d𝑑d answering query q𝑞q: p​(a|d,q)𝑝conditional𝑎𝑑𝑞\\displaystyle p(a|d,q) ∝exp⁡(W​(a)​g​(d,q)),s.t. ​a∈V,formulae-sequenceproportional-toabsent𝑊𝑎𝑔𝑑𝑞s.t. 𝑎𝑉\\displaystyle\\propto\\exp\\left(W(a)g(d,q)\\right),\\quad\\text{s.t. }a\\in V, where V𝑉V is the vocabulary444The vocabulary includes all the word types in the documents, questions, the entity maskers, and the question unknown entity marker., and W​(a)𝑊𝑎W(a) indexes row a𝑎a of weight matrix W𝑊W and through a slight abuse of notation word types double as indexes. Note that we do not privilege entities or variables, the model must learn to differentiate these in the input sequence. The function g​(d,q)𝑔𝑑𝑞g(d,q) returns a vector embedding of a document and query pair. ", "title": "Teaching Machines to Read and Comprehend" }, { "id": "1506.03340_all_17", "text": " Long short-term memory (LSTM, ) networks have recently seen considerable success in tasks such as machine translation and language modelling . When used for translation, Deep LSTMs have shown a remarkable ability to embed long sequences into a vector representation which contains enough information to generate a full translation in another language. Our first neural model for reading comprehension tests the ability of Deep LSTM encoders to handle significantly longer sequences. We feed our documents one word at a time into a Deep LSTM encoder, after a delimiter we then also feed the query into the encoder. Alternatively we also experiment with processing the query then the document. The result is that this model processes each document query pair as a single long sequence. Given the embedded document and query the network predicts which token in the document answers the query. 
", "title": "Teaching Machines to Read and Comprehend" }, { "id": "1506.03340_all_18", "text": " We employ a Deep LSTM cell with skip connections from each input x​(t)𝑥𝑡x(t) to every hidden layer, and from every hidden layer to the output y​(t)𝑦𝑡y(t): x′​(t,k)superscript𝑥′𝑡𝑘\\displaystyle x^{\\prime}(t,k) =x(t)||y′(t,k−1),y(t)=y′(t,1)||…||y′(t,K)\\displaystyle=x(t)||y^{\\prime}(t,k-1),\\quad\\quad y(t)=y^{\\prime}(t,1)||\\ldots||y^{\\prime}(t,K) i​(t,k)𝑖𝑡𝑘\\displaystyle i(t,k) =σ​(Wk​x​i​x′​(t,k)+Wk​h​i​h​(t−1,k)+Wk​c​i​c​(t−1,k)+bk​i)absent𝜎subscript𝑊𝑘𝑥𝑖superscript𝑥′𝑡𝑘subscript𝑊𝑘ℎ𝑖ℎ𝑡1𝑘subscript𝑊𝑘𝑐𝑖𝑐𝑡1𝑘subscript𝑏𝑘𝑖\\displaystyle=\\sigma\\left(W_{kxi}x^{\\prime}(t,k)+W_{khi}h(t-1,k)+W_{kci}c(t-1,k)+b_{ki}\\right) f​(t,k)𝑓𝑡𝑘\\displaystyle f(t,k) =σ​(Wk​x​f​x​(t)+Wk​h​f​h​(t−1,k)+Wk​c​f​c​(t−1,k)+bk​f)absent𝜎subscript𝑊𝑘𝑥𝑓𝑥𝑡subscript𝑊𝑘ℎ𝑓ℎ𝑡1𝑘subscript𝑊𝑘𝑐𝑓𝑐𝑡1𝑘subscript𝑏𝑘𝑓\\displaystyle=\\sigma\\left(W_{kxf}x(t)+W_{khf}h(t-1,k)+W_{kcf}c(t-1,k)+b_{kf}\\right) c​(t,k)𝑐𝑡𝑘\\displaystyle c(t,k) =f​(t,k)​c​(t−1,k)+i​(t,k)​tanh⁡(Wk​x​c​x′​(t,k)+Wk​h​c​h​(t−1,k)+bk​c)absent𝑓𝑡𝑘𝑐𝑡1𝑘𝑖𝑡𝑘subscript𝑊𝑘𝑥𝑐superscript𝑥′𝑡𝑘subscript𝑊𝑘ℎ𝑐ℎ𝑡1𝑘subscript𝑏𝑘𝑐\\displaystyle=f(t,k)c(t-1,k)+i(t,k)\\tanh\\left(W_{kxc}x^{\\prime}(t,k)+W_{khc}h(t-1,k)+b_{kc}\\right) o​(t,k)𝑜𝑡𝑘\\displaystyle o(t,k) =σ​(Wk​x​o​x′​(t,k)+Wk​h​o​h​(t−1,k)+Wk​c​o​c​(t,k)+bk​o)absent𝜎subscript𝑊𝑘𝑥𝑜superscript𝑥′𝑡𝑘subscript𝑊𝑘ℎ𝑜ℎ𝑡1𝑘subscript𝑊𝑘𝑐𝑜𝑐𝑡𝑘subscript𝑏𝑘𝑜\\displaystyle=\\sigma\\left(W_{kxo}x^{\\prime}(t,k)+W_{kho}h(t-1,k)+W_{kco}c(t,k)+b_{ko}\\right) h​(t,k)ℎ𝑡𝑘\\displaystyle h(t,k) =o​(t,k)​tanh⁡(c​(t,k))absent𝑜𝑡𝑘𝑐𝑡𝑘\\displaystyle=o(t,k)\\tanh\\left(c(t,k)\\right) y′​(t,k)superscript𝑦′𝑡𝑘\\displaystyle y^{\\prime}(t,k) =Wk​y​h​(t,k)+bk​yabsentsubscript𝑊𝑘𝑦ℎ𝑡𝑘subscript𝑏𝑘𝑦\\displaystyle=W_{ky}h(t,k)+b_{ky} where |||| indicates vector concatenation h​(t,k)ℎ𝑡𝑘h(t,k) is the hidden state for layer k𝑘k at time t𝑡t, and i𝑖i, f𝑓f, o𝑜o are the input, forget, and output gates respectively. Thus our Deep LSTM Reader is defined by gLSTM​(d,q)=y​(|d|+|q|)superscript𝑔LSTM𝑑𝑞𝑦𝑑𝑞g^{\\text{LSTM}}(d,q)=y(|d|+|q|) with input x​(t)𝑥𝑡x(t) the concatenation of d𝑑d and q𝑞q separated by the delimiter ||||||. ", "title": "Teaching Machines to Read and Comprehend" }, { "id": "1506.03340_all_19", "text": " The Deep LSTM Reader must propagate dependencies over long distances in order to connect queries to their answers. The fixed width hidden vector forms a bottleneck for this information flow that we propose to circumvent using an attention mechanism inspired by recent results in translation and image recognition (6, 7). This attention model first encodes the document and the query using separate bidirectional single layer LSTMs . ", "title": "Teaching Machines to Read and Comprehend" }, { "id": "1506.03340_all_20", "text": " We denote the outputs of the forward and backward LSTMs as y→​(t)→𝑦𝑡\\overrightarrow{y}(t) and y←​(t)←𝑦𝑡\\overleftarrow{y}(t) respectively. The encoding u𝑢u of a query of length |q|𝑞|q| is formed by the concatenation of the final forward and backward outputs, u=yq→(|q|)||yq←(1).u=\\overrightarrow{y_{q}}(|q|)\\,\\,||\\,\\,\\overleftarrow{y_{q}}(1). ", "title": "Teaching Machines to Read and Comprehend" }, { "id": "1506.03340_all_21", "text": " For the document the composite output for each token at position t𝑡t is, yd(t)=yd→(t)||yd←(t).y_{d}(t)=\\overrightarrow{y_{d}}(t)\\,\\,||\\,\\,\\overleftarrow{y_{d}}(t). The representation r𝑟r of the document d𝑑d is formed by a weighted sum of these output vectors. 
These weights are interpreted as the degree to which the network attends to a particular token in the document when answering the query: m​(t)𝑚𝑡\\displaystyle m(t) =tanh⁡(Wy​m​yd​(t)+Wu​m​u),absentsubscript𝑊𝑦𝑚subscript𝑦𝑑𝑡subscript𝑊𝑢𝑚𝑢\\displaystyle=\\tanh\\left(W_{ym}y_{d}(t)+W_{um}u\\right), s​(t)𝑠𝑡\\displaystyle s(t) ∝exp⁡(wm​s⊺​m​(t)),proportional-toabsentsuperscriptsubscriptw𝑚𝑠⊺𝑚𝑡\\displaystyle\\propto\\exp\\left(\\mathrm{w}_{ms}^{\\intercal}m(t)\\right), r𝑟\\displaystyle r =yd​s,absentsubscript𝑦𝑑𝑠\\displaystyle=y_{d}s, where we are interpreting ydsubscript𝑦𝑑y_{d} as a matrix with each column being the composite representation yd​(t)subscript𝑦𝑑𝑡y_{d}(t) of document token t𝑡t. The variable s​(t)𝑠𝑡s(t) is the normalised attention at token t𝑡t. Given this attention score the embedding of the document r𝑟r is computed as the weighted sum of the token embeddings. The model is completed with the definition of the joint document and query embedding via a non-linear combination: gAR​(d,q)=tanh⁡(Wr​g​r+Wu​g​u).superscript𝑔AR𝑑𝑞subscript𝑊𝑟𝑔𝑟subscript𝑊𝑢𝑔𝑢\\displaystyle g^{\\text{AR}}(d,q)=\\tanh\\left(W_{rg}r+W_{ug}u\\right). ", "title": "Teaching Machines to Read and Comprehend" }, { "id": "1506.03340_all_22", "text": " The Attentive Reader can be viewed as a generalisation of the application of Memory Networks to question answering . That model employs an attention mechanism at the sentence level where each sentence is represented by a bag of embeddings. The Attentive Reader employs a finer grained token level attention mechanism where the tokens are embedded given their entire future and past context in the input document. ", "title": "Teaching Machines to Read and Comprehend" }, { "id": "1506.03340_all_23", "text": " The Attentive Reader is able to focus on the passages of a context document that are most likely to inform the answer to the query. We can go further by equipping the model with the ability to reread from the document as each query token is read. At each token i𝑖i of the query q𝑞q the model computes a document representation vector r​(i)𝑟𝑖r(i) using the bidirectional embedding yq(i)=yq→(i)||yq←(i)y_{q}(i)=\\overrightarrow{y_{q}}(i)\\,\\,||\\,\\,\\overleftarrow{y_{q}}(i): m​(i,t)𝑚𝑖𝑡\\displaystyle m(i,t) =tanh⁡(Wd​m​yd​(t)+Wr​m​r​(i−1)+Wq​m​yq​(i)),1≤i≤|q|,formulae-sequenceabsentsubscript𝑊𝑑𝑚subscript𝑦𝑑𝑡subscript𝑊𝑟𝑚𝑟𝑖1subscript𝑊𝑞𝑚subscript𝑦𝑞𝑖1𝑖𝑞\\displaystyle=\\tanh\\left(W_{dm}y_{d}(t)+W_{rm}r(i-1)+W_{qm}y_{q}(i)\\right),\\quad 1\\leq i\\leq|q|, s​(i,t)𝑠𝑖𝑡\\displaystyle s(i,t) ∝exp⁡(wm​s⊺​m​(i,t)),proportional-toabsentsuperscriptsubscriptw𝑚𝑠⊺𝑚𝑖𝑡\\displaystyle\\propto\\exp\\left(\\mathrm{w}_{ms}^{\\intercal}m(i,t)\\right), r​(0)𝑟0\\displaystyle r(0) =𝐫𝟎,r​(i)=yd⊺​s​(i)+tanh⁡(Wr​r​r​(i−1))1≤i≤|q|.formulae-sequenceabsentsubscript𝐫0formulae-sequence𝑟𝑖superscriptsubscript𝑦𝑑⊺𝑠𝑖subscript𝑊𝑟𝑟𝑟𝑖11𝑖𝑞\\displaystyle=\\mathbf{r_{0}},\\quad r(i)=y_{d}^{\\intercal}s(i)+\\tanh\\left(W_{rr}r(i-1)\\right)\\quad 1\\leq i\\leq|q|. The result is an attention mechanism that allows the model to recurrently accumulate information from the document as it sees each query token, ultimately outputting a final joint document query representation for the answer prediction, gIR​(d,q)=tanh⁡(Wr​g​r​(|q|)+Wq​g​u).superscript𝑔IR𝑑𝑞subscript𝑊𝑟𝑔𝑟𝑞subscript𝑊𝑞𝑔𝑢\\displaystyle g^{\\text{IR}}(d,q)=\\tanh\\left(W_{rg}r(|q|)+W_{qg}u\\right). 
", "title": "Teaching Machines to Read and Comprehend" }, { "id": "1506.03340_all_24", "text": " Having described a number of models in the previous section, we next evaluate these models on our reading comprehension corpora. Our hypothesis is that neural models should in principle be well suited for this task. However, we argued that simple recurrent models such as the LSTM probably have insufficient expressive power for solving tasks that require complex inference. We expect that the attention-based models would therefore outperform the pure LSTM-based approaches. ", "title": "Teaching Machines to Read and Comprehend" }, { "id": "1506.03340_all_25", "text": " Considering the second dimension of our investigation, the comparison of traditional versus neural approaches to NLP, we do not have a strong prior favouring one approach over the other. While numerous publications in the past few years have demonstrated neural models outperforming classical methods, it remains unclear how much of that is a side-effect of the language modelling capabilities intrinsic to any neural model for NLP. The entity anonymisation and permutation aspect of the task presented here may end up levelling the playing field in that regard, favouring models capable of dealing with syntax rather than just semantics. ", "title": "Teaching Machines to Read and Comprehend" }, { "id": "1506.03340_all_26", "text": " With these considerations in mind, the experimental part of this paper is designed with a three-fold aim. First, we want to establish the difficulty of our machine reading task by applying a wide range of models to it. Second, we compare the performance of parse-based methods versus that of neural models. Third, within the group of neural models examined, we want to determine what each component contributes to the end performance; that is, we want to analyse the extent to which an LSTM can solve this task, and to what extent various attention mechanisms impact performance. ", "title": "Teaching Machines to Read and Comprehend" }, { "id": "1506.03340_all_27", "text": " All model hyperparameters were tuned on the respective validation sets of the two corpora.555For the Deep LSTM Reader, we consider hidden layer sizes (64,128,256¯)64128¯256{(64,128,\\underline{256})}, depths (1,2¯,4)1¯24{(1,\\underline{2},4)}, initial learning rates (1​e−3,5​e−4,1​e−4¯,5​e−5)1e35e4¯1e45e5{(1\\text{\\sc{e}}{-}3,5\\text{\\sc{e}}{-}4,\\underline{1\\text{\\sc{e}}{-}4},5\\text{\\sc{e}}{-}5)}, batch sizes (16,32¯)16¯32{(16,\\underline{32})} and dropout (0.0,0.1¯,0.2)0.0¯0.10.2(0.0,\\underline{0.1},0.2). We evaluate two types of feeds. In the cqa setup we feed first the context document and subsequently the question into the encoder, while the qca model starts by feeding in the question followed by the context document. We report results on the best model (underlined hyperparameters, qca setup). For the attention models we consider hidden layer sizes (64,128,256)64128256(64,128,256), single layer, initial learning rates (1​e−4,5​e−5,2.5​e−5,1​e−5)1e45e52.5e51e5(1\\text{\\sc{e}}{-}4,5\\text{\\sc{e}}{-}5,2.5\\text{\\sc{e}}{-}5,1\\text{\\sc{e}}{-}5), batch sizes (8,16,32)81632(8,16,32) and dropout (0,0.1,0.2,0.5)00.10.20.5(0,0.1,0.2,0.5). For all models we used asynchronous RmsProp with a momentum of 0.90.90.9 and a decay of 0.950.950.95. See Appendix A for more details of the experimental setup. Our experimental results are in Table 5, with the Attentive and Impatient Readers performing best across both datasets. 
", "title": "Teaching Machines to Read and Comprehend" }, { "id": "1506.03340_all_28", "text": " While the one frame-semantic model proposed in this paper is clearly a simplification of what could be achieved with annotations from an NLP pipeline, it does highlight the difficulty of the task when approached from a symbolic NLP perspective. ", "title": "Teaching Machines to Read and Comprehend" }, { "id": "1506.03340_all_29", "text": " Two issues stand out when analysing the results in detail. First, the frame-semantic pipeline has a poor degree of coverage with many relations not being picked up by our PropBank parser as they do not adhere to the default predicate-argument structure. This effect is exacerbated by the type of language used in the highlights that form the basis of our datasets. The second issue is that the frame-semantic approach does not trivially scale to situations where several sentences, and thus frames, are required to answer a query. This was true for the majority of queries in the dataset. ", "title": "Teaching Machines to Read and Comprehend" }, { "id": "1506.03340_all_30", "text": " More surprising perhaps is the relatively strong performance of the word distance benchmark, particularly relative to the frame-semantic benchmark, which we had expected to perform better. Here, again, the nature of the datasets used can explain aspects of this result. Where the frame-semantic model suffered due to the language used in the highlights, the word distance model benefited. Particularly in the case of the Daily Mail dataset, highlights frequently have significant lexical overlap with passages in the accompanying article, which makes it easy for the word distance benchmark. For instance the query “Tom Hanks is friends with X’s manager, Scooter Brown” has the phrase “… turns out he is good friends with Scooter Brown, manager for Carly Rae Jepson” in the context. The word distance benchmark correctly aligns these two while the frame-semantic approach fails to pickup the friendship or management relations when parsing the query. We expect that on other types of machine reading data where questions rather than Cloze queries are used this particular model would perform significantly worse. ", "title": "Teaching Machines to Read and Comprehend" }, { "id": "1506.03340_all_31", "text": " Within the group of neural models explored here, the results paint a clear picture with the Impatient and the Attentive Readers outperforming all other models. This is consistent with our hypothesis that attention is a key ingredient for machine reading and question answering due to the need to propagate information over long distances. The Deep LSTM Reader performs surprisingly well, once again demonstrating that this simple sequential architecture can do a reasonable job of learning to abstract long sequences, even when they are up to two thousand tokens in length. However this model does fail to match the performance of the attention based models, even though these only use single layer LSTMs.666Memory constraints prevented us from experimenting with deeper Attentive Readers. ", "title": "Teaching Machines to Read and Comprehend" }, { "id": "1506.03340_all_32", "text": " The poor results of the Uniform Reader support our hypothesis of the significance of the attention mechanism in the Attentive model’s performance as the only difference between these models is that the attention variables are ignored in the Uniform Reader. 
The precision@recall statistics in Figure 2 again highlight the strength of the attentive approach. ", "title": "Teaching Machines to Read and Comprehend" }, { "id": "1506.03340_all_33", "text": " We can visualise the attention mechanism as a heatmap over a context document to gain further insight into the models’ performance. The highlighted words show which tokens in the document were attended to by the model. In addition we must also take into account that the vectors at each token integrate long range contextual information via the bidirectional LSTM encoders. Figure 3 depicts heat maps for two queries that were correctly answered by the Attentive Reader.777Note that these examples were chosen as they were short, the average CNN validation document contained 763 tokens and 27 entities, thus most instances were significantly harder to answer than these examples. In both cases confidently arriving at the correct answer requires the model to perform both significant lexical generalsiation, e.g. ‘killed’ →→\\rightarrow ‘deceased’, and co-reference or anaphora resolution, e.g. ‘ent119 was killed’ →→\\rightarrow ‘he was identified.’ However it is also clear that the model is able to integrate these signals with rough heuristic indicators such as the proximity of query words to the candidate answer. ", "title": "Teaching Machines to Read and Comprehend" }, { "id": "1506.03340_all_34", "text": " The supervised paradigm for training machine reading and comprehension models provides a promising avenue for making progress on the path to building full natural language understanding systems. We have demonstrated a methodology for obtaining a large number of document-query-answer triples and shown that recurrent and attention based neural networks provide an effective modelling framework for this task. Our analysis indicates that the Attentive and Impatient Readers are able to propagate and integrate semantic information over long distances. In particular we believe that the incorporation of an attention mechanism is the key contributor to these results. ", "title": "Teaching Machines to Read and Comprehend" }, { "id": "1506.03340_all_35", "text": " The attention mechanism that we have employed is just one instantiation of a very general idea which can be further exploited. However, the incorporation of world knowledge and multi-document queries will also require the development of attention and embedding mechanisms whose complexity to query does not scale linearly with the data set size. There are still many queries requiring complex inference and long range reference resolution that our models are not yet able to answer. As such our data provides a scalable challenge that should support NLP research into the future. Further, significantly bigger training data sets can be acquired using the techniques we have described, undoubtedly allowing us to train more expressive and accurate models. ", "title": "Teaching Machines to Read and Comprehend" } ]
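The word-distance benchmark described in the passages above (align the Cloze placeholder with each candidate entity in the document, then sum, for every query word, the distance to its nearest aligned document word, capped at a per-word penalty of m = 8) can be sketched in a few lines. The tokenisation and alignment below are deliberately naive (exact token match only, no coreference), and the placeholder token and function names are assumptions for illustration, not the paper's code.

```python
def word_distance_score(query_tokens, doc_tokens, candidate,
                        placeholder="@placeholder", max_penalty=8):
    """Score one candidate entity under the word-distance baseline (lower is better)."""
    best = None
    # align the placeholder with every occurrence of the candidate entity
    for anchor, tok in enumerate(doc_tokens):
        if tok != candidate:
            continue
        total = 0
        for q in query_tokens:
            if q == placeholder:
                continue
            # distance to the nearest matching document token, capped at max_penalty
            dists = [abs(i - anchor) for i, d in enumerate(doc_tokens) if d == q]
            total += min(min(dists), max_penalty) if dists else max_penalty
        best = total if best is None else min(best, total)
    return best if best is not None else float("inf")

def answer_query(query_tokens, doc_tokens, candidates):
    return min(candidates, key=lambda c: word_distance_score(query_tokens, doc_tokens, c))

# toy usage with anonymised entity markers
doc = "ent23 met ent12 the manager of ent45 in london".split()
query = "@placeholder is the manager of ent45".split()
print(answer_query(query, doc, candidates=["ent23", "ent12"]))  # ent12
```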
Why do the authors use heuristics to estimate a variable's life span in the computational graph, instead of calculating it exactly?
While calculating a variable's life span exactly has quadratic time complexity, the heuristics run in linear time, which is much more efficient [20]. Experimental results also show that the heuristics reduce the memory footprint effectively [21].
[ 20, 21 ]
[ { "id": "1512.01274_all_0", "text": " The scale and complexity of machine learning (ML) algorithms are becoming increasingly large. Almost all recent ImageNet challenge  winners employ neural networks with very deep layers, requiring billions of floating-point operations to process one single sample. The rise of structural and computational complexity poses interesting challenges to ML system design and implementation. ", "title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems" }, { "id": "1512.01274_all_1", "text": " Most ML systems embed a domain-specific language (DSL) into a host language (e.g. Python, Lua, C++). Possible programming paradigms range from imperative, where the user specifies exactly “how” computation needs to be performed, and declarative, where the user specification focuses on “what” to be done. Examples of imperative programming include numpy and Matlab, whereas packages such as Caffe, CXXNet program over layer definition which abstracts away and hide the inner-working of actual implementation. The dividing line between the two can be muddy at times. Frameworks such as Theano and the more recent Tensorflow can also be viewed as a mixture of both, they declare a computational graph, yet the computation within the graph is imperatively specified. ", "title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems" }, { "id": "1512.01274_all_2", "text": " Related to the issue of programming paradigms is how the computation is carried out. Execution can be concrete, where the result is returned right away on the same thread, or asynchronize or delayed, where the statements are gathered and transformed into a dataflow graph as an intermediate representation first, before released to available devices. These two execution models have different implications on how inherent parallelisms are discovered. Concrete execution is restrictive (e.g. parallelized matrix multiplication), whereas asynchronize/delayed execution additionally identified all parallelism within the scope of an instance of dataflow graph automatically. ", "title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems" }, { "id": "1512.01274_all_3", "text": " The combination of the programming paradigm and execution model yields a large design space, some of which are more interesting (and valid) than others. In fact, our team has collectively explored a number of them, as does the rest of the community. For example, Minerva  combines imperative programming with asynchronize execution. While Theano takes an declarative approach, enabling more global graph-aware optimization. Similar discipline was adopted in Purine2 . Instead, CXXNet adopts declarative programming (over tensor abstraction) and concrete execution, similar to Caffe . Table 1 gives more examples. ", "title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems" }, { "id": "1512.01274_all_4", "text": " Our combined new effort resulted in MXNet (or “mix-net”), intending to blend advantages of different approaches. Declarative programming offers clear boundary on the global computation graph, discovering more optimization opportunity, whereas imperative programs offers more flexibility. 
In the context of deep learning, declarative programming is useful in specifying the computation structure in neural network configurations, while imperative programming are more natural for parameter updates and interactive debugging. We also took the effort to embed into multiple host languages, including C++, Python, R, Go and Julia. ", "title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems" }, { "id": "1512.01274_all_5", "text": " Despite the support of multiple languages and combination of different programming paradigm, we are able to fuse the execution to the same backend engine. The engine tracks data dependencies across computation graphs and imperative operations, and schedules them efficiently jointly. We aggressively reduce memory footprint, performing in-place update and memory space reuse whenever possible. Finally, we designed a compact communication API so that a MXNet program runs on multiple machines with little change. ", "title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems" }, { "id": "1512.01274_all_6", "text": " Comparing to other open-source ML systems, MXNet provides a superset programming interface to Torch7 , Theano , Chainer  and Caffe , and supports more systems such as GPU clusters. Besides supporting the optimization for declarative programs as TensorFlow  do, MXNet additionally embed imperative tensor operations to provide more flexibility. MXNet is lightweight, e.g. the prediction codes fit into a single 50K lines C++ source file with no other dependency, and has more languages supports. More detailed comparisons are shown in Table 2. ", "title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems" }, { "id": "1512.01274_all_7", "text": " MXNet uses multi-output symbolic expressions, Symbol, declare the computation graph. Symbols are composited by operators, such as simple matrix operations (e.g. “+”), or a complex neural network layer (e.g. convolution layer). An operator can take several input variables, produce more than one output variables, and have internal state variables. A variable can be either free, which we can bind with value later, or an output of another symbol. Figure 3 shows the construction of a multi-layer perception symbol by chaining a variable , which presents the input data, and several layer operators. ", "title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems" }, { "id": "1512.01274_all_8", "text": " To evaluate a symbol we need to bind the free variables with data and declare the required outputs. Beside evaluation (“forward”), a symbol supports auto symbolic differentiation (“backward”). Other functions, such as load, save, memory estimation, and visualization, are also provided for symbols. ", "title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems" }, { "id": "1512.01274_all_9", "text": " MXNet offers NDArray with imperative tensor computation to fill the gap between the declarative symbolic expression and the host language. Figure 3 shows an example which does matrix-constant multiplication on GPU and then prints the results by numpy.ndarray. 
", "title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems" }, { "id": "1512.01274_all_10", "text": " NDArray abstraction works seamlessly with the executions declared by Symbol, we can mix the imperative tensor computation of the former with the latter. For example, given a symbolic neural network and the weight updating function, e.g. w=w−η​g𝑤𝑤𝜂𝑔w=w-\\eta g. Then we can implement the gradient descent by ", "title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems" }, { "id": "1512.01274_all_11", "text": " The above is as efficient as the implementation using a single but often much more complex symbolic expression. The reason is that MXNet uses lazy evaluation of NDArray and the backend engine can correctly resolve the data dependency between the two. ", "title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems" }, { "id": "1512.01274_all_12", "text": " The KVStore is a distributed key-value store for data synchronization over multiple devices. It supports two primitives: push a key-value pair from a device to the store, and pull the value on a key from the store. In addition, a user-defined updater can specify how to merge the pushed value. Finally, model divergence is controlled via consistency model . Currently, we support the sequential and eventual consistency. ", "title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems" }, { "id": "1512.01274_all_13", "text": " The following example implements the distributed gradient descent by data parallelization. ", "title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems" }, { "id": "1512.01274_all_14", "text": " where the weight updating function is registered to the KVStore, and each worker repeatedly pulls the newest weight from the store and then pushes out the locally computed gradient. ", "title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems" }, { "id": "1512.01274_all_15", "text": " The above mixed implementation has the same performance comparing to a single declarative program, because the actual data push and pull are executed by lazy evaluation, which are scheduled by the backend engine just like others. ", "title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems" }, { "id": "1512.01274_all_16", "text": " MXNet ships with tools to pack arbitrary sized examples into a single compact file to facilitate both sequential and random seek. Data iterators are also provided. Data pre-fetching and pre-processing are multi-threaded, reducing overheads due to possible remote file store reads and/or image decoding and transformation. ", "title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems" }, { "id": "1512.01274_all_17", "text": " The training module implements the commonly used optimization algorithms, such as stochastic gradient descent. It trains a model on a given symbolic module and data iterators, optionally distributedly if an additional KVStore is provided. ", "title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems" }, { "id": "1512.01274_all_18", "text": " A binded symbolic expression is presented as a computation graph for evaluation. 
Figure 4 shows a part of the graph of both forward and backward of the MLP symbol in Figure 3. Before evaluation, MXNet transforms the graph to optimize the efficiency and allocate memory to internal variables. ", "title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems" }, { "id": "1512.01274_all_19", "text": " Graph Optimization. We explore the following straightforward optimizations. We note first that only the subgraph required to obtain the outputs specified during binding is needed. For example, in prediction only the forward graph is needed, while for extracting features from internal layers, the last layers can be skipped. Secondly, operators can be grouped into a single one. For example, a×b+1𝑎𝑏1a\\times b+1 is replaced by a single BLAS or GPU call. Finally, we manually implemented well-optimized “big” operations, such as a layer in neural network. ", "title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems" }, { "id": "1512.01274_all_20", "text": " Memory Allocation. Note that each variable’s life time, namely the period between the creation and the last time will be used, is known for a computation graph. So we can reuse memory for non-intersected variables. However, an ideal allocation strategy requires O​(n2)𝑂superscript𝑛2O(n^{2}) time complexity, where n𝑛n is the number of variables. ", "title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems" }, { "id": "1512.01274_all_21", "text": " We proposed two heuristics strategies with linear time complexity. The first, called inplace, simulates the procedure of traversing the graph, and keeps a reference counter of depended nodes that are not used so far. If the counter reaches zero, the memory is recycled. The second, named co-share, allows two nodes to share a piece of memory if only if they cannot be run in parallel. Exploring co-share imposes one additional dependency constraint. In particular, each time upon scheduling, among the pending paths in the graph, we find the longest path and perform needed memory allocations. ", "title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems" }, { "id": "1512.01274_all_22", "text": " In MXNet, each source units, including NDArray, random number generator and temporal space, is registered to the engine with a unique tag. Any operations, such as a matrix operation or data communication, is then pushed into the engine with specifying the required resource tags. The engine continuously schedules the pushed operations for execution if dependencies are resolved. Since there usually exists multiple computation resources such as CPUs, GPUs, and the memory/PCIe buses, the engine uses multiple threads to scheduling the operations for better resource utilization and parallelization. ", "title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems" }, { "id": "1512.01274_all_23", "text": " Different to most dataflow engines , our engine tracks mutation operations as an existing resource unit. That is, ours supports the specification of the tags that a operation will write in addition to read. This enables scheduling of array mutations as in numpy and other tensor libraries. It also enables easier memory reuse of parameters, by representing parameter updates as mutating the parameter arrays. It also makes scheduling of some special operations easier. 
For example, when generating two random numbers with the same random seed, we can inform the engine they will write the seed so that they should not be executed in parallel. This helps reproducibility. ", "title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems" }, { "id": "1512.01274_all_24", "text": " We implemented KVStore based on the parameter server (8, 9, 4)(Figure 5). It differs to previous works in two aspects: First, we use the engine to schedule the KVStore operations and manage the data consistency. The strategy not only makes the data synchronization works seamless with computation, and also greatly simplifies the implementation. Second, we adopt an two-level structure. A level-1 server manages the data synchronization between the devices within a single machine, while a level-2 server manages inter-machine synchronization. Outbound data from a level-1 server can be aggregated, reducing bandwidth requirement; intra- and inter-machine synchronization can use different consistency model (e.g. intra- is sequential and inter- is eventual). ", "title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems" }, { "id": "1512.01274_all_25", "text": " We fist compare MXNet with Torch7, Caffe, and TensorFlow on the popular “convnet-benchmarks” . All these systems are compiled with CUDA 7.5 and CUDNN 3 except for TensorFlow, which only supports CUDA 7.0 and CUDNN 2. We use batch size 32 for all networks and run the experiments on a single Nvidia GTX 980 card. Results are shown in Figure 7. As expected that MXNet has similar performance comparing to Torch7 and Caffe, because most computations are spent on the CUDA/CUDNN kernels. TensorFlow is always 2x slower, which might be due its use of a lower CUDNN version. ", "title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems" }, { "id": "1512.01274_all_26", "text": " Figure 7 shows the memory usages of the internal variables excepts for the outputs. As can be seen, both “inplace” and “co-share” can effective reduce the memory footprint. Combing them leads to a 2x reduction for all networks during model training, and further improves to 4x for model prediction. For instance, even for the most expensive VGG net, training needs less than 16MB extra. ", "title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems" }, { "id": "1512.01274_all_27", "text": " We run the experiment on Amazon EC2 g2.8x instances, each of which is shipped with four Nvidia GK104 GPUs and 10G Ethernet. We train googlenet with batch normalization  on the ILSVRC12 dataset  which consists of 1.3 million images and 1,000 classes. We fix the learning rate to .05.05.05, momentum to .9.9.9, weight decay to 10−4superscript10410^{-4}, and feed each GPU with 363636 images in one batch. ", "title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems" }, { "id": "1512.01274_all_28", "text": " The convergence results are shown in Figure 8. As can be seen, comparing to single machine, the distributed training converges slower at the beginning, but outperforms after 10 data passes. The average cost of a data pass is 14K and 1.4K sec on a single machine and 10 machines, respectively. Consequently, this experiment reveals a super-linear speedup. 
", "title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems" }, { "id": "1512.01274_all_29", "text": " MXNet is a machine learning library combining symbolic expression with tensor computation to maximize efficiency and flexibility. It is lightweight and embeds in multiple host languages, and can be run in a distributed setting. Experimental results are encouraging. While we continue to explore new design choices, we believe it can already benefit the relevant research community. The codes are available at http://dmlc.io. ", "title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems" }, { "id": "1512.01274_all_30", "text": " Acknowledgment. We sincerely thanks Dave Andersen, Carlos Guestrin, Tong He, Chuntao Hong, Qiang Kou, Hu Shiwen, Alex Smola, Junyuan Xie, Dale Schuurmans and all other contributors. ", "title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems" } ]
Out of all the classification datasets used in the experiments of this paper, what is the ratio of the number of samples in the largest dataset to that in the smallest?
The statistics for each dataset and task are found in Table 1 [30].
[ 30 ]
[ { "id": "1801.06146_all_0", "text": " Inductive transfer learning has had a large impact on computer vision (CV). Applied CV models (including object detection, classification, and segmentation) are rarely trained from scratch, but instead are fine-tuned from models that have been pretrained on ImageNet, MS-COCO, and other datasets Sharif Razavian et al. (2014); Long et al. (2015a); He et al. (2016); Huang et al. (2017). ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_1", "text": " Text classification is a category of Natural Language Processing (NLP) tasks with real-world applications such as spam, fraud, and bot detection Jindal and Liu (2007); Ngai et al. (2011); Chu et al. (2012), emergency response Caragea et al. (2011), and commercial document classification, such as for legal discovery Roitblat et al. (2010). ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_2", "text": " While Deep Learning models have achieved state-of-the-art on many NLP tasks, these models are trained from scratch, requiring large datasets, and days to converge. Research in NLP focused mostly on transductive transfer Blitzer et al. (2007). For inductive transfer, fine-tuning pretrained word embeddings Mikolov et al. (2013), a simple transfer technique that only targets a model’s first layer, has had a large impact in practice and is used in most state-of-the-art models. Recent approaches that concatenate embeddings derived from other tasks with the input at different layers Peters et al. (2017); McCann et al. (2017); Peters et al. (2018) still train the main task model from scratch and treat pretrained embeddings as fixed parameters, limiting their usefulness. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_3", "text": " In light of the benefits of pretraining Erhan et al. (2010), we should be able to do better than randomly initializing the remaining parameters of our models. However, inductive transfer via fine-tuning has been unsuccessful for NLP Mou et al. (2016). Dai and Le (2015) first proposed fine-tuning a language model (LM) but require millions of in-domain documents to achieve good performance, which severely limits its applicability. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_4", "text": " We show that not the idea of LM fine-tuning but our lack of knowledge of how to train them effectively has been hindering wider adoption. LMs overfit to small datasets and suffered catastrophic forgetting when fine-tuned with a classifier. Compared to CV, NLP models are typically more shallow and thus require different fine-tuning methods. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_5", "text": " We propose a new method, Universal Language Model Fine-tuning (ULMFiT) that addresses these issues and enables robust inductive transfer learning for any NLP task, akin to fine-tuning ImageNet models: The same 3-layer LSTM architecture—with the same hyperparameters and no additions other than tuned dropout hyperparameters—outperforms highly engineered models and transfer learning approaches on six widely studied text classification tasks. On IMDb, with 100100100 labeled examples, ULMFiT matches the performance of training from scratch with 10×10\\times and—given 505050k unlabeled examples—with 100×100\\times more data. 
", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_6", "text": " Our contributions are the following: 1) We propose Universal Language Model Fine-tuning (ULMFiT), a method that can be used to achieve CV-like transfer learning for any task for NLP. 2) We propose discriminative fine-tuning, slanted triangular learning rates, and gradual unfreezing, novel techniques to retain previous knowledge and avoid catastrophic forgetting during fine-tuning. 3) We significantly outperform the state-of-the-art on six representative text classification datasets, with an error reduction of 18-24% on the majority of datasets. 4) We show that our method enables extremely sample-efficient transfer learning and perform an extensive ablation analysis. 5) We make the pretrained models and our code available to enable wider adoption. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_7", "text": " Features in deep neural networks in CV have been observed to transition from general to task-specific from the first to the last layer Yosinski et al. (2014). For this reason, most work in CV focuses on transferring the first layers of the model Long et al. (2015b). Sharif Razavian et al. (2014) achieve state-of-the-art results using features of an ImageNet model as input to a simple classifier. In recent years, this approach has been superseded by fine-tuning either the last Donahue et al. (2014) or several of the last layers of a pretrained model and leaving the remaining layers frozen Long et al. (2015a). ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_8", "text": " In NLP, only recently have methods been proposed that go beyond transferring word embeddings. The prevailing approach is to pretrain embeddings that capture additional context via other tasks. Embeddings at different levels are then used as features, concatenated either with the word embeddings or with the inputs at intermediate layers. This method is known as hypercolumns Hariharan et al. (2015) in CV333A hypercolumn at a pixel in CV is the vector of activations of all CNN units above that pixel. In analogy, a hypercolumn for a word or sentence in NLP is the concatenation of embeddings at different layers in a pretrained model. and is used by Peters et al. (2017), Peters et al. (2018), Wieting and Gimpel (2017), Conneau et al. (2017), and McCann et al. (2017) who use language modeling, paraphrasing, entailment, and Machine Translation (MT) respectively for pretraining. Specifically, Peters et al. (2018) require engineered custom architectures, while we show state-of-the-art performance with the same basic architecture across a range of tasks. In CV, hypercolumns have been nearly entirely superseded by end-to-end fine-tuning Long et al. (2015a). ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_9", "text": " A related direction is multi-task learning (MTL) Caruana (1993). This is the approach taken by Rei (2017) and Liu et al. (2018) who add a language modeling objective to the model that is trained jointly with the main task model. MTL requires the tasks to be trained from scratch every time, which makes it inefficient and often requires careful weighting of the task-specific objective functions Chen et al. (2017). 
", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_10", "text": " Fine-tuning has been used successfully to transfer between similar tasks, e.g. in QA Min et al. (2017), for distantly supervised sentiment analysis Severyn and Moschitti (2015), or MT domains Sennrich et al. (2015) but has been shown to fail between unrelated ones Mou et al. (2016). Dai and Le (2015) also fine-tune a language model, but overfit with 101010k labeled examples and require millions of in-domain documents for good performance. In contrast, ULMFiT leverages general-domain pretraining and novel fine-tuning techniques to prevent overfitting even with only 100100100 labeled examples and achieves state-of-the-art results also on small datasets. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_11", "text": " We are interested in the most general inductive transfer learning setting for NLP Pan and Yang (2010): Given a static source task 𝒯Ssubscript𝒯𝑆\\mathcal{T}_{S} and any target task 𝒯Tsubscript𝒯𝑇\\mathcal{T}_{T} with 𝒯S≠𝒯Tsubscript𝒯𝑆subscript𝒯𝑇\\mathcal{T}_{S}\\neq\\mathcal{T}_{T}, we would like to improve performance on 𝒯Tsubscript𝒯𝑇\\mathcal{T}_{T}. Language modeling can be seen as the ideal source task and a counterpart of ImageNet for NLP: It captures many facets of language relevant for downstream tasks, such as long-term dependencies Linzen et al. (2016), hierarchical relations Gulordava et al. (2018), and sentiment Radford et al. (2017). In contrast to tasks like MT McCann et al. (2017) and entailment Conneau et al. (2017), it provides data in near-unlimited quantities for most domains and languages. Additionally, a pretrained LM can be easily adapted to the idiosyncrasies of a target task, which we show significantly improves performance (see Section 5). Moreover, language modeling already is a key component of existing tasks such as MT and dialogue modeling. Formally, language modeling induces a hypothesis space ℋℋ\\mathcal{H} that should be useful for many other NLP tasks Vapnik and Kotz (1982); Baxter (2000). ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_12", "text": " We propose Universal Language Model Fine-tuning (ULMFiT), which pretrains a language model (LM) on a large general-domain corpus and fine-tunes it on the target task using novel techniques. The method is universal in the sense that it meets these practical criteria: 1) It works across tasks varying in document size, number, and label type; 2) it uses a single architecture and training process; 3) it requires no custom feature engineering or preprocessing; and 4) it does not require additional in-domain documents or labels. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_13", "text": " In our experiments, we use the state-of-the-art language model AWD-LSTM Merity et al. (2017a), a regular LSTM (with no attention, short-cut connections, or other sophisticated additions) with various tuned dropout hyperparameters. Analogous to CV, we expect that downstream performance can be improved by using higher-performance language models in the future. 
", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_14", "text": " ULMFiT consists of the following steps, which we show in Figure 1: a) General-domain LM pretraining (§3.1); b) target task LM fine-tuning (§3.2); and c) target task classifier fine-tuning (§3.3). We discuss these in the following sections. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_15", "text": " An ImageNet-like corpus for language should be large and capture general properties of language. We pretrain the language model on Wikitext-103 Merity et al. (2017b) consisting of 28,595 preprocessed Wikipedia articles and 103 million words. Pretraining is most beneficial for tasks with small datasets and enables generalization even with 100100100 labeled examples. We leave the exploration of more diverse pretraining corpora to future work, but expect that they would boost performance. While this stage is the most expensive, it only needs to be performed once and improves performance and convergence of downstream models. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_16", "text": " No matter how diverse the general-domain data used for pretraining is, the data of the target task will likely come from a different distribution. We thus fine-tune the LM on data of the target task. Given a pretrained general-domain LM, this stage converges faster as it only needs to adapt to the idiosyncrasies of the target data, and it allows us to train a robust LM even for small datasets. We propose discriminative fine-tuning and slanted triangular learning rates for fine-tuning the LM, which we introduce in the following. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_17", "text": " As different layers capture different types of information Yosinski et al. (2014), they should be fine-tuned to different extents. To this end, we propose a novel fine-tuning method, discriminative fine-tuning444 An unrelated method of the same name exists for deep Boltzmann machines Salakhutdinov and Hinton (2009).. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_18", "text": " Instead of using the same learning rate for all layers of the model, discriminative fine-tuning allows us to tune each layer with different learning rates. For context, the regular stochastic gradient descent (SGD) update of a model’s parameters θ𝜃\\theta at time step t𝑡t looks like the following Ruder (2016): θt=θt−1−η⋅∇θJ​(θ)subscript𝜃𝑡subscript𝜃𝑡1⋅𝜂subscript∇𝜃𝐽𝜃\\theta_{t}=\\theta_{t-1}-\\eta\\cdot\\nabla_{\\theta}J(\\theta) (1) where η𝜂\\eta is the learning rate and ∇θJ​(θ)subscript∇𝜃𝐽𝜃\\nabla_{\\theta}J(\\theta) is the gradient with regard to the model’s objective function. For discriminative fine-tuning, we split the parameters θ𝜃\\theta into {θ1,…,θL}superscript𝜃1…superscript𝜃𝐿\\{\\theta^{1},\\ldots,\\theta^{L}\\} where θlsuperscript𝜃𝑙\\theta^{l} contains the parameters of the model at the l𝑙l-th layer and L𝐿L is the number of layers of the model. Similarly, we obtain {η1,…,ηL}superscript𝜂1…superscript𝜂𝐿\\{\\eta^{1},\\ldots,\\eta^{L}\\} where ηlsuperscript𝜂𝑙\\eta^{l} is the learning rate of the l𝑙l-th layer. 
", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_19", "text": " The SGD update with discriminative fine-tuning is then the following: θtl=θt−1l−ηl⋅∇θlJ​(θ)superscriptsubscript𝜃𝑡𝑙superscriptsubscript𝜃𝑡1𝑙⋅superscript𝜂𝑙subscript∇superscript𝜃𝑙𝐽𝜃\\theta_{t}^{l}=\\theta_{t-1}^{l}-\\eta^{l}\\cdot\\nabla_{\\theta^{l}}J(\\theta) (2) We empirically found it to work well to first choose the learning rate ηLsuperscript𝜂𝐿\\eta^{L} of the last layer by fine-tuning only the last layer and using ηl−1=ηl/2.6superscript𝜂𝑙1superscript𝜂𝑙2.6\\eta^{l-1}=\\eta^{l}/2.6 as the learning rate for lower layers. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_20", "text": " For adapting its parameters to task-specific features, we would like the model to quickly converge to a suitable region of the parameter space in the beginning of training and then refine its parameters. Using the same learning rate (LR) or an annealed learning rate throughout training is not the best way to achieve this behaviour. Instead, we propose slanted triangular learning rates (STLR), which first linearly increases the learning rate and then linearly decays it according to the following update schedule, which can be seen in Figure 2: c​u​t=⌊T⋅c​u​t​_​f​r​a​c⌋p={t/c​u​t,if​t<c​u​t1−t−c​u​tc​u​t⋅(1/c​u​t​_​f​r​a​c−1),otherwiseηt=ηm​a​x⋅1+p⋅(r​a​t​i​o−1)r​a​t​i​o𝑐𝑢𝑡⋅𝑇𝑐𝑢𝑡_𝑓𝑟𝑎𝑐𝑝cases𝑡𝑐𝑢𝑡if𝑡𝑐𝑢𝑡1𝑡𝑐𝑢𝑡⋅𝑐𝑢𝑡1𝑐𝑢𝑡_𝑓𝑟𝑎𝑐1otherwisesubscript𝜂𝑡⋅subscript𝜂𝑚𝑎𝑥1⋅𝑝𝑟𝑎𝑡𝑖𝑜1𝑟𝑎𝑡𝑖𝑜\\begin{split}cut&=\\lfloor T\\cdot cut\\_frac\\rfloor\\\\ p&=\\begin{cases}t/cut,&\\text{if}\\ t<cut\\\\ 1-\\frac{t-cut}{cut\\cdot(1/cut\\_frac-1)},&\\text{otherwise}\\end{cases}\\\\ \\eta_{t}&=\\eta_{max}\\cdot\\frac{1+p\\cdot(ratio-1)}{ratio}\\end{split} (3) where T𝑇T is the number of training iterations555In other words, the number of epochs times the number of updates per epoch., c​u​t​_​f​r​a​c𝑐𝑢𝑡_𝑓𝑟𝑎𝑐cut\\_frac is the fraction of iterations we increase the LR, c​u​t𝑐𝑢𝑡cut is the iteration when we switch from increasing to decreasing the LR, p𝑝p is the fraction of the number of iterations we have increased or will decrease the LR respectively, r​a​t​i​o𝑟𝑎𝑡𝑖𝑜ratio specifies how much smaller the lowest LR is from the maximum LR ηm​a​xsubscript𝜂𝑚𝑎𝑥\\eta_{max}, and ηtsubscript𝜂𝑡\\eta_{t} is the learning rate at iteration t𝑡t. We generally use c​u​t​_​f​r​a​c=0.1𝑐𝑢𝑡_𝑓𝑟𝑎𝑐0.1cut\\_frac=0.1, r​a​t​i​o=32𝑟𝑎𝑡𝑖𝑜32ratio=32 and ηm​a​x=0.01subscript𝜂𝑚𝑎𝑥0.01\\eta_{max}=0.01. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_21", "text": " STLR modifies triangular learning rates Smith (2017) with a short increase and a long decay period, which we found key for good performance.666We also credit personal communication with the author. In Section 5, we compare against aggressive cosine annealing, a similar schedule that has recently been used to achieve state-of-the-art performance in CV Loshchilov and Hutter (2017).777While Loshchilov and Hutter (2017) use multiple annealing cycles, we generally found one cycle to work best. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_22", "text": " Finally, for fine-tuning the classifier, we augment the pretrained language model with two additional linear blocks. 
Following standard practice for CV classifiers, each block uses batch normalization Ioffe and Szegedy (2015) and dropout, with ReLU activations for the intermediate layer and a softmax activation that outputs a probability distribution over target classes at the last layer. Note that the parameters in these task-specific classifier layers are the only ones that are learned from scratch. The first linear layer takes as the input the pooled last hidden layer states. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_23", "text": " The signal in text classification tasks is often contained in a few words, which may occur anywhere in the document. As input documents can consist of hundreds of words, information may get lost if we only consider the last hidden state of the model. For this reason, we concatenate the hidden state at the last time step 𝐡Tsubscript𝐡𝑇\\mathbf{h}_{T} of the document with both the max-pooled and the mean-pooled representation of the hidden states over as many time steps as fit in GPU memory 𝐇={𝐡1,…,𝐡T}𝐇subscript𝐡1…subscript𝐡𝑇\\mathbf{H}=\\{\\mathbf{h}_{1},\\ldots,\\mathbf{h}_{T}\\}: 𝐡c=(𝐡T,𝚖𝚊𝚡𝚙𝚘𝚘𝚕​(𝐇),𝚖𝚎𝚊𝚗𝚙𝚘𝚘𝚕​(𝐇))subscript𝐡𝑐subscript𝐡𝑇𝚖𝚊𝚡𝚙𝚘𝚘𝚕𝐇𝚖𝚎𝚊𝚗𝚙𝚘𝚘𝚕𝐇\\mathbf{h}_{c}=(\\mathbf{h}_{T},\\mathtt{maxpool}(\\mathbf{H}),\\mathtt{meanpool}(\\mathbf{H})) (4) where ()() is concatenation. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_24", "text": " Fine-tuning the target classifier is the most critical part of the transfer learning method. Overly aggressive fine-tuning will cause catastrophic forgetting, eliminating the benefit of the information captured through language modeling; too cautious fine-tuning will lead to slow convergence (and resultant overfitting). Besides discriminative fine-tuning and triangular learning rates, we propose gradual unfreezing for fine-tuning the classifier. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_25", "text": " Rather than fine-tuning all layers at once, which risks catastrophic forgetting, we propose to gradually unfreeze the model starting from the last layer as this contains the least general knowledge Yosinski et al. (2014): We first unfreeze the last layer and fine-tune all unfrozen layers for one epoch. We then unfreeze the next lower frozen layer and repeat, until we fine-tune all layers until convergence at the last iteration. This is similar to ‘chain-thaw’ Felbo et al. (2017), except that we add a layer at a time to the set of ‘thawed’ layers, rather than only training a single layer at a time. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_26", "text": " While discriminative fine-tuning, slanted triangular learning rates, and gradual unfreezing all are beneficial on their own, we show in Section 5 that they complement each other and enable our method to perform well across diverse datasets. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_27", "text": " Language models are trained with backpropagation through time (BPTT) to enable gradient propagation for large input sequences. In order to make fine-tuning a classifier for large documents feasible, we propose BPTT for Text Classification (BPT3C): We divide the document into fixed-length batches of size b𝑏b. 
At the beginning of each batch, the model is initialized with the final state of the previous batch; we keep track of the hidden states for mean and max-pooling; gradients are back-propagated to the batches whose hidden states contributed to the final prediction. In practice, we use variable length backpropagation sequences Merity et al. (2017a). ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_28", "text": " Similar to existing work Peters et al. (2017, 2018), we are not limited to fine-tuning a unidirectional language model. For all our experiments, we pretrain both a forward and a backward LM. We fine-tune a classifier for each LM independently using BPT3C and average the classifier predictions. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_29", "text": " While our approach is equally applicable to sequence labeling tasks, we focus on text classification tasks in this work due to their important real-world applications. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_30", "text": " We evaluate our method on six widely-studied datasets, with varying numbers of documents and varying document length, used by state-of-the-art text classification and transfer learning approaches Johnson and Zhang (2017); McCann et al. (2017) as instances of three common text classification tasks: sentiment analysis, question classification, and topic classification. We show the statistics for each dataset and task in Table 1. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_31", "text": " For sentiment analysis, we evaluate our approach on the binary movie review IMDb dataset Maas et al. (2011) and on the binary and five-class version of the Yelp review dataset compiled by Zhang et al. (2015). ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_32", "text": " We use the six-class version of the small TREC dataset Voorhees and Tice (1999) dataset of open-domain, fact-based questions divided into broad semantic categories. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_33", "text": " For topic classification, we evaluate on the large-scale AG news and DBpedia ontology datasets created by Zhang et al. (2015). ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_34", "text": " We use the same pre-processing as in earlier work Johnson and Zhang (2017); McCann et al. (2017). In addition, to allow the language model to capture aspects that might be relevant for classification, we add special tokens for upper-case words, elongation, and repetition. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_35", "text": " We are interested in a model that performs robustly across a diverse set of tasks. To this end, if not mentioned otherwise, we use the same set of hyperparameters across tasks, which we tune on the IMDb validation set. We use the AWD-LSTM language model Merity et al. (2017a) with an embedding size of 400400400, 333 layers, 115011501150 hidden activations per layer, and a BPTT batch size of 707070. 
We apply dropout of 0.40.40.4 to layers, 0.30.30.3 to RNN layers, 0.40.40.4 to input embedding layers, 0.050.050.05 to embedding layers, and weight dropout of 0.50.50.5 to the RNN hidden-to-hidden matrix. The classifier has a hidden layer of size 505050. We use Adam with β1=0.7subscript𝛽10.7\\beta_{1}=0.7 instead of the default β1=0.9subscript𝛽10.9\\beta_{1}=0.9 and β2=0.99subscript𝛽20.99\\beta_{2}=0.99, similar to Dozat and Manning (2017). We use a batch size of 646464, a base learning rate of 0.0040.0040.004 and 0.010.010.01 for fine-tuning the LM and the classifier respectively, and tune the number of epochs on the validation set of each task888On small datasets such as TREC-6, we fine-tune the LM only for 151515 epochs without overfitting, while we can fine-tune longer on larger datasets. We found 505050 epochs to be a good default for fine-tuning the classifier.. We otherwise use the same practices used in Merity et al. (2017a). ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_36", "text": " For each task, we compare against the current state-of-the-art. For the IMDb and TREC-6 datasets, we compare against CoVe McCann et al. (2017), a state-of-the-art transfer learning method for NLP. For the AG, Yelp, and DBpedia datasets, we compare against the state-of-the-art text categorization method by Johnson and Zhang (2017). ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_37", "text": " For consistency, we report all results as error rates (lower is better). We show the test error rates on the IMDb and TREC-6 datasets used by McCann et al. (2017) in Table 2. Our method outperforms both CoVe, a state-of-the-art transfer learning method based on hypercolumns, as well as the state-of-the-art on both datasets. On IMDb, we reduce the error dramatically by 43.9% and 22% with regard to CoVe and the state-of-the-art respectively. This is promising as the existing state-of-the-art requires complex architectures Peters et al. (2018), multiple forms of attention McCann et al. (2017) and sophisticated embedding schemes Johnson and Zhang (2016), while our method employs a regular LSTM with dropout. We note that the language model fine-tuning approach of Dai and Le (2015) only achieves an error of 7.64 vs. 4.6 for our method on IMDb, demonstrating the benefit of transferring knowledge from a large ImageNet-like corpus using our fine-tuning techniques. IMDb in particular is reflective of real-world datasets: Its documents are generally a few paragraphs long—similar to emails (e.g for legal discovery) and online comments (e.g for community management); and sentiment analysis is similar to many commercial applications, e.g. product response tracking and support email routing. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_38", "text": " On TREC-6, our improvement—similar as the improvements of state-of-the-art approaches—is not statistically significant, due to the small size of the 500-examples test set. Nevertheless, the competitive performance on TREC-6 demonstrates that our model performs well across different dataset sizes and can deal with examples that range from single sentences—in the case of TREC-6—to several paragraphs for IMDb. Note that despite pretraining on more than two orders of magnitude less data than the 7 million sentence pairs used by McCann et al. (2017), we consistently outperform their approach on both datasets. 
", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_39", "text": " We show the test error rates on the larger AG, DBpedia, Yelp-bi, and Yelp-full datasets in Table 3. Our method again outperforms the state-of-the-art significantly. On AG, we observe a similarly dramatic error reduction by 23.7% compared to the state-of-the-art. On DBpedia, Yelp-bi, and Yelp-full, we reduce the error by 4.8%, 18.2%, 2.0% respectively. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_40", "text": " In order to assess the impact of each contribution, we perform a series of analyses and ablations. We run experiments on three corpora, IMDb, TREC-6, and AG that are representative of different tasks, genres, and sizes. For all experiments, we split off 10%percent1010\\% of the training set and report error rates on this validation set with unidirectional LMs. We fine-tune the classifier for 505050 epochs and train all methods but ULMFiT with early stopping. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_41", "text": " One of the main benefits of transfer learning is being able to train a model for a task with a small number of labels. We evaluate ULMFiT on different numbers of labeled examples in two settings: only labeled examples are used for LM fine-tuning (‘supervised’); and all task data is available and can be used to fine-tune the LM (‘semi-supervised’). We compare ULMFiT to training from scratch—which is necessary for hypercolumn-based approaches. We split off balanced fractions of the training data, keep the validation set fixed, and use the same hyperparameters as before. We show the results in Figure 3. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_42", "text": " On IMDb and AG, supervised ULMFiT with only 100100100 labeled examples matches the performance of training from scratch with 10×10\\times and 20×20\\times more data respectively, clearly demonstrating the benefit of general-domain LM pretraining. If we allow ULMFiT to also utilize unlabeled examples (505050k for IMDb, 100100100k for AG), at 100100100 labeled examples, we match the performance of training from scratch with 50×50\\times and 100×100\\times more data on AG and IMDb respectively. On TREC-6, ULMFiT significantly improves upon training from scratch; as examples are shorter and fewer, supervised and semi-supervised ULMFiT achieve similar results. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_43", "text": " We compare using no pretraining with pretraining on WikiText-103 Merity et al. (2017b) in Table 4. Pretraining is most useful for small and medium-sized datasets, which are most common in commercial applications. However, even for large datasets, pretraining improves performance. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_44", "text": " In order to gauge the importance of choosing an appropriate LM, we compare a vanilla LM with the same hyperparameters without any dropout999To avoid overfitting, we only train the vanilla LM classifier for 555 epochs and keep dropout of 0.40.40.4 in the classifier. with the AWD-LSTM LM with tuned dropout parameters in Table 5. Using our fine-tuning techniques, even a regular LM reaches surprisingly good performance on the larger datasets. 
On the smaller TREC-6, a vanilla LM without dropout runs the risk of overfitting, which decreases performance. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_45", "text": " We compare no fine-tuning against fine-tuning the full model Erhan et al. (2010) (‘Full’), the most commonly used fine-tuning method, with and without discriminative fine-tuning (‘Discr’) and slanted triangular learning rates (‘Stlr’) in Table 6. Fine-tuning the LM is most beneficial for larger datasets. ‘Discr’ and ‘Stlr’ improve performance across all three datasets and are necessary on the smaller TREC-6, where regular fine-tuning is not beneficial. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_46", "text": " We compare training from scratch, fine-tuning the full model (‘Full’), only fine-tuning the last layer (‘Last’) Donahue et al. (2014), ‘Chain-thaw’ Felbo et al. (2017), and gradual unfreezing (‘Freez’). We furthermore assess the importance of discriminative fine-tuning (‘Discr’) and slanted triangular learning rates (‘Stlr’). We compare the latter to an alternative, aggressive cosine annealing schedule (‘Cos’) Loshchilov and Hutter (2017). We use a learning rate ηL=0.01superscript𝜂𝐿0.01\\eta^{L}=0.01 for ‘Discr’, learning rates of 0.0010.0010.001 and 0.00010.00010.0001 for the last and all other layers respectively for ‘Chain-thaw’ as in Felbo et al. (2017), and a learning rate of 0.0010.0010.001 otherwise. We show the results in Table 7. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_47", "text": " Fine-tuning the classifier significantly improves over training from scratch, particularly on the small TREC-6. ‘Last’, the standard fine-tuning method in CV, severely underfits and is never able to lower the training error to 00. ‘Chain-thaw’ achieves competitive performance on the smaller datasets, but is outperformed significantly on the large AG. ‘Freez’ provides similar performance as ‘Full’. ‘Discr’ consistently boosts the performance of ‘Full’ and ‘Freez’, except for the large AG. Cosine annealing is competitive with slanted triangular learning rates on large data, but under-performs on smaller datasets. Finally, full ULMFiT classifier fine-tuning (bottom row) achieves the best performance on IMDB and TREC-6 and competitive performance on AG. Importantly, ULMFiT is the only method that shows excellent performance across the board—and is therefore the only universal method. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_48", "text": " While our results demonstrate that how we fine-tune the classifier makes a significant difference, fine-tuning for inductive transfer is currently under-explored in NLP as it mostly has been thought to be unhelpful Mou et al. (2016). To better understand the fine-tuning behavior of our model, we compare the validation error of the classifier fine-tuned with ULMFiT and ‘Full’ during training in Figure 4. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_49", "text": " On all datasets, fine-tuning the full model leads to the lowest error comparatively early in training, e.g. already after the first epoch on IMDb. The error then increases as the model starts to overfit and knowledge captured through pretraining is lost. 
In contrast, ULMFiT is more stable and suffers from no such catastrophic forgetting; performance remains similar or improves until late epochs, which shows the positive effect of the learning rate schedule. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_50", "text": " At the cost of training a second model, ensembling the predictions of a forward and backwards LM-classifier brings a performance boost of around 0.50.50.5–0.70.70.7. On IMDb we lower the test error from 5.305.305.30 of a single model to 4.584.584.58 for the bidirectional model. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_51", "text": " While we have shown that ULMFiT can achieve state-of-the-art performance on widely used text classification tasks, we believe that language model fine-tuning will be particularly useful in the following settings compared to existing transfer learning approaches Conneau et al. (2017); McCann et al. (2017); Peters et al. (2018): a) NLP for non-English languages, where training data for supervised pretraining tasks is scarce; b) new NLP tasks where no state-of-the-art architecture exists; and c) tasks with limited amounts of labeled data (and some amounts of unlabeled data). ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_52", "text": " Given that transfer learning and particularly fine-tuning for NLP is under-explored, many future directions are possible. One possible direction is to improve language model pretraining and fine-tuning and make them more scalable: for ImageNet, predicting far fewer classes only incurs a small performance drop Huh et al. (2016), while recent work shows that an alignment between source and target task label sets is important Mahajan et al. (2018)—focusing on predicting a subset of words such as the most frequent ones might retain most of the performance while speeding up training. Language modeling can also be augmented with additional tasks in a multi-task learning fashion Caruana (1993) or enriched with additional supervision, e.g. syntax-sensitive dependencies Linzen et al. (2016) to create a model that is more general or better suited for certain downstream tasks, ideally in a weakly-supervised manner to retain its universal properties. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_53", "text": " Another direction is to apply the method to novel tasks and models. While an extension to sequence labeling is straightforward, other tasks with more complex interactions such as entailment or question answering may require novel ways to pretrain and fine-tune. Finally, while we have provided a series of analyses and ablations, more studies are required to better understand what knowledge a pretrained language model captures, how this changes during fine-tuning, and what information different tasks require. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_54", "text": " We have proposed ULMFiT, an effective and extremely sample-efficient transfer learning method that can be applied to any NLP task. We have also proposed several novel fine-tuning techniques that in conjunction prevent catastrophic forgetting and enable robust learning across a diverse range of tasks. 
Our method significantly outperformed existing transfer learning techniques and the state-of-the-art on six representative text classification tasks. We hope that our results will catalyze new developments in transfer learning for NLP. ", "title": "Universal Language Model Fine-tuning for Text Classification" } ]
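The ULMFiT passages above define the slanted triangular learning rate schedule in closed form. A direct Python transcription of that equation, using the default values quoted there (cut_frac = 0.1, ratio = 32, eta_max = 0.01); this is a sketch for illustration, not the authors' code:

    import math

    def slanted_triangular_lr(t, T, cut_frac=0.1, ratio=32, eta_max=0.01):
        # Slanted triangular schedule as given in the quoted equation: increase the LR
        # linearly over the first cut_frac of the T iterations, then decay it linearly;
        # the lowest LR is eta_max / ratio. Assumes T * cut_frac >= 1.
        cut = math.floor(T * cut_frac)
        if t < cut:
            p = t / cut
        else:
            p = 1 - (t - cut) / (cut * (1 / cut_frac - 1))
        return eta_max * (1 + p * (ratio - 1)) / ratio

At t = 0 this returns eta_max / ratio, at t = cut it peaks at eta_max, and it decays back to eta_max / ratio by t = T, matching the short-increase, long-decay shape the passage emphasizes.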
Are there any differences between AWD-LSTM and LSTMs in general?
The language model AWD-LSTM is a regular LSTM with various tuned dropout hyperparameters; it has no attention, short-cut connections, or other sophisticated additions [13].
[ 13 ]
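The answer above and the quoted ULMFiT passages attribute AWD-LSTM's difference from a plain LSTM entirely to its tuned dropout scheme; one such regularizer quoted earlier in this document is a weight dropout of 0.5 applied to the RNN hidden-to-hidden matrix. A hedged NumPy sketch of that weight-level dropout — the function name and the rescaling by 1 / (1 - p) are assumptions of this sketch and may differ from the actual AWD-LSTM implementation:

    import numpy as np

    def weight_dropout(weight_hh, p=0.5, training=True):
        # Drop individual entries of the recurrent hidden-to-hidden weight matrix
        # (rather than activations) before the recurrence is applied. The rescaling
        # by 1 / (1 - p) mirrors standard inverted dropout and is an assumption here.
        if not training or p == 0.0:
            return weight_hh
        mask = (np.random.rand(*weight_hh.shape) >= p).astype(weight_hh.dtype)
        return weight_hh * mask / (1.0 - p)

Applying dropout to the weights themselves is what regularizes the recurrence without adding attention, short-cuts, or any architectural change to the underlying LSTM.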
[ { "id": "1801.06146_all_0", "text": " Inductive transfer learning has had a large impact on computer vision (CV). Applied CV models (including object detection, classification, and segmentation) are rarely trained from scratch, but instead are fine-tuned from models that have been pretrained on ImageNet, MS-COCO, and other datasets Sharif Razavian et al. (2014); Long et al. (2015a); He et al. (2016); Huang et al. (2017). ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_1", "text": " Text classification is a category of Natural Language Processing (NLP) tasks with real-world applications such as spam, fraud, and bot detection Jindal and Liu (2007); Ngai et al. (2011); Chu et al. (2012), emergency response Caragea et al. (2011), and commercial document classification, such as for legal discovery Roitblat et al. (2010). ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_2", "text": " While Deep Learning models have achieved state-of-the-art on many NLP tasks, these models are trained from scratch, requiring large datasets, and days to converge. Research in NLP focused mostly on transductive transfer Blitzer et al. (2007). For inductive transfer, fine-tuning pretrained word embeddings Mikolov et al. (2013), a simple transfer technique that only targets a model’s first layer, has had a large impact in practice and is used in most state-of-the-art models. Recent approaches that concatenate embeddings derived from other tasks with the input at different layers Peters et al. (2017); McCann et al. (2017); Peters et al. (2018) still train the main task model from scratch and treat pretrained embeddings as fixed parameters, limiting their usefulness. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_3", "text": " In light of the benefits of pretraining Erhan et al. (2010), we should be able to do better than randomly initializing the remaining parameters of our models. However, inductive transfer via fine-tuning has been unsuccessful for NLP Mou et al. (2016). Dai and Le (2015) first proposed fine-tuning a language model (LM) but require millions of in-domain documents to achieve good performance, which severely limits its applicability. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_4", "text": " We show that not the idea of LM fine-tuning but our lack of knowledge of how to train them effectively has been hindering wider adoption. LMs overfit to small datasets and suffered catastrophic forgetting when fine-tuned with a classifier. Compared to CV, NLP models are typically more shallow and thus require different fine-tuning methods. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_5", "text": " We propose a new method, Universal Language Model Fine-tuning (ULMFiT) that addresses these issues and enables robust inductive transfer learning for any NLP task, akin to fine-tuning ImageNet models: The same 3-layer LSTM architecture—with the same hyperparameters and no additions other than tuned dropout hyperparameters—outperforms highly engineered models and transfer learning approaches on six widely studied text classification tasks. On IMDb, with 100100100 labeled examples, ULMFiT matches the performance of training from scratch with 10×10\\times and—given 505050k unlabeled examples—with 100×100\\times more data. 
", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_6", "text": " Our contributions are the following: 1) We propose Universal Language Model Fine-tuning (ULMFiT), a method that can be used to achieve CV-like transfer learning for any task for NLP. 2) We propose discriminative fine-tuning, slanted triangular learning rates, and gradual unfreezing, novel techniques to retain previous knowledge and avoid catastrophic forgetting during fine-tuning. 3) We significantly outperform the state-of-the-art on six representative text classification datasets, with an error reduction of 18-24% on the majority of datasets. 4) We show that our method enables extremely sample-efficient transfer learning and perform an extensive ablation analysis. 5) We make the pretrained models and our code available to enable wider adoption. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_7", "text": " Features in deep neural networks in CV have been observed to transition from general to task-specific from the first to the last layer Yosinski et al. (2014). For this reason, most work in CV focuses on transferring the first layers of the model Long et al. (2015b). Sharif Razavian et al. (2014) achieve state-of-the-art results using features of an ImageNet model as input to a simple classifier. In recent years, this approach has been superseded by fine-tuning either the last Donahue et al. (2014) or several of the last layers of a pretrained model and leaving the remaining layers frozen Long et al. (2015a). ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_8", "text": " In NLP, only recently have methods been proposed that go beyond transferring word embeddings. The prevailing approach is to pretrain embeddings that capture additional context via other tasks. Embeddings at different levels are then used as features, concatenated either with the word embeddings or with the inputs at intermediate layers. This method is known as hypercolumns Hariharan et al. (2015) in CV333A hypercolumn at a pixel in CV is the vector of activations of all CNN units above that pixel. In analogy, a hypercolumn for a word or sentence in NLP is the concatenation of embeddings at different layers in a pretrained model. and is used by Peters et al. (2017), Peters et al. (2018), Wieting and Gimpel (2017), Conneau et al. (2017), and McCann et al. (2017) who use language modeling, paraphrasing, entailment, and Machine Translation (MT) respectively for pretraining. Specifically, Peters et al. (2018) require engineered custom architectures, while we show state-of-the-art performance with the same basic architecture across a range of tasks. In CV, hypercolumns have been nearly entirely superseded by end-to-end fine-tuning Long et al. (2015a). ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_9", "text": " A related direction is multi-task learning (MTL) Caruana (1993). This is the approach taken by Rei (2017) and Liu et al. (2018) who add a language modeling objective to the model that is trained jointly with the main task model. MTL requires the tasks to be trained from scratch every time, which makes it inefficient and often requires careful weighting of the task-specific objective functions Chen et al. (2017). 
", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_10", "text": " Fine-tuning has been used successfully to transfer between similar tasks, e.g. in QA Min et al. (2017), for distantly supervised sentiment analysis Severyn and Moschitti (2015), or MT domains Sennrich et al. (2015) but has been shown to fail between unrelated ones Mou et al. (2016). Dai and Le (2015) also fine-tune a language model, but overfit with 101010k labeled examples and require millions of in-domain documents for good performance. In contrast, ULMFiT leverages general-domain pretraining and novel fine-tuning techniques to prevent overfitting even with only 100100100 labeled examples and achieves state-of-the-art results also on small datasets. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_11", "text": " We are interested in the most general inductive transfer learning setting for NLP Pan and Yang (2010): Given a static source task 𝒯Ssubscript𝒯𝑆\\mathcal{T}_{S} and any target task 𝒯Tsubscript𝒯𝑇\\mathcal{T}_{T} with 𝒯S≠𝒯Tsubscript𝒯𝑆subscript𝒯𝑇\\mathcal{T}_{S}\\neq\\mathcal{T}_{T}, we would like to improve performance on 𝒯Tsubscript𝒯𝑇\\mathcal{T}_{T}. Language modeling can be seen as the ideal source task and a counterpart of ImageNet for NLP: It captures many facets of language relevant for downstream tasks, such as long-term dependencies Linzen et al. (2016), hierarchical relations Gulordava et al. (2018), and sentiment Radford et al. (2017). In contrast to tasks like MT McCann et al. (2017) and entailment Conneau et al. (2017), it provides data in near-unlimited quantities for most domains and languages. Additionally, a pretrained LM can be easily adapted to the idiosyncrasies of a target task, which we show significantly improves performance (see Section 5). Moreover, language modeling already is a key component of existing tasks such as MT and dialogue modeling. Formally, language modeling induces a hypothesis space ℋℋ\\mathcal{H} that should be useful for many other NLP tasks Vapnik and Kotz (1982); Baxter (2000). ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_12", "text": " We propose Universal Language Model Fine-tuning (ULMFiT), which pretrains a language model (LM) on a large general-domain corpus and fine-tunes it on the target task using novel techniques. The method is universal in the sense that it meets these practical criteria: 1) It works across tasks varying in document size, number, and label type; 2) it uses a single architecture and training process; 3) it requires no custom feature engineering or preprocessing; and 4) it does not require additional in-domain documents or labels. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_13", "text": " In our experiments, we use the state-of-the-art language model AWD-LSTM Merity et al. (2017a), a regular LSTM (with no attention, short-cut connections, or other sophisticated additions) with various tuned dropout hyperparameters. Analogous to CV, we expect that downstream performance can be improved by using higher-performance language models in the future. 
", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_14", "text": " ULMFiT consists of the following steps, which we show in Figure 1: a) General-domain LM pretraining (§3.1); b) target task LM fine-tuning (§3.2); and c) target task classifier fine-tuning (§3.3). We discuss these in the following sections. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_15", "text": " An ImageNet-like corpus for language should be large and capture general properties of language. We pretrain the language model on Wikitext-103 Merity et al. (2017b) consisting of 28,595 preprocessed Wikipedia articles and 103 million words. Pretraining is most beneficial for tasks with small datasets and enables generalization even with 100100100 labeled examples. We leave the exploration of more diverse pretraining corpora to future work, but expect that they would boost performance. While this stage is the most expensive, it only needs to be performed once and improves performance and convergence of downstream models. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_16", "text": " No matter how diverse the general-domain data used for pretraining is, the data of the target task will likely come from a different distribution. We thus fine-tune the LM on data of the target task. Given a pretrained general-domain LM, this stage converges faster as it only needs to adapt to the idiosyncrasies of the target data, and it allows us to train a robust LM even for small datasets. We propose discriminative fine-tuning and slanted triangular learning rates for fine-tuning the LM, which we introduce in the following. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_17", "text": " As different layers capture different types of information Yosinski et al. (2014), they should be fine-tuned to different extents. To this end, we propose a novel fine-tuning method, discriminative fine-tuning444 An unrelated method of the same name exists for deep Boltzmann machines Salakhutdinov and Hinton (2009).. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_18", "text": " Instead of using the same learning rate for all layers of the model, discriminative fine-tuning allows us to tune each layer with different learning rates. For context, the regular stochastic gradient descent (SGD) update of a model’s parameters θ𝜃\\theta at time step t𝑡t looks like the following Ruder (2016): θt=θt−1−η⋅∇θJ​(θ)subscript𝜃𝑡subscript𝜃𝑡1⋅𝜂subscript∇𝜃𝐽𝜃\\theta_{t}=\\theta_{t-1}-\\eta\\cdot\\nabla_{\\theta}J(\\theta) (1) where η𝜂\\eta is the learning rate and ∇θJ​(θ)subscript∇𝜃𝐽𝜃\\nabla_{\\theta}J(\\theta) is the gradient with regard to the model’s objective function. For discriminative fine-tuning, we split the parameters θ𝜃\\theta into {θ1,…,θL}superscript𝜃1…superscript𝜃𝐿\\{\\theta^{1},\\ldots,\\theta^{L}\\} where θlsuperscript𝜃𝑙\\theta^{l} contains the parameters of the model at the l𝑙l-th layer and L𝐿L is the number of layers of the model. Similarly, we obtain {η1,…,ηL}superscript𝜂1…superscript𝜂𝐿\\{\\eta^{1},\\ldots,\\eta^{L}\\} where ηlsuperscript𝜂𝑙\\eta^{l} is the learning rate of the l𝑙l-th layer. 
", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_19", "text": " The SGD update with discriminative fine-tuning is then the following: θtl=θt−1l−ηl⋅∇θlJ​(θ)superscriptsubscript𝜃𝑡𝑙superscriptsubscript𝜃𝑡1𝑙⋅superscript𝜂𝑙subscript∇superscript𝜃𝑙𝐽𝜃\\theta_{t}^{l}=\\theta_{t-1}^{l}-\\eta^{l}\\cdot\\nabla_{\\theta^{l}}J(\\theta) (2) We empirically found it to work well to first choose the learning rate ηLsuperscript𝜂𝐿\\eta^{L} of the last layer by fine-tuning only the last layer and using ηl−1=ηl/2.6superscript𝜂𝑙1superscript𝜂𝑙2.6\\eta^{l-1}=\\eta^{l}/2.6 as the learning rate for lower layers. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_20", "text": " For adapting its parameters to task-specific features, we would like the model to quickly converge to a suitable region of the parameter space in the beginning of training and then refine its parameters. Using the same learning rate (LR) or an annealed learning rate throughout training is not the best way to achieve this behaviour. Instead, we propose slanted triangular learning rates (STLR), which first linearly increases the learning rate and then linearly decays it according to the following update schedule, which can be seen in Figure 2: c​u​t=⌊T⋅c​u​t​_​f​r​a​c⌋p={t/c​u​t,if​t<c​u​t1−t−c​u​tc​u​t⋅(1/c​u​t​_​f​r​a​c−1),otherwiseηt=ηm​a​x⋅1+p⋅(r​a​t​i​o−1)r​a​t​i​o𝑐𝑢𝑡⋅𝑇𝑐𝑢𝑡_𝑓𝑟𝑎𝑐𝑝cases𝑡𝑐𝑢𝑡if𝑡𝑐𝑢𝑡1𝑡𝑐𝑢𝑡⋅𝑐𝑢𝑡1𝑐𝑢𝑡_𝑓𝑟𝑎𝑐1otherwisesubscript𝜂𝑡⋅subscript𝜂𝑚𝑎𝑥1⋅𝑝𝑟𝑎𝑡𝑖𝑜1𝑟𝑎𝑡𝑖𝑜\\begin{split}cut&=\\lfloor T\\cdot cut\\_frac\\rfloor\\\\ p&=\\begin{cases}t/cut,&\\text{if}\\ t<cut\\\\ 1-\\frac{t-cut}{cut\\cdot(1/cut\\_frac-1)},&\\text{otherwise}\\end{cases}\\\\ \\eta_{t}&=\\eta_{max}\\cdot\\frac{1+p\\cdot(ratio-1)}{ratio}\\end{split} (3) where T𝑇T is the number of training iterations555In other words, the number of epochs times the number of updates per epoch., c​u​t​_​f​r​a​c𝑐𝑢𝑡_𝑓𝑟𝑎𝑐cut\\_frac is the fraction of iterations we increase the LR, c​u​t𝑐𝑢𝑡cut is the iteration when we switch from increasing to decreasing the LR, p𝑝p is the fraction of the number of iterations we have increased or will decrease the LR respectively, r​a​t​i​o𝑟𝑎𝑡𝑖𝑜ratio specifies how much smaller the lowest LR is from the maximum LR ηm​a​xsubscript𝜂𝑚𝑎𝑥\\eta_{max}, and ηtsubscript𝜂𝑡\\eta_{t} is the learning rate at iteration t𝑡t. We generally use c​u​t​_​f​r​a​c=0.1𝑐𝑢𝑡_𝑓𝑟𝑎𝑐0.1cut\\_frac=0.1, r​a​t​i​o=32𝑟𝑎𝑡𝑖𝑜32ratio=32 and ηm​a​x=0.01subscript𝜂𝑚𝑎𝑥0.01\\eta_{max}=0.01. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_21", "text": " STLR modifies triangular learning rates Smith (2017) with a short increase and a long decay period, which we found key for good performance.666We also credit personal communication with the author. In Section 5, we compare against aggressive cosine annealing, a similar schedule that has recently been used to achieve state-of-the-art performance in CV Loshchilov and Hutter (2017).777While Loshchilov and Hutter (2017) use multiple annealing cycles, we generally found one cycle to work best. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_22", "text": " Finally, for fine-tuning the classifier, we augment the pretrained language model with two additional linear blocks. 
Following standard practice for CV classifiers, each block uses batch normalization Ioffe and Szegedy (2015) and dropout, with ReLU activations for the intermediate layer and a softmax activation that outputs a probability distribution over target classes at the last layer. Note that the parameters in these task-specific classifier layers are the only ones that are learned from scratch. The first linear layer takes as the input the pooled last hidden layer states. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_23", "text": " The signal in text classification tasks is often contained in a few words, which may occur anywhere in the document. As input documents can consist of hundreds of words, information may get lost if we only consider the last hidden state of the model. For this reason, we concatenate the hidden state at the last time step 𝐡Tsubscript𝐡𝑇\\mathbf{h}_{T} of the document with both the max-pooled and the mean-pooled representation of the hidden states over as many time steps as fit in GPU memory 𝐇={𝐡1,…,𝐡T}𝐇subscript𝐡1…subscript𝐡𝑇\\mathbf{H}=\\{\\mathbf{h}_{1},\\ldots,\\mathbf{h}_{T}\\}: 𝐡c=(𝐡T,𝚖𝚊𝚡𝚙𝚘𝚘𝚕​(𝐇),𝚖𝚎𝚊𝚗𝚙𝚘𝚘𝚕​(𝐇))subscript𝐡𝑐subscript𝐡𝑇𝚖𝚊𝚡𝚙𝚘𝚘𝚕𝐇𝚖𝚎𝚊𝚗𝚙𝚘𝚘𝚕𝐇\\mathbf{h}_{c}=(\\mathbf{h}_{T},\\mathtt{maxpool}(\\mathbf{H}),\\mathtt{meanpool}(\\mathbf{H})) (4) where ()() is concatenation. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_24", "text": " Fine-tuning the target classifier is the most critical part of the transfer learning method. Overly aggressive fine-tuning will cause catastrophic forgetting, eliminating the benefit of the information captured through language modeling; too cautious fine-tuning will lead to slow convergence (and resultant overfitting). Besides discriminative fine-tuning and triangular learning rates, we propose gradual unfreezing for fine-tuning the classifier. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_25", "text": " Rather than fine-tuning all layers at once, which risks catastrophic forgetting, we propose to gradually unfreeze the model starting from the last layer as this contains the least general knowledge Yosinski et al. (2014): We first unfreeze the last layer and fine-tune all unfrozen layers for one epoch. We then unfreeze the next lower frozen layer and repeat, until we fine-tune all layers until convergence at the last iteration. This is similar to ‘chain-thaw’ Felbo et al. (2017), except that we add a layer at a time to the set of ‘thawed’ layers, rather than only training a single layer at a time. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_26", "text": " While discriminative fine-tuning, slanted triangular learning rates, and gradual unfreezing all are beneficial on their own, we show in Section 5 that they complement each other and enable our method to perform well across diverse datasets. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_27", "text": " Language models are trained with backpropagation through time (BPTT) to enable gradient propagation for large input sequences. In order to make fine-tuning a classifier for large documents feasible, we propose BPTT for Text Classification (BPT3C): We divide the document into fixed-length batches of size b𝑏b. 
At the beginning of each batch, the model is initialized with the final state of the previous batch; we keep track of the hidden states for mean and max-pooling; gradients are back-propagated to the batches whose hidden states contributed to the final prediction. In practice, we use variable length backpropagation sequences Merity et al. (2017a). ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_28", "text": " Similar to existing work Peters et al. (2017, 2018), we are not limited to fine-tuning a unidirectional language model. For all our experiments, we pretrain both a forward and a backward LM. We fine-tune a classifier for each LM independently using BPT3C and average the classifier predictions. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_29", "text": " While our approach is equally applicable to sequence labeling tasks, we focus on text classification tasks in this work due to their important real-world applications. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_30", "text": " We evaluate our method on six widely-studied datasets, with varying numbers of documents and varying document length, used by state-of-the-art text classification and transfer learning approaches Johnson and Zhang (2017); McCann et al. (2017) as instances of three common text classification tasks: sentiment analysis, question classification, and topic classification. We show the statistics for each dataset and task in Table 1. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_31", "text": " For sentiment analysis, we evaluate our approach on the binary movie review IMDb dataset Maas et al. (2011) and on the binary and five-class version of the Yelp review dataset compiled by Zhang et al. (2015). ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_32", "text": " We use the six-class version of the small TREC dataset Voorhees and Tice (1999) dataset of open-domain, fact-based questions divided into broad semantic categories. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_33", "text": " For topic classification, we evaluate on the large-scale AG news and DBpedia ontology datasets created by Zhang et al. (2015). ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_34", "text": " We use the same pre-processing as in earlier work Johnson and Zhang (2017); McCann et al. (2017). In addition, to allow the language model to capture aspects that might be relevant for classification, we add special tokens for upper-case words, elongation, and repetition. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_35", "text": " We are interested in a model that performs robustly across a diverse set of tasks. To this end, if not mentioned otherwise, we use the same set of hyperparameters across tasks, which we tune on the IMDb validation set. We use the AWD-LSTM language model Merity et al. (2017a) with an embedding size of 400400400, 333 layers, 115011501150 hidden activations per layer, and a BPTT batch size of 707070. 
We apply dropout of 0.40.40.4 to layers, 0.30.30.3 to RNN layers, 0.40.40.4 to input embedding layers, 0.050.050.05 to embedding layers, and weight dropout of 0.50.50.5 to the RNN hidden-to-hidden matrix. The classifier has a hidden layer of size 505050. We use Adam with β1=0.7subscript𝛽10.7\\beta_{1}=0.7 instead of the default β1=0.9subscript𝛽10.9\\beta_{1}=0.9 and β2=0.99subscript𝛽20.99\\beta_{2}=0.99, similar to Dozat and Manning (2017). We use a batch size of 646464, a base learning rate of 0.0040.0040.004 and 0.010.010.01 for fine-tuning the LM and the classifier respectively, and tune the number of epochs on the validation set of each task888On small datasets such as TREC-6, we fine-tune the LM only for 151515 epochs without overfitting, while we can fine-tune longer on larger datasets. We found 505050 epochs to be a good default for fine-tuning the classifier.. We otherwise use the same practices used in Merity et al. (2017a). ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_36", "text": " For each task, we compare against the current state-of-the-art. For the IMDb and TREC-6 datasets, we compare against CoVe McCann et al. (2017), a state-of-the-art transfer learning method for NLP. For the AG, Yelp, and DBpedia datasets, we compare against the state-of-the-art text categorization method by Johnson and Zhang (2017). ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_37", "text": " For consistency, we report all results as error rates (lower is better). We show the test error rates on the IMDb and TREC-6 datasets used by McCann et al. (2017) in Table 2. Our method outperforms both CoVe, a state-of-the-art transfer learning method based on hypercolumns, as well as the state-of-the-art on both datasets. On IMDb, we reduce the error dramatically by 43.9% and 22% with regard to CoVe and the state-of-the-art respectively. This is promising as the existing state-of-the-art requires complex architectures Peters et al. (2018), multiple forms of attention McCann et al. (2017) and sophisticated embedding schemes Johnson and Zhang (2016), while our method employs a regular LSTM with dropout. We note that the language model fine-tuning approach of Dai and Le (2015) only achieves an error of 7.64 vs. 4.6 for our method on IMDb, demonstrating the benefit of transferring knowledge from a large ImageNet-like corpus using our fine-tuning techniques. IMDb in particular is reflective of real-world datasets: Its documents are generally a few paragraphs long—similar to emails (e.g for legal discovery) and online comments (e.g for community management); and sentiment analysis is similar to many commercial applications, e.g. product response tracking and support email routing. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_38", "text": " On TREC-6, our improvement—similar as the improvements of state-of-the-art approaches—is not statistically significant, due to the small size of the 500-examples test set. Nevertheless, the competitive performance on TREC-6 demonstrates that our model performs well across different dataset sizes and can deal with examples that range from single sentences—in the case of TREC-6—to several paragraphs for IMDb. Note that despite pretraining on more than two orders of magnitude less data than the 7 million sentence pairs used by McCann et al. (2017), we consistently outperform their approach on both datasets. 
", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_39", "text": " We show the test error rates on the larger AG, DBpedia, Yelp-bi, and Yelp-full datasets in Table 3. Our method again outperforms the state-of-the-art significantly. On AG, we observe a similarly dramatic error reduction by 23.7% compared to the state-of-the-art. On DBpedia, Yelp-bi, and Yelp-full, we reduce the error by 4.8%, 18.2%, 2.0% respectively. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_40", "text": " In order to assess the impact of each contribution, we perform a series of analyses and ablations. We run experiments on three corpora, IMDb, TREC-6, and AG that are representative of different tasks, genres, and sizes. For all experiments, we split off 10%percent1010\\% of the training set and report error rates on this validation set with unidirectional LMs. We fine-tune the classifier for 505050 epochs and train all methods but ULMFiT with early stopping. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_41", "text": " One of the main benefits of transfer learning is being able to train a model for a task with a small number of labels. We evaluate ULMFiT on different numbers of labeled examples in two settings: only labeled examples are used for LM fine-tuning (‘supervised’); and all task data is available and can be used to fine-tune the LM (‘semi-supervised’). We compare ULMFiT to training from scratch—which is necessary for hypercolumn-based approaches. We split off balanced fractions of the training data, keep the validation set fixed, and use the same hyperparameters as before. We show the results in Figure 3. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_42", "text": " On IMDb and AG, supervised ULMFiT with only 100100100 labeled examples matches the performance of training from scratch with 10×10\\times and 20×20\\times more data respectively, clearly demonstrating the benefit of general-domain LM pretraining. If we allow ULMFiT to also utilize unlabeled examples (505050k for IMDb, 100100100k for AG), at 100100100 labeled examples, we match the performance of training from scratch with 50×50\\times and 100×100\\times more data on AG and IMDb respectively. On TREC-6, ULMFiT significantly improves upon training from scratch; as examples are shorter and fewer, supervised and semi-supervised ULMFiT achieve similar results. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_43", "text": " We compare using no pretraining with pretraining on WikiText-103 Merity et al. (2017b) in Table 4. Pretraining is most useful for small and medium-sized datasets, which are most common in commercial applications. However, even for large datasets, pretraining improves performance. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_44", "text": " In order to gauge the importance of choosing an appropriate LM, we compare a vanilla LM with the same hyperparameters without any dropout999To avoid overfitting, we only train the vanilla LM classifier for 555 epochs and keep dropout of 0.40.40.4 in the classifier. with the AWD-LSTM LM with tuned dropout parameters in Table 5. Using our fine-tuning techniques, even a regular LM reaches surprisingly good performance on the larger datasets. 
On the smaller TREC-6, a vanilla LM without dropout runs the risk of overfitting, which decreases performance. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_45", "text": " We compare no fine-tuning against fine-tuning the full model Erhan et al. (2010) (‘Full’), the most commonly used fine-tuning method, with and without discriminative fine-tuning (‘Discr’) and slanted triangular learning rates (‘Stlr’) in Table 6. Fine-tuning the LM is most beneficial for larger datasets. ‘Discr’ and ‘Stlr’ improve performance across all three datasets and are necessary on the smaller TREC-6, where regular fine-tuning is not beneficial. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_46", "text": " We compare training from scratch, fine-tuning the full model (‘Full’), only fine-tuning the last layer (‘Last’) Donahue et al. (2014), ‘Chain-thaw’ Felbo et al. (2017), and gradual unfreezing (‘Freez’). We furthermore assess the importance of discriminative fine-tuning (‘Discr’) and slanted triangular learning rates (‘Stlr’). We compare the latter to an alternative, aggressive cosine annealing schedule (‘Cos’) Loshchilov and Hutter (2017). We use a learning rate ηL=0.01superscript𝜂𝐿0.01\\eta^{L}=0.01 for ‘Discr’, learning rates of 0.0010.0010.001 and 0.00010.00010.0001 for the last and all other layers respectively for ‘Chain-thaw’ as in Felbo et al. (2017), and a learning rate of 0.0010.0010.001 otherwise. We show the results in Table 7. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_47", "text": " Fine-tuning the classifier significantly improves over training from scratch, particularly on the small TREC-6. ‘Last’, the standard fine-tuning method in CV, severely underfits and is never able to lower the training error to 00. ‘Chain-thaw’ achieves competitive performance on the smaller datasets, but is outperformed significantly on the large AG. ‘Freez’ provides similar performance as ‘Full’. ‘Discr’ consistently boosts the performance of ‘Full’ and ‘Freez’, except for the large AG. Cosine annealing is competitive with slanted triangular learning rates on large data, but under-performs on smaller datasets. Finally, full ULMFiT classifier fine-tuning (bottom row) achieves the best performance on IMDB and TREC-6 and competitive performance on AG. Importantly, ULMFiT is the only method that shows excellent performance across the board—and is therefore the only universal method. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_48", "text": " While our results demonstrate that how we fine-tune the classifier makes a significant difference, fine-tuning for inductive transfer is currently under-explored in NLP as it mostly has been thought to be unhelpful Mou et al. (2016). To better understand the fine-tuning behavior of our model, we compare the validation error of the classifier fine-tuned with ULMFiT and ‘Full’ during training in Figure 4. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_49", "text": " On all datasets, fine-tuning the full model leads to the lowest error comparatively early in training, e.g. already after the first epoch on IMDb. The error then increases as the model starts to overfit and knowledge captured through pretraining is lost. 
In contrast, ULMFiT is more stable and suffers from no such catastrophic forgetting; performance remains similar or improves until late epochs, which shows the positive effect of the learning rate schedule. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_50", "text": " At the cost of training a second model, ensembling the predictions of a forward and backwards LM-classifier brings a performance boost of around 0.50.50.5–0.70.70.7. On IMDb we lower the test error from 5.305.305.30 of a single model to 4.584.584.58 for the bidirectional model. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_51", "text": " While we have shown that ULMFiT can achieve state-of-the-art performance on widely used text classification tasks, we believe that language model fine-tuning will be particularly useful in the following settings compared to existing transfer learning approaches Conneau et al. (2017); McCann et al. (2017); Peters et al. (2018): a) NLP for non-English languages, where training data for supervised pretraining tasks is scarce; b) new NLP tasks where no state-of-the-art architecture exists; and c) tasks with limited amounts of labeled data (and some amounts of unlabeled data). ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_52", "text": " Given that transfer learning and particularly fine-tuning for NLP is under-explored, many future directions are possible. One possible direction is to improve language model pretraining and fine-tuning and make them more scalable: for ImageNet, predicting far fewer classes only incurs a small performance drop Huh et al. (2016), while recent work shows that an alignment between source and target task label sets is important Mahajan et al. (2018)—focusing on predicting a subset of words such as the most frequent ones might retain most of the performance while speeding up training. Language modeling can also be augmented with additional tasks in a multi-task learning fashion Caruana (1993) or enriched with additional supervision, e.g. syntax-sensitive dependencies Linzen et al. (2016) to create a model that is more general or better suited for certain downstream tasks, ideally in a weakly-supervised manner to retain its universal properties. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_53", "text": " Another direction is to apply the method to novel tasks and models. While an extension to sequence labeling is straightforward, other tasks with more complex interactions such as entailment or question answering may require novel ways to pretrain and fine-tune. Finally, while we have provided a series of analyses and ablations, more studies are required to better understand what knowledge a pretrained language model captures, how this changes during fine-tuning, and what information different tasks require. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_54", "text": " We have proposed ULMFiT, an effective and extremely sample-efficient transfer learning method that can be applied to any NLP task. We have also proposed several novel fine-tuning techniques that in conjunction prevent catastrophic forgetting and enable robust learning across a diverse range of tasks. 
Our method significantly outperformed existing transfer learning techniques and the state-of-the-art on six representative text classification tasks. We hope that our results will catalyze new developments in transfer learning for NLP. ", "title": "Universal Language Model Fine-tuning for Text Classification" } ]
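The discriminative fine-tuning update (Eq. 2) and the slanted triangular learning rate schedule (Eq. 3) quoted in the passages above translate directly into code. The sketch below is a minimal illustration under stated assumptions, not the authors' released implementation: the helper names `stlr` and `discriminative_groups` are invented here for illustration, and a stack of `torch.nn.Linear` layers stands in for the 3-layer AWD-LSTM.

```python
import torch


def stlr(t, T, cut_frac=0.1, ratio=32, eta_max=0.01):
    """Slanted triangular learning rate at iteration t (Eq. 3 of the passage)."""
    cut = int(T * cut_frac)
    if t < cut:
        p = t / cut
    else:
        p = 1 - (t - cut) / (cut * (1 / cut_frac - 1))
    return eta_max * (1 + p * (ratio - 1)) / ratio


def discriminative_groups(layers, eta_last=0.01, decay=2.6):
    """One optimizer group per layer with eta^{l-1} = eta^l / 2.6 (Eq. 2 of the passage).

    Groups are ordered from the last (topmost) layer down to the first.
    """
    groups, eta = [], eta_last
    for layer in reversed(list(layers)):
        groups.append({"params": list(layer.parameters()), "lr": eta})
        eta /= decay
    return groups


# Usage sketch with a stand-in 3-layer model.
layers = [torch.nn.Linear(16, 16) for _ in range(3)]
groups = discriminative_groups(layers, eta_last=0.01)
base_lrs = [g["lr"] for g in groups]          # remember the per-layer base LRs
optimizer = torch.optim.SGD(groups, lr=0.01)
T = 1000
for t in range(T):
    scale = stlr(t, T) / 0.01                 # schedule value relative to eta_max
    for g, base in zip(optimizer.param_groups, base_lrs):
        g["lr"] = base * scale                # keep per-layer ratios, follow the schedule
    # ... forward pass, loss.backward(), optimizer.step(), optimizer.zero_grad() ...
```

Each layer gets its own parameter group so that the 1/2.6 ratio between adjacent layers is preserved, while the slanted triangular schedule rescales all groups jointly at every iteration.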
Can kernel functions other than the exponential kernel be applied?
Any kernel that compares a pair of subgraphs can replace the exponential kernel [15]. The reason is that \kappa_{\text{graph}} is defined using \kappa_{\exp} in P6, while P2 states that \kappa_{\text{graph}} can be replaced by any other kernel that is able to compare a pair of subgraphs [16].
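For illustration, the kernel swap described in this answer only changes the pairwise similarity used to weight the attention (Eq. 5 in the retrieved passages). The NumPy sketch below is schematic rather than the paper's implementation: `phi` stands in for precomputed subgraph representations \varphi(u, G) from some structure extractor, and both kernels are simplified (no learned W_Q / W_K projections).

```python
import numpy as np


def structure_aware_attention(subgraph_reprs, values, kernel):
    """Eq. (5)-style kernel smoother: attention weights come from any kernel
    on subgraph representations, normalized per query node."""
    K = kernel(subgraph_reprs, subgraph_reprs)      # (n, n) pairwise kernel values
    weights = K / K.sum(axis=1, keepdims=True)      # row-normalize over keys
    return weights @ values                         # smoothed node features


def exp_kernel(A, B, scale=1.0):
    """Dot-product exponential kernel, a simplified stand-in for kappa_exp."""
    return np.exp(A @ B.T / scale)


def rbf_kernel(A, B, gamma=0.5):
    """A drop-in alternative: Gaussian RBF kernel on subgraph representations."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)


# Toy usage: 5 nodes, 8-dimensional subgraph representations and values.
rng = np.random.default_rng(0)
phi = rng.normal(size=(5, 8))   # stand-in for phi(u, G) from a structure extractor
v = rng.normal(size=(5, 8))     # stand-in for the value features f(x_u)
out_exp = structure_aware_attention(phi, v, exp_kernel)
out_rbf = structure_aware_attention(phi, v, rbf_kernel)  # same smoother, different kernel
```

Because the smoother only consumes the pairwise kernel matrix, exchanging `exp_kernel` for `rbf_kernel` (or any other kernel that compares subgraphs) leaves the rest of the attention computation unchanged.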
[ 15, 16 ]
[ { "id": "2202.03036_all_0", "text": " Graph neural networks (GNNs) have been established as powerful and flexible tools for graph representation learning, with successful applications in drug discovery (Gaudelet et al., 2021), protein design (Ingraham et al., 2019), social network analysis (Fan et al., 2019), and so on. A large class of GNNs build multilayer models, where each layer operates on the previous layer to generate new representations using a message-passing mechanism (Gilmer et al., 2017) to aggregate local neighborhood information. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_1", "text": " While many different message-passing strategies have been proposed, some critical limitations have been uncovered in this class of GNNs. These include the limited expressiveness of GNNs (Xu et al., 2019; Morris et al., 2019), as well as known problems such as over-smoothing (Li et al., 2018, 2019; Chen et al., 2020; Oono & Suzuki, 2020) and over-squashing (Alon & Yahav, 2021). Over-smoothing manifests as all node representations converging to a constant after sufficiently many layers, while over-squashing occurs when messages from distant nodes are not effectively propagated through certain “bottlenecks” in a graph, since too many messages get compressed into a single fixed-length vector. Designing new architectures beyond neighborhood aggregation is thus essential to solve these problems. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_2", "text": " Transformers (Vaswani et al., 2017), which have proved to be successful in natural language understanding (Vaswani et al., 2017), computer vision (Dosovitskiy et al., 2020), and biological sequence modeling (Rives et al., 2021), offer the potential to address these issues. Rather than only aggregating local neighborhood information in the message-passing mechanism, the Transformer architecture is able to capture interaction information between any node pair via a single self-attention layer. Moreover, in contrast to GNNs, the Transformer avoids introducing any structural inductive bias at intermediate layers, addressing the expressivity limitation of GNNs. Instead, it encodes structural or positional information about nodes only into input node features, albeit limiting how much information it can learn from the graph structure. Integrating information about the graph structure into the Transformer architecture has thus gained growing attention in the graph representation learning field. However, most existing approaches only encode positional relationships between nodes, rather than explicitly encoding the structural relationships. As a result, they may not identify structural similarities between nodes and could fail to model the structural interaction between nodes (see Figure 1). This could explain why their performance was dominated by sparse GNNs in several tasks (Dwivedi et al., 2022). ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_3", "text": " In this work, we address the critical question of how to encode structural information into a Transformer architecture. Our principal contribution is to introduce a flexible structure-aware self-attention mechanism that explicitly considers the graph structure and thus captures structural interaction between nodes. 
The resulting class of Transformers, which we call the Structure-Aware Transformer (SAT), can provide structure-aware representations of graphs, in contrast to most existing position-aware Transformers for graph-structured data. Specifically: • We reformulate the self-attention mechanism in Vaswani et al. (2017) as a kernel smoother and extend the original exponential kernel on node features to also account for local structures, by extracting a subgraph representation centered around each node. • We propose several methods for automatically generating the subgraph representations, enabling the resulting kernel smoother to simultaneously capture structural and attributed similarities between nodes. The resulting representations are theoretically guaranteed to be at least as expressive as the subgraph representations. • We demonstrate the effectiveness of SAT models on five graph and node property prediction benchmarks by showing it achieves better performance than state-of-the-art GNNs and Transformers. Furthermore, we show how SAT can easily leverage any GNN to compute the node representations which incorporate subgraph information and outperform the base GNN, making it an effortless enhancer of any existing GNN. • Finally, we show that we can attribute the performance gains to the structure-aware aspect of our architecture, and showcase how SAT is more interpretable than the classic Transformer with an absolute encoding. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_4", "text": " We will present the related work and relevant background in Sections 2 and 3 before presenting our method in Section 4 and our experimental findings in Section 5. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_5", "text": " We present here the work most related to ours, namely the work stemming from message passing GNNs, positional representations on graphs, and graph Transformers. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_6", "text": " Message passing graph neural networks have recently been one of the leading methods for graph representation learning. An early seminal example is the GCN (Kipf & Welling, 2017), which was based on performing convolutions on the graph. Gilmer et al. (2017) reformulated the early GNNs into a framework of message passing GNNs, which has since then become the predominant framework of GNNs in use today, with extensive examples (Hamilton et al., 2017; Xu et al., 2019; Corso et al., 2020; Hu et al., 2020b; Veličković et al., 2018; Li et al., 2020a; Yang et al., 2022). However, as mentioned above, they suffer from problems of limited expressiveness, over-smoothing, and over-squashing. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_7", "text": " Because of the limited expressiveness of GNNs, there has been some recent research into the use of absolute encoding (Shaw et al., 2018), which consists of adding or concatenating positional or structural representations to the input node features. While it is often called an absolute positional encoding, we refer to it more generally as an absolute encoding to include both positional and structural encoding, which are both important in graph modeling. Absolute encoding primarily considers position or location relationships between nodes. 
Examples of position-based methods include the Laplacian positional encoding (Dwivedi & Bresson, 2021; Kreuzer et al., 2021), Weisfeiler–Lehman-based positional encoding (Zhang et al., 2020), and random walk positional encoding (RWPE) (Li et al., 2020b; Dwivedi et al., 2022), while distance-based methods include distances to a predefined set of nodes (You et al., 2019) and shortest path distances between pairs of nodes (Zhang et al., 2020; Li et al., 2020b). Dwivedi et al. (2022) extend these ideas by using a trainable absolute encoding. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_8", "text": " While the absolute encoding methods listed above can be used with message passing GNNs, they also play a crucial role in the (graph) Transformer architecture. Graph Transformer (Dwivedi & Bresson, 2021) provided an early example of how to generalize the Transformer architecture to graphs, using Laplacian eigenvectors as an absolute encoding and computing attention on the immediate neighborhood of each node, rather than on the full graph. SAN (Kreuzer et al., 2021) also used the Laplacian eigenvectors for computing an absolute encoding, but computed attention on the full graph, while distinguishing between true and created edges. Many graph Transformer methods also use a relative encoding (Shaw et al., 2018) in addition to absolute encoding. This strategy incorporates representations of the relative position or distances between nodes on the graph directly into the self-attention mechanism, as opposed to the absolute encoding which is only applied once to the input node features. Mialon et al. (2021) propose a relative encoding by means of kernels on graphs to bias the self-attention calculation, which is then able to incorporate positional information into Transformers via the choice of kernel function. Other recent work seeks to incorporate structural information into the graph Transformer, for example by encoding some carefully selected graph theoretic properties such as centrality measures and shortest path distances as positional representations (Ying et al., 2021) or by using GNNs to integrate the graph structure (Rong et al., 2020; Jain et al., 2021; Mialon et al., 2021; Shi et al., 2021). ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_9", "text": " In this work, we combine the best of both worlds from message passing GNNs and from the Transformer architecture. We incorporate both an absolute as well as a novel relative encoding that explicitly incorporates the graph structure, thereby designing a Transformer architecture that takes both local and global information into account. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_10", "text": " In the following, we refer to a graph as G=(V,E,𝐗)𝐺𝑉𝐸𝐗G=(V,E,\\mathbf{X}), where the node attributes for node u∈V𝑢𝑉u\\in V is denoted by xu∈𝒳⊂dsubscript𝑥𝑢𝒳superscript𝑑absentx_{u}\\in{\\mathcal{X}}\\subset^{d} and the node attributes for all nodes are stored in 𝐗∈n×dsuperscript𝑛𝑑𝐗absent\\mathbf{X}\\in^{n\\times d} for a graph with n𝑛n nodes. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_11", "text": " While GNNs use the graph structure explicitly, Transformers remove that explicit structure, and instead infer relations between nodes by leveraging the node attributes. 
In this sense, the Transformer (Vaswani et al., 2017) ignores the graph structure and rather considers the graph as a (multi-) set of nodes, and uses the self-attention mechanism to infer the similarity between nodes. The Transformer itself is composed of two main blocks: a self-attention module followed by a feed-forward neural network. In the self-attention module, the input node features \mathbf{X} are first projected to query (\mathbf{Q}), key (\mathbf{K}) and value (\mathbf{V}) matrices through a linear projection such that \mathbf{Q} = \mathbf{X}\mathbf{W_Q}, \mathbf{K} = \mathbf{X}\mathbf{W_K} and \mathbf{V} = \mathbf{X}\mathbf{W_V} respectively. We can compute the self-attention via \mathrm{Attn}(\mathbf{X}) := \mathrm{softmax}(\mathbf{Q}\mathbf{K}^T / \sqrt{d_{out}})\,\mathbf{V} \in \mathbb{R}^{n \times d_{out}} (1), where d_{out} refers to the dimension of \mathbf{Q}, and \mathbf{W_Q}, \mathbf{W_K}, \mathbf{W_V} are trainable parameters. It is common to use multi-head attention, which concatenates multiple instances of Eq. (1) and has shown to be effective in practice (Vaswani et al., 2017). Then, the output of the self-attention is followed by a skip-connection and a feed-forward network (FFN), which jointly compose a Transformer layer, as shown below: \mathbf{X}' = \mathbf{X} + \mathrm{Attn}(\mathbf{X}), \quad \mathbf{X}'' = \mathrm{FFN}(\mathbf{X}') := \mathrm{ReLU}(\mathbf{X}' W_1) W_2 (2). Multiple layers can be stacked to form a Transformer model, which ultimately provides node-level representations of the graph. As the self-attention is equivariant to permutations of the input nodes, the Transformer will always generate the same representations for nodes with the same attributes regardless of their locations and surrounding structures in the graph. It is thus necessary to incorporate such information into the Transformer, generally via absolute encoding. ", "title": "Structure-Aware Transformer for Graph Representation Learning" },
{ "id": "2202.03036_all_12", "text": " Absolute encoding refers to adding or concatenating the positional or structural representations of the graph to the input node features before the main Transformer model, such as the Laplacian positional encoding (Dwivedi & Bresson, 2021) or RWPE (Dwivedi et al., 2022). The main shortcoming of these encoding methods is that they generally do not provide a measure of the structural similarity between nodes and their neighborhoods. ", "title": "Structure-Aware Transformer for Graph Representation Learning" },
{ "id": "2202.03036_all_13", "text": " As noticed by Mialon et al. (2021), the self-attention in Eq. (1) can be rewritten as a kernel smoother \mathrm{Attn}(x_v) = \sum_{u \in V} \frac{\kappa_{\exp}(x_v, x_u)}{\sum_{w \in V} \kappa_{\exp}(x_v, x_w)} f(x_u), \forall v \in V (3), where f(x) = \mathbf{W_V} x is the linear value function and \kappa_{\exp} is a (non-symmetric) exponential kernel on \mathbb{R}^d \times \mathbb{R}^d parameterized by \mathbf{W_Q} and \mathbf{W_K}: \kappa_{\exp}(x, x') := \exp(\langle \mathbf{W_Q} x, \mathbf{W_K} x' \rangle / \sqrt{d_{out}}) (4), where \langle \cdot, \cdot \rangle is the dot product on \mathbb{R}^d. With this form, Mialon et al. (2021) propose a relative positional encoding strategy via the product of this kernel and a diffusion kernel on the graph, which consequently captures the positional similarity between nodes. However, this method is only position-aware, in contrast to our structure-aware encoding that will be presented in Section 4. ", "title": "Structure-Aware Transformer for Graph Representation Learning" },
{ "id": "2202.03036_all_14", "text": " In this section, we will describe how to encode the graph structure into the self-attention mechanism and provide a class of Transformer models based on this framework. ", "title": "Structure-Aware Transformer for Graph Representation Learning" },
{ "id": "2202.03036_all_15", "text": " As presented above, self-attention in the Transformer can be rewritten as a kernel smoother where the kernel is a trainable exponential kernel defined on node features, and which only captures attributed similarity between a pair of nodes. The problem with this kernel smoother is that it cannot filter out nodes that are structurally different from the node of interest when they have the same or similar node features. In order to also incorporate the structural similarity between nodes, we consider a more generalized kernel that additionally accounts for the local substructures around each node. By introducing a set of subgraphs centered at each node, we define our structure-aware attention as: \text{SA-attn}(v) := \sum_{u \in V} \frac{\kappa_{\text{graph}}(S_G(v), S_G(u))}{\sum_{w \in V} \kappa_{\text{graph}}(S_G(v), S_G(w))} f(x_u) (5), where S_G(v) denotes a subgraph in G centered at a node v associated with node features \mathbf{X} and \kappa_{\text{graph}} can be any kernel that compares a pair of subgraphs. This new self-attention function not only takes the attributed similarity into account but also the structural similarity between subgraphs. It thus generates more expressive node representations than the original self-attention, as we will show in Section 4.4. Moreover, this self-attention is no longer equivariant to any permutation of nodes but only to nodes whose features and subgraphs coincide, which is a desirable property. ", "title": "Structure-Aware Transformer for Graph Representation Learning" },
{ "id": "2202.03036_all_16", "text": " In the rest of the paper, we will consider the following form of \kappa_{\text{graph}} that already includes a large class of expressive and computationally tractable models: \kappa_{\text{graph}}(S_G(v), S_G(u)) = \kappa_{\exp}(\varphi(v, G), \varphi(u, G)) (6), where \varphi(u, G) is a structure extractor that extracts vector representations of some subgraph centered at u with node features \mathbf{X}. We provide several alternatives of the structure extractor below. It is worth noting that our structure-aware self-attention is flexible enough to be combined with any model that generates representations of subgraphs, including GNNs and (differentiable) graph kernels. For notational simplicity, we assume there are no edge attributes, but our method can easily incorporate edge attributes as long as the structure extractor can accommodate them. The edge attributes are consequently not considered in the self-attention computation, but are incorporated into the structure-aware node representations. In the structure extractors presented in this paper, this means that edge attributes were included whenever the base GNN was able to handle edge attributes. ", "title": "Structure-Aware Transformer for Graph Representation Learning" },
{ "id": "2202.03036_all_17", "text": " A straightforward way to extract local structural information at node u is to apply any existing GNN model to the input graph with node features \mathbf{X} and take the output node representation at u as the subgraph representation at u. More formally, if we denote by \text{GNN}_G^{(k)} an arbitrary GNN model with k layers applied to G with node features \mathbf{X}, then \varphi(u, G) = \text{GNN}_G^{(k)}(u) (7). This extractor is able to represent the k-subtree structure rooted at u (Xu et al., 2019). While this class of structure extractors is fast to compute and can flexibly leverage any existing GNN, they cannot be more expressive than the Weisfeiler–Lehman test due to the expressiveness limitation of message passing GNNs (Xu et al., 2019). In practice, a small value of k already leads to good performance, while not suffering from over-smoothing or over-squashing. ", "title": "Structure-Aware Transformer for Graph Representation Learning" },
{ "id": "2202.03036_all_18", "text": " A more expressive extractor is to use a GNN to directly compute the representation of the entire k-hop subgraph centered at u rather than just the node representation u. Recent work has explored the idea of using subgraphs rather than subtrees around a node in GNNs, with positive experimental results (Zhang & Li, 2021; Wijesinghe & Wang, 2022), as well as being strictly more powerful than the 1-WL test (Zhang & Li, 2021). We follow the same setup as is done in Zhang & Li (2021), and adapt our GNN extractor to utilize the entire k-hop subgraph. The k-subgraph GNN extractor aggregates the updated node representations of all nodes within the k-hop neighborhood using a pooling function such as summation. 
Formally, if we denote by 𝒩k​(u)subscript𝒩𝑘𝑢{\\mathcal{N}}_{k}(u) the k𝑘k-hop neighborhood of node u𝑢u including itself, the representation of a node u𝑢u is: φ​(u,G)=∑v∈𝒩k​(u)GNNG(k)​(v).𝜑𝑢𝐺subscript𝑣subscript𝒩𝑘𝑢subscriptsuperscriptGNN𝑘𝐺𝑣\\varphi(u,G)=\\sum_{v\\in{\\mathcal{N}}_{k}(u)}\\text{GNN}^{(k)}_{G}(v). (8) ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_19", "text": " We observe that prior to the pooling function, the k𝑘k-subgraph GNN extractor is equivalent to using the k𝑘k-subtree GNN extractor within each k𝑘k-hop subgraph. So as to capture the attributed similarity as well as structural similarity, we augment the node representation from k𝑘k-subgraph GNN extractor with the original node features via concatenation. While this extractor provides more expressive subgraph representations than the k𝑘k-subtree extractor, it requires enumerating all k𝑘k-hop subgraphs, and consequently does not scale as well as the k𝑘k-subtree extractor to large datasets. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_20", "text": " Finally, we present a list of other potential structure extractors for different purposes. One possible choice is to directly learn a number of “hidden graphs” as the “anchor subgraphs” to represent subgraphs for better model interpretability, by using the concepts introduced in Nikolentzos & Vazirgiannis (2020). While Nikolentzos & Vazirgiannis (2020) obtain a vector representation of the input graph by counting the number of matching walks between the whole graph and each of the hidden graphs, one could extend this to the node level by comparing the hidden graphs to the k𝑘k-hop subgraph centered around each node. The adjacency matrix of the hidden graphs is a trainable parameter in the network, thereby enabling end-to-end training to identify which subgraph structures are predictive. Then, for a trained model, visualizing the learned hidden graphs provides useful insights about the structural motifs in the dataset. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_21", "text": " Furthermore, more domain-specific GNNs could also be used to extract potentially more expressive subgraph representations. For instance, Bodnar et al. (2021) recently proposed a new kind of message passing scheme operating on regular cell complexes which benefits from provably stronger expressivity for molecules. Our self-attention mechanism can fully benefit from the development of more domain-specific and expressive GNNs. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_22", "text": " Finally, another possible structure extractor is to use a non-parametric graph kernel (e.g. a Weisfeiler-Lehman graph kernel) on the k𝑘k-hop subgraphs centered around each node. This provides a flexible way to combine graph kernels and deep learning, which might offer new theoretical insights into the link between the self-attention and kernel methods. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_23", "text": " Having defined our structure-aware self-attention function, the other components of the Structure-Aware Transformer follow the Transformer architecture as described in Section 3.1; see Figure 2 for a visual overview. 
Specifically, the self-attention function is followed by a skip-connection, a FFN and two normalization layers before and after the FFN. In addition, we also include the degree factor in the skip-connection, which was found useful for reducing the overwhelming influence of highly connected graph components (Mialon et al., 2021), i.e., xv′=xv+1/dv​SA-attn​(v),superscriptsubscript𝑥𝑣′subscript𝑥𝑣1subscript𝑑𝑣SA-attn𝑣x_{v}^{\\prime}=x_{v}+1/\\sqrt{d_{v}}\\,\\text{SA-attn}(v), (9) where dvsubscript𝑑𝑣d_{v} denotes the degree of node v𝑣v. After a Transformer layer, we obtain a new graph with the same structure but different node features G′=(V,E,𝐗′)superscript𝐺′𝑉𝐸superscript𝐗′G^{\\prime}=(V,E,\\mathbf{X}^{\\prime}), where 𝐗′superscript𝐗′\\mathbf{X}^{\\prime} corresponds to the output of the Transformer layer. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_24", "text": " Finally, for graph property prediction, there are various ways to aggregate node-level representations into a graph representation, such as by taking the average or sum. Alternatively, one can use the embedding of a virtual (CLS) node (Jain et al., 2021) that is attached to the input graph without any connectivity to other nodes. We compare these approaches in Section 5. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_25", "text": " While the self-attention in Eq. (5) is structure-aware, most absolute encoding techniques are only position-aware and could therefore provide complementary information. Indeed, we find that the combination leads to further performance improvements, which we show in Section 5. We choose to use the RWPE (Dwivedi et al., 2022), though any other absolute positional representations, including learnable ones, can also be used. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_26", "text": " We further argue that only using absolute positional encoding with the Transformer would exhibit a too relaxed structural inductive bias which is not guaranteed to generate similar node representations even if two nodes have similar local structures. This is due to the fact that distance or Laplacian-based positional representations generally serve as structural or positional signatures but do not provide a measure of structural similarity between nodes, especially in the inductive case where two nodes are from different graphs. This is also empirically affirmed in Section 5 by their relatively worse performance without using our structural encoding. In contrast, the subgraph representations used in the structure-aware attention can be tailored to measure the structural similarity between nodes, and thus generate similar node-level representations if they possess similar attributes and surrounding structures. We can formally state this in the following theorem: ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_27", "text": " The proof is provided in the Appendix. The metric D𝐷D is an optimal matching metric between two multisets which measures how different they are. This theorem shows that two node representations from the SA-attn are similar if the graphs that they belong to have similar multisets of node features and subgraph representations overall, and at the same time, the subgraph representations at these two nodes are similar. In particular, if two nodes belong to the same graph, i.e. 
G=G′𝐺superscript𝐺′G=G^{\\prime}, then the second and last terms on the right side of Eq. (10) are equal to zero and the distance between their representations is thus constrained by the distance between their corresponding subgraph representations. However, for Transformers with absolute positional encoding, the distance between two node representations is not constrained by their structural similarity, as the distance between two positional representations does not necessarily characterize how structurally similar two nodes are. Despite stronger inductive biases, we will show that our model is still sufficiently expressive in the next section. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_28", "text": " The expressive power of graph Transformers compared to classic GNNs has hardly been studied, since the soft structural inductive bias introduced in absolute encoding is generally hard to characterize. Thanks to the unique design of our SAT, which relies on a subgraph structure extractor, it becomes possible to study the expressiveness of the output representations. More specifically, we formally show that the node representation from a structure-aware attention layer is at least as expressive as its subgraph representation given by the structure extractor, following the injectivity of the attention function with respect to the query: ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_29", "text": " Note that the assumptions made in the theorem are mild as one can always add some absolute encoding or random noise to make the attributes of one node different from all other nodes, and similarly for subgraph representations. The countable assumption on 𝒳𝒳{\\mathcal{X}} is generally adopted for expressivity analysis of GNNs (e.g. Xu et al. (2019)). We assume f𝑓f to be any mapping rather than just a linear function as in the definition of the self-attention function since it can be practically approximated by a FFN in multi-layer Transformers through the universal approximation theorem (Hornik, 1991). Theorem 2 suggests that if the structure extractor is sufficiently expressive, the resulting SAT model can also be at least equally expressive. Furthermore, more expressive extractors could lead to more expressively powerful SAT models and thus better prediction performance, which is also empirically confirmed in Section 5. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_30", "text": " In this section, we evaluate SAT models versus several SOTA methods for graph representation learning, including GNNs and Transformers, on five graph and node prediction tasks, as well as analyze the different components of our architecture to identify what drives the performance. In summary, we discovered the following aspects about SAT: ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_31", "text": " • The structure-aware framework achieves SOTA performance on graph and node classification tasks, outperforming SOTA graph Transformers and sparse GNNs. • Both instances of the SAT, namely k𝑘k-subtree and k𝑘k-subgraph SAT, always improve upon the base GNN it is built upon, highlighting the improved expressiveness of our structure-aware approach. 
• We show that incorporating the structure via our structure-aware attention brings a notable improvement relative to the vanilla Transformer with RWPE that just uses node attribute similarity instead of also incorporating structural similarity. We also show that a small value of k𝑘k already leads to good performance, while not suffering from over-smoothing or over-squashing. • We show that choosing a proper absolute positional encoding and a readout method improves performance, but to a much lesser extent than incorporating the structure into the approach. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_32", "text": " Furthermore, we note that SAT achieves SOTA performance while only considering a small hyperparameter search space. Performance could likely be further improved with more hyperparameter tuning. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_33", "text": " We assess the performance of our method with five medium to large benchmark datasets for node and graph property prediction, including ZINC (Dwivedi et al., 2020), CLUSTER (Dwivedi et al., 2020), PATTERN (Dwivedi et al., 2020), OGBG-PPA (Hu et al., 2020a) and OGBG-CODE2 (Hu et al., 2020a). ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_34", "text": " We compare our method to the following GNNs: GCN (Kipf & Welling, 2017), GraphSAGE (Hamilton et al., 2017), GAT (Veličković et al., 2018), GIN (Xu et al., 2019), PNA (Corso et al., 2020), DeeperGCN  (Li et al., 2020a), and ExpC (Yang et al., 2022). Our comparison partners also include several recently proposed Transformers on graphs, including the original Transformer with RWPE (Dwivedi et al., 2022), Graph Transformer (Dwivedi & Bresson, 2021), SAN (Kreuzer et al., 2021), Graphormer (Ying et al., 2021) and GraphTrans (Jain et al., 2021), a model that uses the vanilla Transformer on top of a GNN. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_35", "text": " All results for the comparison methods are either taken from the original paper or from Dwivedi et al. (2020) if not available. We consider k𝑘k-subtree and k𝑘k-subgraph SAT equipped with different GNN extractors, including GCN, GIN, GraphSAGE and PNA. For OGBG-PPA and OGBG-CODE2, we do not run experiments for k𝑘k-subgraph SAT models due to large memory requirements. Full details on the datasets, experimental setup, and hyperparameters are provided in the Appendix. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_36", "text": " We show the performance of SATs compared to other GNNs and Transformers in Table 1 and 2. SAT models consistently outperform SOTA methods on these datasets, showing its ability to combine the benefits of both GNNs and Transformers. In particular, for the CODE2 dataset, our SAT models outperform SOTA methods by a large margin despite a relatively small number of parameters and minimal hyperparameter tuning, which will put it at the first place on the OGB leaderboard. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_37", "text": " Table 3 summarizes the performance of SAT relative to the sparse GNN it uses to extract the subgraph representations, across different GNNs. 
We observe that both variations of SAT consistently bring large performance gains to its base GNN counterpart, making it a systematic enhancer of any GNN model. Furthermore, PNA, which is the most expressive GNN we considered, has consistently the best performance when used with SAT, empirically validating our theoretical finding in Section 4.4. k𝑘k-subgraph SAT also outperforms or performs equally as k𝑘k-subtree SAT in almost all the cases, showing its superior expressiveness. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_38", "text": " While Table 3 showcases the added value of the SAT relative to sparse GNNs, we now dissect the components of SAT on the ZINC dataset to identify which aspects of the architecture bring the biggest performance gains. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_39", "text": " The key contribution of SAT is its ability to explicitly incorporate structural information in the self-attention. Here, we seek to demonstrate that this information provides crucial predictive information, and study how the choice of k𝑘k affects the results. Figure 3(a) shows how the test MAE is impacted by varying k𝑘k for k𝑘k-subtree and k𝑘k-subgraph extractors using PNA on the ZINC dataset. All models use the RWPE. k=0𝑘0k=0 corresponds to the vanilla Transformer only using absolute positional encoding, i.e. not using structure. We find that incorporating structural information leads to substantial improvement in performance, with optimal performance around k=3𝑘3k=3 for both k𝑘k-subtree and k𝑘k-subgraph extractors. As k𝑘k increases beyond k=4𝑘4k=4, the performance in k𝑘k-subtree extractors deteriorated, which is consistent with the observed phenomenon that GNNs work best in shallower networks (Kipf & Welling, 2017). We observe that k𝑘k-subgraph does not suffer as much from this issue, underscoring a new aspect of its usefulness. On the other hand, k𝑘k-subtree extractors are more computationally efficient and scalable to larger OGB datasets. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_40", "text": " We assess here whether the absolute encoding brought complementary information to SAT. In Figure 3(b), we conduct an ablation study showing the results of SAT with and without absolute positional encoding, including RWPE and Laplacian PE (Dwivedi et al., 2020). Our SAT with a positional encoding outperforms its counterpart without it, confirming the complementary nature of the two encodings. However, we also note that the performance gain brought by the absolute encoding is far less than the gain obtained by using our structure-aware attention, as shown in Figure 3(a) (comparing the instance of k=0𝑘0k=0 to k>0𝑘0k>0), emphasizing that our structure-aware attention is the more important aspect of the model. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_41", "text": " Finally, we compare the performance of SAT models using different readout methods for aggregating node-level representations on the ZINC dataset in Figure 3(c), including the CLS pooling discussed in Section 4.2. Unlike the remarkable influence of the readout method in GNNs (Xu et al., 2019), we observe very little impact in SAT models. 
", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_42", "text": " In addition to performance improvement, we show that SAT offers better model interpretability compared to the classic Transformer with only absolute positional encoding. We respectively train a SAT model and a Transformer with a CLS readout on the Mutagenicity dataset, and visualize the attention scores between the (CLS) node and other nodes learned by SAT and the Transformer in Figure 4. The salient difference between the two models is that SAT has structure-aware node embeddings, and thus we can attribute the following interpretability gains to that. While both models manage to identify some chemical motifs known for mutagenicity, such as NO2 and NH2, the attention scores learned by SAT are sparser and more informative, meaning that SAT puts more attention weights on these known mutagenic motifs than the Transformer with RWPE. The vanilla Transformer even fails to put attention on some important atoms such as the H atoms in the NH2 group. The only H atoms highlighted by SAT are those in the NH2 group, suggesting that our SAT indeed takes the structure into account. More focus on these discriminative motifs makes the SAT model less influenced by other chemical patterns that commonly exist in the dataset, such as benzene, and thus leads to overall improved performance. More results are provided in the Appendix. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_43", "text": " We introduced the SAT model, which successfully incorporates structural information into the Transformer architecture and overcomes the limitations of the absolute encoding. In addition to SOTA empirical performance with minimal hyperparameter tuning, SAT also provides better interpretability than the Transformer. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_44", "text": " As mentioned above, k𝑘k-subgraph SAT has higher memory requirements than k𝑘k-subtree SAT, which can restrict its applicability if access to high memory GPUs is restricted. We see the main limitation of SAT is that it suffers from the same drawbacks as the Transformer, namely the quadratic complexity of the self-attention computation. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_45", "text": " Because SAT can be combined with any GNN, a natural extension of our work is to combine SAT with structure extractors which have shown to be strictly more expressive than the 1-WL test, such as the recent topological GNN introduced by Horn et al. (2021). Additionally, the SAT framework is flexible and can incorporate any structure extractor which produces structure-aware node representations, and could even be extended beyond using GNNs, such as differentiable graph kernels. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_46", "text": " Another important area for future work is to focus on reducing the high memory cost and time complexity of the self-attention computation, as is being done in recent efforts for developing a so-called linear transformer, which has linear complexity in both time and space requirements (Tay et al., 2020; Wang et al., 2020; Qin et al., 2022). ", "title": "Structure-Aware Transformer for Graph Representation Learning" } ]
What strategy is used to reduce time spent on detection?
Truncated SVD [43].
[ 43 ]
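Since the answer above names truncated SVD, and the Fast R-CNN passages below describe replacing a u x v fully connected layer W with two smaller layers whose weights are Sigma_t V^T (no bias) and U (original bias), here is a minimal NumPy sketch of that compression. It is not the paper's code; the function name and the small example shapes are illustrative assumptions.

```python
import numpy as np

def compress_fc_truncated_svd(W, b, t):
    """Factorize a u x v fully connected layer y = W @ x + b into two layers
    using the top-t singular values: W ~= U_t @ diag(S_t) @ Vt_t.

    Returns (W1, W2, b): the first layer uses W1 = Sigma_t V^T with no bias,
    the second uses W2 = U_t with the original bias b, so parameters drop
    from u*v to t*(u + v).
    """
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    W1 = np.diag(S[:t]) @ Vt[:t]   # t x v
    W2 = U[:, :t]                  # u x t
    return W1, W2, b

# Illustrative usage with small random shapes (the paper compresses e.g. fc6/fc7).
u, v, t = 256, 512, 64
W = np.random.randn(u, v).astype(np.float32)
b = np.random.randn(u).astype(np.float32)
W1, W2, b = compress_fc_truncated_svd(W, b, t)
x = np.random.randn(v).astype(np.float32)
y_full = W @ x + b
y_compressed = W2 @ (W1 @ x) + b   # two small matmuls approximate the original layer
```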
[ { "id": "1504.08083_all_0", "text": " Recently, deep ConvNets (14, 16) have significantly improved image classification and object detection (9, 19) accuracy. Compared to image classification, object detection is a more challenging task that requires more complex methods to solve. Due to this complexity, current approaches (e.g., (9, 11, 19, 25)) train models in multi-stage pipelines that are slow and inelegant. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_1", "text": " Complexity arises because detection requires the accurate localization of objects, creating two primary challenges. First, numerous candidate object locations (often called “proposals”) must be processed. Second, these candidates provide only rough localization that must be refined to achieve precise localization. Solutions to these problems often compromise speed, accuracy, or simplicity. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_2", "text": " In this paper, we streamline the training process for state-of-the-art ConvNet-based object detectors (9, 11). We propose a single-stage training algorithm that jointly learns to classify object proposals and refine their spatial locations. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_3", "text": " The resulting method can train a very deep detection network (VGG16 ) 9×\\times faster than R-CNN and 3×\\times faster than SPPnet . At runtime, the detection network processes images in 0.3s (excluding object proposal time) while achieving top accuracy on PASCAL VOC 2012 with a mAP of 66% (vs. 62% for R-CNN).111All timings use one Nvidia K40 GPU overclocked to 875 MHz. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_4", "text": " The Region-based Convolutional Network method (R-CNN) achieves excellent object detection accuracy by using a deep ConvNet to classify object proposals. R-CNN, however, has notable drawbacks: 1. Training is a multi-stage pipeline. R-CNN first fine-tunes a ConvNet on object proposals using log loss. Then, it fits SVMs to ConvNet features. These SVMs act as object detectors, replacing the softmax classifier learnt by fine-tuning. In the third training stage, bounding-box regressors are learned. 2. Training is expensive in space and time. For SVM and bounding-box regressor training, features are extracted from each object proposal in each image and written to disk. With very deep networks, such as VGG16, this process takes 2.5 GPU-days for the 5k images of the VOC07 trainval set. These features require hundreds of gigabytes of storage. 3. Object detection is slow. At test-time, features are extracted from each object proposal in each test image. Detection with VGG16 takes 47s / image (on a GPU). ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_5", "text": " R-CNN is slow because it performs a ConvNet forward pass for each object proposal, without sharing computation. Spatial pyramid pooling networks (SPPnets) were proposed to speed up R-CNN by sharing computation. The SPPnet method computes a convolutional feature map for the entire input image and then classifies each object proposal using a feature vector extracted from the shared feature map. Features are extracted for a proposal by max-pooling the portion of the feature map inside the proposal into a fixed-size output (e.g., 6×6666\\times 6). Multiple output sizes are pooled and then concatenated as in spatial pyramid pooling . SPPnet accelerates R-CNN by 10 to 100×\\times at test time. Training time is also reduced by 3×\\times due to faster proposal feature extraction. 
", "title": "Fast R-CNN" }, { "id": "1504.08083_all_6", "text": " SPPnet also has notable drawbacks. Like R-CNN, training is a multi-stage pipeline that involves extracting features, fine-tuning a network with log loss, training SVMs, and finally fitting bounding-box regressors. Features are also written to disk. But unlike R-CNN, the fine-tuning algorithm proposed in cannot update the convolutional layers that precede the spatial pyramid pooling. Unsurprisingly, this limitation (fixed convolutional layers) limits the accuracy of very deep networks. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_7", "text": " We propose a new training algorithm that fixes the disadvantages of R-CNN and SPPnet, while improving on their speed and accuracy. We call this method Fast R-CNN because it’s comparatively fast to train and test. The Fast R-CNN method has several advantages: 1. Higher detection quality (mAP) than R-CNN, SPPnet 2. Training is single-stage, using a multi-task loss 3. Training can update all network layers 4. No disk storage is required for feature caching ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_8", "text": " Fast R-CNN is written in Python and C++ (Caffe ) and is available under the open-source MIT License at https://github.com/rbgirshick/fast-rcnn. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_9", "text": " Fig. 1 illustrates the Fast R-CNN architecture. A Fast R-CNN network takes as input an entire image and a set of object proposals. The network first processes the whole image with several convolutional (conv) and max pooling layers to produce a conv feature map. Then, for each object proposal a region of interest (RoI) pooling layer extracts a fixed-length feature vector from the feature map. Each feature vector is fed into a sequence of fully connected (fc) layers that finally branch into two sibling output layers: one that produces softmax probability estimates over K𝐾K object classes plus a catch-all “background” class and another layer that outputs four real-valued numbers for each of the K𝐾K object classes. Each set of 444 values encodes refined bounding-box positions for one of the K𝐾K classes. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_10", "text": " The RoI pooling layer uses max pooling to convert the features inside any valid region of interest into a small feature map with a fixed spatial extent of H×W𝐻𝑊H\\times W (e.g., 7×7777\\times 7), where H𝐻H and W𝑊W are layer hyper-parameters that are independent of any particular RoI. In this paper, an RoI is a rectangular window into a conv feature map. Each RoI is defined by a four-tuple (r,c,h,w)𝑟𝑐ℎ𝑤(r,c,h,w) that specifies its top-left corner (r,c)𝑟𝑐(r,c) and its height and width (h,w)ℎ𝑤(h,w). ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_11", "text": " RoI max pooling works by dividing the h×wℎ𝑤h\\times w RoI window into an H×W𝐻𝑊H\\times W grid of sub-windows of approximate size h/H×w/Wℎ𝐻𝑤𝑊h/H\\times w/W and then max-pooling the values in each sub-window into the corresponding output grid cell. Pooling is applied independently to each feature map channel, as in standard max pooling. The RoI layer is simply the special-case of the spatial pyramid pooling layer used in SPPnets in which there is only one pyramid level. We use the pooling sub-window calculation given in . 
", "title": "Fast R-CNN" }, { "id": "1504.08083_all_12", "text": " We experiment with three pre-trained ImageNet networks, each with five max pooling layers and between five and thirteen conv layers (see Section 4.1 for network details). When a pre-trained network initializes a Fast R-CNN network, it undergoes three transformations. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_13", "text": " First, the last max pooling layer is replaced by a RoI pooling layer that is configured by setting H𝐻H and W𝑊W to be compatible with the net’s first fully connected layer (e.g., H=W=7𝐻𝑊7H=W=7 for VGG16). ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_14", "text": " Second, the network’s last fully connected layer and softmax (which were trained for 1000-way ImageNet classification) are replaced with the two sibling layers described earlier (a fully connected layer and softmax over K+1𝐾1K+1 categories and category-specific bounding-box regressors). ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_15", "text": " Third, the network is modified to take two data inputs: a list of images and a list of RoIs in those images. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_16", "text": " Training all network weights with back-propagation is an important capability of Fast R-CNN. First, let’s elucidate why SPPnet is unable to update weights below the spatial pyramid pooling layer. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_17", "text": " The root cause is that back-propagation through the SPP layer is highly inefficient when each training sample (i.e. RoI) comes from a different image, which is exactly how R-CNN and SPPnet networks are trained. The inefficiency stems from the fact that each RoI may have a very large receptive field, often spanning the entire input image. Since the forward pass must process the entire receptive field, the training inputs are large (often the entire image). ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_18", "text": " We propose a more efficient training method that takes advantage of feature sharing during training. In Fast R-CNN training, stochastic gradient descent (SGD) mini-batches are sampled hierarchically, first by sampling N𝑁N images and then by sampling R/N𝑅𝑁R/N RoIs from each image. Critically, RoIs from the same image share computation and memory in the forward and backward passes. Making N𝑁N small decreases mini-batch computation. For example, when using N=2𝑁2N=2 and R=128𝑅128R=128, the proposed training scheme is roughly 64×\\times faster than sampling one RoI from 128128128 different images (i.e., the R-CNN and SPPnet strategy). ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_19", "text": " One concern over this strategy is it may cause slow training convergence because RoIs from the same image are correlated. This concern does not appear to be a practical issue and we achieve good results with N=2𝑁2N=2 and R=128𝑅128R=128 using fewer SGD iterations than R-CNN. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_20", "text": " In addition to hierarchical sampling, Fast R-CNN uses a streamlined training process with one fine-tuning stage that jointly optimizes a softmax classifier and bounding-box regressors, rather than training a softmax classifier, SVMs, and regressors in three separate stages (9, 11). The components of this procedure (the loss, mini-batch sampling strategy, back-propagation through RoI pooling layers, and SGD hyper-parameters) are described below. 
", "title": "Fast R-CNN" }, { "id": "1504.08083_all_21", "text": " A Fast R-CNN network has two sibling output layers. The first outputs a discrete probability distribution (per RoI), p=(p0,…,pK)𝑝subscript𝑝0…subscript𝑝𝐾p=(p_{0},\\ldots,p_{K}), over K+1𝐾1K+1 categories. As usual, p𝑝p is computed by a softmax over the K+1𝐾1K+1 outputs of a fully connected layer. The second sibling layer outputs bounding-box regression offsets, tk=(txk,tyk,twk,thk)superscript𝑡𝑘subscriptsuperscript𝑡𝑘xsubscriptsuperscript𝑡𝑘ysubscriptsuperscript𝑡𝑘wsubscriptsuperscript𝑡𝑘ht^{k}=\\left(t^{k}_{\\textrm{x}},t^{k}_{\\textrm{y}},t^{k}_{\\textrm{w}},t^{k}_{\\textrm{h}}\\right), for each of the K𝐾K object classes, indexed by k𝑘k. We use the parameterization for tksuperscript𝑡𝑘t^{k} given in , in which tksuperscript𝑡𝑘t^{k} specifies a scale-invariant translation and log-space height/width shift relative to an object proposal. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_22", "text": " Each training RoI is labeled with a ground-truth class u𝑢u and a ground-truth bounding-box regression target v𝑣v. We use a multi-task loss L𝐿L on each labeled RoI to jointly train for classification and bounding-box regression: L​(p,u,tu,v)=Lcls​(p,u)+λ​(u≥1)​Lloc​(tu,v),𝐿𝑝𝑢superscript𝑡𝑢𝑣subscript𝐿cls𝑝𝑢𝜆delimited-()𝑢1subscript𝐿locsuperscript𝑡𝑢𝑣L(p,u,t^{u},v)=L_{\\textrm{cls}}(p,u)+\\lambda(u\\geq 1)L_{\\textrm{loc}}(t^{u},v), (1) in which Lcls​(p,u)=−log⁡pusubscript𝐿cls𝑝𝑢subscript𝑝𝑢L_{\\textrm{cls}}(p,u)=-\\log p_{u} is log loss for true class u𝑢u. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_23", "text": " The second task loss, Llocsubscript𝐿locL_{\\textrm{loc}}, is defined over a tuple of true bounding-box regression targets for class u𝑢u, v=(vx,vy,vw,vh)𝑣subscript𝑣xsubscript𝑣ysubscript𝑣wsubscript𝑣hv=(v_{\\textrm{x}},v_{\\textrm{y}},v_{\\textrm{w}},v_{\\textrm{h}}), and a predicted tuple tu=(txu,tyu,twu,thu)superscript𝑡𝑢subscriptsuperscript𝑡𝑢xsubscriptsuperscript𝑡𝑢ysubscriptsuperscript𝑡𝑢wsubscriptsuperscript𝑡𝑢ht^{u}=(t^{u}_{\\textrm{x}},t^{u}_{\\textrm{y}},t^{u}_{\\textrm{w}},t^{u}_{\\textrm{h}}), again for class u𝑢u. The Iverson bracket indicator function (u≥1)delimited-()𝑢1(u\\geq 1) evaluates to 1 when u≥1𝑢1u\\geq 1 and 0 otherwise. By convention the catch-all background class is labeled u=0𝑢0u=0. For background RoIs there is no notion of a ground-truth bounding box and hence Llocsubscript𝐿locL_{\\textrm{loc}} is ignored. For bounding-box regression, we use the loss Lloc​(tu,v)=∑i∈{x,y,w,h}smoothL1​(tiu−vi),subscript𝐿locsuperscript𝑡𝑢𝑣subscript𝑖xywhsubscriptsmoothsubscript𝐿1subscriptsuperscript𝑡𝑢𝑖subscript𝑣𝑖L_{\\textrm{loc}}(t^{u},v)=\\sum_{i\\in\\{\\textrm{x},\\textrm{y},\\textrm{w},\\textrm{h}\\}}\\textrm{smooth}_{L_{1}}(t^{u}_{i}-v_{i}), (2) in which smoothL1​(x)={0.5​x2if ​|x|<1|x|−0.5otherwise,subscriptsmoothsubscript𝐿1𝑥cases0.5superscript𝑥2if 𝑥1𝑥0.5otherwise\\textrm{smooth}_{L_{1}}(x)=\\begin{cases}0.5x^{2}&\\text{if }|x|<1\\\\ |x|-0.5&\\text{otherwise},\\end{cases} (3) is a robust L1subscript𝐿1L_{1} loss that is less sensitive to outliers than the L2subscript𝐿2L_{2} loss used in R-CNN and SPPnet. When the regression targets are unbounded, training with L2subscript𝐿2L_{2} loss can require careful tuning of learning rates in order to prevent exploding gradients. Eq. 3 eliminates this sensitivity. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_24", "text": " The hyper-parameter λ𝜆\\lambda in Eq. 1 controls the balance between the two task losses. 
We normalize the ground-truth regression targets visubscript𝑣𝑖v_{i} to have zero mean and unit variance. All experiments use λ=1𝜆1\\lambda=1. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_25", "text": " We note that uses a related loss to train a class-agnostic object proposal network. Different from our approach, advocates for a two-network system that separates localization and classification. OverFeat , R-CNN , and SPPnet also train classifiers and bounding-box localizers, however these methods use stage-wise training, which we show is suboptimal for Fast R-CNN (Section 5.1). ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_26", "text": " During fine-tuning, each SGD mini-batch is constructed from N=2𝑁2N=2 images, chosen uniformly at random (as is common practice, we actually iterate over permutations of the dataset). We use mini-batches of size R=128𝑅128R=128, sampling 646464 RoIs from each image. As in , we take 25% of the RoIs from object proposals that have intersection over union (IoU) overlap with a ground-truth bounding box of at least 0.50.50.5. These RoIs comprise the examples labeled with a foreground object class, i.e. u≥1𝑢1u\\geq 1. The remaining RoIs are sampled from object proposals that have a maximum IoU with ground truth in the interval (0.1,0.5)0.10.5(0.1,0.5), following . These are the background examples and are labeled with u=0𝑢0u=0. The lower threshold of 0.10.10.1 appears to act as a heuristic for hard example mining . During training, images are horizontally flipped with probability 0.50.50.5. No other data augmentation is used. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_27", "text": " Back-propagation routes derivatives through the RoI pooling layer. For clarity, we assume only one image per mini-batch (N=1𝑁1N=1), though the extension to N>1𝑁1N>1 is straightforward because the forward pass treats all images independently. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_28", "text": " Let xi∈ℝsubscript𝑥𝑖ℝx_{i}\\in\\mathbb{R} be the i𝑖i-th activation input into the RoI pooling layer and let yr​jsubscript𝑦𝑟𝑗y_{rj} be the layer’s j𝑗j-th output from the r𝑟r-th RoI. The RoI pooling layer computes yr​j=xi∗​(r,j)subscript𝑦𝑟𝑗subscript𝑥superscript𝑖𝑟𝑗y_{rj}=x_{i^{*}(r,j)}, in which i∗​(r,j)=argmaxi′∈ℛ​(r,j)xi′superscript𝑖𝑟𝑗subscriptargmaxsuperscript𝑖′ℛ𝑟𝑗subscript𝑥superscript𝑖′i^{*}(r,j)=\\operatorname*{argmax}_{i^{\\prime}\\in\\mathcal{R}(r,j)}x_{i^{\\prime}}. ℛ​(r,j)ℛ𝑟𝑗\\mathcal{R}(r,j) is the index set of inputs in the sub-window over which the output unit yr​jsubscript𝑦𝑟𝑗y_{rj} max pools. A single xisubscript𝑥𝑖x_{i} may be assigned to several different outputs yr​jsubscript𝑦𝑟𝑗y_{rj}. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_29", "text": " The RoI pooling layer’s backwards function computes partial derivative of the loss function with respect to each input variable xisubscript𝑥𝑖x_{i} by following the argmax switches: ∂L∂xi=∑r∑j(i=i∗​(r,j))​∂L∂yr​j.𝐿subscript𝑥𝑖subscript𝑟subscript𝑗delimited-()𝑖superscript𝑖𝑟𝑗𝐿subscript𝑦𝑟𝑗\\frac{\\partial L}{\\partial x_{i}}=\\sum_{r}\\sum_{j}\\left(i=i^{*}(r,j)\\right)\\frac{\\partial L}{\\partial y_{rj}}. (4) In words, for each mini-batch RoI r𝑟r and for each pooling output unit yr​jsubscript𝑦𝑟𝑗y_{rj}, the partial derivative ∂L/∂yr​j𝐿subscript𝑦𝑟𝑗\\partial L/\\partial y_{rj} is accumulated if i𝑖i is the argmax selected for yr​jsubscript𝑦𝑟𝑗y_{rj} by max pooling. 
In back-propagation, the partial derivatives ∂L/∂yr​j𝐿subscript𝑦𝑟𝑗\\partial L/\\partial y_{rj} are already computed by the backwards function of the layer on top of the RoI pooling layer. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_30", "text": " The fully connected layers used for softmax classification and bounding-box regression are initialized from zero-mean Gaussian distributions with standard deviations 0.010.010.01 and 0.0010.0010.001, respectively. Biases are initialized to 00. All layers use a per-layer learning rate of 1 for weights and 2 for biases and a global learning rate of 0.0010.0010.001. When training on VOC07 or VOC12 trainval we run SGD for 30k mini-batch iterations, and then lower the learning rate to 0.00010.00010.0001 and train for another 10k iterations. When we train on larger datasets, we run SGD for more iterations, as described later. A momentum of 0.90.90.9 and parameter decay of 0.00050.00050.0005 (on weights and biases) are used. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_31", "text": " We explore two ways of achieving scale invariant object detection: (1) via “brute force” learning and (2) by using image pyramids. These strategies follow the two approaches in . In the brute-force approach, each image is processed at a pre-defined pixel size during both training and testing. The network must directly learn scale-invariant object detection from the training data. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_32", "text": " The multi-scale approach, in contrast, provides approximate scale-invariance to the network through an image pyramid. At test-time, the image pyramid is used to approximately scale-normalize each object proposal. During multi-scale training, we randomly sample a pyramid scale each time an image is sampled, following , as a form of data augmentation. We experiment with multi-scale training for smaller networks only, due to GPU memory limits. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_33", "text": " Once a Fast R-CNN network is fine-tuned, detection amounts to little more than running a forward pass (assuming object proposals are pre-computed). The network takes as input an image (or an image pyramid, encoded as a list of images) and a list of R𝑅R object proposals to score. At test-time, R𝑅R is typically around 200020002000, although we will consider cases in which it is larger (≈\\approx 454545k). When using an image pyramid, each RoI is assigned to the scale such that the scaled RoI is closest to 2242superscript2242224^{2} pixels in area . ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_34", "text": " For each test RoI r𝑟r, the forward pass outputs a class posterior probability distribution p𝑝p and a set of predicted bounding-box offsets relative to r𝑟r (each of the K𝐾K classes gets its own refined bounding-box prediction). We assign a detection confidence to r𝑟r for each object class k𝑘k using the estimated probability Pr​(class=k|r)=ΔpksuperscriptΔPrclassconditional𝑘𝑟subscript𝑝𝑘\\textrm{Pr}(\\textrm{class}=k~{}|~{}r)\\stackrel{{\\scriptstyle\\Delta}}{{=}}p_{k}. We then perform non-maximum suppression independently for each class using the algorithm and settings from R-CNN . ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_35", "text": " For whole-image classification, the time spent computing the fully connected layers is small compared to the conv layers. 
On the contrary, for detection the number of RoIs to process is large and nearly half of the forward pass time is spent computing the fully connected layers (see Fig. 2). Large fully connected layers are easily accelerated by compressing them with truncated SVD (5, 23). ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_36", "text": " In this technique, a layer parameterized by the u×v𝑢𝑣u\\times v weight matrix W𝑊W is approximately factorized as W≈U​Σt​VT𝑊𝑈subscriptΣ𝑡superscript𝑉𝑇W\\approx U\\Sigma_{t}V^{T} (5) using SVD. In this factorization, U𝑈U is a u×t𝑢𝑡u\\times t matrix comprising the first t𝑡t left-singular vectors of W𝑊W, ΣtsubscriptΣ𝑡\\Sigma_{t} is a t×t𝑡𝑡t\\times t diagonal matrix containing the top t𝑡t singular values of W𝑊W, and V𝑉V is v×t𝑣𝑡v\\times t matrix comprising the first t𝑡t right-singular vectors of W𝑊W. Truncated SVD reduces the parameter count from u​v𝑢𝑣uv to t​(u+v)𝑡𝑢𝑣t(u+v), which can be significant if t𝑡t is much smaller than min⁡(u,v)𝑢𝑣\\min(u,v). To compress a network, the single fully connected layer corresponding to W𝑊W is replaced by two fully connected layers, without a non-linearity between them. The first of these layers uses the weight matrix Σt​VTsubscriptΣ𝑡superscript𝑉𝑇\\Sigma_{t}V^{T} (and no biases) and the second uses U𝑈U (with the original biases associated with W𝑊W). This simple compression method gives good speedups when the number of RoIs is large. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_37", "text": " Three main results support this paper’s contributions: 1. State-of-the-art mAP on VOC07, 2010, and 2012 2. Fast training and testing compared to R-CNN, SPPnet 3. Fine-tuning conv layers in VGG16 improves mAP ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_38", "text": " Our experiments use three pre-trained ImageNet models that are available online.222https://github.com/BVLC/caffe/wiki/Model-Zoo The first is the CaffeNet (essentially AlexNet ) from R-CNN . We alternatively refer to this CaffeNet as model S, for “small.” The second network is VGG_CNN_M_1024 from , which has the same depth as S, but is wider. We call this network model M, for “medium.” The final network is the very deep VGG16 model from . Since this model is the largest, we call it model L. In this section, all experiments use single-scale training and testing (s=600𝑠600s=600; see Section 5.2 for details). ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_39", "text": " On these datasets, we compare Fast R-CNN (FRCN, for short) against the top methods on the comp4 (outside data) track from the public leaderboard (Table 2, Table 3).333http://host.robots.ox.ac.uk:8080/leaderboard (accessed April 18, 2015) For the NUS_NIN_c2000 and BabyLearning methods, there are no associated publications at this time and we could not find exact information on the ConvNet architectures used; they are variants of the Network-in-Network design . All other methods are initialized from the same pre-trained VGG16 network. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_40", "text": " Fast R-CNN achieves the top result on VOC12 with a mAP of 65.7% (and 68.4% with extra data). It is also two orders of magnitude faster than the other methods, which are all based on the “slow” R-CNN pipeline. On VOC10, SegDeepM achieves a higher mAP than Fast R-CNN (67.2% vs. 66.1%). 
SegDeepM is trained on VOC12 trainval plus segmentation annotations; it is designed to boost R-CNN accuracy by using a Markov random field to reason over R-CNN detections and segmentations from the O2P semantic-segmentation method. Fast R-CNN can be swapped into SegDeepM in place of R-CNN, which may lead to better results. When using the enlarged 07++12 training set (see Table 2 caption), Fast R-CNN’s mAP increases to 68.8%, surpassing SegDeepM. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_41", "text": " On VOC07, we compare Fast R-CNN to R-CNN and SPPnet. All methods start from the same pre-trained VGG16 network and use bounding-box regression. The VGG16 SPPnet results were computed by the authors of . SPPnet uses five scales during both training and testing. The improvement of Fast R-CNN over SPPnet illustrates that even though Fast R-CNN uses single-scale training and testing, fine-tuning the conv layers provides a large improvement in mAP (from 63.1% to 66.9%). R-CNN achieves a mAP of 66.0%. As a minor point, SPPnet was trained without examples marked as “difficult” in PASCAL. Removing these examples improves Fast R-CNN mAP to 68.1%. All other experiments use “difficult” examples. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_42", "text": " Fast training and testing times are our second main result. Table 4 compares training time (hours), testing rate (seconds per image), and mAP on VOC07 between Fast R-CNN, R-CNN, and SPPnet. For VGG16, Fast R-CNN processes images 146×\\times faster than R-CNN without truncated SVD and 213×\\times faster with it. Training time is reduced by 9×\\times, from 84 hours to 9.5. Compared to SPPnet, Fast R-CNN trains VGG16 2.7×\\times faster (in 9.5 vs. 25.5 hours) and tests 7×\\times faster without truncated SVD or 10×\\times faster with it. Fast R-CNN also eliminates hundreds of gigabytes of disk storage, because it does not cache features. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_43", "text": " Truncated SVD can reduce detection time by more than 30% with only a small (0.3 percentage point) drop in mAP and without needing to perform additional fine-tuning after model compression. Fig. 2 illustrates how using the top 102410241024 singular values from the 25088×409625088409625088\\times 4096 matrix in VGG16’s fc6 layer and the top 256256256 singular values from the 4096×4096409640964096\\times 4096 fc7 layer reduces runtime with little loss in mAP. Further speed-ups are possible with smaller drops in mAP if one fine-tunes again after compression. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_44", "text": " For the less deep networks considered in the SPPnet paper , fine-tuning only the fully connected layers appeared to be sufficient for good accuracy. We hypothesized that this result would not hold for very deep networks. To validate that fine-tuning the conv layers is important for VGG16, we use Fast R-CNN to fine-tune, but freeze the thirteen conv layers so that only the fully connected layers learn. This ablation emulates single-scale SPPnet training and decreases mAP from 66.9% to 61.4% (Table 5). This experiment verifies our hypothesis: training through the RoI pooling layer is important for very deep nets. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_45", "text": " Does this mean that all conv layers should be fine-tuned? In short, no. In the smaller networks (S and M) we find that conv1 is generic and task independent (a well-known fact ). Allowing conv1 to learn, or not, has no meaningful effect on mAP. 
For VGG16, we found it only necessary to update layers from conv3_1 and up (9 of the 13 conv layers). This observation is pragmatic: (1) updating from conv2_1 slows training by 1.3×\\times (12.5 vs. 9.5 hours) compared to learning from conv3_1; and (2) updating from conv1_1 over-runs GPU memory. The difference in mAP when learning from conv2_1 up was only +0.30.3+0.3 points (Table 5, last column). All Fast R-CNN results in this paper using VGG16 fine-tune layers conv3_1 and up; all experiments with models S and M fine-tune layers conv2 and up. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_46", "text": " We conducted experiments to understand how Fast R-CNN compares to R-CNN and SPPnet, as well as to evaluate design decisions. Following best practices, we performed these experiments on the PASCAL VOC07 dataset. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_47", "text": " Multi-task training is convenient because it avoids managing a pipeline of sequentially-trained tasks. But it also has the potential to improve results because the tasks influence each other through a shared representation (the ConvNet) . Does multi-task training improve object detection accuracy in Fast R-CNN? ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_48", "text": " To test this question, we train baseline networks that use only the classification loss, Lclssubscript𝐿clsL_{\\textrm{cls}}, in Eq. 1 (i.e., setting λ=0𝜆0\\lambda=0). These baselines are printed for models S, M, and L in the first column of each group in Table 6. Note that these models do not have bounding-box regressors. Next (second column per group), we take networks that were trained with the multi-task loss (Eq. 1, λ=1𝜆1\\lambda=1), but we disable bounding-box regression at test time. This isolates the networks’ classification accuracy and allows an apples-to-apples comparison with the baseline networks. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_49", "text": " Across all three networks we observe that multi-task training improves pure classification accuracy relative to training for classification alone. The improvement ranges from +0.80.8+0.8 to +1.11.1+1.1 mAP points, showing a consistent positive effect from multi-task learning. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_50", "text": " Finally, we take the baseline models (trained with only the classification loss), tack on the bounding-box regression layer, and train them with Ll​o​csubscript𝐿𝑙𝑜𝑐L_{loc} while keeping all other network parameters frozen. The third column in each group shows the results of this stage-wise training scheme: mAP improves over column one, but stage-wise training underperforms multi-task training (forth column per group). ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_51", "text": " We compare two strategies for achieving scale-invariant object detection: brute-force learning (single scale) and image pyramids (multi-scale). In either case, we define the scale s𝑠s of an image to be the length of its shortest side. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_52", "text": " All single-scale experiments use s=600𝑠600s=600 pixels; s𝑠s may be less than 600600600 for some images as we cap the longest image side at 100010001000 pixels and maintain the image’s aspect ratio. These values were selected so that VGG16 fits in GPU memory during fine-tuning. The smaller models are not memory bound and can benefit from larger values of s𝑠s; however, optimizing s𝑠s for each model is not our main concern. 
We note that PASCAL images are 384×473384473384\\times 473 pixels on average and thus the single-scale setting typically upsamples images by a factor of 1.6. The average effective stride at the RoI pooling layer is thus ≈10absent10\\approx 10 pixels. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_53", "text": " In the multi-scale setting, we use the same five scales specified in (s∈{480,576,688,864,1200}𝑠4805766888641200s\\in\\{480,576,688,864,1200\\}) to facilitate comparison with SPPnet. However, we cap the longest side at 200020002000 pixels to avoid exceeding GPU memory. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_54", "text": " Table 7 shows models S and M when trained and tested with either one or five scales. Perhaps the most surprising result in was that single-scale detection performs almost as well as multi-scale detection. Our findings confirm their result: deep ConvNets are adept at directly learning scale invariance. The multi-scale approach offers only a small increase in mAP at a large cost in compute time (Table 7). In the case of VGG16 (model L), we are limited to using a single scale by implementation details. Yet it achieves a mAP of 66.9%, which is slightly higher than the 66.0% reported for R-CNN , even though R-CNN uses “infinite” scales in the sense that each proposal is warped to a canonical size. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_55", "text": " Since single-scale processing offers the best tradeoff between speed and accuracy, especially for very deep models, all experiments outside of this sub-section use single-scale training and testing with s=600𝑠600s=600 pixels. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_56", "text": " A good object detector should improve when supplied with more training data. Zhu et al. found that DPM mAP saturates after only a few hundred to thousand training examples. Here we augment the VOC07 trainval set with the VOC12 trainval set, roughly tripling the number of images to 16.5k, to evaluate Fast R-CNN. Enlarging the training set improves mAP on VOC07 test from 66.9% to 70.0% (Table 1). When training on this dataset we use 60k mini-batch iterations instead of 40k. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_57", "text": " We perform similar experiments for VOC10 and 2012, for which we construct a dataset of 21.5k images from the union of VOC07 trainval, test, and VOC12 trainval. When training on this dataset, we use 100k SGD iterations and lower the learning rate by 0.1×0.1\\times each 40k iterations (instead of each 30k). For VOC10 and 2012, mAP improves from 66.1% to 68.8% and from 65.7% to 68.4%, respectively. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_58", "text": " Fast R-CNN uses the softmax classifier learnt during fine-tuning instead of training one-vs-rest linear SVMs post-hoc, as was done in R-CNN and SPPnet. To understand the impact of this choice, we implemented post-hoc SVM training with hard negative mining in Fast R-CNN. We use the same training algorithm and hyper-parameters as in R-CNN. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_59", "text": " Table 8 shows softmax slightly outperforming SVM for all three networks, by +0.10.1+0.1 to +0.80.8+0.8 mAP points. This effect is small, but it demonstrates that “one-shot” fine-tuning is sufficient compared to previous multi-stage training approaches. We note that softmax, unlike one-vs-rest SVMs, introduces competition between classes when scoring a RoI. 
", "title": "Fast R-CNN" }, { "id": "1504.08083_all_60", "text": " There are (broadly) two types of object detectors: those that use a sparse set of object proposals (e.g., selective search ) and those that use a dense set (e.g., DPM ). Classifying sparse proposals is a type of cascade in which the proposal mechanism first rejects a vast number of candidates leaving the classifier with a small set to evaluate. This cascade improves detection accuracy when applied to DPM detections . We find evidence that the proposal-classifier cascade also improves Fast R-CNN accuracy. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_61", "text": " Using selective search’s quality mode, we sweep from 1k to 10k proposals per image, each time re-training and re-testing model M. If proposals serve a purely computational role, increasing the number of proposals per image should not harm mAP. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_62", "text": " We find that mAP rises and then falls slightly as the proposal count increases (Fig. 3, solid blue line). This experiment shows that swamping the deep classifier with more proposals does not help, and even slightly hurts, accuracy. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_63", "text": " This result is difficult to predict without actually running the experiment. The state-of-the-art for measuring object proposal quality is Average Recall (AR) . AR correlates well with mAP for several proposal methods using R-CNN, when using a fixed number of proposals per image. Fig. 3 shows that AR (solid red line) does not correlate well with mAP as the number of proposals per image is varied. AR must be used with care; higher AR due to more proposals does not imply that mAP will increase. Fortunately, training and testing with model M takes less than 2.5 hours. Fast R-CNN thus enables efficient, direct evaluation of object proposal mAP, which is preferable to proxy metrics. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_64", "text": " We also investigate Fast R-CNN when using densely generated boxes (over scale, position, and aspect ratio), at a rate of about 45k boxes / image. This dense set is rich enough that when each selective search box is replaced by its closest (in IoU) dense box, mAP drops only 1 point (to 57.7%, Fig. 3, blue triangle). ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_65", "text": " The statistics of the dense boxes differ from those of selective search boxes. Starting with 2k selective search boxes, we test mAP when adding a random sample of 1000×{2,4,6,8,10,32,45}100024681032451000\\times\\{2,4,6,8,10,32,45\\} dense boxes. For each experiment we re-train and re-test model M. When these dense boxes are added, mAP falls more strongly than when adding more selective search boxes, eventually reaching 53.0%. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_66", "text": " We also train and test Fast R-CNN using only dense boxes (45k / image). This setting yields a mAP of 52.9% (blue diamond). Finally, we check if SVMs with hard negative mining are needed to cope with the dense box distribution. SVMs do even worse: 49.3% (blue circle). ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_67", "text": " We applied Fast R-CNN (with VGG16) to the MS COCO dataset to establish a preliminary baseline. We trained on the 80k image training set for 240k iterations and evaluated on the “test-dev” set using the evaluation server. The PASCAL-style mAP is 35.9%; the new COCO-style AP, which also averages over IoU thresholds, is 19.7%. 
", "title": "Fast R-CNN" }, { "id": "1504.08083_all_68", "text": " This paper proposes Fast R-CNN, a clean and fast update to R-CNN and SPPnet. In addition to reporting state-of-the-art detection results, we present detailed experiments that we hope provide new insights. Of particular note, sparse object proposals appear to improve detector quality. This issue was too costly (in time) to probe in the past, but becomes practical with Fast R-CNN. Of course, there may exist yet undiscovered techniques that allow dense boxes to perform as well as sparse proposals. Such methods, if developed, may help further accelerate object detection. ", "title": "Fast R-CNN" }, { "id": "1504.08083_all_69", "text": " I thank Kaiming He, Larry Zitnick, and Piotr Dollár for helpful discussions and encouragement. ", "title": "Fast R-CNN" } ]
How did the authors leverage CNN architectures designed for color images and transfer CNN parameters pre-trained on ImageNet so that they could be used on the medical dataset?
The authors transformed every gray-scale axial CT image using three CT windows: the lung window [-1400, -200HU], the high-attenuation window [-160, 240HU], and the low-attenuation window [-1400, -950HU], and then encoded the transformed images into RGB images [16].
[ 16 ]
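The answer above states that each gray-scale axial CT slice was transformed with three CT windows and encoded as an RGB image so that three-channel, ImageNet-pre-trained CNNs could be reused. Below is a minimal NumPy sketch of such a windowing-and-stacking step, assuming Hounsfield-unit input; the channel assignment (which window goes to R, G, or B) and the function names are assumptions not stated in the answer.

```python
import numpy as np

def apply_ct_window(hu_slice, lo, hi):
    """Clip a Hounsfield-unit slice to [lo, hi] and rescale to [0, 255]."""
    clipped = np.clip(hu_slice.astype(np.float32), lo, hi)
    return ((clipped - lo) / (hi - lo) * 255.0).astype(np.uint8)

def ct_slice_to_rgb(hu_slice):
    """Encode one axial CT slice as a 3-channel image using the three windows
    quoted in the answer: lung [-1400, -200] HU, high-attenuation [-160, 240] HU,
    low-attenuation [-1400, -950] HU. Channel order is an assumption."""
    r = apply_ct_window(hu_slice, -1400, -200)
    g = apply_ct_window(hu_slice, -160, 240)
    b = apply_ct_window(hu_slice, -1400, -950)
    return np.stack([r, g, b], axis=-1)   # (H, W, 3) uint8 image

# Illustrative usage on a random HU-valued slice.
fake_slice = np.random.randint(-1500, 500, size=(512, 512))
rgb = ct_slice_to_rgb(fake_slice)
```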
[ { "id": "1602.03409_all_0", "text": " Tremendous progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets (i.e. ImageNet (1, 2)) and the recent revival of deep convolutional neural networks (CNN) (3, 4). For data-driven learning, large-scale well-annotated datasets with representative data distribution characteristics are crucial to learning more accurate or generalizable models (5, 4). Unlike previous image datasets used in computer vision, ImageNet offers a very comprehensive database of more than 1.2 million categorized natural images of 1000+ classes. The CNN models trained upon this database serve as the backbone for significantly improving many object detection and image segmentation problems using other datasets (6, 7), e.g., PASCAL and medical image categorization (9, 10, 11, 12). However, there exists no large-scale annotated medical image dataset comparable to ImageNet, as data acquisition is difficult, and quality annotation is costly. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_1", "text": " There are currently three major techniques that successfully employ CNNs to medical image classification: 1) training the “CNN from scratch” (13, 14, 15, 16, 17); 2) using “off-the-shelf CNN” features (without retraining the CNN) as complementary information channels to existing hand-crafted image features, for Chest X-rays and CT lung nodule identification (9, 12); and 3) performing unsupervised pre-training on natural or medical images and fine-tuning on medical target images using CNN or other types of deep learning models (18, 19, 20, 21). A decompositional 2.5D view resampling and an aggregation of random view classification scores are used to eliminate the “curse-of-dimensionality” issue in , in order to acquire a sufficient number of training image samples. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_2", "text": " Previous studies have analyzed three-dimensional patch creation for LN detection (23, 24), atlas creation from chest CT and the extraction of multi-level image features (26, 27). At present, there are several extensions or variations of the decompositional view representation introduced in (22, 28), such as: using a novel vessel-aligned multi-planar image representation for pulmonary embolism detection , fusing unregistered multiview for mammogram analysis and classifying pulmonary peri-fissural nodules via an ensemble of 2D views . ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_3", "text": " Although natural images and medical images differ significantly, conventional image descriptors developed for object recognition in natural images, such as the scale-invariant feature transform (SIFT) and the histogram of oriented gradients (HOG) , have been widely used for object detection and segmentation in medical image analysis. Recently, ImageNet pre-trained CNNs have been used for chest pathology identification and detection in X-ray and CT modalities (10, 9, 12). They have yielded the best performance results by integrating low-level image features (e.g., GIST , bag of visual words (BoVW) and bag-of-frequency ). 
However, the fine-tuning of an ImageNet pre-trained CNN model on medical image datasets has not yet been exploited. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_4", "text": " In this paper, we exploit three important, but previously under-studied factors of employing deep convolutional neural networks to computer-aided detection problems. Particularly, we explore and evaluate different CNN architectures varying in width (ranging from 5 thousand to 160 million parameters) and depth (various numbers of layers), describe the effects of varying dataset scale and spatial image context on performance, and discuss when and why transfer learning from pre-trained ImageNet CNN models can be valuable. We further verify our hypothesis by inheriting and adapting rich hierarchical image features (5, 33) from the large-scale ImageNet dataset for computer aided diagnosis (CAD). We also explore CNN architectures of the most studied seven-layered “AlexNet-CNN” , a shallower “Cifar-CNN” , and a much deeper version of “GoogLeNet-CNN” (with our modifications on CNN structures). This study is partially motivated by recent studies (34, 35) in computer vision. The thorough quantitative analysis and evaluation on deep CNN or sparsity image coding methods elucidate the emerging techniques of the time and provide useful suggestions for their future stages of development, respectively. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_5", "text": " Two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification are studied in this work. On mediastinal LN detection, we surpass all currently reported results. We obtain 86%percent8686\\% sensitivity on 3 false positives (FP) per patient, versus the prior state-of-art sensitivities of 78%percent7878\\% (stacked shallow learning) and 70%percent7070\\% (CNN), as prior state-of-the-art. For the first time, ILD classification results under the patient-level five-fold cross-validation protocol (CV5) are investigated and reported. The ILD dataset contains 905 annotated image slices with 120 patients and 6 ILD labels. Such sparsely annotated datasets are generally difficult for CNN learning, due to the paucity of labeled instances. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_6", "text": " Evaluation protocols and details are critical to deriving significant empirical findings . Our experimental results suggest that different CNN architectures and dataset re-sampling protocols are critical for the LN detection tasks where the amount of labeled training data is sufficient and spatial contexts are local. Since LN images are more flexible than ILD images with respect to resampling and reformatting, LN datasets may be more readily augmented by such image transformations. As a result, LN datasets contain more training and testing data instances (due to data auugmentation) than ILD datasets. They nonetheless remain less comprehensive than natural image datasets, such as ImageNet. 
Fine-tuning ImageNet-trained models for ILD classification is clearly advantageous and yields early promising results, when the amount of labeled training data is highly insufficient and multi-class categorization is used, as opposed to the LN dataset’s binary class categorization. Another significant finding is that CNNs trained from scratch or fine-tuned from ImageNet models consistently outperform CNNs that merely use off-the-shelf CNN features, in both the LN and ILD classification problems. We further analyze, via CNN activation visualizations, when and why transfer learning from non-medical to medical images in CADe problems can be valuable. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_7", "text": " We employ CNNs (with the characteristics defined above) to thoraco-abdominal lymph node (LN) detection (evaluated separately on the mediastinal and abdominal regions) and interstitial lung disease (ILD) detection. For LN detection, we use randomly sampled 2.5D views in CT . We use 2D CT slices (38, 39, 40) for ILD detection. We then evaluate and compare CNN performance results. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_8", "text": " Until the detection aggregation approach (22, 41), thoracoabdominal lymph node (LN) detection via CADe mechanisms has yielded poor performance results. In , each 3D LN candidate produces up to 100 random 2.5D orthogonally sampled images or views which are then used to train an effective CNN model. The best performance on abdominal LN detection is achieved at 83%percent8383\\% recall on 3FP per patient , using a “Cifar-10” CNN. Using the thoracoabdominal LN detection datasets , we aim to surpass this CADe performance level, by testing different CNN architectures, exploring various dataset re-sampling protocols, and applying transfer learning from ImageNet pre-trained CNN models. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_9", "text": " Interstitial lung disease (ILD) comprises more than 150 lung diseases affecting the interstitium, which can severely impair the patient’s ability to breathe. Gao et al. investigate the ILD classification problem in two scenarios: 1) slice-level classification: assigning a holistic two-dimensional axial CT slice image with its occurring ILD disease label(s); and 2) patch-level classification: a/ sampling patches within the 2D ROIs (Regions of Interest provided by ), then b/ classifying patches into seven category labels ( six disease labels and one “healthy” label). Song et al. (38, 39) only address the second sub-task of patch-level classification under the “leave-one-patient-out” (LOO) criterion. By training on the moderate-to-small scale ILD dataset , our main objective is to exploit and benchmark CNN based ILD classification performances under the CV5 metric (which is more realistic and unbiased than LOO (38, 39) and hard-split ), with and without transfer learning. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_10", "text": " Thoracoabdominal Lymph Node Datasets. We use the publicly available dataset from (22, 41). 
There are 388 mediastinal LNs labeled by radiologists in 90 patient CT scans, and 595 abdominal LNs in 86 patient CT scans. To facilitate comparison, we adopt the data preparation protocol of , where positive and negative LN candidates are sampled with the fields-of-view (FOVs) of 30mm to 45mm, surrounding the annotated and detected LN centers (obtained by a candidate generation process). More precisely, (22, 41, 36) follow a coarse-to-fine CADe scheme, partially inspired by , which operates with ∼100%similar-toabsentpercent100\\sim 100\\% detection recalls at the cost of approximately 40 false or negative LN candidates per patient scan. In this work, positive and negative LN candidate are first sampled up to 200 times with translations and rotations. Afterwards, negative LN samples are randomly re-selected at a lower rate close to the total number of positives. LN candidates are randomly extracted from fields-of-view (FOVs) spanning 35mm to 128mm in soft-tissue window (-100, 200HU). This allows us to capture multiple spatial scales of image context (43, 44)). The samples are then rescaled to a 64×64646464\\times 64 pixel resolution via B-spline interpolation. A few examples of LNs with axial, coronal, and sagittal views encoded in RGB color images are shown in Figure 1. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_11", "text": " Unlike the heart or the liver, lymph nodes have no pre-determined anatomic orientation. Hence, the purely random image resampling (with respect to scale, displacement and orientation) and reformatting (the axial, coronal, and sagittal views are in any system randomly resampled coordinates) is a natural choice, which also happens to yield high CNN performance. Although we integrate three channels of information from three orthogonal views for LN detection, the pixel-wise spatial correlations between or among channels are not necessary. The convolutional kernels in the lower level CNN architectures can learn the optimal weights to linearly combine the observations from the axial, coronal, and sagittal channels by computing their dot-products. Transforming axial, coronal, and sagittal representations to RGB also facilitates transfer learning from CNN models trained on ImageNet. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_12", "text": " This learning representation (i.e., “built-in CNN”) is flexible, in that it naturally combines multiple sources or channels of information. In the recent literature , even heterogeneous class-conditional probability maps can be combined with raw images to improve performance. This set-up is similar to that of other works in computer vision, such as , where heterogeneous image information channels are jointly fed into the CNN convolutional layers for high-accuracy human parsing and segmentation. Finally, if there are correlations among CNN input channels, one may observe the corresponding correlated patterns in the learned filters. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_13", "text": " In summary, the assumption that there are or must be pixel-wise spatial correlations among input channels does not apply to the CNN model representation. 
For other medical imaging problems, such as pulmonary embolism detection , in which orientation can be constrained along the attached vessel axis, vessel-aligned multi-planar image representation (MPR) is more effective than randomly aligned MPR. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_14", "text": " Interstitial Lung Disease Dataset. We utilize the publicly available dataset of . It contains 905 image slices from 120 patients, with six lung tissue types annotations containing at least one of the following: healthy (NM), emphysema (EM), ground glass (GG), fibrosis (FB), micronodules (MN) and consolidation (CD) (Figure 3). At the slice level, the objective is to classify the status of “presence/absence” of any of the six ILD classes for an input axial CT slice . Characterizing an arbitrary CT slice against any possible ILD type, without any manual ROI (in contrast to (38, 39)), can be useful for large-scale patient screening. For slice-level ILD classification, we sampled the slices 12 times with random translations and rotations. After this, we balanced the numbers of CT slice samples for the six classes by randomly sampling several instances at various rates. For patch-based classification, we sampled up to 100 patches of size 64×64646464\\times 64 from each ROI. This dataset is divided into five folds with disjoint patient subsets. The average number of CT slices (training instances) per fold is small, as shown in Table I. Slice-level ILD classification is a very challenging task where CNN models need to learn from very small numbers of training examples and predict ILD labels on unseen patients. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_15", "text": " In the publicly available ILD dataset, very few CT slices are labeled as normal or healthy. The remaining CT slices cannot be simply classified as normal, because many ILD disease regions or slices have not yet been labeled. ILD is a partially labeled database; this is one of its main limitations. Research is being conducted to address this issue. In particular, has proposed to fully label the ILD dataset pixel-wise via proposed segmentation label propagation. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_16", "text": " To leverage the CNN architectures designed for color images and to transfer CNN parameters pre-trained on ImageNet, we transform all gray-scale axial CT slice images via three CT window ranges: lung window range (-1400, -200HU), high-attenuation range (-160, 240HU), and low-attenuation range (-1400; -950HU). We then encode the transformed images into RGB channels (to be aligned with the input channels of CNN models (4, 33) pre-trained from natural image datasets ). The low-attenuation CT window is useful for visualizing certain texture patterns of lung diseases (especially emphysema). The usage of different CT attenuation channels improves classification results over the usage of a single CT windowing channel, as demonstrated in . More importantly, these CT windowing processes do not depend on the lung segmentation, which instead is directly defined in the CT HU space. 
Figure 4 shows a representative example of lung, high-attenuation, and low-attenuation CT windowing for an axis lung CT slice. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_17", "text": " As observed in , lung segmentation is crucial to holistic slice-level ILD classification. We empirically compare performance in two scenarios with a rough lung segmentation111This can be achieved by segmenting the lung using simple label-fusion methods . In the first case, we overlay the target image slice with the average lung mask among the training folds. In the second, we perform simple morphology operations to obtain the lung boundary. In order to retain information from the inside of the lung, we apply Gaussian smoothing to the regions outside of the lung boundary. There is no significant difference between two setups. Due to the high precision of CNN based image processing, highly accurate lung segmentation is not necessary . The localization of ILD regions within the lung is simultaneously learned through selectively weighted CNN reception fields in the deepest convolutional layers during the classification based CNN training (49, 50). Some areas outside of the lung appear in both healthy or diseased images. CNN training learns to ignore them by setting very small filter weights around the corresponding regions (Figure 13). This observation is validated by . ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_18", "text": " In this study, we explore, evaluate and analyze the influence of various CNN Architectures, dataset characteristics (when we need more training data or better models for object detection ) and CNN transfer learning from non-medical to medical image domains. These three key elements of building effective deep CNN models for CADe problems are described below. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_19", "text": " We mainly explore three convolutional neural network architectures (CifarNet (5, 22), AlexNet and GoogLeNet ) with different model training parameter values. The current deep learning models (22, 52, 53) in medical image tasks are at least 2∼5similar-to252\\sim 5 orders of magnitude smaller than even AlexNet . More complex CNN models (22, 52) have only about 150K or 15K parameters. Roth et al. adopt the CNN architecture tailored to the Cifar-10 dataset and operate on image windows of 32×32×33232332\\times 32\\times 3 pixels for lymph node detection, while the simplest CNN in has only one convolutional, pooling, and FC layer, respectively. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_20", "text": " We use CifarNet as used in as a baseline for the LN detection. AlexNet and GoogLeNet are also modified to evaluate these state-of-the-art CNN architecture from ImageNet classification task to our CADe problems and datasets. A simplified illustration of three CNN architectures exploited is shown in Figure 5. 
CifarNet always takes 32×32×3 image patches as input while AlexNet and GoogLeNet are originally designed for the fixed image dimension of 256×256×3 pixels. We also reduced the filter size, stride and pooling parameters of AlexNet and GoogLeNet to accommodate a smaller input size of 64×64×3 pixels. We do so to produce and evaluate “simplified” AlexNet and GoogLeNet versions that are better suited to the smaller scale training datasets common in CADe problems. Throughout the paper, we refer to the models as CifarNet (32x32) or CifarNet (dropping 32x32); AlexNet (256x256) or AlexNet-H (high resolution); AlexNet (64x64) or AlexNet-L (low resolution); GoogLeNet (256x256) or GoogLeNet-H and GoogLeNet (64x64) or GoogLeNet-L (dropping 3 since all image inputs are three channels). ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_21", "text": " CifarNet, introduced in , was the state-of-the-art model for object recognition on the Cifar10 dataset, which consists of 32×32 images of 10 object classes. The objects are normally centered in the images. Some example images and class categories from the Cifar10 dataset are shown in Figure 7. CifarNet has three convolution layers, three pooling layers, and one fully-connected layer. This CNN architecture, also used in , has about 0.15 million free parameters. We adopt it as a baseline model for the LN detection. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_22", "text": " The AlexNet architecture was published in , and achieved significantly improved performance over the other non-deep learning methods for the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012. This success has revived the interest in CNNs in computer vision. ImageNet consists of 1.2 million 256×256 images belonging to 1000 categories. At times, the objects in the image are small and obscure, and thus pose more challenges for learning a successful classification model. More details about the ImageNet dataset will be discussed in Sec. III-B. AlexNet has five convolution layers, three pooling layers, and two fully-connected layers with approximately 60 million free parameters. AlexNet is our default CNN architecture for evaluation and analysis in the remainder of the paper. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_23", "text": " The GoogLeNet model, proposed in , is significantly more complex and deep than all previous CNN architectures. More importantly, it also introduces a new module called “Inception”, which concatenates filters of different sizes and dimensions into a single new filter (refer to Figure 6). Overall, GoogLeNet has two convolution layers, two pooling layers, and nine “Inception” layers. Each “Inception” layer consists of six convolution layers and one pooling layer. An illustration of an “Inception” layer (inception3a) from GoogLeNet is shown in Figure 6. GoogLeNet is the current state-of-the-art CNN architecture for the ILSVRC challenge, where it achieved 5.5% top-5 classification error on the ImageNet challenge, compared to AlexNet’s 15.3% top-5 classification error. 
", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_24", "text": " ImageNet has more than 1.2 million 256×256256256256\\times 256 images categorized under 1000 object class categories. There are more than 1000 training images per class. The database is organized according to the WordNet hierarchy, which currently contains only nouns in 1000 object categories. The image-object labels are obtained largely through crowd-sourcing, e.g., Amazon Mechanical Turk, and human inspection. Some examples of object categories in ImageNet are “sea snake”, “sandwich”, “vase”, “leopard”, etc. ImageNet is currently the largest image dataset among other standard datasets for visual recognition. Indeed, the Caltech101, Caltech256 and Cifar10 dataset merely contain 60000 32×32323232\\times 32 images and 10 object classes. Furthermore, due to the large number (1000+) of object classes, the objects belonging to each ImageNet class category can be occluded, partial and small, relative to those in the previous public image datasets. This significant intra-class variation poses greater challenges to any data-driven learning system that builds a classifier to fit given data and generalize to unseen data. For comparison, some example images of Cifar10 dataset and ImageNet images in the “tennis ball” class category are shown in Figure 7. The ImageNet dataset is publicly available, and the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) has become the standard benchmark for large-scale object recognition. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_25", "text": " When learned from scratch, all the parameters of CNN models are initialized with random Gaussian distributions and trained for 30 epochs with the mini-batch size of 50 image instances. Training convergence can be observed within 30 epochs. The other hyperparameters are momentum: 0.9; weight decay: 0.0005; (base) learning rate: 0.01, decreased by a factor of 10 at every 10 epochs. We use the Caffe framework and NVidia K40 GPUs to train the CNNs. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_26", "text": " AlexNet and GoogLeNet CNN models can be either learned from scratch or fine-tuned from pre-trained models. Girshick et al. find that, by applying ImageNet pre-trained ALexNet to PASCAL dataset , performances of semantic 20-class object detection and segmentation tasks significantly improve over previous methods that use no deep CNNs. AlexNet can be fine-tuned on the PASCAL dataset to surpass the performance of the ImageNet pre-trained AlexNet, although the difference is not as significant as that between the CNN and non-CNN methods. Similarly, (57, 58) also demonstrate that better performing deep models are learned via CNN transfer learning from ImageNet to other datasets of limited scales. 
", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_27", "text": " Our hypothesis on CNN parameter transfer learning is the following: despite the disparity between natural images and natural images, CNNs comprehensively trained on the large scale well-annotated ImageNet may still be transferred to make medical image recognition tasks more effective. Collecting and annotating large numbers of medical images still poses significant challenges. On the other hand, the mainstream deep CNN architectures (e.g., AlexNet and GoogLeNet) contain tens of millions of free parameters to train, and thus require sufficiently large numbers of labeled medical images. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_28", "text": " For transfer learning, we follow the approach of (57, 6) where all CNN layers except the last are fine-tuned at a learning rate 10 times smaller than the default learning rate. The last fully-connected layer is random initialized and freshly trained, in order to accommodate the new object categories in our CADe applications. Its learning rate is kept at the original 0.01. We denote the models with random initialization or transfer learning as AlexNet-RI and AlexNet-TL, and GoogLeNet-RI and GoogLeNet-TL. We found that the transfer learning strategy yields the best performance results. Determining the optimal learning rate for different layers is challenging, especially for very deep networks such as GoogLeNet. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_29", "text": " We also perform experiments using “off-the-shelf” CNN features of AlexNet pre-trained on ImageNet and training only the final classifier layer to complete the new CADe classification tasks. Parameters in the convolutional and fully connected layers are fixed and are used as deep image extractors, as in (10, 9, 12). We refer to this model as AlexNet-ImNet in the remainder of the paper. Note that (10, 9, 12) train support vector machines and random forest classifiers using ImageNet pre-trained CNN features. Our simplified implementation is intended to determine whether fine-tuning the “end-to-end” CNN network is necessary to improve performance, as opposed to merely training the final classification layer. This is a slight modification from the method described in (10, 9, 12). ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_30", "text": " Finally, transfer learning in CNN representation, as empirically verified in previous literature (59, 60, 61, 11, 62), can be effective in various cross-modality imaging settings (RGB images to depth images (59, 60), natural images to general CT and MRI images , and natural images to neuroimaging or ultrasound data). More thorough theoretical studies on cross-modality imaging statistics and transferability will be needed for future studies. 
", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_31", "text": " In this section, we evaluate and compare the performances of nine CNN model configurations (CifarNet, AlexNet-ImNet, AlexNet-RI-H, AlexNet-TL-H, AlexNet-RI-L, GoogLeNet-RI-H, GoogLeNet-TL-H, GoogLeNet-RI-L and combined) on two important CADe problems using publicly available datasets (22, 41, 37). ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_32", "text": " We train and evaluate CNNs using three-fold cross-validation (folds are split into disjoint sets of patients), with the different CNN architectures described above. In testing, each LN candidate has multiple random 2.5D views tested by CNN classifiers to generate LN class probability scores. We follow the random view aggregation by averaging probabilities, as in . ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_33", "text": " We first sample the LN image patches at a 64×64646464\\times 64 pixel resolution. We then up-sample the 64×64646464\\times 64 pixel LN images via bi-linear interpolation to 256×256256256256\\times 256 pixels, in order to accommodate AlexNet-RI-L, AlexNet-TL-H, GoogLeNet-RI-H and GoogLeNet-TL-H. For the modified AlexNet-RI-L at (64×64646464\\times 64) pixel resolution, we reduce the number of first layer convolution filters from 96 to 64 and reduce the stride from 4 to 2. For the modified GoogLeNet-RI (64×64646464\\times 64), we decrease the number of first layer convolution filters from 64 to 32, the pad size from 3 to 2, the kernel size from 7 to 5, stride from 2 to 1 and the stride of the subsequent pooling layer from 2 to 1. We slightly reduce the number of convolutional filters in order to accommodate the smaller input image sizes of target medical image datasets (22, 37), while preventing over-fitting. This eventually improves performance on patch-based classification. CifarNet is used in to detect LN samples of 32×32×33232332\\times 32\\times 3 images. For consistency purposes, we down-sample 64×64×36464364\\times 64\\times 3 resolution LN sample images to the dimension of 32×32×33232332\\times 32\\times 3. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_34", "text": " Results for lymph node detection in the mediastinum and abdomen are reported in Table II. FROC curves are illustrated in Figure 8. The area-under-the-FROC-curve (AUC) and true positive rate (TPR, recall or sensitivity) at three false positives per patient (TPR/3FP) are used as performance metrics. Of the nine investigated CNN models, CifarNet, AlexNet-ImNet and GoogLeNet-RI-H generally yielded the least competitive detection accuracy results. Our LN datasets are significantly more complex (i.e., display much larger within-class appearance variations), especially due to the extracted fields-of-view (FOVs) of (35mm-128mm) compared to (30mm-45mm) in , where CifarNet is also employed. In this experiment, CifarNet is under-trained with respect to our enhanced LN datasets, due to its limited input resolution and parameter complexity. 
The inferior performance of AlexNet-ImNet implies that using the pre-trained ImageNet CNNs alone as “off-the-shelf” deep image feature extractors may not be optimal or adequate for mediastinal and abdominal LN detection tasks. To complement “off-the-shelf” CNN features, (10, 9, 12) all add and integrate various other hand-crafted image features as hybrid inputs for the final CADe classification. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_35", "text": " GoogLeNet-RI-H performs poorly, as it is susceptible to over-fitting. No sufficient data samples are available to train GoogLeNet-RI-H with random initialization. Indeed, due to GoogLeNet-RI-H’s complexity and 22-layer depth, million-image datasets may be required to properly train this model. However, GoogLeNet-TL-H significantly improves upon GoogLeNet-RI-H (0.81 versus 0.61 TPR/3FP in mediastinum; 0.70 versus 0.48 TPR/3FP in abdomen). This indicates that transfer learning offers a much better initialization of CNN parameters than random initialization. Likewise, AlexNet-TL-H consistently outperforms AlexNet-RI-H, though by smaller margins (0.81 versus 0.79 TPR/3FP in mediastinum; 0.69 versus 0.67 TPR/3FP in abdomen). This is also consistent with the findings reported for ILD detection in Table III and Figure 11. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_36", "text": " GoogLeNet-TL-H yields results similar to AlexNet-TL-H’s for the mediastinal LN detection, and slightly outperforms Alex-Net-H for abdominal LN detection. AlexNet-RI-H exhibits less severe over-fitting than GoogLeNet-RI-H. We also evaluate a simple ensemble by averaging the probability scores from five CNNs: AlexNet-RI-H, AlexNet-TL-H, AlexNet-RI-H, GoogLeNet-TL-H and GoogLeNet-RI-L. This combined ensemble outputs the classification accuracies matching or slightly exceeding the best performing individual CNN models on the mediastinal or abdominal LN detection tasks, respectively. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_37", "text": " Many of our CNN models achieve notably better (FROC-AUC and TPR/3FP) results than the previous state-of-the-art models for mediastinal LN detection: GoogLeNet-RI-L obtains an AUC=0.95 and 0.85 TPR/3FP, versus AUC=0.92 and 0.70 TPR/3FP and 0.78 TPR/3FP which uses stacked shallow learning. This difference lies in the fact that annotated lymph node segmentation masks are required to learn a mid-level semantic boundary detector , whereas CNN approaches only need LN locations for training . In abdominal LN detection, obtains the best trade-off between its CNN model complexity and sampled data configuration. Our best performing CNN model is GoogLeNet-TL (256x256) which obtains an AUC=0.92 and 0.70 TPR/3FP. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_38", "text": " The main difference between our dataset preparation protocol and that from is a more aggressive extraction of random views within a much larger range of FOVs. 
The usage of larger FOVs to capture more image spatial context is inspired by deep zoom-out features that improve semantic segmentation. This image sampling scheme contributes to our best reported performance results in both mediastinal LN detection (in this paper) and automated pancreas segmentation . As shown in Figure 1, abdominal LNs are surrounded by many other similar looking objects. Meanwhile, mediastinal LNs are more easily distinguishable, due to the images’ larger spatial contexts. Finally, from the perspective of the data-model trade-off: “Do We Need More Training Data or Better Models?” , more abdomen CT scans from distinct patient populations need to be acquired and annotated, in order to take full advantage of deep CNN models of high capacity. Nevertheless, deeper and wider CNN models (e.g., GoogLeNet-RI-L and GoogLeNet-TL-H versus Cifar-10 ) have shown improved results in the mediastinal LN detection. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_39", "text": " Figure 9 provides examples of misclassified lymph nodes (in axial view) (both false negatives (Left) and false positives(Right)), from the Abdomen and Mediastinum datasets. The overall reported LN detection results are clinically significant, as indicated in . ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_40", "text": " The CNN models evaluated in this experiment are 1) AlexNet-RI (training from scratch on the ILD dataset with random initialization); 2) AlexNet-TL (with transfer learning from ); 3) AlexNet-ImNet: pre-trained ImageNet-CNN model with only the last cost function layer retrained from random initialization, according to the six ILD classes (similar to but without using additional hand-crafted non-deep feature descriptors, such as GIST and BoVW); 4) GoogLeNet-RI (random initialization); 5) GoogLeNet-TL (GoogLeNet with transfer learning from ). All ILD images (patches of 64×64646464\\times 64 and CT axial slices of 512×512512512512\\times 512) are re-sampled to a fixed dimension of 256×256256256256\\times 256 pixels. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_41", "text": " We evaluate the ILD classification task with five-fold CV on patient-level split, as it is more informative for real clinical performance than LOO. The classification accuracy rates for interstitial lung disease detection are shown in Table III. Two sub-tasks on ILD patch and slice classifications are conducted. In general, patch-level ILD classification is less challenging than slice-level classification, as far more data samples can be sampled from the manually annotated ROIs (up to 100 image patches per ROI), available from . From Table III, all five deep models evaluated obtain comparable results within the range of classification accuracy rates (0.74,0.76)0.740.76(0.74,0.76). Their averaged model achieves a slightly better accuracy of 0.79. 
", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_42", "text": " F1-scores (38, 39, 54) and the confusion matrix (Table V) for patch-level ILD classification using GoogLeNet-TL under five-fold cross-validation (we denote as Patch-CV5) are also computed. F1-scores are reported on patch classification only (32×32323232\\times 32 pixel patches extracted from manual ROIs) (38, 39, 54), as shown in Table IV. Both and use the evaluation protocol of “leave-one-patient-out” (LOO), which is arguably much easier and not directly comparable to 10-fold CV or our Patch-CV5. In this study, we classify six ILD classes by adding a consolidation (CD) class to five classes of healthy (normal - NM), emphysema (EM), ground glass (GG), fibrosis (FB), and micronodules (MN) in (38, 39, 54). Patch-CV10 and Patch-CV5 report similar medium to high F-scores. This implies that the ILD dataset (although one of the mainstream public medical image datasets) may not adequately represent ILD disease CT lung imaging patterns, over a population of only 120 patients. Patch-CV5 yields higher F-scores than and classifies the extra consolidation (CD) class. At present, the most pressing task is to drastically expand the dataset or to explore across-dataset deep learning on the combined ILD and LTRC datasets . ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_43", "text": " Recently, Gao et al. have argued that a new CADe protocol on holistic classification of ILD diseases directly, using axial CT slice attenuation patterns and CNN, may be more realistic for clinical applications. We refer to this as slice-level classification, as image patch sampling from manual ROIs can be completely avoided (hence, no manual ROI inputs will be provided). The experimental results in are conducted with a patient-level hard split of 100 (training) and 20 (testing). The method’s testing F-scores (i.e., Slice-Test) are given in Table IV. Note that the F-scores in are not directly comparable to our results, due to different evaluation criteria. Only Slice-Test is evaluated and reported in , and we find that F-scores can change drastically from different rounds of the five-fold CV. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_44", "text": " While it is a more practical CADe scheme, slice-level CNN learning is very challenging, as it is restricted to only 905 CT image slices with tagged ILD labels. We only benchmark the slice-level ILD classification results in this section. Even with the help of data augmentation (described in Sec. II), the classification accuracy of GoogLeNet-TL from Table III is only 0.57. However, transfer learning from ImageNet pre-trained model is consistently beneficial, as evidenced by AlexNet-TL (0.46) versus AlexNet-RI (0.44), and GoogLeNet-TL (0.57) versus GoogLeNet-RI (0.41). It especially prevents GoogLeNet from over-fitting on the limited CADe datasets. Finally, when the cross-validation is conducted by randomly splitting the set of all 905 CT axial slices into five folds, markedly higher F-scores are obtained (Slice-Random in Table IV). This further validates the claim that the dataset poorly generalizes ILDs for different patients. 
Figure 10 shows examples of misclassified ILD patches (in axial view), with their ground truth labels and inaccurately classified labels. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_45", "text": " No existing work has reached the performance requirements for a realistic clinical setting , in which simple ROI-guided image patch extraction and classification (which requires manual ROI selection by clinicians) is implemented. The main goal of this paper is to investigate the three factors (CNN architectures, dataset characteristics and transfer learning) that affect performance on a specific medical image analysis problem and to ultimately deliver clinically relevant results. For ILD classification, the most critical performance bottlenecks are the challenge of cross-dataset learning and the limited patient population size. We attempt to overcome these obstacles by merging the ILD and LTRC datasets. Although the ILD and LTRC datasets (used in ) were generated and annotated separately, they contain many common disease labels. For instance, the ILD disease classes emphysema (EM), ground glass (GG), fibrosis (FB), and micronodules (MN) belong to both datasets, and thus can be jointly trained/tested to form a larger and unified dataset. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_46", "text": " Adapting fully convolutional CNN or FCNN to parse every pixel location in the ILD lung CT images or slices, or adapting other methods from CNN based semantic image segmentation using PASCAL or ImageNet, may improve accuracy and efficiency. However, current FCNN approaches (65, 66) lack adequate spatial resolution in their directly output label space. A segmentation label propagation method was recently proposed to provide full pixel-wise labeling of the ILD data images. In this work, we sample image patches from the slice using the ROIs for the ILD provided in the dataset, in order to be consistent with previous methods in patch-level (38, 39, 54) and slice-level classification . ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_47", "text": " In this work, we mainly focus on AlexNet and GoogLeNet. AlexNet is the first notably successful CNN architecture on the ImageNet challenge and has rekindled significant research interests on CNN. GoogLeNet is the state-of-the-art deep model, which has outperformed other notable models, such as AlexNet, OverFeat, and VGGNet (67, 68) in various computer vision benchmarks. Likewise, a reasonable assumption is that OverFeat and VGGNet may generate quantitative performance results ranked between AlexNet’s and GoogLeNet’s. For completeness, we include the Overfeat and VGGNet in the following evaluations, to bolster our hypothesis. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_48", "text": " OverFeat is described in as an integrated framework for using CNN for classification, localization and detection. 
Its architecture is similar to that of AlexNet, but contains far more parameters (e.g., 1024 convolution filters in both “conv4” and “conv5” layers compared to 384 and 256 convolution kernels in the “conv4” and “conv5” layers of AlexNet), and operates more densely (e.g., smaller kernel size of 2 in “pool2” layer “pool5” compared to the kernel size 3 in “pool2” and “pool5” of AlexNet) on the input image. Overfeat is the winning model of the ILSVRC 2013 in detection and classification tasks. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_49", "text": " The VGGNet architecture is introduced in , where it is designed to significantly increase the depth of the existing CNN architectures with 16 or 19 layers. Very small 3×3333\\times 3 size convolutional filters are used in all convolution layers with a convolutional stride of size 1, in order to reduce the number of parameters in deeper networks. Since VGGNet is substantially deeper than the other CNN models, VGGNet is more susceptible to the vanishing gradient problem (69, 70, 71). Hence, the network may be more difficult to train. Training the network requires far more memory and computation time than AlexNet. We use the 16 layer variant as our default VGGNet model in our study. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_50", "text": " The classification accuracy results for ILD slice and patch level classification of five CNN architectures (CifarNet, AlexNet, Overfeat, VGGNet and GoogLeNet) are shown in Table VI. Based on the analysis in Sec. IV-B, transfer learning is only used for the slice level classification task. From Table VI, quantitative classification accuracy rates increase as the CNN model becomes more complex (CifarNet, AlexNet, Overfeat, VGGNet and GoogLeNet, in ascending order), for both ILD slice and patch level classification problems. The reported results validate our assumption that OverFeat’s and VGGNet’s performance levels fall between AlexNet’s and GoogLeNet‘s (this observation is consistent with the computer vision findings). CifarNet is designed for images with smaller dimensions (32×32323232\\times 32 images), and thus is not catered to classification tasks involving 256×256256256256\\times 256 images. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_51", "text": " To investigate the performance difference between five-fold cross-validation (CV) in Sec. IV-B and leave-one-patient-out (LOO) validation, this experiment is performed under the LOO protocol. By comparing results in Table III (CV-5) to those in Table VI (LOO), one can see that LOO’s quantitative performances are remarkably better than CV-5’s. For example, in ILD slice-level classification, the accuracy level drastically increases from 0.46 to 0.867 using AlexNet-TL, and from 0.57 to 0.902 for GoogLeNet-TL. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_52", "text": " CNN training is implemented with the Caffe deep learning framework, using a NVidia K40 GPU on Ubuntu 14.04 Linux OS. 
All models are trained for up to 90 epochs with early stopping criteria, where a model snapshot with low validation loss is taken for the final model. Other hyper-parameters are fixed as follows: momentum: 0.9; weight decay: 0.0005; and a step learning rate schedule with base learning rate of 0.01, decreased by a factor of 10 every 30 epochs. The image batch size is set to 128, except for GoogLeNet’s (64) and VGG-16’s (32), which are the maximum batch sizes that can fit in the NVidia K40 GPU with 12GB of memory capacity. Table VII illustrates the training time and memory requirements of the five CNN architectures on ILD patch-based classification up to 90 epochs. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_53", "text": " Medical datasets are often “biased”, in that the number of healthy samples is much larger than the number of diseased instances, or that the numbers of images per class are uneven. In ILD dataset, the number of fibrosis samples is about 3.5 times greater than the number of emphysema samples. The number of non-LNs is 3∼4similar-to343\\sim 4 times greater than the number of LNs in lymph node detection. Different sampling or resampling rates are routinely applied to both ILD and LN detection to balance the data sample number or scale per class, as in. We refer this as “Equal Prior”. If we use the same sampling rate, that will lead to a “Biased Prior” across different classes. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_54", "text": " Without loss of generality, after GoogLeNet is trained on the training sets under “Equal” or “Biased” priors, we compare its classification results on the balanced validation sets. Evaluating a classifier on a biased validation set will cause unfair assessment of its performance. For instance, a classifier that predicts every image patch as “non-LN” will still achieve a 70%percent7070\\% accuracy rate on a biased set with 3.53.53.5 times as many non-LN samples as LN samples. The classification accuracy results of GoogLeNet trained under two configurations are shown in Table VIII. Overall, it achieves lower accuracy results when trained with a “biased prior” in both tasks, and the accuracy difference for ILD patch-based classification is small. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_55", "text": " In this section, we determine and analyze, via CNN visualization, the reasons for which transfer learning is beneficial to achieve better performance on CAD applications. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_56", "text": " Thoracoabdominal LN Detection. In Figure 12, the first layer convolution filters from five different CNN architectures are visualized. We notice that without transfer learning (57, 6), somewhat blurry filters are learned (AlexNet-RI (256x256), AlexNet-RI (64x64), GoogLeNet-RI (256x256) and GoogLeNet-RI (64x64)). However, in AlexNet-TL (256x256), many higher orders of contrast- or edge-preserving patterns (that enable capturing image appearance details) are evidently learned through fine-tuning from ImageNet. 
With a smaller input resolution, AlexNet-RI (64x64) and GoogLeNet-RI (64x64) can learn image contrast filters to some degree; whereas, GoogLeNet-RI (256x256) and AlexNet-RI (256x256) have over-smooth low-level filters throughout. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_57", "text": " ILD classification. We focus on analyzing visual CNN optimization traces and activations from the ILD dataset, as its slice-level setting is most similar to ImageNet’s. Indeed, both datasets use full-size images. The traces of the training loss, validation loss and validation accuracy of AlexNet-RI and AlexNet-TL, are shown in Figure 11. For AlexNet-RI in Figure 11 (a), the training loss significantly decreases as the number of training epochs increases, while the validation loss notably increases and the validation accuracy does not improve much before reaching a plateau. With transfer learning and fine-tuning, much better and consistent performances of training loss, validation loss and validation accuracy traces are obtained (see Figure 11 (b)). We begin the optimization problem – that of fine-tuning the ImageNet pre-trained CNN to classify a comprehensive set of images – by initializing the parameters close to an optimal solution. One could compare this process to making adults learn to classify ILDs, as opposed to babies. During the process, the validation loss, having remained at lower values throughout, achieves higher final accuracy levels than the validation loss on a similar problem with random initialization. Meanwhile, the training losses in both cases decrease to values near zero. This indicates that both AlexNet-RI and AlexNet-TL over-fit on the ILD dataset, due to its small instance size. The quantitative results in Table III indicate that AlexNet-TL and GoogLeNet-TL have consistently better classification accuracies than AlexNet-RI and GoogLeNet-RI, respectively. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_58", "text": " The last pooling layer (pool-5) activation maps of the ImageNet pre-trained AlexNet (analogical to AlexNet-ImNet) and AlexNet-TL, obtained by processing two input images of Figure 2 (b,c), are shown in Figure 13 (a,b). The last pooling layer activation map summarizes the entire input image by highlighting which relative locations or neural reception fields relative to the image are activated. There are a total of 256 (6x6) reception fields in AlexNet . Pooling units where the relative image location of the disease region is present in the image are highlighted with green boxes. Next, we reconstruct the original ILD images using the process of de-convolution, back-propagating with convolution and un-pooling from the activation maps of the chosen pooling units . From the reconstructed images (Figure 13 bottom), we observe that with fine-tuning, AlexNet-TL detects and localizes objects of interest (ILD disease regions depicted in in Figure 2 (b) and (c)) better than AlexNet-ImNet. The filters shown in Figure 13 that better localize regions on the input images (Figure 2 (b) and (c)) respectively, produce relatively higher activations (in the top 5%) among all 512 reception field responses in the fine-tuned AlexNet-TL model. 
As observed in , the final CNN classification score can not be driven solely by a single strong activation in the receptions fields, but often by a sparse set of high activations (i.e., varying selective or sparse activations per input image). ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_59", "text": " We summarize our findings as follows. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_60", "text": " • Deep CNN architectures with 8, even 22 layers (4, 33), can be useful even for CADe problems where the available training datasets are limited. Previously, CNN models used in medical image analysis applications have often been 2∼5similar-to252\\sim 5 orders of magnitude smaller. • The trade-off between using better learning models and using more training data should be carefully considered when searching for an optimal solution to any CADe problem (e.g., mediastinal and abdominal LN detection). • Limited datasets can be a bottleneck to further advancement of CADe. Building progressively growing (in scale), well annotated datasets is at least as crucial as developing new algorithms. This has been accomplished, for instance, in the field of computer vision. The well-known scene recognition problem has made tremendous progress, thanks to the steady and continuous development of Scene-15, MIT Indoor-67, SUN-397 and Place datasets . • Transfer learning from the large scale annotated natural image datasets (ImageNet) to CADe problems has been consistently beneficial in our experiments. This sheds some light on cross-dataset CNN learning in the medical image domain, e.g., the union of the ILD and LTRC datasets , as suggested in this paper. • Finally, applications of off-the-shelf deep CNN image features to CADe problems can be improved by either exploring the performance-complementary properties of hand-crafted features (10, 9, 12), or by training CNNs from scratch and better fine-tuning CNNs on the target medical image dataset, as evaluated in this paper. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" }, { "id": "1602.03409_all_61", "text": " In this paper, we exploit and extensively evaluate three important, previously under-studied factors on deep convolutional neural networks (CNN) architecture, dataset characteristics, and transfer learning. We evaluate CNN performance on two different computer-aided diagnosis applications: thoraco-abdominal lymph node detection and interstitial lung disease classification. The empirical evaluation, CNN model visualization, CNN performance analysis, and conclusive insights can be generalized to the design of high performance CAD systems for other medical imaging tasks. ", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning" } ]
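The passages above describe a concrete transfer-learning protocol: re-initialize the final classification layer and train it at the base learning rate of 0.01, while fine-tuning the ImageNet pre-trained layers at a learning rate 10 times smaller, with momentum 0.9 and weight decay 0.0005. The sketch below is a minimal illustration of that protocol, assuming PyTorch/torchvision and a six-class target task purely for demonstration; the original work used Caffe, and none of the identifiers here come from the source.

```python
# Sketch of an "AlexNet-TL"-style fine-tuning setup: reuse ImageNet-pretrained
# weights, re-initialize the final classifier for the new label set, and give the
# pretrained layers a 10x smaller learning rate than the fresh layer.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 6   # e.g. the six ILD labels (assumption for illustration)
BASE_LR = 0.01    # base learning rate quoted in the passages above

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)

# Replace the last fully-connected layer; its parameters are freshly
# (randomly) initialized for the new classes.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_CLASSES)

# Two parameter groups: pretrained layers at BASE_LR / 10, new layer at BASE_LR.
new_params = list(model.classifier[6].parameters())
new_ids = {id(p) for p in new_params}
pretrained_params = [p for p in model.parameters() if id(p) not in new_ids]

optimizer = torch.optim.SGD(
    [
        {"params": pretrained_params, "lr": BASE_LR / 10},
        {"params": new_params, "lr": BASE_LR},
    ],
    momentum=0.9,
    weight_decay=0.0005,
)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One SGD step on a batch of 3-channel image tensors."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The two optimizer parameter groups mirror the "learning rate 10 times smaller for all layers except the last" rule described in the quoted fine-tuning protocol.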
How is a VAE different from an autoencoder?
[ Disentanglement is the independence of features; it is used in traditional autoencoders, whereas it is not used in VAEs] [22].
[ 22 ]
[ { "id": "1812.02833_all_0", "text": " An oft-stated motivation for learning disentangled representations of data with deep generative models is a desire to achieve interpretability (5, 10)—particularly the decomposability (see §3.2.1 in 33) of latent representations to admit intuitive explanations. Most work has focused on capturing purely independent factors of variation (10, 7, 16, 25, 4, 57, 3, 8, 17, 15, 59), typically evaluating this using purpose-built, synthetic data (15, 17, 25), whose generative factors are independent by construction. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_1", "text": " This conventional view of disentanglement, as recovering independence, has subsequently motivated the development of formal evaluation metrics for independence (15, 25), which in turn has driven the development of objectives that target these metrics, often by employing regularisers explicitly encouraging independence in the representations (15, 25, 16). ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_2", "text": " We argue that such an approach is not generalisable, and potentially even harmful, to learning interpretable representations for more complicated problems, where such simplistic representations cannot accurately mimic the generation of high dimensional data from low dimensional latent spaces, and more richly structured dependencies are required. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_3", "text": " We posit a generalisation of disentanglement in vaes—decomposing their latent representations—that can help avoid such pitfalls. We characterise decomposition in vaes as the fulfilment of two factors: a) the latent encodings of data having an appropriate level of overlap, and b) the aggregate encoding of data conforming to a desired structure, represented through the prior. We emphasize that neither of these factors is sufficient in isolation: without an appropriate level of overlap, encodings can degrade to a lookup table where the latents convey little information about data, and without the aggregate encoding of data following a desired structure, the encodings do not decompose as desired. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_4", "text": " Disentanglement implicitly makes a choice of decomposition: that the latent features are independent of one another. We make this explicit and exploit it to both provide improvement to disentanglement through judicious choices of structure in the prior, and to introduce a more general framework flexible enough to capture alternate, more complex, notions of decomposition such as sparsity, clustering, hierarchical structuring, or independent subspaces. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_5", "text": " To connect our framework with existing approaches for encouraging disentanglement, we provide a theoretical analysis of the β𝛽\\beta-vae (17, 3, 2), and show that it typically only allows control of latent overlap, the first decomposition factor. We show that it can be interpreted, up to a constant offset, as the standard vae objective with its prior annealed as pθ​(𝒛)βsubscript𝑝𝜃superscript𝒛𝛽p_{{\\theta}}\\left(\\bm{z}\\right)^{\\beta} and an additional maximum entropy regularization of the encoder that increases the stochasticity of the encodings. 
Specialising this result for the typical choice of a Gaussian encoder and isotropic Gaussian prior indicates that the β𝛽\\beta-vae, up to a scaling of the latent space, is equivalent to the vae plus a regulariser encouraging higher encoder variance. Moreover, this objective is invariant to rotations of the learned latent representation, meaning that it does not, on its own, encourage the latent variables to take on meaningful representations any more than an arbitrary rotation of them. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_6", "text": " We confirm these results empirically, while further using our decomposition framework to show that simple manipulations to the prior can improve disentanglement, and other decompositions, with little or no detriment to the reconstruction accuracy. Further, motivated by our analysis, we propose an alternative objective that takes into account the distinct needs of the two factors of decomposition, and use it to learn clustered and sparse representations as demonstrations of alternative forms of decomposition. An implementation of our experiments and suggested methods is provided at http://github.com/iffsid/disentangling-disentanglement. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_7", "text": " Let 𝒙𝒙\\bm{x} be an 𝒳𝒳\\mathcal{X}-valued random variable distributed according to an unknown generative process with density p𝒟​(𝒙)subscript𝑝𝒟𝒙p_{\\mathcal{D}}(\\bm{x}) and from which we have observations, X={𝒙1,…,𝒙n}𝑋subscript𝒙1…subscript𝒙𝑛X=\\{\\bm{x}_{1},\\dots,\\bm{x}_{n}\\}. The aim is to learn a latent-variable model pθ​(𝒙,𝒛)subscript𝑝𝜃𝒙𝒛p_{{\\theta}}\\left(\\bm{x},\\bm{z}\\right) that captures this generative process, comprising of a fixed111Learning the prior is possible, but omitted for simplicity. prior over latents p​(𝒛)𝑝𝒛p(\\bm{z}) and a parametric likelihood pθ​(𝒙|𝒛)subscript𝑝𝜃conditional𝒙𝒛p_{{\\theta}}\\left(\\bm{x}|\\bm{z}\\right). Learning proceeds by minimising a divergence between the true data generating distribution and the model w.r.t θ𝜃\\theta, typically arg​min𝜽⁡KL⁡(p𝒟​(𝒙)∥pθ​(𝒙))=arg​max𝜽⁡𝔼p𝒟​(𝒙)⁡(log⁡pθ​(𝒙))subscriptargmin𝜽KLconditionalsubscript𝑝𝒟𝒙subscript𝑝𝜃𝒙subscriptargmax𝜽subscript𝔼subscript𝑝𝒟𝒙subscript𝑝𝜃𝒙\\displaystyle\\operatorname*{arg\\,min}_{\\bm{\\theta}}\\operatorname{\\scalebox{0.95}{\\text{KL}}}\\left(p_{\\mathcal{D}}(\\bm{x})\\,\\|\\;p_{{\\theta}}\\left(\\bm{x}\\right)\\right)=\\operatorname*{arg\\,max}_{\\bm{\\theta}}\\operatorname{{}\\mathbb{E}}_{p_{\\mathcal{D}}(\\bm{x})}\\left(\\log p_{{\\theta}}\\left(\\bm{x}\\right)\\right) where pθ​(𝒙)=∫𝒵pθ​(𝒙|𝒛)​p​(𝒛)​𝑑𝒛subscript𝑝𝜃𝒙subscript𝒵subscript𝑝𝜃conditional𝒙𝒛𝑝𝒛differential-d𝒛p_{{\\theta}}\\left(\\bm{x}\\right)=\\int_{\\mathcal{Z}}p_{{\\theta}}\\left(\\bm{x}|\\bm{z}\\right)p(\\bm{z})d\\bm{z} is the marginal likelihood, or evidence, of datapoint 𝒙𝒙\\bm{x} under the model, approximated by averaging over the observations. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_8", "text": " However, estimating pθ​(𝒙)subscript𝑝𝜃𝒙p_{{\\theta}}\\left(\\bm{x}\\right) (or its gradients) to any sufficient degree of accuracy is typically infeasible. 
A common strategy to ameliorate this issue involves the introduction of a parametric inference model qϕ​(𝒛|𝒙)subscript𝑞italic-ϕconditional𝒛𝒙q_{{\\phi}}\\left(\\bm{z}|\\bm{x}\\right) to construct a variational evidence lower bound (elbo) on log⁡pθ​(𝒙)subscript𝑝𝜃𝒙\\log p_{{\\theta}}\\left(\\bm{x}\\right) as follows ℒ​(𝒙;θ,ϕ)≜logpθ(𝒙)−KL(qϕ(𝒛|𝒙)∥pθ(𝒛|𝒙))=𝔼qϕ​(𝒛|𝒙)​(log⁡pθ​(𝒙|𝒛))−KL⁡(qϕ​(𝒛|𝒙)∥p​(𝒛)).\\displaystyle\\begin{split}\\mathcal{L}(\\bm{x};\\!\\theta,\\!\\phi)\\!&\\triangleq\\!\\log p_{{\\theta}}\\left(\\bm{x}\\right)-\\operatorname{\\scalebox{0.95}{\\text{KL}}}\\left(q_{{\\phi}}\\left(\\bm{z}|\\bm{x}\\right)\\,\\|\\;p_{{\\theta}}\\left(\\bm{z}|\\bm{x}\\right)\\right)\\\\ \\!&=\\!\\mathbb{E}_{q_{{\\phi}}\\left(\\bm{z}|\\bm{x}\\right)\\!}(\\log p_{{\\theta}}(\\bm{x}|\\bm{z}))\\!-\\!\\operatorname{\\scalebox{0.95}{\\text{KL}}}\\left(q_{{\\phi}}(\\bm{z}|\\bm{x})\\!\\,\\|\\;\\!p(\\bm{z})\\!\\right).\\!\\!\\!\\end{split} (1) A variational autoencoder (vae) (27, 48) views this objective from the perspective of a deep stochastic autoencoder, taking the inference model qϕ​(𝒛|𝒙)subscript𝑞italic-ϕconditional𝒛𝒙q_{{\\phi}}\\left(\\bm{z}|\\bm{x}\\right) to be an encoder and the likelihood model pθ​(𝒙|𝒛)subscript𝑝𝜃conditional𝒙𝒛p_{{\\theta}}\\left(\\bm{x}|\\bm{z}\\right) to be a decoder. Here θ𝜃\\theta and ϕitalic-ϕ\\phi are neural network parameters, and learning happens via stochastic gradient ascent (sga) using unbiased estimates of ∇θ,ϕ1n​∑i=1nℒ​(𝒙i;θ,ϕ)subscript∇𝜃italic-ϕ1𝑛superscriptsubscript𝑖1𝑛ℒsubscript𝒙𝑖𝜃italic-ϕ\\nabla_{\\theta,\\phi}\\frac{1}{n}\\sum_{i=1}^{n}\\mathcal{L}(\\bm{x}_{i};{\\theta},{\\phi}). Note that when clear from the context, we denote ℒ​(𝒙;θ,ϕ)ℒ𝒙𝜃italic-ϕ\\mathcal{L}(\\bm{x};\\theta,\\phi) as simply ℒ​(𝒙)ℒ𝒙\\mathcal{L}(\\bm{x}). ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_9", "text": " Disentanglement, as typically employed in literature, refers to independence among features in a representation (5, 15, 18). Conceptually, however, it has a long history, far longer than we could reasonably do justice here, and is far from specific to vaes. The idea stems back to traditional methods such as ICA (58, 23) and conventional autoencoders , through to a range of modern approaches employing deep learning (47, 36, 9, 37, 1, 19, 11). ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_10", "text": " Of particular relevance to this work are approaches that explore disentanglement in the context of vaes (17, 3, 51, 25, 8, 16). Here one aims to achieve independence between the dimensions of the aggregate encoding, typically defined as qϕ​(𝒛)≜𝔼p𝒟​(𝒙)⁡(q​(𝒛|𝒙))≈1n​∑inq​(𝒛|𝒙i)≜subscript𝑞italic-ϕ𝒛subscript𝔼subscript𝑝𝒟𝒙𝑞conditional𝒛𝒙1𝑛superscriptsubscript𝑖𝑛𝑞conditional𝒛subscript𝒙𝑖q_{\\phi}(\\bm{z})\\triangleq\\operatorname{{}\\mathbb{E}}_{p_{\\mathcal{D}}(\\bm{x})}\\left(q(\\bm{z}|\\bm{x})\\right)\\approx\\frac{1}{n}\\sum_{i}^{n}q(\\bm{z}|\\bm{x}_{i}). The significance of qϕ​(𝒛)subscript𝑞italic-ϕ𝒛q_{\\phi}(\\bm{z}) is that it is the marginal distribution induced on the latents by sampling a datapoint and then using the encoder to sample an encoding given that datapoint. It can thus informally be thought of as the pushforward distribution for “sampling” representations in the latent space. 
", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_11", "text": " Within the disentangled vaes literature, there is also a distinction between unsupervised approaches, and semi-supervised approaches wherein one has access to the true generative factor values for some subset of data (28, 51, 6). Our focus, however, is on the unsupervised setting. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_12", "text": " Much of the prior work in the field has either implicitly or explicitly presumed a slightly more ambitious definition of disentanglement than considered above: that it is a measure of how well one captures true factors of variation (which happen to be independent by construction for synthetic data), rather than just independent factors. After all, if we wish for our learned representations to be interpretable, it is necessary for the latent variables to take on clear-cut meaning. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_13", "text": " One such definition is given by Eastwood and Williams , who define it as the extent to which a latent dimension d∈D𝑑𝐷d\\in D in a representation predicts a true generative factor k∈K𝑘𝐾k\\in K, with each latent capturing at most one generative factor. This implicitly assumes D≥K𝐷𝐾D\\geq K, as otherwise the latents are unable to explain all the true generative factors. However, for real data, the association is more likely D≪Kmuch-less-than𝐷𝐾D\\ll K, with one learning a low-dimensional abstraction of a complex process involving many factors. Consequently, such simplistic representations cannot, by definition, be found for more complex datasets that require more richly structured dependencies to be able to encode the information required to generate higher dimensional data. Moreover, for complex datasets involving a finite set of datapoints, it might not be reasonable to presume that one could capture the elements of the true generative process—the data itself might not contain sufficient information to recover these and even if it does, the computation required to achieve this through model learning is unlikely to be tractable. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_14", "text": " The subsequent need for richly structured dependencies between latent dimensions has been reflected in the motivation for a handful of approaches (51, 6, 24, 16) that explore this through graphical models, although employing mutually-inconsistent, and not generalisable, interpretations of disentanglement. This motivates our development of a decomposition framework as a means of extending beyond the limitations of disentanglement. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_15", "text": " The commonly assumed notion of disentanglement is quite restrictive for complex models where the true generative factors are not independent, very large in number, or where it cannot be reasonably assumed that there is a well-defined set of “true” generative factors (as will be the case for many, if not most, real datasets). To this end, we introduce a generalization of disentanglement, decomposition, which at a high-level can be thought of as imposing a desired structure on the learned representations. 
This permits disentanglement as a special case, for which the desired structure is that qϕ​(𝒛)subscript𝑞italic-ϕ𝒛q_{{\\phi}}\\left(\\bm{z}\\right) factors along its dimensions. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_16", "text": " We characterise the decomposition of latent spaces in vaes to be the fulfilment of two factors (as shown in Figure 1): a. An “appropriate” level of overlap in the latent space—ensuring that the range of latent values capable of encoding a particular datapoint is neither too small, nor too large. This is, in general, dictated by the level of stochasticity in the encoder: the noisier the encoding process is, the higher the number of datapoints which can plausibly give rise to a particular encoding. b. The aggregate encoding qϕ​(𝒛)subscript𝑞italic-ϕ𝒛q_{{\\phi}}\\left(\\bm{z}\\right) matching the prior p​(𝒛)𝑝𝒛p\\left(\\bm{z}\\right), where the latter expresses the desired dependency structure between latents. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_17", "text": " The overlap factor Item a is perhaps best understood by considering extremes—too little, and the latents effectively become a lookup table; too much, and the data and latents do not convey information about each other. In either case, meaningfulness of the latent encodings is lost. Thus, without the appropriate level of overlap—dictated both by noise in the true generative process and dataset size—it is not possible to enforce meaningful structure on the latent space. Though quantitatively formalising overlap in general scenarios is surprisingly challenging (c.f.  §§ 7 and D), we note for now that when the encoder distribution is unimodal, it is typically well-characterized by the mutual information between the data and the latents I​(𝒙;𝒛)𝐼𝒙𝒛I(\\bm{x};\\bm{z}). ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_18", "text": " The regularisation factor Item b enforces a congruence between the (aggregate) latent embeddings of data and the dependency structures expressed in the prior. We posit that such structure is best expressed in the prior, as opposed to explicit independence regularisation of the marginal posterior (25, 8), to enable the generative model to express the desired decomposition, and to avoid potentially violating self-consistency between the encoder, decoder, and true data generating distributions. The prior also provides a rich and flexible means of expressing desired structure by defining a generative process that encapsulates dependencies between variables, as with a graphical model. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_19", "text": " Critically, neither factor is sufficient in isolation. An inappropriate level of overlap in the latent space will impede interpretability, irrespective of quality of regularisation, as the latent space need not be meaningful. Conversely, without the pressure to regularise to the prior, the latent space is under no constraint to exhibit the desired structure. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_20", "text": " Decomposition is inherently subjective as we must choose the structure of the prior we regularise to depending on how we intend to use our learned model or what kind of features we would like to uncover from the data. 
This may at first seem unsatisfactory compared to the seemingly objective adjustments often made to the elbo by disentanglement methods. However, disentanglement is itself a subjective choice for the decomposition. We can embrace this subjective nature through judicious choices of the prior distribution; ignoring this imposes unintended assumptions which can have unwanted effects. For example, as we will later show, the rotational invariance of the standard prior p​(𝒛)=𝒩​(𝒛;0,I)𝑝𝒛𝒩𝒛0𝐼p(\\bm{z})=\\mathcal{N}(\\bm{z};0,I) can actually hinder disentanglement. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_21", "text": " To connect existing approaches to our proposed framework, we now consider, as a case study, the β𝛽\\beta-vae —an adaptation of the vae objective (elbo) to learn better-disentangled representations. Specifically, it scales the KL term in the standard ELBO by a factor β>0𝛽0\\beta>0 as ℒβ​(𝒙)=𝔼qϕ​(𝒛|𝒙)⁡(log⁡pθ​(𝒙|𝒛))−β​KL⁡(qϕ​(𝒛|𝒙)∥p​(𝒛)).subscriptℒ𝛽𝒙subscript𝔼subscript𝑞italic-ϕconditional𝒛𝒙subscript𝑝𝜃conditional𝒙𝒛𝛽KLconditionalsubscript𝑞italic-ϕconditional𝒛𝒙𝑝𝒛\\displaystyle\\mathcal{L}_{\\beta}(\\bm{x})\\!=\\!\\operatorname{{}\\mathbb{E}}_{q_{{\\phi}}\\left(\\bm{z}|\\bm{x}\\right)\\!}\\left(\\log p_{{\\theta}}\\left(\\bm{x}|\\bm{z}\\right)\\right)\\!-\\!\\beta\\operatorname{\\scalebox{0.95}{\\text{KL}}}\\left(q_{{\\phi}}\\left(\\bm{z}|\\bm{x}\\right)\\!\\,\\|\\;\\!p\\left(\\bm{z}\\right)\\!\\right).\\!\\! (2) Hoffman et al. showed that the β𝛽\\beta-vae target can be viewed as a standard elbo with the alternative prior r​(𝒛)∝qϕ​(𝒛)(1−β)​p​(𝒛)βproportional-to𝑟𝒛subscript𝑞italic-ϕsuperscript𝒛1𝛽𝑝superscript𝒛𝛽r(\\bm{z})\\propto q_{{\\phi}}\\left(\\bm{z}\\right)^{(1-\\beta)}p(\\bm{z})^{\\beta}, along with terms involving the mutual information and the prior’s normalising constant. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_22", "text": " We now introduce an alternate deconstruction as follows ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_23", "text": " Clearly, the second term in Eq. 3, enforcing a maximum entropy regulariser on the posterior qϕ​(𝒛∣𝒙)subscript𝑞italic-ϕconditional𝒛𝒙q_{{\\phi}}\\left(\\bm{z}\\mid\\bm{x}\\right), allows the value of β𝛽\\beta to affect the overlap of encodings in the latent space. We thus see that it provides a means of controlling decomposition factor (a). However, it is itself not sufficient to enforce disentanglement. For example, the entropy of qϕ​(𝒛∣𝒙)subscript𝑞italic-ϕconditional𝒛𝒙q_{{\\phi}}\\left(\\bm{z}\\mid\\bm{x}\\right) is independent of its mean μθ​(𝒙)subscript𝜇𝜃𝒙\\mu_{\\theta}(\\bm{x}) and is independent to rotations of 𝒛𝒛\\bm{z}, so it is clearly incapable of discouraging certain representations with poor disentanglement. All the same, having the wrong level of regularization can, in turn, lead to an inappropriate level of overlap and undermine the ability to disentangle. Consequently, this term is still important. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_24", "text": " Although the precise impact of prior annealing depends on the original form of the prior, the high-level effect is the same—larger values of β𝛽\\beta cause the effective latent space to collapse towards the modes of the prior. 
For uni-modal priors, the main effect of annealing is to reduce the scaling of 𝒛𝒛\\bm{z}; indeed this is the only effect for generalized Gaussian distributions. While this would appear not to have any tangible effects, closer inspection suggests otherwise—it ensures that the scaling of the encodings matches that of the prior. Only incorporating the maximum-entropy regularisation will simply cause the scaling of the latent space to increase. The rescaling of the prior now cancels this effect, ensuring the scaling of qϕ​(𝒛)subscript𝑞italic-ϕ𝒛q_{{\\phi}}\\left(\\bm{z}\\right) matches that of p​(𝒛)𝑝𝒛p(\\bm{z}). ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_25", "text": " Taken together, this implies that the β𝛽\\beta-vae’s ability to encourage disentanglement is predominantly through direct control over the level of overlap. It places no other direct constraint on the latents to disentangle (although in some cases, the annealed prior may inadvertently encourage better disentanglement), but instead helps avoid the pitfalls of inappropriate overlap. Amongst other things, this explains why large β𝛽\\beta is not universally beneficial for disentanglement, as the level of overlap can be increased too far. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_26", "text": " We can gain further insights into the β𝛽\\beta-vae in the common use case—assuming a Gaussian prior, p​(𝒛)=𝒩​(𝒛;0,Σ)𝑝𝒛𝒩𝒛0Σp(\\bm{z})=\\mathcal{N}(\\bm{z};0,\\Sigma), and Gaussian encoder, qϕ​(𝒛∣𝒙)=𝒩​(𝒛;μϕ​(𝒙),Sϕ​(𝒙))subscript𝑞italic-ϕconditional𝒛𝒙𝒩𝒛subscript𝜇italic-ϕ𝒙subscript𝑆italic-ϕ𝒙q_{{\\phi}}\\left(\\bm{z}\\mid\\bm{x}\\right)=\\mathcal{N}\\left(\\bm{z};\\mu_{\\phi}(\\bm{x}),S_{\\phi}(\\bm{x})\\right). Here it is straightforward to see that annealing simply scales the latent space by 1/β1𝛽1/\\sqrt{\\beta}, i.e. fβ​(𝒛)=𝒩​(𝒛;0,Σ/β)subscript𝑓𝛽𝒛𝒩𝒛0Σ𝛽f_{\\beta}(\\bm{z})=\\mathcal{N}(\\bm{z};0,\\Sigma/\\beta). Given this, it is easy to see that a vae trained with the adjusted target ℒ​(𝒙;πθ,β,qϕ)ℒ𝒙subscript𝜋𝜃𝛽subscript𝑞italic-ϕ\\mathcal{L}\\left(\\bm{x};\\pi_{\\theta,\\beta},q_{\\phi}\\right), but appropriately scaling the latent space, will behave identically to one trained with the original target ℒ​(𝒙)ℒ𝒙\\mathcal{L}(\\bm{x}). It will also have an identical elbo as the expected reconstruction is trivially the same, while the kl between Gaussians is invariant to scaling both equally. More precisely, we have the following result. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_27", "text": " Noting that as c𝑐c is irrelevant to the training process, this indicates an equivalence, up to scaling of the latent space, between training with the β𝛽\\beta-vae objective and a maximum-entropy regularised version of the standard ELBO ℒH,β​(𝒙)≜ℒ​(𝒙)+(β−1)2​log⁡|Sϕ​(𝒙)|,≜subscriptℒ𝐻𝛽𝒙ℒ𝒙𝛽12subscript𝑆italic-ϕ𝒙\\displaystyle\\mathcal{L}_{H,\\beta}(\\bm{x})\\triangleq\\mathcal{L}(\\bm{x})+\\frac{(\\beta-1)}{2}\\log\\lvert S_{\\phi}(\\bm{x})\\rvert, (5) whenever p​(𝒛)𝑝𝒛p\\left(\\bm{z}\\right) and qϕ​(𝒛∣𝒙)subscript𝑞italic-ϕconditional𝒛𝒙q_{{\\phi}}\\left(\\bm{z}\\mid\\bm{x}\\right) are Gaussian. Note that we implicitly presume suitable adjustment of neural-network hyper-parameters and the stochastic gradient scheme to account for the change of scaling in the optimal networks. 
", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_28", "text": " Moreover, the stationary points for the two objectives ℒβ​(𝒙;θ,ϕ)subscriptℒ𝛽𝒙𝜃italic-ϕ\\mathcal{L}_{\\beta}(\\bm{x};\\theta,\\phi) and ℒH,β​(𝒙;θ′,ϕ′)subscriptℒ𝐻𝛽𝒙superscript𝜃′superscriptitalic-ϕ′\\mathcal{L}_{H,\\beta}\\left(\\bm{x};\\theta^{\\prime},\\phi^{\\prime}\\right) are equivalent (c.f. Corollary 2 in Appendix A), indicating that optimising for (5) leads to networks equivalent to those from optimising the β𝛽\\beta-vae objective Eq. 2, up to scaling the encodings by a factor of β𝛽\\sqrt{\\beta}. Under the isotropic Gaussian prior setting, we further have the following result showing that the β𝛽\\beta-vae objective is invariant to rotations of the latent space. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_29", "text": " This shows that the β𝛽\\beta-vae objective does not directly encourage latent variables to take on meaningful representations when using the standard choice of an isotropic Gaussian prior. In fact, on its own, it encourages latent representations which match the true generative factors no more than it encourages any arbitrary rotation of these factors, with such rotations capable of exhibiting strong correlations between latents. This view is further supported by our empirical results (see Figure 2), where we did not observe any gains in disentanglement (using the metric from Kim and Mnih ) from increasing β>0𝛽0\\beta>0 with an isotropic Gaussian prior trained on the 2D Shapes dataset . It may also go some way to explaining the extremely high levels of variation we found in the disentanglement-metric scores between different random seeds at train time. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_30", "text": " It should be noted, however, that the value of β𝛽\\beta can indirectly influence the level of disentanglement when using a mean-field assumption for the encoder distribution (i.e. restricting Sϕ​(x)subscript𝑆italic-ϕ𝑥S_{\\phi}(x) to be diagonal). As noted by Stühmer et al. , Rolinek et al. , increasing β𝛽\\beta can reinforce existing inductive biases, wherein mean-field assumptions encourage representations which reduce dependence between the latent dimensions . ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_31", "text": " Given the characterisation set out above, we now develop an objective that incorporates the effect of both factors (a) and (b). Our analysis of the β𝛽\\beta-vae tells us that its objective allows direct control over the level of overlap, i.e. factor Item a. 
To incorporate direct control over the regularisation Item b between the marginal posterior and the prior, we add a divergence term 𝔻​(qϕ​(z),p​(𝒛))𝔻subscript𝑞italic-ϕ𝑧𝑝𝒛\\mathbb{D}(q_{{\\phi}}\\left(z\\right),p(\\bm{z})), yielding ℒα,β(𝒙)=𝔼qϕ​(𝒛∣𝒙)⁡(log⁡pθ​(𝒙∣𝒛))−β​KL⁡(qϕ​(𝒛∣𝒙)∥p​(𝒛))−α​𝔻​(qϕ​(𝒛),p​(𝒛))subscriptℒ𝛼𝛽𝒙subscript𝔼subscript𝑞italic-ϕconditional𝒛𝒙subscript𝑝𝜃conditional𝒙𝒛𝛽KLconditionalsubscript𝑞italic-ϕconditional𝒛𝒙𝑝𝒛𝛼𝔻subscript𝑞italic-ϕ𝒛𝑝𝒛\\displaystyle\\begin{split}\\mathcal{L}_{\\alpha,\\beta}&(\\bm{x})=\\operatorname{{}\\mathbb{E}}_{q_{{\\phi}}\\left(\\bm{z}\\mid\\bm{x}\\right)}\\left(\\log p_{{\\theta}}\\left(\\bm{x}\\mid\\bm{z}\\right)\\right)\\\\ &-\\beta~{}\\operatorname{\\scalebox{0.95}{\\text{KL}}}\\left(q_{{\\phi}}\\left(\\bm{z}\\mid\\bm{x}\\right)\\,\\|\\;p(\\bm{z})\\right)-\\alpha~{}\\mathbb{D}(q_{{\\phi}}\\left(\\bm{z}\\right),p(\\bm{z}))\\end{split} (7) allowing control over how much factors (a) and (b) are enforced, through appropriate setting of β𝛽\\beta and α𝛼\\alpha respectively. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_32", "text": " Note that such an additional term has been previously considered by Kumar et al. , with 𝔻​(qϕ​(𝒛),p​(𝒛))=KL⁡(qϕ​(𝒛)∥p​(𝒛))𝔻subscript𝑞italic-ϕ𝒛𝑝𝒛KLconditionalsubscript𝑞italic-ϕ𝒛𝑝𝒛\\mathbb{D}(q_{{\\phi}}\\left(\\bm{z}\\right),p(\\bm{z}))=\\operatorname{\\scalebox{0.95}{\\text{KL}}}\\left(q_{{\\phi}}\\left(\\bm{z}\\right)\\,\\|\\;p(\\bm{z})\\right), although for the sake of tractability they rely instead on moment matching using covariances. There have also been a number of approaches that decompose the standard vae objective in different ways (e.g.  20, 16, 13) to expose KL⁡(qϕ​(𝒛)∥p​(𝒛))KLconditionalsubscript𝑞italic-ϕ𝒛𝑝𝒛\\operatorname{\\scalebox{0.95}{\\text{KL}}}\\left(q_{{\\phi}}\\left(\\bm{z}\\right)\\,\\|\\;p(\\bm{z})\\right) as a component, but, as we discuss in Appendix C, this can be difficult to compute correctly in practice, with common approaches leading to highly biased estimates whose practical behaviour is very different than the divergence they are estimating, unless very large batch sizes are used. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_33", "text": " Wasserstein Auto-Encoders  formulate an objective that includes a general divergence term between the prior and marginal posterior, computed using either maximum mean discrepancy (mmd) or a variational formulation of the Jensen-Shannon divergence (a.k.a gan loss). However, we find that empirically, choosing the mmd’s kernel and numerically stabilising its U-statistics estimator to be tricky, and designing and learning a gan to be cumbersome and unstable. Consequently, the problems of choosing an appropriate 𝔻​(qϕ​(𝒛),p​(𝒛))𝔻subscript𝑞italic-ϕ𝒛𝑝𝒛\\mathbb{D}(q_{{\\phi}}\\left(\\bm{z}\\right),p(\\bm{z})) and generating reliable estimates for this choice are tightly coupled, with a general purpose solution remaining an important open problem; see further discussion in Appendix C. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_34", "text": " We first show how subtle changes to the prior distribution can yield improvements in disentanglement. The standard choice of an isotropic Gaussian has previously been justified by the correct assertion that the latents are independent under the prior . 
However, as explained in § 4.1, the rotational invariance of this prior means that it does not directly encourage axis-aligned representations. Priors that break this rotational invariance should be better suited for learning disentangled representations. We assess this hypothesis by training a β𝛽\\beta-vae (i.e. (7) with α=0𝛼0\\alpha=0) on the 2D Shapes dataset  and evaluating disentanglement using the metric of Kim and Mnih . ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_35", "text": " Figure 2 demonstrates that notable improvements in disentanglement can be achieved by using non-isotropic priors: for a given reconstruction loss, implicitly fixed by β𝛽\\beta, non-isotropic Gaussian priors got better disentanglement scores, with further improvement achieved when the prior variance is learnt. With a product of Student-t priors pν​(𝒛)subscript𝑝𝜈𝒛p_{\\nu}(\\bm{z}) (noting pν​(𝒛)→𝒩​(𝒛;𝟎,𝐈)→subscript𝑝𝜈𝒛𝒩𝒛0𝐈p_{\\nu}(\\bm{z})\\rightarrow\\mathcal{N}(\\bm{z};\\mathbf{0},\\mathbf{I}) as ν→∞→𝜈\\nu\\rightarrow\\infty), reducing ν𝜈\\nu only incurred a minor reconstruction penalty, for improved disentanglement. Interestingly, very low values of ν𝜈\\nu caused the disentanglement score to drop again (though still giving higher values than the Gaussian). We speculate that this may be related to the effect of heavy tails on the disentanglement metric itself, rather than being an objectively worse disentanglement. Another interesting result was that for an isotropic Gaussian prior, as per the original β𝛽\\beta-vae setup, no gains at all were achieved in disentanglement by increasing β𝛽\\beta. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_36", "text": " We next consider an alternative decomposition one might wish to impose—clustering of the latent space. For this, we use the “pinwheels” dataset from  and a mixture of four equally-weighted Gaussians as our prior. We then conduct an ablation study to observe the effect of varying α𝛼\\alpha and β𝛽\\beta in ℒα,β​(𝐱)subscriptℒ𝛼𝛽𝐱\\mathcal{L}_{\\alpha,\\beta}(\\mathbf{x}) (as per (7)) on the learned representations, taking the divergence to be KL(p(𝒛)||qϕ(𝒛))\\text{KL}\\left(p(\\bm{z})||q_{{\\phi}}\\left(\\bm{z}\\right)\\right) (see Appendix B for details). ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_37", "text": " We see in Figure 3 that increasing β𝛽\\beta increases the level of overlap in qϕ​(𝒛)subscript𝑞italic-ϕ𝒛q_{{\\phi}}\\left(\\bm{z}\\right), as a consequence of increasing the encoder variance for individual datapoints. When β𝛽\\beta is too large, the encoding of a datapoint loses meaning. Also, as a single datapoint encodes to a Gaussian distribution, qϕ​(𝒛|𝒙)subscript𝑞italic-ϕconditional𝒛𝒙q_{{\\phi}}\\left(\\bm{z}|\\bm{x}\\right) is unable to match p​(𝒛)𝑝𝒛p\\left(\\bm{z}\\right) exactly. Because qϕ​(𝒛|𝒙)→qϕ​(𝒛)→subscript𝑞italic-ϕconditional𝒛𝒙subscript𝑞italic-ϕ𝒛q_{{\\phi}}\\left(\\bm{z}|\\bm{x}\\right)\\rightarrow q_{{\\phi}}\\left(\\bm{z}\\right) when β→∞→𝛽\\beta\\rightarrow\\infty, this in turn means that overly large values of β𝛽\\beta actually cause a mismatch between qϕ​(𝒛)subscript𝑞italic-ϕ𝒛q_{{\\phi}}\\left(\\bm{z}\\right) and p​(𝒛)𝑝𝒛p\\left(\\bm{z}\\right) (see top right of Figure 3). Increasing α𝛼\\alpha, instead always improved the match between qϕ​(𝒛)subscript𝑞italic-ϕ𝒛q_{{\\phi}}\\left(\\bm{z}\\right) and p​(𝒛)𝑝𝒛p\\left(\\bm{z}\\right). 
Here, the finiteness of the dataset and the choice of divergence results in an increase in overlap with increasing α𝛼\\alpha, but only up to the level required for a non-negligible overlap between the nearby datapoints: large values of α𝛼\\alpha did not cause the encodings to collapse to a mode. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_38", "text": " Finally, we consider a commonly desired decomposition—sparsity, which stipulates that only a small fraction of available factors are employed. That is, a sparse representation can be thought of as one where each embedding has a significant proportion of its dimensions off, i.e. close to 00. Sparsity has often been considered for feature-learning (31, 12) and employed in the probabilistic modelling literature (45, 32). ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_39", "text": " Common ways to achieve sparsity are through a specific penalty (e.g. l1subscript𝑙1l_{1}) or a careful choice of prior (peaked at 0). Concomitant with our overarching desire to encode requisite structure in the prior, we adopt the latter, constructing a sparse prior as p​(𝒛)=∏d(1−γ)​𝒩​(zd;0,1)+γ​𝒩​(zd;0,σ02)𝑝𝒛subscriptproduct𝑑1𝛾𝒩subscript𝑧𝑑01𝛾𝒩subscript𝑧𝑑0superscriptsubscript𝜎02p(\\bm{z})=\\prod\\nolimits_{d}~{}(1-\\gamma)~{}\\mathcal{N}(z_{d};0,1)+\\gamma~{}\\mathcal{N}(z_{d};0,\\sigma_{0}^{2}) with σ02=0.05superscriptsubscript𝜎020.05\\sigma_{0}^{2}=0.05. This mixture distribution can be interpreted as a mixture of samples being either off or on, whose proportion is set by the weight parameter γ𝛾\\gamma. We use this prior to learn a vae for the Fashion-MNIST dataset  using the objective ℒα,β​(𝐱)subscriptℒ𝛼𝛽𝐱\\mathcal{L}_{\\alpha,\\beta}(\\mathbf{x}) (as per (7)), taking the divergence to be an mmd with a kernel that only considers difference between the marginal distributions (see Appendix B for details). ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_40", "text": " We measure a representation’s sparsity using the Hoyer extrinsic metric . For 𝒚∈ℝd𝒚superscriptℝ𝑑\\bm{y}\\in\\mathbb{R}^{d}, Hoyer​(𝒚)=d−‖𝒚‖1/‖𝒚‖2d−1∈(0,1),Hoyer𝒚𝑑subscriptnorm𝒚1subscriptnorm𝒚2𝑑101\\displaystyle\\text{Hoyer}~{}(\\bm{y})=\\frac{\\sqrt{d}-\\|\\bm{y}\\|_{1}/\\|\\bm{y}\\|_{2}}{\\sqrt{d}-1}\\in(0,1), yielding 00 for a fully dense vector and 111 for a fully sparse vector. Rather than employing this metric directly to the mean encoding of each datapoint, we first normalise each dimension to have a standard deviation of 111 under its aggregate distribution, i.e. we use z¯d=zd/σ​(zd)subscript¯𝑧𝑑subscript𝑧𝑑𝜎subscript𝑧𝑑\\bar{z}_{d}=z_{d}/\\sigma(z_{d}) where σ​(zd)𝜎subscript𝑧𝑑\\sigma(z_{d}) is the standard deviation of dimension d𝑑d of the latent encoding taken over the dataset. This normalisation is important as one could achieve a “sparse” representation simply by having different dimensions vary along different length scales (something the β𝛽\\beta-vae encourages through its pruning of dimensions ), whereas we desire a representation where different datapoints “activate” different features. We then compute overall sparsity by averaging over the dataset as Sparsity=1n​∑inHoyer​(𝒛¯i)Sparsity1𝑛superscriptsubscript𝑖𝑛Hoyersubscript¯𝒛𝑖\\text{Sparsity}=\\frac{1}{n}\\sum\\nolimits_{i}^{n}\\text{Hoyer}~{}(\\bar{\\bm{z}}_{i}). 
Figure 4 (left) shows that substantial sparsity can be gained by replacing a Gaussian prior (γ=0𝛾0\\gamma=0) by a sparse prior (γ=0.8𝛾0.8\\gamma=0.8). It further shows substantial gains from the inclusion of the aggregate posterior regularization, with α=0𝛼0\\alpha=0 giving far low sparsity than α>0𝛼0\\alpha>0, when using our sparse prior. The use of our sparse prior did not generally harm the reconstruction compared. Large values of α𝛼\\alpha did slightly worsen the reconstruction, but this drop-off was much slower than increases in β𝛽\\beta (note that α𝛼\\alpha is increased to much higher levels than β𝛽\\beta). Interestingly, we see that β𝛽\\beta being either too low or too high also harmed the sparsity. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_41", "text": " We explore the qualitative effects of sparsity in Figure 5, using a network trained with α=1000,β=1,formulae-sequence𝛼1000𝛽1\\alpha=1000,\\beta=1, and γ=0.8𝛾0.8\\gamma=0.8, corresponding to one of the models in Figure 4 (left). The top plot shows the average encoding magnitude for data corresponding to 3 of the 10 classes in the Fashion-MNIST dataset. It clearly shows that the different classes (trousers, dress, and shirt) predominantly encode information along different sets of dimensions, as expected for sparse representations (c.f. Appendix B for plots for all classes). For each of these classes, we explore the latent space along a particular ‘active’ dimension—one with high average encoding magnitude—to observe if they capture meaningful features in the image. We first identify a suitable ‘active’ dimension for a given instance (top row) from the dimension-wise magnitudes of its encoding, by choosing one, say d𝑑d, where the magnitude far exceeds σ02superscriptsubscript𝜎02\\sigma_{0}^{2}. Given encoding value 𝒛dsubscript𝒛𝑑\\bm{z}_{d}, we then interpolate along this dimension (keeping all others fixed) in the range (𝒛d,𝒛d+sign​(𝒛d))subscript𝒛𝑑subscript𝒛𝑑signsubscript𝒛𝑑(\\bm{z}_{d},\\bm{z}_{d}+\\mathrm{sign}(\\bm{z}_{d})); the sign of 𝒛dsubscript𝒛𝑑\\bm{z}_{d} indicating the direction of interpolation. Exploring the latent space in such a manner demonstrates a variety of consistent feature transformations in the image, both within class (a, b, c), and across classes (d), indicating that these sparse dimensions do capture meaningful features in the image. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_42", "text": " Concurrent to our work, Tonolini et al. also considered imposing sparsity in vaes with a spike-slab prior (such that σ0→0→subscript𝜎00\\sigma_{0}\\rightarrow 0). In contrast to our work, they do not impose a constraint on the aggregate encoder, nor do they evaluate their results with a quantitative sparsity metric that accounts for the varying length scales of different latent dimensions ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_43", "text": " Precisely formalising what constitutes the level of overlap in the latent space is surprisingly subtle. 
Prior work has typically instead considered controlling the level of compression through the mutual information between data and latents I​(𝒙;𝒛)𝐼𝒙𝒛I(\\bm{x};\\bm{z}) (3, 2, 20, 41), with, for example,  going on to discuss how controlling the compression can “explicitly encourage useful representations.” Although I​(𝒙;𝒛)𝐼𝒙𝒛I(\\bm{x};\\bm{z}) provides a perfectly serviceable characterisation of overlap in a number of cases, the two are not universally equivalent and we argue that it is the latter which is important in achieving useful representations. In particular, if the form of the encoding distribution is not fixed—as when employing normalising flows, for example—I​(𝒙;𝒛)𝐼𝒙𝒛I(\\bm{x};\\bm{z}) does not necessarily characterise overlap well. We discuss this in greater detail in Appendix D. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_44", "text": " However, when the encoder is unimodal with fixed form (in particularly the tail behaviour is fixed) and the prior is well-characterised by Euclidean distances, then these factors have a substantially reduced ability to vary for a given I​(𝒙;𝒛)𝐼𝒙𝒛I(\\bm{x};\\bm{z}), which subsequently becomes a good characterisation of the level of overlap. When qϕ​(𝒛|𝒙)subscript𝑞italic-ϕconditional𝒛𝒙q_{{\\phi}}\\left(\\bm{z}|\\bm{x}\\right) is Gaussian, controlling the variance of qϕ​(𝒛|𝒙)subscript𝑞italic-ϕconditional𝒛𝒙q_{{\\phi}}\\left(\\bm{z}|\\bm{x}\\right) (with a fixed qϕ​(𝒛)subscript𝑞italic-ϕ𝒛q_{{\\phi}}\\left(\\bm{z}\\right)) should similarly provide an effective means of achieving the desired overlap behaviour. As this is the most common use case, we leave the development of more a general definition of overlap to future work, simply noting that this is an important consideration when using flexible encoder distributions. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_45", "text": " In concurrently published work, Locatello et al. question the plausibility of learning unsupervised disentangled representations with meaningful features, based on theoretical analyses showing an equivalence class of generative models, many members of which could be entangled. Though their analysis is sound, we posit a counterargument to their conclusions, based on the stochastic nature of the encodings used during training. Namely, that this stochasticity means that they need not give rise to the same elbo scores (an important exception is the rotational invariance for isotropic Gaussian priors). Essentially, the encoding noise forces nearby encodings to relate to similar datapoints, while standard choices for the likelihood distribution (e.g. assuming conditional independence) ensure that information is stored in the encodings, not just in the generative network. These restrictions mean that the elbo prefers smooth representations and, provided the prior is not rotationally invariant, means that there no longer need be a class of different representations with the same elbo; simpler representations are preferred to more complex ones. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_46", "text": " The exact form of the encoding distribution is also important here. For example, imagine we restrict the encoder variance to be isotropic and then use a two dimensional prior where one latent dimension has a much larger variance than the other. 
It will be possible to store more information in the prior dimension with higher variance (as we can spread points out more relative to the encoder variance). Consequently, that dimension is more likely to correspond to an important factor of the generative process than the other. Of course, this does not imply that this is a true factor of variation in the generative process, but neither is the meaning that can be attributed to each dimension completely arbitrary. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_47", "text": " All the same, we agree that an important area for future work is to assess when, and to what extent, one might expect learned representations to mimic the true generative process, and, critically, when it should not. For this reason, we actively avoid including any notion of a true generative process in our definition of decomposition, but note that, analogously to disentanglement, it permits such extension in scenarios where doing so can be shown to be appropriate. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_48", "text": " In this work, we explored and analysed the fundamental characteristics of learning disentangled representations, and showed how these can be generalised to a more general framework of decomposition . We characterised the learning of decomposed latent representation with vaes in terms of the control of two factors: i) overlap in the latent space between encodings of different datapoints, and ii) regularisation of the aggregate encoding distribution to the given prior, which encodes the structure one would wish for the latent space to have. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_49", "text": " Connecting prior work on disentanglement to this framework, we analysed the β𝛽\\beta-vae objective to show that its contribution to disentangling is primarily through direct control of the level of overlap between encodings of the data, expressed by maximising the entropy of the encoding distribution. In the commonly encountered case of assuming an isotropic Gaussian prior and an independent Gaussian posterior, we showed that control of overlap is the only effect of the β𝛽\\beta-vae. Motivated by this observation, we developed an alternate objective for the elbo that allows control of the two factors of decomposability through an additional regularisation term. We then conducted empirical evaluations using this objective, targeting alternate forms of decompositions such as clustering and sparsity, and observed the effect of varying the extent of regularisation to the prior on the quality of the resulting clustering and sparseness of the learnt embeddings. The results indicate that we were successful in attaining those decompositions. ", "title": "Disentangling Disentanglement in Variational Autoencoders" } ]
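The contexts quoted above repeatedly reference the β-VAE objective: the standard ELBO with the KL term scaled by a factor β, which under a diagonal-Gaussian encoder and an isotropic Gaussian prior reduces to a closed-form KL penalty. As an illustration only (it is not part of the dataset and not the authors' code), here is a minimal sketch of that loss under those usual assumptions; the function name `beta_vae_loss` and the argument `recon_log_prob` are hypothetical placeholders supplied by whatever decoder is in use.

```python
# Minimal sketch of the beta-VAE loss (negative scaled ELBO) under the common
# assumptions: q(z|x) = N(mu, diag(exp(logvar))) and p(z) = N(0, I).
import torch

def beta_vae_loss(recon_log_prob: torch.Tensor,
                  mu: torch.Tensor,
                  logvar: torch.Tensor,
                  beta: float = 4.0) -> torch.Tensor:
    """Negative beta-VAE ELBO, averaged over the batch.

    recon_log_prob: shape (batch,), log p(x|z) for each example
    mu, logvar:     shape (batch, latent_dim), parameters of q(z|x)
    """
    # Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims
    kl = 0.5 * torch.sum(mu.pow(2) + logvar.exp() - logvar - 1.0, dim=-1)
    # ELBO_beta = E_q[log p(x|z)] - beta * KL; return its negation as the loss
    return -(recon_log_prob - beta * kl).mean()
```

With beta = 1 this recovers the standard (negative) ELBO; larger beta strengthens the KL regularisation, which, per the analysis quoted above, chiefly controls the level of overlap between latent encodings.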
What are the factors of fulfilment for the decomposition of latent spaces in VAEs?
Decomposition in VAEs is characterised as the fulfilment of two factors: a) the latent encodings of data having an appropriate level of overlap, ensuring that the range of latent values capable of encoding a particular datapoint is neither too small nor too large (this is, in general, dictated by the level of stochasticity in the encoder: the noisier the encoding process is, the higher the number of datapoints which can plausibly give rise to a particular encoding); and b) the aggregate encoding of data conforming to a desired structure, represented through the prior [16].
[ 16 ]
[ { "id": "1812.02833_all_0", "text": " An oft-stated motivation for learning disentangled representations of data with deep generative models is a desire to achieve interpretability (5, 10)—particularly the decomposability (see §3.2.1 in 33) of latent representations to admit intuitive explanations. Most work has focused on capturing purely independent factors of variation (10, 7, 16, 25, 4, 57, 3, 8, 17, 15, 59), typically evaluating this using purpose-built, synthetic data (15, 17, 25), whose generative factors are independent by construction. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_1", "text": " This conventional view of disentanglement, as recovering independence, has subsequently motivated the development of formal evaluation metrics for independence (15, 25), which in turn has driven the development of objectives that target these metrics, often by employing regularisers explicitly encouraging independence in the representations (15, 25, 16). ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_2", "text": " We argue that such an approach is not generalisable, and potentially even harmful, to learning interpretable representations for more complicated problems, where such simplistic representations cannot accurately mimic the generation of high dimensional data from low dimensional latent spaces, and more richly structured dependencies are required. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_3", "text": " We posit a generalisation of disentanglement in vaes—decomposing their latent representations—that can help avoid such pitfalls. We characterise decomposition in vaes as the fulfilment of two factors: a) the latent encodings of data having an appropriate level of overlap, and b) the aggregate encoding of data conforming to a desired structure, represented through the prior. We emphasize that neither of these factors is sufficient in isolation: without an appropriate level of overlap, encodings can degrade to a lookup table where the latents convey little information about data, and without the aggregate encoding of data following a desired structure, the encodings do not decompose as desired. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_4", "text": " Disentanglement implicitly makes a choice of decomposition: that the latent features are independent of one another. We make this explicit and exploit it to both provide improvement to disentanglement through judicious choices of structure in the prior, and to introduce a more general framework flexible enough to capture alternate, more complex, notions of decomposition such as sparsity, clustering, hierarchical structuring, or independent subspaces. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_5", "text": " To connect our framework with existing approaches for encouraging disentanglement, we provide a theoretical analysis of the β𝛽\\beta-vae (17, 3, 2), and show that it typically only allows control of latent overlap, the first decomposition factor. We show that it can be interpreted, up to a constant offset, as the standard vae objective with its prior annealed as pθ​(𝒛)βsubscript𝑝𝜃superscript𝒛𝛽p_{{\\theta}}\\left(\\bm{z}\\right)^{\\beta} and an additional maximum entropy regularization of the encoder that increases the stochasticity of the encodings. 
Specialising this result for the typical choice of a Gaussian encoder and isotropic Gaussian prior indicates that the β𝛽\\beta-vae, up to a scaling of the latent space, is equivalent to the vae plus a regulariser encouraging higher encoder variance. Moreover, this objective is invariant to rotations of the learned latent representation, meaning that it does not, on its own, encourage the latent variables to take on meaningful representations any more than an arbitrary rotation of them. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_6", "text": " We confirm these results empirically, while further using our decomposition framework to show that simple manipulations to the prior can improve disentanglement, and other decompositions, with little or no detriment to the reconstruction accuracy. Further, motivated by our analysis, we propose an alternative objective that takes into account the distinct needs of the two factors of decomposition, and use it to learn clustered and sparse representations as demonstrations of alternative forms of decomposition. An implementation of our experiments and suggested methods is provided at http://github.com/iffsid/disentangling-disentanglement. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_7", "text": " Let 𝒙𝒙\\bm{x} be an 𝒳𝒳\\mathcal{X}-valued random variable distributed according to an unknown generative process with density p𝒟​(𝒙)subscript𝑝𝒟𝒙p_{\\mathcal{D}}(\\bm{x}) and from which we have observations, X={𝒙1,…,𝒙n}𝑋subscript𝒙1…subscript𝒙𝑛X=\\{\\bm{x}_{1},\\dots,\\bm{x}_{n}\\}. The aim is to learn a latent-variable model pθ​(𝒙,𝒛)subscript𝑝𝜃𝒙𝒛p_{{\\theta}}\\left(\\bm{x},\\bm{z}\\right) that captures this generative process, comprising of a fixed111Learning the prior is possible, but omitted for simplicity. prior over latents p​(𝒛)𝑝𝒛p(\\bm{z}) and a parametric likelihood pθ​(𝒙|𝒛)subscript𝑝𝜃conditional𝒙𝒛p_{{\\theta}}\\left(\\bm{x}|\\bm{z}\\right). Learning proceeds by minimising a divergence between the true data generating distribution and the model w.r.t θ𝜃\\theta, typically arg​min𝜽⁡KL⁡(p𝒟​(𝒙)∥pθ​(𝒙))=arg​max𝜽⁡𝔼p𝒟​(𝒙)⁡(log⁡pθ​(𝒙))subscriptargmin𝜽KLconditionalsubscript𝑝𝒟𝒙subscript𝑝𝜃𝒙subscriptargmax𝜽subscript𝔼subscript𝑝𝒟𝒙subscript𝑝𝜃𝒙\\displaystyle\\operatorname*{arg\\,min}_{\\bm{\\theta}}\\operatorname{\\scalebox{0.95}{\\text{KL}}}\\left(p_{\\mathcal{D}}(\\bm{x})\\,\\|\\;p_{{\\theta}}\\left(\\bm{x}\\right)\\right)=\\operatorname*{arg\\,max}_{\\bm{\\theta}}\\operatorname{{}\\mathbb{E}}_{p_{\\mathcal{D}}(\\bm{x})}\\left(\\log p_{{\\theta}}\\left(\\bm{x}\\right)\\right) where pθ​(𝒙)=∫𝒵pθ​(𝒙|𝒛)​p​(𝒛)​𝑑𝒛subscript𝑝𝜃𝒙subscript𝒵subscript𝑝𝜃conditional𝒙𝒛𝑝𝒛differential-d𝒛p_{{\\theta}}\\left(\\bm{x}\\right)=\\int_{\\mathcal{Z}}p_{{\\theta}}\\left(\\bm{x}|\\bm{z}\\right)p(\\bm{z})d\\bm{z} is the marginal likelihood, or evidence, of datapoint 𝒙𝒙\\bm{x} under the model, approximated by averaging over the observations. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_8", "text": " However, estimating pθ​(𝒙)subscript𝑝𝜃𝒙p_{{\\theta}}\\left(\\bm{x}\\right) (or its gradients) to any sufficient degree of accuracy is typically infeasible. 
A common strategy to ameliorate this issue involves the introduction of a parametric inference model qϕ​(𝒛|𝒙)subscript𝑞italic-ϕconditional𝒛𝒙q_{{\\phi}}\\left(\\bm{z}|\\bm{x}\\right) to construct a variational evidence lower bound (elbo) on log⁡pθ​(𝒙)subscript𝑝𝜃𝒙\\log p_{{\\theta}}\\left(\\bm{x}\\right) as follows ℒ​(𝒙;θ,ϕ)≜logpθ(𝒙)−KL(qϕ(𝒛|𝒙)∥pθ(𝒛|𝒙))=𝔼qϕ​(𝒛|𝒙)​(log⁡pθ​(𝒙|𝒛))−KL⁡(qϕ​(𝒛|𝒙)∥p​(𝒛)).\\displaystyle\\begin{split}\\mathcal{L}(\\bm{x};\\!\\theta,\\!\\phi)\\!&\\triangleq\\!\\log p_{{\\theta}}\\left(\\bm{x}\\right)-\\operatorname{\\scalebox{0.95}{\\text{KL}}}\\left(q_{{\\phi}}\\left(\\bm{z}|\\bm{x}\\right)\\,\\|\\;p_{{\\theta}}\\left(\\bm{z}|\\bm{x}\\right)\\right)\\\\ \\!&=\\!\\mathbb{E}_{q_{{\\phi}}\\left(\\bm{z}|\\bm{x}\\right)\\!}(\\log p_{{\\theta}}(\\bm{x}|\\bm{z}))\\!-\\!\\operatorname{\\scalebox{0.95}{\\text{KL}}}\\left(q_{{\\phi}}(\\bm{z}|\\bm{x})\\!\\,\\|\\;\\!p(\\bm{z})\\!\\right).\\!\\!\\!\\end{split} (1) A variational autoencoder (vae) (27, 48) views this objective from the perspective of a deep stochastic autoencoder, taking the inference model qϕ​(𝒛|𝒙)subscript𝑞italic-ϕconditional𝒛𝒙q_{{\\phi}}\\left(\\bm{z}|\\bm{x}\\right) to be an encoder and the likelihood model pθ​(𝒙|𝒛)subscript𝑝𝜃conditional𝒙𝒛p_{{\\theta}}\\left(\\bm{x}|\\bm{z}\\right) to be a decoder. Here θ𝜃\\theta and ϕitalic-ϕ\\phi are neural network parameters, and learning happens via stochastic gradient ascent (sga) using unbiased estimates of ∇θ,ϕ1n​∑i=1nℒ​(𝒙i;θ,ϕ)subscript∇𝜃italic-ϕ1𝑛superscriptsubscript𝑖1𝑛ℒsubscript𝒙𝑖𝜃italic-ϕ\\nabla_{\\theta,\\phi}\\frac{1}{n}\\sum_{i=1}^{n}\\mathcal{L}(\\bm{x}_{i};{\\theta},{\\phi}). Note that when clear from the context, we denote ℒ​(𝒙;θ,ϕ)ℒ𝒙𝜃italic-ϕ\\mathcal{L}(\\bm{x};\\theta,\\phi) as simply ℒ​(𝒙)ℒ𝒙\\mathcal{L}(\\bm{x}). ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_9", "text": " Disentanglement, as typically employed in literature, refers to independence among features in a representation (5, 15, 18). Conceptually, however, it has a long history, far longer than we could reasonably do justice here, and is far from specific to vaes. The idea stems back to traditional methods such as ICA (58, 23) and conventional autoencoders , through to a range of modern approaches employing deep learning (47, 36, 9, 37, 1, 19, 11). ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_10", "text": " Of particular relevance to this work are approaches that explore disentanglement in the context of vaes (17, 3, 51, 25, 8, 16). Here one aims to achieve independence between the dimensions of the aggregate encoding, typically defined as qϕ​(𝒛)≜𝔼p𝒟​(𝒙)⁡(q​(𝒛|𝒙))≈1n​∑inq​(𝒛|𝒙i)≜subscript𝑞italic-ϕ𝒛subscript𝔼subscript𝑝𝒟𝒙𝑞conditional𝒛𝒙1𝑛superscriptsubscript𝑖𝑛𝑞conditional𝒛subscript𝒙𝑖q_{\\phi}(\\bm{z})\\triangleq\\operatorname{{}\\mathbb{E}}_{p_{\\mathcal{D}}(\\bm{x})}\\left(q(\\bm{z}|\\bm{x})\\right)\\approx\\frac{1}{n}\\sum_{i}^{n}q(\\bm{z}|\\bm{x}_{i}). The significance of qϕ​(𝒛)subscript𝑞italic-ϕ𝒛q_{\\phi}(\\bm{z}) is that it is the marginal distribution induced on the latents by sampling a datapoint and then using the encoder to sample an encoding given that datapoint. It can thus informally be thought of as the pushforward distribution for “sampling” representations in the latent space. 
", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_11", "text": " Within the disentangled vaes literature, there is also a distinction between unsupervised approaches, and semi-supervised approaches wherein one has access to the true generative factor values for some subset of data (28, 51, 6). Our focus, however, is on the unsupervised setting. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_12", "text": " Much of the prior work in the field has either implicitly or explicitly presumed a slightly more ambitious definition of disentanglement than considered above: that it is a measure of how well one captures true factors of variation (which happen to be independent by construction for synthetic data), rather than just independent factors. After all, if we wish for our learned representations to be interpretable, it is necessary for the latent variables to take on clear-cut meaning. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_13", "text": " One such definition is given by Eastwood and Williams , who define it as the extent to which a latent dimension d∈D𝑑𝐷d\\in D in a representation predicts a true generative factor k∈K𝑘𝐾k\\in K, with each latent capturing at most one generative factor. This implicitly assumes D≥K𝐷𝐾D\\geq K, as otherwise the latents are unable to explain all the true generative factors. However, for real data, the association is more likely D≪Kmuch-less-than𝐷𝐾D\\ll K, with one learning a low-dimensional abstraction of a complex process involving many factors. Consequently, such simplistic representations cannot, by definition, be found for more complex datasets that require more richly structured dependencies to be able to encode the information required to generate higher dimensional data. Moreover, for complex datasets involving a finite set of datapoints, it might not be reasonable to presume that one could capture the elements of the true generative process—the data itself might not contain sufficient information to recover these and even if it does, the computation required to achieve this through model learning is unlikely to be tractable. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_14", "text": " The subsequent need for richly structured dependencies between latent dimensions has been reflected in the motivation for a handful of approaches (51, 6, 24, 16) that explore this through graphical models, although employing mutually-inconsistent, and not generalisable, interpretations of disentanglement. This motivates our development of a decomposition framework as a means of extending beyond the limitations of disentanglement. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_15", "text": " The commonly assumed notion of disentanglement is quite restrictive for complex models where the true generative factors are not independent, very large in number, or where it cannot be reasonably assumed that there is a well-defined set of “true” generative factors (as will be the case for many, if not most, real datasets). To this end, we introduce a generalization of disentanglement, decomposition, which at a high-level can be thought of as imposing a desired structure on the learned representations. 
This permits disentanglement as a special case, for which the desired structure is that qϕ​(𝒛)subscript𝑞italic-ϕ𝒛q_{{\\phi}}\\left(\\bm{z}\\right) factors along its dimensions. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_16", "text": " We characterise the decomposition of latent spaces in vaes to be the fulfilment of two factors (as shown in Figure 1): a. An “appropriate” level of overlap in the latent space—ensuring that the range of latent values capable of encoding a particular datapoint is neither too small, nor too large. This is, in general, dictated by the level of stochasticity in the encoder: the noisier the encoding process is, the higher the number of datapoints which can plausibly give rise to a particular encoding. b. The aggregate encoding qϕ​(𝒛)subscript𝑞italic-ϕ𝒛q_{{\\phi}}\\left(\\bm{z}\\right) matching the prior p​(𝒛)𝑝𝒛p\\left(\\bm{z}\\right), where the latter expresses the desired dependency structure between latents. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_17", "text": " The overlap factor Item a is perhaps best understood by considering extremes—too little, and the latents effectively become a lookup table; too much, and the data and latents do not convey information about each other. In either case, meaningfulness of the latent encodings is lost. Thus, without the appropriate level of overlap—dictated both by noise in the true generative process and dataset size—it is not possible to enforce meaningful structure on the latent space. Though quantitatively formalising overlap in general scenarios is surprisingly challenging (c.f.  §§ 7 and D), we note for now that when the encoder distribution is unimodal, it is typically well-characterized by the mutual information between the data and the latents I​(𝒙;𝒛)𝐼𝒙𝒛I(\\bm{x};\\bm{z}). ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_18", "text": " The regularisation factor Item b enforces a congruence between the (aggregate) latent embeddings of data and the dependency structures expressed in the prior. We posit that such structure is best expressed in the prior, as opposed to explicit independence regularisation of the marginal posterior (25, 8), to enable the generative model to express the desired decomposition, and to avoid potentially violating self-consistency between the encoder, decoder, and true data generating distributions. The prior also provides a rich and flexible means of expressing desired structure by defining a generative process that encapsulates dependencies between variables, as with a graphical model. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_19", "text": " Critically, neither factor is sufficient in isolation. An inappropriate level of overlap in the latent space will impede interpretability, irrespective of quality of regularisation, as the latent space need not be meaningful. Conversely, without the pressure to regularise to the prior, the latent space is under no constraint to exhibit the desired structure. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_20", "text": " Decomposition is inherently subjective as we must choose the structure of the prior we regularise to depending on how we intend to use our learned model or what kind of features we would like to uncover from the data. 
This may at first seem unsatisfactory compared to the seemingly objective adjustments often made to the elbo by disentanglement methods. However, disentanglement is itself a subjective choice for the decomposition. We can embrace this subjective nature through judicious choices of the prior distribution; ignoring this imposes unintended assumptions which can have unwanted effects. For example, as we will later show, the rotational invariance of the standard prior p​(𝒛)=𝒩​(𝒛;0,I)𝑝𝒛𝒩𝒛0𝐼p(\\bm{z})=\\mathcal{N}(\\bm{z};0,I) can actually hinder disentanglement. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_21", "text": " To connect existing approaches to our proposed framework, we now consider, as a case study, the β𝛽\\beta-vae —an adaptation of the vae objective (elbo) to learn better-disentangled representations. Specifically, it scales the KL term in the standard ELBO by a factor β>0𝛽0\\beta>0 as ℒβ​(𝒙)=𝔼qϕ​(𝒛|𝒙)⁡(log⁡pθ​(𝒙|𝒛))−β​KL⁡(qϕ​(𝒛|𝒙)∥p​(𝒛)).subscriptℒ𝛽𝒙subscript𝔼subscript𝑞italic-ϕconditional𝒛𝒙subscript𝑝𝜃conditional𝒙𝒛𝛽KLconditionalsubscript𝑞italic-ϕconditional𝒛𝒙𝑝𝒛\\displaystyle\\mathcal{L}_{\\beta}(\\bm{x})\\!=\\!\\operatorname{{}\\mathbb{E}}_{q_{{\\phi}}\\left(\\bm{z}|\\bm{x}\\right)\\!}\\left(\\log p_{{\\theta}}\\left(\\bm{x}|\\bm{z}\\right)\\right)\\!-\\!\\beta\\operatorname{\\scalebox{0.95}{\\text{KL}}}\\left(q_{{\\phi}}\\left(\\bm{z}|\\bm{x}\\right)\\!\\,\\|\\;\\!p\\left(\\bm{z}\\right)\\!\\right).\\!\\! (2) Hoffman et al. showed that the β𝛽\\beta-vae target can be viewed as a standard elbo with the alternative prior r​(𝒛)∝qϕ​(𝒛)(1−β)​p​(𝒛)βproportional-to𝑟𝒛subscript𝑞italic-ϕsuperscript𝒛1𝛽𝑝superscript𝒛𝛽r(\\bm{z})\\propto q_{{\\phi}}\\left(\\bm{z}\\right)^{(1-\\beta)}p(\\bm{z})^{\\beta}, along with terms involving the mutual information and the prior’s normalising constant. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_22", "text": " We now introduce an alternate deconstruction as follows ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_23", "text": " Clearly, the second term in Eq. 3, enforcing a maximum entropy regulariser on the posterior qϕ​(𝒛∣𝒙)subscript𝑞italic-ϕconditional𝒛𝒙q_{{\\phi}}\\left(\\bm{z}\\mid\\bm{x}\\right), allows the value of β𝛽\\beta to affect the overlap of encodings in the latent space. We thus see that it provides a means of controlling decomposition factor (a). However, it is itself not sufficient to enforce disentanglement. For example, the entropy of qϕ​(𝒛∣𝒙)subscript𝑞italic-ϕconditional𝒛𝒙q_{{\\phi}}\\left(\\bm{z}\\mid\\bm{x}\\right) is independent of its mean μθ​(𝒙)subscript𝜇𝜃𝒙\\mu_{\\theta}(\\bm{x}) and is independent to rotations of 𝒛𝒛\\bm{z}, so it is clearly incapable of discouraging certain representations with poor disentanglement. All the same, having the wrong level of regularization can, in turn, lead to an inappropriate level of overlap and undermine the ability to disentangle. Consequently, this term is still important. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_24", "text": " Although the precise impact of prior annealing depends on the original form of the prior, the high-level effect is the same—larger values of β𝛽\\beta cause the effective latent space to collapse towards the modes of the prior. 
For uni-modal priors, the main effect of annealing is to reduce the scaling of 𝒛𝒛\\bm{z}; indeed this is the only effect for generalized Gaussian distributions. While this would appear not to have any tangible effects, closer inspection suggests otherwise—it ensures that the scaling of the encodings matches that of the prior. Only incorporating the maximum-entropy regularisation will simply cause the scaling of the latent space to increase. The rescaling of the prior now cancels this effect, ensuring the scaling of qϕ​(𝒛)subscript𝑞italic-ϕ𝒛q_{{\\phi}}\\left(\\bm{z}\\right) matches that of p​(𝒛)𝑝𝒛p(\\bm{z}). ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_25", "text": " Taken together, this implies that the β𝛽\\beta-vae’s ability to encourage disentanglement is predominantly through direct control over the level of overlap. It places no other direct constraint on the latents to disentangle (although in some cases, the annealed prior may inadvertently encourage better disentanglement), but instead helps avoid the pitfalls of inappropriate overlap. Amongst other things, this explains why large β𝛽\\beta is not universally beneficial for disentanglement, as the level of overlap can be increased too far. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_26", "text": " We can gain further insights into the β𝛽\\beta-vae in the common use case—assuming a Gaussian prior, p​(𝒛)=𝒩​(𝒛;0,Σ)𝑝𝒛𝒩𝒛0Σp(\\bm{z})=\\mathcal{N}(\\bm{z};0,\\Sigma), and Gaussian encoder, qϕ​(𝒛∣𝒙)=𝒩​(𝒛;μϕ​(𝒙),Sϕ​(𝒙))subscript𝑞italic-ϕconditional𝒛𝒙𝒩𝒛subscript𝜇italic-ϕ𝒙subscript𝑆italic-ϕ𝒙q_{{\\phi}}\\left(\\bm{z}\\mid\\bm{x}\\right)=\\mathcal{N}\\left(\\bm{z};\\mu_{\\phi}(\\bm{x}),S_{\\phi}(\\bm{x})\\right). Here it is straightforward to see that annealing simply scales the latent space by 1/β1𝛽1/\\sqrt{\\beta}, i.e. fβ​(𝒛)=𝒩​(𝒛;0,Σ/β)subscript𝑓𝛽𝒛𝒩𝒛0Σ𝛽f_{\\beta}(\\bm{z})=\\mathcal{N}(\\bm{z};0,\\Sigma/\\beta). Given this, it is easy to see that a vae trained with the adjusted target ℒ​(𝒙;πθ,β,qϕ)ℒ𝒙subscript𝜋𝜃𝛽subscript𝑞italic-ϕ\\mathcal{L}\\left(\\bm{x};\\pi_{\\theta,\\beta},q_{\\phi}\\right), but appropriately scaling the latent space, will behave identically to one trained with the original target ℒ​(𝒙)ℒ𝒙\\mathcal{L}(\\bm{x}). It will also have an identical elbo as the expected reconstruction is trivially the same, while the kl between Gaussians is invariant to scaling both equally. More precisely, we have the following result. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_27", "text": " Noting that as c𝑐c is irrelevant to the training process, this indicates an equivalence, up to scaling of the latent space, between training with the β𝛽\\beta-vae objective and a maximum-entropy regularised version of the standard ELBO ℒH,β​(𝒙)≜ℒ​(𝒙)+(β−1)2​log⁡|Sϕ​(𝒙)|,≜subscriptℒ𝐻𝛽𝒙ℒ𝒙𝛽12subscript𝑆italic-ϕ𝒙\\displaystyle\\mathcal{L}_{H,\\beta}(\\bm{x})\\triangleq\\mathcal{L}(\\bm{x})+\\frac{(\\beta-1)}{2}\\log\\lvert S_{\\phi}(\\bm{x})\\rvert, (5) whenever p​(𝒛)𝑝𝒛p\\left(\\bm{z}\\right) and qϕ​(𝒛∣𝒙)subscript𝑞italic-ϕconditional𝒛𝒙q_{{\\phi}}\\left(\\bm{z}\\mid\\bm{x}\\right) are Gaussian. Note that we implicitly presume suitable adjustment of neural-network hyper-parameters and the stochastic gradient scheme to account for the change of scaling in the optimal networks. 
", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_28", "text": " Moreover, the stationary points for the two objectives ℒβ​(𝒙;θ,ϕ)subscriptℒ𝛽𝒙𝜃italic-ϕ\\mathcal{L}_{\\beta}(\\bm{x};\\theta,\\phi) and ℒH,β​(𝒙;θ′,ϕ′)subscriptℒ𝐻𝛽𝒙superscript𝜃′superscriptitalic-ϕ′\\mathcal{L}_{H,\\beta}\\left(\\bm{x};\\theta^{\\prime},\\phi^{\\prime}\\right) are equivalent (c.f. Corollary 2 in Appendix A), indicating that optimising for (5) leads to networks equivalent to those from optimising the β𝛽\\beta-vae objective Eq. 2, up to scaling the encodings by a factor of β𝛽\\sqrt{\\beta}. Under the isotropic Gaussian prior setting, we further have the following result showing that the β𝛽\\beta-vae objective is invariant to rotations of the latent space. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_29", "text": " This shows that the β𝛽\\beta-vae objective does not directly encourage latent variables to take on meaningful representations when using the standard choice of an isotropic Gaussian prior. In fact, on its own, it encourages latent representations which match the true generative factors no more than it encourages any arbitrary rotation of these factors, with such rotations capable of exhibiting strong correlations between latents. This view is further supported by our empirical results (see Figure 2), where we did not observe any gains in disentanglement (using the metric from Kim and Mnih ) from increasing β>0𝛽0\\beta>0 with an isotropic Gaussian prior trained on the 2D Shapes dataset . It may also go some way to explaining the extremely high levels of variation we found in the disentanglement-metric scores between different random seeds at train time. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_30", "text": " It should be noted, however, that the value of β𝛽\\beta can indirectly influence the level of disentanglement when using a mean-field assumption for the encoder distribution (i.e. restricting Sϕ​(x)subscript𝑆italic-ϕ𝑥S_{\\phi}(x) to be diagonal). As noted by Stühmer et al. , Rolinek et al. , increasing β𝛽\\beta can reinforce existing inductive biases, wherein mean-field assumptions encourage representations which reduce dependence between the latent dimensions . ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_31", "text": " Given the characterisation set out above, we now develop an objective that incorporates the effect of both factors (a) and (b). Our analysis of the β𝛽\\beta-vae tells us that its objective allows direct control over the level of overlap, i.e. factor Item a. 
To incorporate direct control over the regularisation Item b between the marginal posterior and the prior, we add a divergence term $\\mathbb{D}(q_{\\phi}(\\bm{z}), p(\\bm{z}))$, yielding $\\mathcal{L}_{\\alpha,\\beta}(\\bm{x}) = \\mathbb{E}_{q_{\\phi}(\\bm{z}\\mid\\bm{x})}(\\log p_{\\theta}(\\bm{x}\\mid\\bm{z})) - \\beta\\,\\operatorname{KL}(q_{\\phi}(\\bm{z}\\mid\\bm{x})\\,\\|\\,p(\\bm{z})) - \\alpha\\,\\mathbb{D}(q_{\\phi}(\\bm{z}), p(\\bm{z}))$ (7) allowing control over how much factors (a) and (b) are enforced, through appropriate setting of $\\beta$ and $\\alpha$ respectively. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_32", "text": " Note that such an additional term has been previously considered by Kumar et al., with $\\mathbb{D}(q_{\\phi}(\\bm{z}), p(\\bm{z})) = \\operatorname{KL}(q_{\\phi}(\\bm{z})\\,\\|\\,p(\\bm{z}))$, although for the sake of tractability they rely instead on moment matching using covariances. There have also been a number of approaches that decompose the standard vae objective in different ways (e.g. 20, 16, 13) to expose $\\operatorname{KL}(q_{\\phi}(\\bm{z})\\,\\|\\,p(\\bm{z}))$ as a component, but, as we discuss in Appendix C, this can be difficult to compute correctly in practice, with common approaches leading to highly biased estimates whose practical behaviour is very different than the divergence they are estimating, unless very large batch sizes are used. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_33", "text": " Wasserstein Auto-Encoders formulate an objective that includes a general divergence term between the prior and marginal posterior, computed using either maximum mean discrepancy (mmd) or a variational formulation of the Jensen-Shannon divergence (a.k.a. gan loss). However, we find that, empirically, choosing the mmd’s kernel and numerically stabilising its U-statistics estimator is tricky, and designing and learning a gan is cumbersome and unstable. Consequently, the problems of choosing an appropriate $\\mathbb{D}(q_{\\phi}(\\bm{z}), p(\\bm{z}))$ and generating reliable estimates for this choice are tightly coupled, with a general purpose solution remaining an important open problem; see further discussion in Appendix C. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_34", "text": " We first show how subtle changes to the prior distribution can yield improvements in disentanglement. The standard choice of an isotropic Gaussian has previously been justified by the correct assertion that the latents are independent under the prior. 
However, as explained in § 4.1, the rotational invariance of this prior means that it does not directly encourage axis-aligned representations. Priors that break this rotational invariance should be better suited for learning disentangled representations. We assess this hypothesis by training a β𝛽\\beta-vae (i.e. (7) with α=0𝛼0\\alpha=0) on the 2D Shapes dataset  and evaluating disentanglement using the metric of Kim and Mnih . ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_35", "text": " Figure 2 demonstrates that notable improvements in disentanglement can be achieved by using non-isotropic priors: for a given reconstruction loss, implicitly fixed by β𝛽\\beta, non-isotropic Gaussian priors got better disentanglement scores, with further improvement achieved when the prior variance is learnt. With a product of Student-t priors pν​(𝒛)subscript𝑝𝜈𝒛p_{\\nu}(\\bm{z}) (noting pν​(𝒛)→𝒩​(𝒛;𝟎,𝐈)→subscript𝑝𝜈𝒛𝒩𝒛0𝐈p_{\\nu}(\\bm{z})\\rightarrow\\mathcal{N}(\\bm{z};\\mathbf{0},\\mathbf{I}) as ν→∞→𝜈\\nu\\rightarrow\\infty), reducing ν𝜈\\nu only incurred a minor reconstruction penalty, for improved disentanglement. Interestingly, very low values of ν𝜈\\nu caused the disentanglement score to drop again (though still giving higher values than the Gaussian). We speculate that this may be related to the effect of heavy tails on the disentanglement metric itself, rather than being an objectively worse disentanglement. Another interesting result was that for an isotropic Gaussian prior, as per the original β𝛽\\beta-vae setup, no gains at all were achieved in disentanglement by increasing β𝛽\\beta. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_36", "text": " We next consider an alternative decomposition one might wish to impose—clustering of the latent space. For this, we use the “pinwheels” dataset from  and a mixture of four equally-weighted Gaussians as our prior. We then conduct an ablation study to observe the effect of varying α𝛼\\alpha and β𝛽\\beta in ℒα,β​(𝐱)subscriptℒ𝛼𝛽𝐱\\mathcal{L}_{\\alpha,\\beta}(\\mathbf{x}) (as per (7)) on the learned representations, taking the divergence to be KL(p(𝒛)||qϕ(𝒛))\\text{KL}\\left(p(\\bm{z})||q_{{\\phi}}\\left(\\bm{z}\\right)\\right) (see Appendix B for details). ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_37", "text": " We see in Figure 3 that increasing β𝛽\\beta increases the level of overlap in qϕ​(𝒛)subscript𝑞italic-ϕ𝒛q_{{\\phi}}\\left(\\bm{z}\\right), as a consequence of increasing the encoder variance for individual datapoints. When β𝛽\\beta is too large, the encoding of a datapoint loses meaning. Also, as a single datapoint encodes to a Gaussian distribution, qϕ​(𝒛|𝒙)subscript𝑞italic-ϕconditional𝒛𝒙q_{{\\phi}}\\left(\\bm{z}|\\bm{x}\\right) is unable to match p​(𝒛)𝑝𝒛p\\left(\\bm{z}\\right) exactly. Because qϕ​(𝒛|𝒙)→qϕ​(𝒛)→subscript𝑞italic-ϕconditional𝒛𝒙subscript𝑞italic-ϕ𝒛q_{{\\phi}}\\left(\\bm{z}|\\bm{x}\\right)\\rightarrow q_{{\\phi}}\\left(\\bm{z}\\right) when β→∞→𝛽\\beta\\rightarrow\\infty, this in turn means that overly large values of β𝛽\\beta actually cause a mismatch between qϕ​(𝒛)subscript𝑞italic-ϕ𝒛q_{{\\phi}}\\left(\\bm{z}\\right) and p​(𝒛)𝑝𝒛p\\left(\\bm{z}\\right) (see top right of Figure 3). Increasing α𝛼\\alpha, instead always improved the match between qϕ​(𝒛)subscript𝑞italic-ϕ𝒛q_{{\\phi}}\\left(\\bm{z}\\right) and p​(𝒛)𝑝𝒛p\\left(\\bm{z}\\right). 
Here, the finiteness of the dataset and the choice of divergence results in an increase in overlap with increasing α𝛼\\alpha, but only up to the level required for a non-negligible overlap between the nearby datapoints: large values of α𝛼\\alpha did not cause the encodings to collapse to a mode. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_38", "text": " Finally, we consider a commonly desired decomposition—sparsity, which stipulates that only a small fraction of available factors are employed. That is, a sparse representation can be thought of as one where each embedding has a significant proportion of its dimensions off, i.e. close to 00. Sparsity has often been considered for feature-learning (31, 12) and employed in the probabilistic modelling literature (45, 32). ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_39", "text": " Common ways to achieve sparsity are through a specific penalty (e.g. l1subscript𝑙1l_{1}) or a careful choice of prior (peaked at 0). Concomitant with our overarching desire to encode requisite structure in the prior, we adopt the latter, constructing a sparse prior as p​(𝒛)=∏d(1−γ)​𝒩​(zd;0,1)+γ​𝒩​(zd;0,σ02)𝑝𝒛subscriptproduct𝑑1𝛾𝒩subscript𝑧𝑑01𝛾𝒩subscript𝑧𝑑0superscriptsubscript𝜎02p(\\bm{z})=\\prod\\nolimits_{d}~{}(1-\\gamma)~{}\\mathcal{N}(z_{d};0,1)+\\gamma~{}\\mathcal{N}(z_{d};0,\\sigma_{0}^{2}) with σ02=0.05superscriptsubscript𝜎020.05\\sigma_{0}^{2}=0.05. This mixture distribution can be interpreted as a mixture of samples being either off or on, whose proportion is set by the weight parameter γ𝛾\\gamma. We use this prior to learn a vae for the Fashion-MNIST dataset  using the objective ℒα,β​(𝐱)subscriptℒ𝛼𝛽𝐱\\mathcal{L}_{\\alpha,\\beta}(\\mathbf{x}) (as per (7)), taking the divergence to be an mmd with a kernel that only considers difference between the marginal distributions (see Appendix B for details). ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_40", "text": " We measure a representation’s sparsity using the Hoyer extrinsic metric . For 𝒚∈ℝd𝒚superscriptℝ𝑑\\bm{y}\\in\\mathbb{R}^{d}, Hoyer​(𝒚)=d−‖𝒚‖1/‖𝒚‖2d−1∈(0,1),Hoyer𝒚𝑑subscriptnorm𝒚1subscriptnorm𝒚2𝑑101\\displaystyle\\text{Hoyer}~{}(\\bm{y})=\\frac{\\sqrt{d}-\\|\\bm{y}\\|_{1}/\\|\\bm{y}\\|_{2}}{\\sqrt{d}-1}\\in(0,1), yielding 00 for a fully dense vector and 111 for a fully sparse vector. Rather than employing this metric directly to the mean encoding of each datapoint, we first normalise each dimension to have a standard deviation of 111 under its aggregate distribution, i.e. we use z¯d=zd/σ​(zd)subscript¯𝑧𝑑subscript𝑧𝑑𝜎subscript𝑧𝑑\\bar{z}_{d}=z_{d}/\\sigma(z_{d}) where σ​(zd)𝜎subscript𝑧𝑑\\sigma(z_{d}) is the standard deviation of dimension d𝑑d of the latent encoding taken over the dataset. This normalisation is important as one could achieve a “sparse” representation simply by having different dimensions vary along different length scales (something the β𝛽\\beta-vae encourages through its pruning of dimensions ), whereas we desire a representation where different datapoints “activate” different features. We then compute overall sparsity by averaging over the dataset as Sparsity=1n​∑inHoyer​(𝒛¯i)Sparsity1𝑛superscriptsubscript𝑖𝑛Hoyersubscript¯𝒛𝑖\\text{Sparsity}=\\frac{1}{n}\\sum\\nolimits_{i}^{n}\\text{Hoyer}~{}(\\bar{\\bm{z}}_{i}). 
Figure 4 (left) shows that substantial sparsity can be gained by replacing a Gaussian prior (γ=0𝛾0\\gamma=0) by a sparse prior (γ=0.8𝛾0.8\\gamma=0.8). It further shows substantial gains from the inclusion of the aggregate posterior regularization, with α=0𝛼0\\alpha=0 giving far low sparsity than α>0𝛼0\\alpha>0, when using our sparse prior. The use of our sparse prior did not generally harm the reconstruction compared. Large values of α𝛼\\alpha did slightly worsen the reconstruction, but this drop-off was much slower than increases in β𝛽\\beta (note that α𝛼\\alpha is increased to much higher levels than β𝛽\\beta). Interestingly, we see that β𝛽\\beta being either too low or too high also harmed the sparsity. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_41", "text": " We explore the qualitative effects of sparsity in Figure 5, using a network trained with α=1000,β=1,formulae-sequence𝛼1000𝛽1\\alpha=1000,\\beta=1, and γ=0.8𝛾0.8\\gamma=0.8, corresponding to one of the models in Figure 4 (left). The top plot shows the average encoding magnitude for data corresponding to 3 of the 10 classes in the Fashion-MNIST dataset. It clearly shows that the different classes (trousers, dress, and shirt) predominantly encode information along different sets of dimensions, as expected for sparse representations (c.f. Appendix B for plots for all classes). For each of these classes, we explore the latent space along a particular ‘active’ dimension—one with high average encoding magnitude—to observe if they capture meaningful features in the image. We first identify a suitable ‘active’ dimension for a given instance (top row) from the dimension-wise magnitudes of its encoding, by choosing one, say d𝑑d, where the magnitude far exceeds σ02superscriptsubscript𝜎02\\sigma_{0}^{2}. Given encoding value 𝒛dsubscript𝒛𝑑\\bm{z}_{d}, we then interpolate along this dimension (keeping all others fixed) in the range (𝒛d,𝒛d+sign​(𝒛d))subscript𝒛𝑑subscript𝒛𝑑signsubscript𝒛𝑑(\\bm{z}_{d},\\bm{z}_{d}+\\mathrm{sign}(\\bm{z}_{d})); the sign of 𝒛dsubscript𝒛𝑑\\bm{z}_{d} indicating the direction of interpolation. Exploring the latent space in such a manner demonstrates a variety of consistent feature transformations in the image, both within class (a, b, c), and across classes (d), indicating that these sparse dimensions do capture meaningful features in the image. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_42", "text": " Concurrent to our work, Tonolini et al. also considered imposing sparsity in vaes with a spike-slab prior (such that σ0→0→subscript𝜎00\\sigma_{0}\\rightarrow 0). In contrast to our work, they do not impose a constraint on the aggregate encoder, nor do they evaluate their results with a quantitative sparsity metric that accounts for the varying length scales of different latent dimensions ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_43", "text": " Precisely formalising what constitutes the level of overlap in the latent space is surprisingly subtle. 
Prior work has typically instead considered controlling the level of compression through the mutual information between data and latents I​(𝒙;𝒛)𝐼𝒙𝒛I(\\bm{x};\\bm{z}) (3, 2, 20, 41), with, for example,  going on to discuss how controlling the compression can “explicitly encourage useful representations.” Although I​(𝒙;𝒛)𝐼𝒙𝒛I(\\bm{x};\\bm{z}) provides a perfectly serviceable characterisation of overlap in a number of cases, the two are not universally equivalent and we argue that it is the latter which is important in achieving useful representations. In particular, if the form of the encoding distribution is not fixed—as when employing normalising flows, for example—I​(𝒙;𝒛)𝐼𝒙𝒛I(\\bm{x};\\bm{z}) does not necessarily characterise overlap well. We discuss this in greater detail in Appendix D. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_44", "text": " However, when the encoder is unimodal with fixed form (in particularly the tail behaviour is fixed) and the prior is well-characterised by Euclidean distances, then these factors have a substantially reduced ability to vary for a given I​(𝒙;𝒛)𝐼𝒙𝒛I(\\bm{x};\\bm{z}), which subsequently becomes a good characterisation of the level of overlap. When qϕ​(𝒛|𝒙)subscript𝑞italic-ϕconditional𝒛𝒙q_{{\\phi}}\\left(\\bm{z}|\\bm{x}\\right) is Gaussian, controlling the variance of qϕ​(𝒛|𝒙)subscript𝑞italic-ϕconditional𝒛𝒙q_{{\\phi}}\\left(\\bm{z}|\\bm{x}\\right) (with a fixed qϕ​(𝒛)subscript𝑞italic-ϕ𝒛q_{{\\phi}}\\left(\\bm{z}\\right)) should similarly provide an effective means of achieving the desired overlap behaviour. As this is the most common use case, we leave the development of more a general definition of overlap to future work, simply noting that this is an important consideration when using flexible encoder distributions. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_45", "text": " In concurrently published work, Locatello et al. question the plausibility of learning unsupervised disentangled representations with meaningful features, based on theoretical analyses showing an equivalence class of generative models, many members of which could be entangled. Though their analysis is sound, we posit a counterargument to their conclusions, based on the stochastic nature of the encodings used during training. Namely, that this stochasticity means that they need not give rise to the same elbo scores (an important exception is the rotational invariance for isotropic Gaussian priors). Essentially, the encoding noise forces nearby encodings to relate to similar datapoints, while standard choices for the likelihood distribution (e.g. assuming conditional independence) ensure that information is stored in the encodings, not just in the generative network. These restrictions mean that the elbo prefers smooth representations and, provided the prior is not rotationally invariant, means that there no longer need be a class of different representations with the same elbo; simpler representations are preferred to more complex ones. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_46", "text": " The exact form of the encoding distribution is also important here. For example, imagine we restrict the encoder variance to be isotropic and then use a two dimensional prior where one latent dimension has a much larger variance than the other. 
It will be possible to store more information in the prior dimension with higher variance (as we can spread points out more relative to the encoder variance). Consequently, that dimension is more likely to correspond to an important factor of the generative process than the other. Of course, this does not imply that this is a true factor of variation in the generative process, but neither is the meaning that can be attributed to each dimension completely arbitrary. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_47", "text": " All the same, we agree that an important area for future work is to assess when, and to what extent, one might expect learned representations to mimic the true generative process, and, critically, when it should not. For this reason, we actively avoid including any notion of a true generative process in our definition of decomposition, but note that, analogously to disentanglement, it permits such extension in scenarios where doing so can be shown to be appropriate. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_48", "text": " In this work, we explored and analysed the fundamental characteristics of learning disentangled representations, and showed how these can be generalised to a more general framework of decomposition . We characterised the learning of decomposed latent representation with vaes in terms of the control of two factors: i) overlap in the latent space between encodings of different datapoints, and ii) regularisation of the aggregate encoding distribution to the given prior, which encodes the structure one would wish for the latent space to have. ", "title": "Disentangling Disentanglement in Variational Autoencoders" }, { "id": "1812.02833_all_49", "text": " Connecting prior work on disentanglement to this framework, we analysed the β𝛽\\beta-vae objective to show that its contribution to disentangling is primarily through direct control of the level of overlap between encodings of the data, expressed by maximising the entropy of the encoding distribution. In the commonly encountered case of assuming an isotropic Gaussian prior and an independent Gaussian posterior, we showed that control of overlap is the only effect of the β𝛽\\beta-vae. Motivated by this observation, we developed an alternate objective for the elbo that allows control of the two factors of decomposability through an additional regularisation term. We then conducted empirical evaluations using this objective, targeting alternate forms of decompositions such as clustering and sparsity, and observed the effect of varying the extent of regularisation to the prior on the quality of the resulting clustering and sparseness of the learnt embeddings. The results indicate that we were successful in attaining those decompositions. ", "title": "Disentangling Disentanglement in Variational Autoencoders" } ]
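The objective in Eq. (7) of the contexts above combines a reconstruction term, a $\beta$-weighted KL between the encoder $q_{\phi}(\bm{z}|\bm{x})$ and the prior, and an $\alpha$-weighted divergence between the aggregate encoding $q_{\phi}(\bm{z})$ and the prior. The following is a minimal PyTorch-style sketch of a one-sample estimate of that loss; the encoder/decoder interfaces, the Bernoulli likelihood, the single-sample Monte Carlo KL, and the RBF-kernel MMD standing in for $\mathbb{D}$ (the contexts note that the kernel choice is delicate) are illustrative assumptions, not the implementation used in the paper.

```python
import torch
import torch.nn.functional as F

def rbf_mmd2(x, y, sigma=1.0):
    # Biased (V-statistic) estimate of MMD^2 between two sample sets of
    # shape [n, d] under an RBF kernel; the bandwidth is a guess.
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def decomposition_loss(x, encoder, decoder, prior, alpha=1.0, beta=1.0):
    # Assumed interfaces: encoder(x) returns the mean and log-variance of a
    # diagonal-Gaussian q(z|x); decoder(z) returns Bernoulli probabilities
    # with the same shape as x; prior is a torch.distributions object whose
    # log_prob gives per-dimension values for [n, d] input, e.g.
    # torch.distributions.Normal(torch.zeros(d), torch.ones(d)).
    mu, log_var = encoder(x)
    std = torch.exp(0.5 * log_var)
    z = mu + std * torch.randn_like(std)          # reparameterised sample
    # Reconstruction term E_q[log p(x|z)].
    rec_ll = -F.binary_cross_entropy(decoder(z), x, reduction="none").flatten(1).sum(-1)
    # Per-datapoint KL(q(z|x) || p(z)), estimated by Monte Carlo so that
    # non-Gaussian priors (mixtures, Student-t products) also work.
    log_qzx = torch.distributions.Normal(mu, std).log_prob(z).sum(-1)
    log_pz = prior.log_prob(z).sum(-1)
    kl = log_qzx - log_pz
    # Aggregate term D(q(z), p(z)): MMD between the batch of encodings and
    # samples from the prior (kernel choice is illustrative only).
    div = rbf_mmd2(z, prior.sample((z.shape[0],)))
    # Negative of L_{alpha,beta}(x), averaged over the batch, for minimisation.
    return -(rec_ll - beta * kl).mean() + alpha * div
```

Setting alpha=0 recovers the beta-vae objective of Eq. (2) as a special case, and a different prior (for example the four-component clustering mixture or a Student-t product) can be swapped in through the prior argument, provided it exposes a compatible per-dimension log_prob and a sample method.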
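The sparsity experiments in the same set of contexts score representations with the Hoyer metric after normalising each latent dimension to unit standard deviation under the aggregate encoding. A small sketch of that computation, with tensor shapes and variable names assumed rather than taken from the paper's code:

```python
import torch

def hoyer(y, eps=1e-8):
    # Hoyer(y) = (sqrt(d) - ||y||_1 / ||y||_2) / (sqrt(d) - 1), in (0, 1):
    # 0 for a fully dense vector, 1 for a fully sparse one.
    d = y.numel()
    ratio = y.abs().sum() / (y.norm() + eps)
    return (d ** 0.5 - ratio) / (d ** 0.5 - 1)

def dataset_sparsity(z_mean):
    # z_mean: [n, d] mean encodings of the whole dataset. Each dimension is
    # rescaled to unit standard deviation over the dataset before scoring,
    # so differing per-dimension length scales do not masquerade as sparsity.
    z_bar = z_mean / (z_mean.std(dim=0, keepdim=True) + 1e-8)
    return torch.stack([hoyer(z_bar[i]) for i in range(z_bar.shape[0])]).mean()
```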
Why are GANs so difficult to train?
GANs are often difficult to train, collapsing without carefully selected hyperparameters and regularizers [1]. Because of these drawbacks, much work has been done to achieve GAN-like sample quality with likelihood-based models, which are typically easier to scale and train than GANs [2].
[ 1, 2 ]
[ { "id": "2105.05233_all_0", "text": " Over the past few years, generative models have gained the ability to generate human-like natural language Brown et al. (2020), infinite high-quality synthetic images Brock et al. (2018); Karras et al. (2019b); Razavi et al. (2019) and highly diverse human speech and music van den Oord et al. (2016); Dhariwal et al. (2020). These models can be used in a variety of ways, such as generating images from text prompts Zhang et al. (2016); Ramesh et al. (2021) or learning useful feature representations Donahue and Simonyan (2019); Chen et al. (2020a). While these models are already capable of producing realistic images and sound, there is still much room for improvement beyond the current state-of-the-art, and better generative models could have wide-ranging impacts on graphic design, games, music production, and countless other fields. ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_1", "text": " GANs Goodfellow et al. (2014) currently hold the state-of-the-art on most image generation tasks Brock et al. (2018); Wu et al. (2019); Karras et al. (2019b) as measured by sample quality metrics such as FID Heusel et al. (2017), Inception Score Salimans et al. (2016) and Precision Kynkäänniemi et al. (2019). However, some of these metrics do not fully capture diversity, and it has been shown that GANs capture less diversity than state-of-the-art likelihood-based models Razavi et al. (2019); Nichol and Dhariwal (2021); Nash et al. (2021). Furthermore, GANs are often difficult to train, collapsing without carefully selected hyperparameters and regularizers Brock et al. (2018); Miyato et al. (2018); Brock et al. (2016). ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_2", "text": " While GANs hold the state-of-the-art, their drawbacks make them difficult to scale and apply to new domains. As a result, much work has been done to achieve GAN-like sample quality with likelihood-based models Razavi et al. (2019); Ho et al. (2020); Nash et al. (2021); Child (2021). While these models capture more diversity and are typically easier to scale and train than GANs, they still fall short in terms of visual sample quality. Furthermore, except for VAEs, sampling from these models is slower than GANs in terms of wall-clock time. ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_3", "text": " Diffusion models are a class of likelihood-based models which have recently been shown to produce high-quality images Sohl-Dickstein et al. (2015); Song and Ermon (2020b); Ho et al. (2020) while offering desirable properties such as distribution coverage, a stationary training objective, and easy scalability. These models generate samples by gradually removing noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound Ho et al. (2020). This class of models already holds the state-of-the-art Song et al. (2020b) on CIFAR-10 Krizhevsky et al. (2009), but still lags behind GANs on difficult generation datasets like LSUN and ImageNet. Nichol and Dhariwal found that these models improve reliably with increased compute, and can produce high-quality samples even on the difficult ImageNet 256×\\times256 dataset using an upsampling stack. However, the FID of this model is still not competitive with BigGAN-deep Brock et al. (2018), the current state-of-the-art on this dataset. 
", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_4", "text": " We hypothesize that the gap between diffusion models and GANs stems from at least two factors: first, that the model architectures used by recent GAN literature have been heavily explored and refined; second, that GANs are able to trade off diversity for fidelity, producing high quality samples but not covering the whole distribution. We aim to bring these benefits to diffusion models, first by improving model architecture and then by devising a scheme for trading off diversity for fidelity. With these improvements, we achieve a new state-of-the-art, surpassing GANs on several different metrics and datasets. ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_5", "text": " The rest of the paper is organized as follows. In Section 2, we give a brief background of diffusion models based on Ho et al. and the improvements from Nichol and Dhariwal and Song et al. , and we describe our evaluation setup. In Section 3, we introduce simple architecture improvements that give a substantial boost to FID. In Section 4, we describe a method for using gradients from a classifier to guide a diffusion model during sampling. We find that a single hyperparameter, the scale of the classifier gradients, can be tuned to trade off diversity for fidelity, and we can increase this gradient scale factor by an order of magnitude without obtaining adversarial examples Szegedy et al. (2013). Finally, in Section 5 we show that models with our improved architecture achieve state-of-the-art on unconditional image synthesis tasks, and with classifier guidance achieve state-of-the-art on conditional image synthesis. When using classifier guidance, we find that we can sample with as few as 25 forward passes while maintaining FIDs comparable to BigGAN. We also compare our improved models to upsampling stacks, finding that the two approaches give complementary improvements and that combining them gives the best results on ImageNet 256×\\times256 and 512×\\times512. ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_6", "text": " In this section, we provide a brief overview of diffusion models. For a more detailed mathematical description, we refer the reader to Appendix B. ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_7", "text": " On a high level, diffusion models sample from a distribution by reversing a gradual noising process. In particular, sampling starts with noise xTsubscript𝑥𝑇x_{T} and produces gradually less-noisy samples xT−1,xT−2,…subscript𝑥𝑇1subscript𝑥𝑇2…x_{T-1},x_{T-2},... until reaching a final sample x0subscript𝑥0x_{0}. Each timestep t𝑡t corresponds to a certain noise level, and xtsubscript𝑥𝑡x_{t} can be thought of as a mixture of a signal x0subscript𝑥0x_{0} with some noise ϵitalic-ϵ\\epsilon where the signal to noise ratio is determined by the timestep t𝑡t. For the remainder of this paper, we assume that the noise ϵitalic-ϵ\\epsilon is drawn from a diagonal Gaussian distribution, which works well for natural images and simplifies various derivations. ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_8", "text": " A diffusion model learns to produce a slightly more “denoised” xt−1subscript𝑥𝑡1x_{t-1} from xtsubscript𝑥𝑡x_{t}. Ho et al. 
parameterize this model as a function ϵθ​(xt,t)subscriptitalic-ϵ𝜃subscript𝑥𝑡𝑡\\epsilon_{\\theta}(x_{t},t) which predicts the noise component of a noisy sample xtsubscript𝑥𝑡x_{t}. To train these models, each sample in a minibatch is produced by randomly drawing a data sample x0subscript𝑥0x_{0}, a timestep t𝑡t, and noise ϵitalic-ϵ\\epsilon, which together give rise to a noised sample xtsubscript𝑥𝑡x_{t} (Equation 17). The training objective is then ‖ϵθ​(xt,t)−ϵ‖2superscriptnormsubscriptitalic-ϵ𝜃subscript𝑥𝑡𝑡italic-ϵ2||\\epsilon_{\\theta}(x_{t},t)-\\epsilon||^{2}, i.e. a simple mean-squared error loss between the true noise and the predicted noise (Equation 26). ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_9", "text": " It is not immediately obvious how to sample from a noise predictor ϵθ​(xt,t)subscriptitalic-ϵ𝜃subscript𝑥𝑡𝑡\\epsilon_{\\theta}(x_{t},t). Recall that diffusion sampling proceeds by repeatedly predicting xt−1subscript𝑥𝑡1x_{t-1} from xtsubscript𝑥𝑡x_{t}, starting from xTsubscript𝑥𝑇x_{T}. Ho et al. show that, under reasonable assumptions, we can model the distribution pθ​(xt−1|xt)subscript𝑝𝜃conditionalsubscript𝑥𝑡1subscript𝑥𝑡p_{\\theta}(x_{t-1}|x_{t}) of xt−1subscript𝑥𝑡1x_{t-1} given xtsubscript𝑥𝑡x_{t} as a diagonal Gaussian 𝒩​(xt−1;μθ​(xt,t),Σθ​(xt,t))𝒩subscript𝑥𝑡1subscript𝜇𝜃subscript𝑥𝑡𝑡subscriptΣ𝜃subscript𝑥𝑡𝑡\\mathcal{N}(x_{t-1};\\mu_{\\theta}(x_{t},t),\\Sigma_{\\theta}(x_{t},t)), where the mean μθ​(xt,t)subscript𝜇𝜃subscript𝑥𝑡𝑡\\mu_{\\theta}(x_{t},t) can be calculated as a function of ϵθ​(xt,t)subscriptitalic-ϵ𝜃subscript𝑥𝑡𝑡\\epsilon_{\\theta}(x_{t},t) (Equation 27). The variance Σθ​(xt,t)subscriptΣ𝜃subscript𝑥𝑡𝑡\\Sigma_{\\theta}(x_{t},t) of this Gaussian distribution can be fixed to a known constant Ho et al. (2020) or learned with a separate neural network head Nichol and Dhariwal (2021), and both approaches yield high-quality samples when the total number of diffusion steps T𝑇T is large enough. ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_10", "text": " Ho et al. observe that the simple mean-sqaured error objective, Lsimplesubscript𝐿simpleL_{\\text{simple}}, works better in practice than the actual variational lower bound Lvlbsubscript𝐿vlbL_{\\text{vlb}} that can be derived from interpreting the denoising diffusion model as a VAE. They also note that training with this objective and using their corresponding sampling procedure is equivalent to the denoising score matching model from Song and Ermon , who use Langevin dynamics to sample from a denoising model trained with multiple noise levels to produce high quality image samples. We often use “diffusion models” as shorthand to refer to both classes of models. ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_11", "text": " Following the breakthrough work of Song and Ermon and Ho et al. , several recent papers have proposed improvements to diffusion models. Here we describe a few of these improvements, which we employ for our models. ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_12", "text": " Nichol and Dhariwal find that fixing the variance Σθ​(xt,t)subscriptΣ𝜃subscript𝑥𝑡𝑡\\Sigma_{\\theta}(x_{t},t) to a constant as done in Ho et al. 
is sub-optimal for sampling with fewer diffusion steps, and propose to parameterize Σθ​(xt,t)subscriptΣ𝜃subscript𝑥𝑡𝑡\\Sigma_{\\theta}(x_{t},t) as a neural network whose output v𝑣v is interpolated as: Σθ​(xt,t)subscriptΣ𝜃subscript𝑥𝑡𝑡\\displaystyle\\Sigma_{\\theta}(x_{t},t) =exp⁡(v​log⁡βt+(1−v)​log⁡β~t)absent𝑣subscript𝛽𝑡1𝑣subscript~𝛽𝑡\\displaystyle=\\exp(v\\log\\beta_{t}+(1-v)\\log\\tilde{\\beta}_{t}) (1) ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_13", "text": " Here, βtsubscript𝛽𝑡\\beta_{t} and β~tsubscript~𝛽𝑡\\tilde{\\beta}_{t} (Equation 19) are the variances in Ho et al. corresponding to upper and lower bounds for the reverse process variances. Additionally, Nichol and Dhariwal propose a hybrid objective for training both ϵθ​(xt,t)subscriptitalic-ϵ𝜃subscript𝑥𝑡𝑡\\epsilon_{\\theta}(x_{t},t) and Σθ​(xt,t)subscriptΣ𝜃subscript𝑥𝑡𝑡\\Sigma_{\\theta}(x_{t},t) using the weighted sum Lsimple+λ​Lvlbsubscript𝐿simple𝜆subscript𝐿vlbL_{\\text{simple}}+\\lambda L_{\\text{vlb}}. Learning the reverse process variances with their hybrid objective allows sampling with fewer steps without much drop in sample quality. We adopt this objective and parameterization, and use it throughout our experiments. ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_14", "text": " Song et al. propose DDIM, which formulates an alternative non-Markovian noising process that has the same forward marginals as DDPM, but allows producing different reverse samplers by changing the variance of the reverse noise. By setting this noise to 0, they provide a way to turn any model ϵθ​(xt,t)subscriptitalic-ϵ𝜃subscript𝑥𝑡𝑡\\epsilon_{\\theta}(x_{t},t) into a deterministic mapping from latents to images, and find that this provides an alternative way to sample with fewer steps. We adopt this sampling approach when using fewer than 50 sampling steps, since Nichol and Dhariwal found it to be beneficial in this regime. ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_15", "text": " For comparing sample quality across models, we perform quantitative evaluations using the following metrics. While these metrics are often used in practice and correspond well with human judgement, they are not a perfect proxy, and finding better metrics for sample quality evaluation is still an open problem. ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_16", "text": " Inception Score (IS) was proposed by Salimans et al. , and it measures how well a model captures the full ImageNet class distribution while still producing individual samples that are convincing examples of a single class. One drawback of this metric is that it does not reward covering the whole distribution or capturing diversity within a class, and models which memorize a small subset of the full dataset will still have high IS Barratt and Sharma (2018). To better capture diversity than IS, Fréchet Inception Distance (FID) was proposed by Heusel et al. , who argued that it is more consistent with human judgement than Inception Score. FID provides a symmetric measure of the distance between two image distributions in the Inception-V3 Szegedy et al. (2015) latent space. Recently, sFID was proposed by Nash et al. as a version of FID that uses spatial features rather than the standard pooled features. They find that this metric better captures spatial relationships, rewarding image distributions with coherent high-level structure. 
Finally, Kynkäänniemi et al. proposed Improved Precision and Recall metrics to separately measure sample fidelity as the fraction of model samples which fall into the data manifold (precision), and diversity as the fraction of data samples which fall into the sample manifold (recall). ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_17", "text": " We use FID as our default metric for overall sample quality comparisons as it captures both diversity and fidelity and has been the de facto standard metric for state-of-the-art generative modeling work Karras et al. (2019a, b); Brock et al. (2018); Ho et al. (2020). We use Precision or IS to measure fidelity, and Recall to measure diversity or distribution coverage. When comparing against other methods, we re-compute these metrics using public samples or models whenever possible. This is for two reasons: first, some papers Karras et al. (2019a, b); Ho et al. (2020) compare against arbitrary subsets of the training set which are not readily available; and second, subtle implementation differences can affect the resulting FID values Parmar et al. (2021). To ensure consistent comparisons, we use the entire training set as the reference batch Heusel et al. (2017); Brock et al. (2018), and evaluate metrics for all models using the same codebase. ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_18", "text": " In this section we conduct several architecture ablations to find the model architecture that provides the best sample quality for diffusion models. ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_19", "text": " Ho et al. introduced the UNet architecture for diffusion models, which Jolicoeur-Martineau et al. found to substantially improve sample quality over the previous architectures Song and Ermon (2020a); Lin et al. (2016) used for denoising score matching. The UNet model uses a stack of residual layers and downsampling convolutions, followed by a stack of residual layers with upsampling colvolutions, with skip connections connecting the layers with the same spatial size. In addition, they use a global attention layer at the 16×\\times16 resolution with a single head, and add a projection of the timestep embedding into each residual block. Song et al. found that further changes to the UNet architecture improved performance on the CIFAR-10 Krizhevsky et al. (2009) and CelebA-64 Liu et al. (2015) datasets. We show the same result on ImageNet 128×\\times128, finding that architecture can indeed give a substantial boost to sample quality on much larger and more diverse datasets at a higher resolution. ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_20", "text": " We explore the following architectural changes: ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_21", "text": " • Increasing depth versus width, holding model size relatively constant. • Increasing the number of attention heads. • Using attention at 32×\\times32, 16×\\times16, and 8×\\times8 resolutions rather than only at 16×\\times16. • Using the BigGAN Brock et al. (2018) residual block for upsampling and downsampling the activations, following Song et al. (2020b). • Rescaling residual connections with 1212\\frac{1}{\\sqrt{2}}, following Song et al. (2020b); Karras et al. (2019a, b). 
", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_22", "text": " For all comparisons in this section, we train models on ImageNet 128×\\times128 with batch size 256, and sample using 250 sampling steps. We train models with the above architecture changes and compare them on FID, evaluated at two different points of training, in Table 1. Aside from rescaling residual connections, all of the other modifications improve performance and have a positive compounding effect. We observe in Figure 2 that while increased depth helps performance, it increases training time and takes longer to reach the same performance as a wider model, so we opt not to use this change in further experiments. ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_23", "text": " We also study other attention configurations that better match the Transformer architecture Vaswani et al. (2017). To this end, we experimented with either fixing attention heads to a constant, or fixing the number of channels per head. For the rest of the architecture, we use 128 base channels, 2 residual blocks per resolution, multi-resolution attention, and BigGAN up/downsampling, and we train the models for 700K iterations. Table 2 shows our results, indicating that more heads or fewer channels per head improves FID. In Figure 2, we see 64 channels is best for wall-clock time, so we opt to use 64 channels per head as our default. We note that this choice also better matches modern transformer architectures, and is on par with our other configurations in terms of final FID. ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_24", "text": " We also experiment with a layer Nichol and Dhariwal (2021) that we refer to as adaptive group normalization (AdaGN), which incorporates the timestep and class embedding into each residual block after a group normalization operation Wu and He (2018), similar to adaptive instance norm Karras et al. (2019a) and FiLM Perez et al. (2017). We define this layer as AdaGN​(h,y)=ys​ GroupNorm​(h)+ybAdaGNℎ𝑦subscript𝑦𝑠 GroupNormℎsubscript𝑦𝑏\\text{AdaGN}(h,y)=y_{s}\\text{ GroupNorm}(h)+y_{b}, where hℎh is the intermediate activations of the residual block following the first convolution, and y=(ys,yb)𝑦subscript𝑦𝑠subscript𝑦𝑏y=(y_{s},y_{b}) is obtained from a linear projection of the timestep and class embedding. ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_25", "text": " We had already seen AdaGN improve our earliest diffusion models, and so had it included by default in all our runs. In Table 3, we explicitly ablate this choice, and find that the adaptive group normalization layer indeed improved FID. Both models use 128 base channels and 2 residual blocks per resolution, multi-resolution attention with 64 channels per head, and BigGAN up/downsampling, and were trained for 700K iterations. ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_26", "text": " In the rest of the paper, we use this final improved model architecture as our default: variable width with 2 residual blocks per resolution, multiple heads with 64 channels per head, attention at 32, 16 and 8 resolutions, BigGAN residual blocks for up and downsampling, and adaptive group normalization for injecting timestep and class embeddings into residual blocks. 
", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_27", "text": " In addition to employing well designed architectures, GANs for conditional image synthesis Mirza and Osindero (2014); Brock et al. (2018) make heavy use of class labels. This often takes the form of class-conditional normalization statistics Dumoulin et al. (2017); de Vries et al. (2017) as well as discriminators with heads that are explicitly designed to behave like classifiers p​(y|x)𝑝conditional𝑦𝑥p(y|x) Miyato and Koyama (2018). As further evidence that class information is crucial to the success of these models, Lucic et al. find that it is helpful to generate synthetic labels when working in a label-limited regime. ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_28", "text": " Given this observation for GANs, it makes sense to explore different ways to condition diffusion models on class labels. We already incorporate class information into normalization layers (Section 3.1). Here, we explore a different approach: exploiting a classifier p​(y|x)𝑝conditional𝑦𝑥p(y|x) to improve a diffusion generator. Sohl-Dickstein et al. and Song et al. show one way to achieve this, wherein a pre-trained diffusion model can be conditioned using the gradients of a classifier. In particular, we can train a classifier pϕ​(y|xt,t)subscript𝑝italic-ϕconditional𝑦subscript𝑥𝑡𝑡p_{\\phi}(y|x_{t},t) on noisy images xtsubscript𝑥𝑡x_{t}, and then use gradients ∇xtlog⁡pϕ​(y|xt,t)subscript∇subscript𝑥𝑡subscript𝑝italic-ϕconditional𝑦subscript𝑥𝑡𝑡\\mathop{}\\!\\nabla_{\\!x_{t}}\\log p_{\\phi}(y|x_{t},t) to guide the diffusion sampling process towards an arbitrary class label y𝑦y. ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_29", "text": " In this section, we first review two ways of deriving conditional sampling processes using classifiers. We then describe how we use such classifiers in practice to improve sample quality. We choose the notation pϕ​(y|xt,t)=pϕ​(y|xt)subscript𝑝italic-ϕconditional𝑦subscript𝑥𝑡𝑡subscript𝑝italic-ϕconditional𝑦subscript𝑥𝑡p_{\\phi}(y|x_{t},t)=p_{\\phi}(y|x_{t}) and ϵθ​(xt,t)=ϵθ​(xt)subscriptitalic-ϵ𝜃subscript𝑥𝑡𝑡subscriptitalic-ϵ𝜃subscript𝑥𝑡\\epsilon_{\\theta}(x_{t},t)=\\epsilon_{\\theta}(x_{t}) for brevity, noting that they refer to separate functions for each timestep t𝑡t and at training time the models must be conditioned on the input t𝑡t. ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_30", "text": " We start with a diffusion model with an unconditional reverse noising process pθ​(xt|xt+1)subscript𝑝𝜃conditionalsubscript𝑥𝑡subscript𝑥𝑡1p_{\\theta}(x_{t}|x_{t+1}). To condition this on a label y𝑦y, it suffices to sample each transition111We must also sample xTsubscript𝑥𝑇x_{T} conditioned on y𝑦y, but a noisy enough diffusion process causes xTsubscript𝑥𝑇x_{T} to be nearly Gaussian even in the conditional case. according to pθ,ϕ​(xt|xt+1,y)=Z​pθ​(xt|xt+1)​pϕ​(y|xt)subscript𝑝𝜃italic-ϕconditionalsubscript𝑥𝑡subscript𝑥𝑡1𝑦𝑍subscript𝑝𝜃conditionalsubscript𝑥𝑡subscript𝑥𝑡1subscript𝑝italic-ϕconditional𝑦subscript𝑥𝑡\\displaystyle p_{\\theta,\\phi}(x_{t}|x_{t+1},y)=Zp_{\\theta}(x_{t}|x_{t+1})p_{\\phi}(y|x_{t}) (2) ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_31", "text": " where Z𝑍Z is a normalizing constant (proof in Appendix H). It is typically intractable to sample from this distribution exactly, but Sohl-Dickstein et al. 
show that it can be approximated as a perturbed Gaussian distribution. Here, we review this derivation. ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_32", "text": " Recall that our diffusion model predicts the previous timestep $x_{t}$ from timestep $x_{t+1}$ using a Gaussian distribution: $p_{\\theta}(x_{t}|x_{t+1})=\\mathcal{N}(\\mu,\\Sigma)$ (3), $\\log p_{\\theta}(x_{t}|x_{t+1})=-\\frac{1}{2}(x_{t}-\\mu)^{T}\\Sigma^{-1}(x_{t}-\\mu)+C$ (4) ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_33", "text": " We can assume that $\\log p_{\\phi}(y|x_{t})$ has low curvature compared to $\\Sigma^{-1}$. This assumption is reasonable in the limit of infinite diffusion steps, where $\\|\\Sigma\\|\\to 0$. In this case, we can approximate $\\log p_{\\phi}(y|x_{t})$ using a Taylor expansion around $x_{t}=\\mu$ as $\\log p_{\\phi}(y|x_{t})\\approx\\log p_{\\phi}(y|x_{t})|_{x_{t}=\\mu}+(x_{t}-\\mu)\\nabla_{x_{t}}\\log p_{\\phi}(y|x_{t})|_{x_{t}=\\mu}$ (5) $=(x_{t}-\\mu)g+C_{1}$ (6) ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_34", "text": " Here, $g=\\nabla_{x_{t}}\\log p_{\\phi}(y|x_{t})|_{x_{t}=\\mu}$, and $C_{1}$ is a constant. This gives $\\log(p_{\\theta}(x_{t}|x_{t+1})\\,p_{\\phi}(y|x_{t}))\\approx-\\frac{1}{2}(x_{t}-\\mu)^{T}\\Sigma^{-1}(x_{t}-\\mu)+(x_{t}-\\mu)g+C_{2}$ (7) $=-\\frac{1}{2}(x_{t}-\\mu-\\Sigma g)^{T}\\Sigma^{-1}(x_{t}-\\mu-\\Sigma g)+\\frac{1}{2}g^{T}\\Sigma g+C_{2}$ (8) $=-\\frac{1}{2}(x_{t}-\\mu-\\Sigma g)^{T}\\Sigma^{-1}(x_{t}-\\mu-\\Sigma g)+C_{3}$ (9) $=\\log p(z)+C_{4}$, with $z\\sim\\mathcal{N}(\\mu+\\Sigma g,\\Sigma)$ (10) ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_35", "text": " We can safely ignore the constant term $C_{4}$, since it corresponds to the normalizing coefficient $Z$ in Equation 2. 
We have thus found that the conditional transition operator can be approximated by a Gaussian similar to the unconditional transition operator, but with its mean shifted by $\\Sigma g$. Algorithm 1 summarizes the corresponding sampling algorithm. We include an optional scale factor $s$ for the gradients, which we describe in more detail in Section 4.3. ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_36", "text": " The above derivation for conditional sampling is only valid for the stochastic diffusion sampling process, and cannot be applied to deterministic sampling methods like DDIM Song et al. (2020a). To this end, we use a score-based conditioning trick adapted from Song et al., which leverages the connection between diffusion models and score matching Song and Ermon (2020b). In particular, if we have a model $\\epsilon_{\\theta}(x_{t})$ that predicts the noise added to a sample, then this can be used to derive a score function: $\\nabla_{x_{t}}\\log p_{\\theta}(x_{t})=-\\frac{1}{\\sqrt{1-\\bar{\\alpha}_{t}}}\\epsilon_{\\theta}(x_{t})$ (11) ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_37", "text": " We can now substitute this into the score function for $p(x_{t})\\,p(y|x_{t})$: $\\nabla_{x_{t}}\\log(p_{\\theta}(x_{t})\\,p_{\\phi}(y|x_{t}))=\\nabla_{x_{t}}\\log p_{\\theta}(x_{t})+\\nabla_{x_{t}}\\log p_{\\phi}(y|x_{t})$ (12) $=-\\frac{1}{\\sqrt{1-\\bar{\\alpha}_{t}}}\\epsilon_{\\theta}(x_{t})+\\nabla_{x_{t}}\\log p_{\\phi}(y|x_{t})$ (13) ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_38", "text": " Finally, we can define a new epsilon prediction $\\hat{\\epsilon}(x_{t})$ which corresponds to the score of the joint distribution: $\\hat{\\epsilon}(x_{t})\\coloneqq\\epsilon_{\\theta}(x_{t})-\\sqrt{1-\\bar{\\alpha}_{t}}\\,\\nabla_{x_{t}}\\log p_{\\phi}(y|x_{t})$ (14) ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_39", "text": " We can then use the exact same sampling procedure as used for regular DDIM, but with the modified noise predictions $\\hat{\\epsilon}_{\\theta}(x_{t})$ instead of $\\epsilon_{\\theta}(x_{t})$. Algorithm 2 summarizes the corresponding sampling algorithm. ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_40", "text": " To apply classifier guidance to a large scale generative task, we train classification models on ImageNet. 
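Editor's note: as a concrete illustration of the guided sampling step derived above (Algorithm 1: shift the Gaussian mean by $s\\,\\Sigma g$ with $g=\\nabla_{x_{t}}\\log p_{\\phi}(y|x_{t})$), here is a hedged sketch. The `diffusion_mean_var` and `classifier` interfaces and the diagonal-covariance assumption are illustrative, not the paper's actual code.

```python
import torch

def guided_sampling_step(diffusion_mean_var, classifier, x_t, t, y, s=1.0):
    """One classifier-guided transition: x_{t-1} ~ N(mu + s * Sigma * g, Sigma),
    where g = grad_{x_t} log p_phi(y | x_t). Interfaces are assumed for illustration."""
    mu, var = diffusion_mean_var(x_t, t)              # assumed wrapper around the diffusion model
    with torch.enable_grad():
        x_in = x_t.detach().requires_grad_(True)
        log_probs = classifier(x_in, t).log_softmax(dim=-1)
        selected = log_probs[range(len(y)), y].sum()  # log p_phi(y | x_t) for the target labels
        g = torch.autograd.grad(selected, x_in)[0]
    mean = mu + s * var * g                           # diagonal Sigma assumed
    return mean + var.sqrt() * torch.randn_like(x_t)
```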
Our classifier architecture is simply the downsampling trunk of the UNet model with an attention pool Radford et al. (2021) at the 8x8 layer to produce the final output. We train these classifiers on the same noising distribution as the corresponding diffusion model, and also add random crops to reduce overfitting. After training, we incorporate the classifier into the sampling process of the diffusion model using Equation 10, as outlined by Algorithm 1. ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_41", "text": " In initial experiments with unconditional ImageNet models, we found it necessary to scale the classifier gradients by a constant factor larger than 1. When using a scale of 1, we observed that the classifier assigned reasonable probabilities (around 50%) to the desired classes for the final samples, but these samples did not match the intended classes upon visual inspection. Scaling up the classifier gradients remedied this problem, and the class probabilities from the classifier increased to nearly 100%. Figure 3 shows an example of this effect. ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_42", "text": " To understand the effect of scaling classifier gradients, note that $s\\cdot\\nabla_{x}\\log p(y|x)=\\nabla_{x}\\log\\frac{1}{Z}p(y|x)^{s}$, where $Z$ is an arbitrary constant. As a result, the conditioning process is still theoretically grounded in a re-normalized classifier distribution proportional to $p(y|x)^{s}$. When $s>1$, this distribution becomes sharper than $p(y|x)$, since larger values are amplified by the exponent. In other words, using a larger gradient scale focuses more on the modes of the classifier, which is potentially desirable for producing higher fidelity (but less diverse) samples. ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_43", "text": " In the above derivations, we assumed that the underlying diffusion model was unconditional, modeling $p(x)$. It is also possible to train conditional diffusion models, $p(x|y)$, and use classifier guidance in the exact same way. Table 4 shows that the sample quality of both unconditional and conditional models can be greatly improved by classifier guidance. We see that, with a high enough scale, the guided unconditional model can get quite close to the FID of an unguided conditional model, although training directly with the class labels still helps. Guiding a conditional model further improves FID. ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_44", "text": " Table 4 also shows that classifier guidance improves precision at the cost of recall, thus introducing a trade-off in sample fidelity versus diversity. We explicitly evaluate how this trade-off varies with the gradient scale in Figure 4. We see that scaling the gradients beyond 1.0 smoothly trades off recall (a measure of diversity) for higher precision and IS (measures of fidelity). Since FID and sFID depend on both diversity and fidelity, their best values are obtained at an intermediate point. We also compare our guidance with the truncation trick from BigGAN in Figure 5. We find that classifier guidance is strictly better than BigGAN-deep when trading off FID for Inception Score. 
Less clear cut is the precision/recall trade-off, which shows that classifier guidance is only a better choice up until a certain precision threshold, after which point it cannot achieve better precision. ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_45", "text": " To evaluate our improved model architecture on unconditional image generation, we train separate diffusion models on three LSUN Yu et al. (2015) classes: bedroom, horse, and cat. To evaluate classifier guidance, we train conditional diffusion models on the ImageNet Russakovsky et al. (2014) dataset at 128×128, 256×256, and 512×512 resolution. ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_46", "text": " Table 5 summarizes our results. Our diffusion models can obtain the best FID on each task, and the best sFID on all but one task. With the improved architecture, we already obtain state-of-the-art image generation on LSUN and ImageNet 64×64. For higher resolution ImageNet, we observe that classifier guidance allows our models to substantially outperform the best GANs. These models obtain perceptual quality similar to GANs, while maintaining a higher coverage of the distribution as measured by recall, and can even do so using only 25 diffusion steps. ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_47", "text": " Figure 6 compares random samples from the best BigGAN-deep model to our best diffusion model. While the samples are of similar perceptual quality, the diffusion model contains more modes than the GAN, such as zoomed ostrich heads, single flamingos, different orientations of cheeseburgers, and a tinca fish with no human holding it. We also check our generated samples for nearest neighbors in the Inception-V3 feature space in Appendix C, and we show additional samples in Appendices K-M. ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_48", "text": " We also compare guidance to using a two-stage upsampling stack. Nichol and Dhariwal and Saharia et al. train two-stage diffusion models by combining a low-resolution diffusion model with a corresponding upsampling diffusion model. In this approach, the upsampling model is trained to upsample images from the training set, and conditions on low-resolution images that are concatenated channel-wise to the model input using a simple interpolation (e.g. bilinear). During sampling, the low-resolution model produces a sample, and then the upsampling model is conditioned on this sample. This greatly improves FID on ImageNet 256×256, but does not reach the same performance as state-of-the-art models like BigGAN-deep Nichol and Dhariwal (2021); Saharia et al. (2021), as seen in Table 5. ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_49", "text": " In Table 6, we show that guidance and upsampling improve sample quality along different axes. While upsampling improves precision while keeping a high recall, guidance provides a knob to trade off diversity for much higher precision. We achieve the best FIDs by using guidance at a lower resolution before upsampling to a higher resolution, indicating that these approaches complement one another. 
", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_50", "text": " Score based generative models were introduced by Song and Ermon as a way of modeling a data distribution using its gradients, and then sampling using Langevin dynamics Welling and Teh (2011). Ho et al. found a connection between this method and diffusion models Sohl-Dickstein et al. (2015), and achieved excellent sample quality by leveraging this connection. After this breakthrough work, many works followed up with more promising results: Kong et al. and Chen et al. demonstrated that diffusion models work well for audio; Jolicoeur-Martineau et al. found that a GAN-like setup could improve samples from these models; Song et al. explored ways to leverage techniques from stochastic differential equations to improve the sample quality obtained by score-based models; Song et al. and Nichol and Dhariwal proposed methods to improve sampling speed; Nichol and Dhariwal and Saharia et al. demonstrated promising results on the difficult ImageNet generation task using upsampling diffusion models. Also related to diffusion models, and following the work of Sohl-Dickstein et al. , Goyal et al. described a technique for learning a model with learned iterative generation steps, and found that it could achieve good image samples when trained with a likelihood objective. ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_51", "text": " One missing element from previous work on diffusion models is a way to trade off diversity for fidelity. Other generative techniques provide natural levers for this trade-off. Brock et al. introduced the truncation trick for GANs, wherein the latent vector is sampled from a truncated normal distribution. They found that increasing truncation naturally led to a decrease in diversity but an increase in fidelity. More recently, Razavi et al. proposed to use classifier rejection sampling to filter out bad samples from an autoregressive likelihood-based model, and found that this technique improved FID. Most likelihood-based models also allow for low-temperature sampling Ackley et al. (1985), which provides a natural way to emphasize modes of the data distribution (see Appendix G). ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_52", "text": " Other likelihood-based models have been shown to produce high-fidelity image samples. VQ-VAE van den Oord et al. (2017) and VQ-VAE-2 Razavi et al. (2019) are autoregressive models trained on top of quantized latent codes, greatly reducing the computational resources required to train these models on large images. These models produce diverse and high quality images, but still fall short of GANs without expensive rejection sampling and special metrics to compensate for blurriness. DCTransformer Nash et al. (2021) is a related method which relies on a more intelligent compression scheme. VAEs are another promising class of likelihood-based models, and recent methods such as NVAE Vahdat and Kautz (2020) and VDVAE Child (2021) have successfully been applied to difficult image generation domains. Energy-based models are another class of likelihood-based models with a rich history Ackley et al. (1985); Dayan et al. (1995); Hinton (2002). Sampling from the EBM distribution is challenging, and Xie et al. demonstrate that Langevin dynamics can be used to sample coherent images from these models. Du and Mordatch further improve upon this approach, obtaining high quality images. 
More recently, Gao et al. incorporate diffusion steps into an energy-based model, and find that doing so improves image samples from these models. ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_53", "text": " Other works have controlled generative models with a pre-trained classifier. For example, an emerging body of work Galatolo et al. (2021); Patashnik et al. (2021); Adverb (2021) aims to optimize GAN latent spaces for text prompts using pre-trained CLIP Radford et al. (2021) models. More similar to our work, Song et al. uses a classifier to generate class-conditional CIFAR-10 images with a diffusion model. In some cases, classifiers can act as stand-alone generative models. For example, Santurkar et al. demonstrate that a robust image classifier can be used as a stand-alone generative model, and Grathwohl et al. train a model which is jointly a classifier and an energy-based model. ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_54", "text": " While we believe diffusion models are an extremely promising direction for generative modeling, they are still slower than GANs at sampling time due to the use of multiple denoising steps (and therefore forward passes). One promising work in this direction is from Luhman and Luhman , who explore a way to distill the DDIM sampling process into a single step model. The samples from the single step model are not yet competitive with GANs, but are much better than previous single-step likelihood-based models. Future work in this direction might be able to completely close the sampling speed gap between diffusion models and GANs without sacrificing image quality. ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_55", "text": " Our proposed classifier guidance technique is currently limited to labeled datasets, and we have provided no effective strategy for trading off diversity for fidelity on unlabeled datasets. In the future, our method could be extended to unlabeled data by clustering samples to produce synthetic labels Lucic et al. (2019) or by training discriminative models to predict when samples are in the true data distribution or from the sampling distribution. ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_56", "text": " The effectiveness of classifier guidance demonstrates that we can obtain powerful generative models from the gradients of a classification function. This could be used to condition pre-trained models in a plethora of ways, for example by conditioning an image generator with a text caption using a noisy version of CLIP Radford et al. (2021), similar to recent methods that guide GANs using text prompts Galatolo et al. (2021); Patashnik et al. (2021); Adverb (2021). It also suggests that large unlabeled datasets could be leveraged in the future to pre-train powerful diffusion models that can later be improved by using a classifier with desirable properties. ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_57", "text": " We have shown that diffusion models, a class of likelihood-based models with a stationary training objective, can obtain better sample quality than state-of-the-art GANs. Our improved architecture is sufficient to achieve this on unconditional image generation tasks, and our classifier guidance technique allows us to do so on class-conditional tasks. 
In the latter case, we find that the scale of the classifier gradients can be adjusted to trade off diversity for fidelity. These guided diffusion models can reduce the sampling time gap between GANs and diffusion models, although diffusion models still require multiple forward passes during sampling. Finally, by combining guidance with upsampling, we can further improve sample quality on high-resolution conditional image synthesis. ", "title": "Diffusion Models Beat GANs on Image Synthesis" }, { "id": "2105.05233_all_58", "text": " We thank Alec Radford, Mark Chen, Pranav Shyam and Raul Puri for providing feedback on this work. ", "title": "Diffusion Models Beat GANs on Image Synthesis" } ]
What differences exist between the approach of this paper and open-loop variants?
The approach of this paper: does not require knowledge of the camera-to-hand calibration, nor does it require the calibration or depth images used by the hand-engineered baseline; achieves continuous hand-eye coordination by observing the gripper and choosing the best motor command to move the gripper toward a successful grasp; does not require proposals or crops of image patches and, most importantly, does not require calibration between the robot and the camera; learns continuous visual servoing for robotic grasping from monocular cameras; is entirely data-driven and does not rely on any human annotation either at training or test time; and continuously adjusts the motor commands to maximize grasp success, providing continuous feedback [30]. The open-loop variants: observe the scene prior to the grasp, extract image patches, choose the patch with the highest probability of a successful grasp, and then use a known camera calibration to move the gripper to that location, making open-loop predictions [4].
[ 30, 4 ]
[ { "id": "1603.02199_all_0", "text": " When humans and animals engage in object manipulation behaviors, the interaction inherently involves a fast feedback loop between perception and action. Even complex manipulation tasks, such as extracting a single object from a cluttered bin, can be performed with hardly any advance planning, relying instead on feedback from touch and vision. In contrast, robotic manipulation often (though not always) relies more heavily on advance planning and analysis, with relatively simple feedback, such as trajectory following, to ensure stability during execution (Srinivasa et al., 2012). Part of the reason for this is that incorporating complex sensory inputs such as vision directly into a feedback controller is exceedingly challenging. Techniques such as visual servoing (Siciliano & Khatib, 2007) perform continuous feedback on visual features, but typically require the features to be specified by hand, and both open loop perception and feedback (e.g. via visual servoing) requires manual or automatic calibration to determine the precise geometric relationship between the camera and the robot’s end-effector. ", "title": "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection" }, { "id": "1603.02199_all_1", "text": " In this paper, we propose a learning-based approach to hand-eye coordination, which we demonstrate on a robotic grasping task. Our approach is data-driven and goal-centric: our method learns to servo a robotic gripper to poses that are likely to produce successful grasps, with end-to-end training directly from image pixels to task-space gripper motion. By continuously recomputing the most promising motor commands, our method continuously integrates sensory cues from the environment, allowing it to react to perturbations and adjust the grasp to maximize the probability of success. Furthermore, the motor commands are issued in the frame of the robot, which is not known to the model at test time. This means that the model does not require the camera to be precisely calibrated with respect to the end-effector, but instead uses visual cues to determine the spatial relationship between the gripper and graspable objects in the scene. ", "title": "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection" }, { "id": "1603.02199_all_2", "text": " Our method consists of two components: a grasp success predictor, which uses a deep convolutional neural network (CNN) to determine how likely a given motion is to produce a successful grasp, and a continuous servoing mechanism that uses the CNN to continuously update the robot’s motor commands. By continuously choosing the best predicted path to a successful grasp, the servoing mechanism provides the robot with fast feedback to perturbations and object motion, as well as robustness to inaccurate actuation. ", "title": "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection" }, { "id": "1603.02199_all_3", "text": " The grasp prediction CNN was trained using a dataset of over 800,000 grasp attempts, collected using a cluster of similar (but not identical) robotic manipulators, shown in Figure 1, over the course of several months. 
Although the hardware parameters of each robot were initially identical, each unit experienced different wear and tear over the course of data collection, interacted with different objects, and used a slightly different camera pose relative to the robot base. These differences provided a diverse dataset for learning continuous hand-eye coordination for grasping. ", "title": "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection" }, { "id": "1603.02199_all_4", "text": " The main contributions of this work are a method for learning continuous visual servoing for robotic grasping from monocular cameras, a novel convolutional neural network architecture for learning to predict the outcome of a grasp attempt, and a large-scale data collection framework for robotic grasps. Our experimental evaluation demonstrates that our convolutional neural network grasping controller achieves a high success rate when grasping in clutter on a wide range of objects, including objects that are large, small, hard, soft, deformable, and translucent. Supplemental videos of our grasping system show that the robot employs continuous feedback to constantly adjust its grasp, accounting for motion of the objects and inaccurate actuation commands. We also compare our approach to open-loop variants to demonstrate the importance of continuous feedback, as well as a hand-engineering grasping baseline that uses manual hand-to-eye calibration and depth sensing. Our method achieves the highest success rates in our experiments. Our dataset is available here: https://sites.google.com/site/brainrobotdata/home ", "title": "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection" }, { "id": "1603.02199_all_5", "text": " Robotic grasping is one of the most widely explored areas of manipulation. While a complete survey of grasping is outside the scope of this work, we refer the reader to standard surveys on the subject for a more complete treatment (Bohg et al., 2014). Broadly, grasping methods can be categorized as geometrically driven and data-driven. Geometric methods analyze the shape of a target object and plan a suitable grasp pose, based on criteria such as force closure (Weisz & Allen, 2012) or caging (Rodriguez et al., 2012). These methods typically need to understand the geometry of the scene, using depth or stereo sensors and matching of previously scanned models to observations (Goldfeder et al., 2009b). ", "title": "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection" }, { "id": "1603.02199_all_6", "text": " Data-driven methods take a variety of different forms, including human-supervised methods that predict grasp configurations (Herzog et al., 2014; Lenz et al., 2015) and methods that predict finger placement from geometric criteria computed offline (Goldfeder et al., 2009a). Both types of data-driven grasp selection have recently incorporated deep learning (Kappler et al., 2015; Lenz et al., 2015; Redmon & Angelova, 2015). Feedback has been incorporated into grasping primarily as a way to achieve the desired forces for force closure and other dynamic grasping criteria (Hudson et al., 2012), as well as in the form of standard servoing mechanisms, including visual servoing (described below) to servo the gripper to a pre-planned grasp pose (Kragic & Christensen, 2002). 
The method proposed in this work is entirely data-driven, and does not rely on any human annotation either at training or test time, in contrast to prior methods based on grasp points. Furthermore, our approach continuously adjusts the motor commands to maximize grasp success, providing continuous feedback. Comparatively little prior work has addressed direct visual feedback for grasping, most of which requires manually designed features to track the end effector (Vahrenkamp et al., 2008; Hebert et al., 2012). ", "title": "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection" }, { "id": "1603.02199_all_7", "text": " Our approach is most closely related to recent work on self-supervised learning of grasp poses by Pinto & Gupta (2015). This prior work proposed to learn a network to predict the optimal grasp orientation for a given image patch, trained with self-supervised data collected using a heuristic grasping system based on object proposals. In contrast to this prior work, our approach achieves continuous hand-eye coordination by observing the gripper and choosing the best motor command to move the gripper toward a successful grasp, rather than making open-loop predictions. Furthermore, our approach does not require proposals or crops of image patches and, most importantly, does not require calibration between the robot and the camera, since the closed-loop servoing mechanism can compensate for offsets due to differences in camera pose by continuously adjusting the motor commands. We trained our method using over 800,000 grasp attempts on a very large variety of objects, which is more than an order of magnitude larger than prior methods based on direct self-supervision (Pinto & Gupta, 2015) and more than double the dataset size of prior methods based on synthetic grasps from 3D scans (Kappler et al., 2015). ", "title": "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection" }, { "id": "1603.02199_all_8", "text": " In order to collect our grasp dataset, we parallelized data collection across up to 14 separate robots. Aside from the work of Pinto & Gupta (2015), prior large-scale grasp data collection efforts have focused on collecting datasets of object scans. For example, Dex-Net used a dataset of 10,000 3D models, combined with a learning framework to acquire force closure grasps (Mahler et al., 2016), while the work of Oberlin & Tellex (2015) proposed autonomously collecting object scans using a Baxter robot. Oberlin & Tellex (2015) also proposed parallelizing data collection across multiple robots. More broadly, the ability of robotic systems to learn more quickly by pooling their collective experience has been proposed in a number of prior works, and has been referred to as collective robot learning and an instance of cloud robotics (Inaba et al., 2000; Kuffner, 2010; Kehoe et al., 2013, 2015). ", "title": "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection" }, { "id": "1603.02199_all_9", "text": " Another related area to our method is visual servoing, which addresses moving a camera or end-effector to a desired pose using visual feedback (Kragic & Christensen, 2002). 
In contrast to our approach, visual servoing methods are typically concerned with reaching a pose relative to objects in the scene, and often (though not always) rely on manually designed or specified features for feedback control (Espiau et al., 1992; Wilson et al., 1996; Vahrenkamp et al., 2008; Hebert et al., 2012; Mohta et al., 2014). Photometric visual servoing uses a target image rather than features (Caron et al., 2013), and several visual servoing methods have been proposed that do not directly require prior calibration between the robot and camera (Yoshimi & Allen, 1994; Jägersand et al., 1997; Kragic & Christensen, 2002). To the best of our knowledge, no prior learning-based method has been proposed that uses visual servoing to directly move into a pose that maximizes the probability of success on a given task (such as grasping). ", "title": "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection" }, { "id": "1603.02199_all_10", "text": " In order to predict the optimal motor commands to maximize grasp success, we use convolutional neural networks (CNNs) trained on grasp success prediction. Although the technology behind CNNs has been known for decades (LeCun & Bengio, 1995), they have achieved remarkable success in recent years on a wide range of challenging computer vision benchmarks (Krizhevsky et al., 2012), becoming the de facto standard for computer vision systems. However, applications of CNNs to robotic control problems have been less prevalent, compared to applications to passive perception tasks such as object recognition (Krizhevsky et al., 2012; Wohlhart & Lepetit, 2015), localization (Girshick et al., 2014), and segmentation (Chen et al., 2014). Several works have proposed to use CNNs for deep reinforcement learning applications, including playing video games (Mnih et al., 2015), executing simple task-space motions for visual servoing (Lampe & Riedmiller, 2013), controlling simple simulated robotic systems (Watter et al., 2015; Lillicrap et al., 2016), and performing a variety of robotic manipulation tasks (Levine et al., 2015). Many of these applications have been in simple or synthetic domains, and all of them have focused on relatively constrained environments with small datasets. ", "title": "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection" }, { "id": "1603.02199_all_11", "text": " Our approach to learning hand-eye coordination for grasping consists of two parts. The first part is a prediction network $g(\\mathbf{I}_{t},\\mathbf{v}_{t})$ that accepts visual input $\\mathbf{I}_{t}$ and a task-space motion command $\\mathbf{v}_{t}$, and outputs the predicted probability that executing the command $\\mathbf{v}_{t}$ will produce a successful grasp. The second part is a servoing function $f(\\mathbf{I}_{t})$ that uses the prediction network to continuously control the robot to servo the gripper to a successful grasp. We describe each of these components below: Section 4.1 formally defines the task solved by the prediction network and describes the network architecture, Section 4.2 describes how the servoing function can use the prediction network to perform continuous control. 
", "title": "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection" }, { "id": "1603.02199_all_12", "text": " By breaking up the hand-eye coordination system into components, we can train the CNN grasp predictor using a standard supervised learning objective, and design the servoing mechanism to utilize this predictor to optimize grasp performance. The resulting method can be interpreted as a type of reinforcement learning, and we discuss this interpretation, together with the underlying assumptions, in Section 4.3. ", "title": "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection" }, { "id": "1603.02199_all_13", "text": " In order to train our prediction network, we collected over 800,000 grasp attempts using a set of similar (but not identical) robotic manipulators, shown in Figure 1. We discuss the details of our hardware setup in Section 5.1, and discuss the data collection process in Section 5.2. To ensure generalization of the learned prediction network, the specific parameters of each robot varied in terms of the camera pose relative to the robot, providing independence to camera calibration. Furthermore, uneven wear and tear on each robot resulted in differences in the shape of the gripper fingers. Although accurately predicting optimal motion vectors in open-loop is not possible with this degree of variation, as demonstrated in our experiments, our continuous servoing method can correct mistakes by observing the outcomes of its past actions, achieving a high success rate even without knowledge of the precise camera calibration. ", "title": "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection" }, { "id": "1603.02199_all_14", "text": " In this section, we discuss each component of our approach, including a description of the neural network architecture and the servoing mechanism, and conclude with an interpretation of the method as a form of reinforcement learning, including the corresponding assumptions on the structure of the decision problem. ", "title": "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection" }, { "id": "1603.02199_all_15", "text": " The grasp prediction network g​(𝐈t,𝐯t)𝑔subscript𝐈𝑡subscript𝐯𝑡g(\\mathbf{I}_{t},\\mathbf{v}_{t}) is trained to predict whether a given task-space motion 𝐯tsubscript𝐯𝑡\\mathbf{v}_{t} will result in a successful grasp, based on the current camera observation 𝐈tsubscript𝐈𝑡\\mathbf{I}_{t}. In order to make accurate predictions, g​(𝐈t,𝐯t)𝑔subscript𝐈𝑡subscript𝐯𝑡g(\\mathbf{I}_{t},\\mathbf{v}_{t}) must be able to parse the current camera image, locate the gripper, and determine whether moving the gripper according to 𝐯tsubscript𝐯𝑡\\mathbf{v}_{t} will put it in a position where closing the fingers will pick up an object. This is a complex spatial reasoning task that requires not only the ability to parse the geometry of the scene from monocular images, but also the ability to interpret material properties and spatial relationships between objects, which strongly affect the success of a given grasp. A pair of example input images for the network is shown in Figure 2, overlaid with lines colored accordingly to the inferred grasp success probabilities. Importantly, the movement vectors provided to the network are not transformed into the frame of the camera, which means that the method does not require hand-to-eye camera calibration. 
However, this also means that the network must itself infer the outcome of a task-space motor command by determining the orientation and position of the robot and gripper. ", "title": "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection" }, { "id": "1603.02199_all_16", "text": " Data for training the CNN grasp predictor is obtained by attempting grasps using real physical robots. Each grasp consists of $T$ time steps. At each time step, the robot records the current image $\\mathbf{I}_{t}^{i}$ and the current pose $\\mathbf{p}_{t}^{i}$, and then chooses a direction along which to move the gripper. At the final time step $T$, the robot closes the gripper and evaluates the success of the grasp (as described in Appendix B), producing a label $\\ell_{i}$. Each grasp attempt results in $T$ training samples, given by $(\\mathbf{I}_{t}^{i},\\mathbf{p}_{T}^{i}-\\mathbf{p}_{t}^{i},\\ell_{i})$. That is, each sample includes the image observed at that time step, the vector from the current pose to the one that is eventually reached, and the success of the entire grasp. This process is illustrated in Figure 3. This procedure trains the network to predict whether moving a gripper along a given vector and then grasping will produce a successful grasp. Note that this differs from the standard reinforcement-learning setting, where the prediction is based on the current state and motor command, which in this case is given by $\\mathbf{p}_{t+1}-\\mathbf{p}_{t}$. We discuss the interpretation of this approach in the context of reinforcement learning in Section 4.3. ", "title": "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection" }, { "id": "1603.02199_all_17", "text": " The architecture of our grasp prediction CNN is shown in Figure 4. The network takes the current image $\\mathbf{I}_{t}$ as input, as well as an additional image $\\mathbf{I}_{0}$ that is recorded before the grasp begins, and does not contain the gripper. This additional image provides an unoccluded view of the scene. The two input images are concatenated and processed by 5 convolutional layers with batch normalization (Ioffe & Szegedy, 2015), followed by max pooling. After the 5th layer, we provide the vector $\\mathbf{v}_{t}$ as input to the network. The vector is represented by 5 values: a 3D translation vector, and a sine-cosine encoding of the change in orientation of the gripper about the vertical axis (footnote: in this work, we only consider vertical pinch grasps, though extensions to other grasp parameterizations would be straightforward). To provide this vector to the convolutional network, we pass it through one fully connected layer and replicate it over the spatial dimensions of the response map after layer 5, concatenating it with the output of the pooling layer. After this concatenation, further convolution and pooling operations are applied, as described in Figure 4, followed by a set of small fully connected layers that output the probability of grasp success, trained with a cross-entropy loss to match $\\ell_{i}$, causing the network to output $p(\\ell_{i}=1)$. 
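Editor's note: a small sketch of how a recorded grasp episode could be turned into the $(\\mathbf{I}_{t}^{i},\\ \\mathbf{p}_{T}^{i}-\\mathbf{p}_{t}^{i},\\ \\ell_{i})$ training tuples described above; the episode fields and types are assumptions, not the dataset's actual schema.

```python
from dataclasses import dataclass
from typing import List, Tuple
import numpy as np

@dataclass
class GraspEpisode:
    images: List[np.ndarray]   # I_0 ... I_{T-1}: camera frames
    poses: List[np.ndarray]    # p_0 ... p_{T-1}: gripper poses in the robot frame
    success: bool              # grasp outcome label

def episode_to_samples(ep: GraspEpisode) -> List[Tuple[np.ndarray, np.ndarray, int]]:
    """Each timestep t yields (I_t, p_T - p_t, label): the image, the vector from the
    current pose to the final pre-grasp pose, and the outcome of the whole attempt."""
    p_final = ep.poses[-1]
    label = int(ep.success)
    return [(img, p_final - p, label) for img, p in zip(ep.images, ep.poses)]
```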
The input images are 512×512 pixels, and we randomly crop the images to a 472×472 region during training to provide for translation invariance. ", "title": "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection" }, { "id": "1603.02199_all_18", "text": " Once trained, the network $g(\\mathbf{I}_{t},\\mathbf{v}_{t})$ can predict the probability of success of a given motor command, independently of the exact camera pose. In the next section, we discuss how this grasp success predictor can be used to continuously servo the gripper to a graspable object. ", "title": "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection" }, { "id": "1603.02199_all_19", "text": " In this section, we describe the servoing mechanism $f(\\mathbf{I}_{t})$ that uses the grasp prediction network to choose the motor commands for the robot that will maximize the probability of a successful grasp. The most basic operation for the servoing mechanism is to perform inference in the grasp predictor, in order to determine the motor command $\\mathbf{v}_{t}$ given an image $\\mathbf{I}_{t}$. The simplest way of doing this is to randomly sample a set of candidate motor commands $\\mathbf{v}_{t}$ and then evaluate $g(\\mathbf{I}_{t},\\mathbf{v}_{t})$, taking the command with the highest probability of success. However, we can obtain better results by running a small optimization on $\\mathbf{v}_{t}$, which we perform using the cross-entropy method (CEM) (Rubinstein & Kroese, 2004). CEM is a simple derivative-free optimization algorithm that samples a batch of $N$ values at each iteration, fits a Gaussian distribution to $M<N$ of these samples, and then samples a new batch of $N$ from this Gaussian. We use $N=64$ and $M=6$ in our implementation, and perform three iterations of CEM to determine the best available command $\\mathbf{v}_{t}^{\\star}$ and thus evaluate $f(\\mathbf{I}_{t})$. New motor commands are issued as soon as the CEM optimization completes, and the controller runs at around 2 to 5 Hz. ", "title": "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection" }, { "id": "1603.02199_all_20", "text": " One appealing property of this sampling-based approach is that we can easily impose constraints on the types of grasps that are sampled. This can be used, for example, to incorporate user commands that require the robot to grasp in a particular location, keep the robot from grasping outside of the workspace, and obey joint limits. It also allows the servoing mechanism to control the height of the gripper during each move. It is often desirable to raise the gripper above the objects in the scene to reposition it to a new location, for example when the objects move (due to contacts) or if errors due to lack of camera calibration produce motions that do not position the gripper in a favorable configuration for grasping. ", "title": "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection" }, { "id": "1603.02199_all_21", "text": " We can use the predicted grasp success $p(\\ell=1)$ produced by the network to inform a heuristic for raising and lowering the gripper, as well as to choose when to stop moving and attempt a grasp. 
We use two heuristics in particular: first, we close the gripper whenever the network predicts that $(\\mathbf{I}_{t},\\emptyset)$, where $\\emptyset$ corresponds to no motion, will succeed with a probability that is at least 90% of the best inferred motion $\\mathbf{v}_{t}^{\\star}$. The rationale behind this is to stop the grasp early if closing the gripper is nearly as likely to produce a successful grasp as moving it. The second heuristic is to raise the gripper off the table when $(\\mathbf{I}_{t},\\emptyset)$ has a probability of success that is less than 50% of $\\mathbf{v}_{t}^{\\star}$. The rationale behind this choice is that, if closing the gripper now is substantially worse than moving it, the gripper is most likely not positioned in a good configuration, and a large motion will be required. Therefore, raising the gripper off the table minimizes the chance of hitting other objects that are in the way. While these heuristics are somewhat ad-hoc, we found that they were effective for successfully grasping a wide range of objects in highly cluttered situations, as discussed in Section 6. Pseudocode for the servoing mechanism $f(\\mathbf{I}_{t})$ is presented in Algorithm 1. Further details on the servoing mechanism are presented in Appendix A. ", "title": "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection" }, { "id": "1603.02199_all_22", "text": " One interesting conceptual question raised by our approach is the relationship between training the grasp prediction network and reinforcement learning. In the case where $T=2$, and only one decision is made by the servoing mechanism, the grasp network can be regarded as approximating the Q-function for the policy defined by the servoing mechanism $f(\\mathbf{I}_{t})$ and a reward function that is 1 when the grasp succeeds and 0 otherwise. Repeatedly deploying the latest grasp network $g(\\mathbf{I}_{t},\\mathbf{v}_{t})$, collecting additional data, and refitting $g(\\mathbf{I}_{t},\\mathbf{v}_{t})$ can then be regarded as fitted Q iteration (Antos et al., 2008). However, what happens when $T>2$? In that case, fitted Q iteration would correspond to learning to predict the final probability of success from tuples of the form $(\\mathbf{I}_{t},\\mathbf{p}_{t+1}-\\mathbf{p}_{t})$, which is substantially harder, since $\\mathbf{p}_{t+1}-\\mathbf{p}_{t}$ doesn't tell us where the gripper will end up at the end, before closing (which is $\\mathbf{p}_{T}$). ", "title": "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection" }, { "id": "1603.02199_all_23", "text": " Using $\\mathbf{p}_{T}-\\mathbf{p}_{t}$ as the action representation in fitted Q iteration therefore implies an additional assumption on the form of the dynamics. The assumption is that the actions induce a transitive relation between states: that is, that moving from $\\mathbf{p}_{1}$ to $\\mathbf{p}_{2}$ and then to $\\mathbf{p}_{3}$ is equivalent to moving from $\\mathbf{p}_{1}$ to $\\mathbf{p}_{3}$ directly. 
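Editor's note: a condensed sketch of the servoing mechanism $f(\\mathbf{I}_{t})$ described above, combining the CEM inference ($N=64$, $M=6$, three iterations) with the two gripper heuristics. The `grasp_net` interface, the 5-dimensional action encoding, and the zero-vector stand-in for the no-motion command $\\emptyset$ are assumptions for illustration, not the paper's Algorithm 1.

```python
import numpy as np

def servo_step(grasp_net, image, n=64, m=6, iters=3):
    """CEM over candidate motion vectors, then the two heuristics: close the gripper
    if the no-motion score is at least 90% of the best motion's score, raise it if
    the no-motion score is below 50%. grasp_net(image, v) -> P(grasp success)."""
    mean, std = np.zeros(5), np.ones(5)                 # 3D translation + sin/cos yaw (assumed init)
    for _ in range(iters):
        candidates = mean + std * np.random.randn(n, 5)
        scores = np.array([grasp_net(image, v) for v in candidates])
        elite = candidates[np.argsort(scores)[-m:]]     # keep the M best samples
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    best_v = mean
    best_p = grasp_net(image, best_v)
    stay_p = grasp_net(image, np.zeros(5))              # illustrative encoding of "no motion"
    if stay_p >= 0.9 * best_p:
        return "close_gripper", None
    if stay_p < 0.5 * best_p:
        return "raise_gripper", best_v
    return "move", best_v
```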
This assumption does not always hold in the case of grasping, since an intermediate motion might move objects in the scene, but it is a reasonable approximation that we found works quite well in practice. The major advantage of this approximation is that fitting the Q function reduces to a prediction problem, and avoids the usual instabilities associated with Q iteration, since the previous Q function does not appear in the regression. An interesting and promising direction for future work is to combine our approach with more standard reinforcement learning formulations that do consider the effects of intermediate actions. This could enable the robot, for example, to perform nonprehensile manipulations to intentionally reorient and reposition objects prior to grasping. ", "title": "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection" }, { "id": "1603.02199_all_24", "text": " In order to collect training data to train the prediction network $g(\\mathbf{I}_{t},\\mathbf{v}_{t})$, we used between 6 and 14 robots at any given time. An illustration of our data collection setup is shown in Figure 1. This section describes the robots used in our data collection process, as well as the data collection procedure. The dataset is available here: https://sites.google.com/site/brainrobotdata/home ", "title": "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection" }, { "id": "1603.02199_all_25", "text": " Our robotic manipulator platform consists of a lightweight 7 degree of freedom arm, a compliant, underactuated, two-finger gripper, and a camera mounted behind the arm looking over the shoulder. An illustration of a single robot is shown in Figure 5. The underactuated gripper provides some degree of compliance for oddly shaped objects, at the cost of producing a loose grip that is prone to slipping. An interesting property of this gripper was uneven wear and tear over the course of data collection, which lasted several months. Images of the grippers of various robots are shown in Figure 7, illustrating the range of variation in gripper wear and geometry. Furthermore, the cameras were mounted at slightly varying angles, providing a different viewpoint for each robot. The views from the cameras of all 14 robots during data collection are shown in Figure 6. ", "title": "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection" }, { "id": "1603.02199_all_26", "text": " We collected about 800,000 grasp attempts over the course of two months, using between 6 and 14 robots at any given point in time, without any manual annotation or supervision. The only human intervention into the data collection process was to replace the object in the bins in front of the robots and turn on the system. The data collection process started with random motor command selection and $T=2$ (footnote: the last command is always $\\mathbf{v}_{T}=\\emptyset$ and corresponds to closing the gripper without moving). When executing completely random motor commands, the robots were successful on 10% - 30% of the grasp attempts, depending on the particular objects in front of them. About half of the dataset was collected using random grasps, and the rest used the latest network fitted to all of the data collected so far. 
Over the course of data collection, we updated the network 4 times, and increased the number of steps from $T=2$ at the beginning to $T=10$ at the end. ", "title": "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection" }, { "id": "1603.02199_all_27", "text": " The objects for grasping were chosen among common household and office items, and ranged from 4 to 20 cm in length along the longest axis. Some of these objects are shown in Figure 6. The objects were placed in front of the robots into metal bins with sloped sides to prevent the objects from becoming wedged into corners. The objects were periodically swapped out to increase the diversity of the training data. ", "title": "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection" }, { "id": "1603.02199_all_28", "text": " Grasp success was evaluated using two methods: first, we marked a grasp as successful if the position reading on the gripper was greater than 1 cm, indicating that the fingers had not closed fully. However, this method often missed thin objects, and we also included a drop test, where the robot picked up the object, recorded an image of the bin, and then dropped any object that was in the gripper. By comparing the image before and after the drop, we could determine whether any object had been picked up. ", "title": "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection" }, { "id": "1603.02199_all_29", "text": " To evaluate our continuous grasping system, we conducted a series of quantitative experiments with novel objects that were not seen during training. The particular objects used in our evaluation are shown in Figure 8. This set of objects presents a challenging cross section of common office and household items, including objects that are heavy, such as staplers and tape dispensers, objects that are flat, such as post-it notes, as well as objects that are small, large, rigid, soft, and translucent. ", "title": "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection" }, { "id": "1603.02199_all_30", "text": " The goal of our evaluation was to answer the following questions: (1) does continuous servoing significantly improve grasping accuracy and success rate? (2) how well does our learning-based system perform when compared to alternative approaches? To answer question (1), we compared our approach to an open-loop method that observes the scene prior to the grasp, extracts image patches, chooses the patch with the highest probability of a successful grasp, and then uses a known camera calibration to move the gripper to that location. This method is analogous to the approach proposed by Pinto & Gupta (2015), but uses the same network architecture as our method and the same training set. We refer to this approach as “open loop,” since it does not make use of continuous visual feedback. To answer question (2), we also compared our approach to a random baseline method, as well as a hand-engineered grasping system that uses depth images and heuristic positioning of the fingers. This hand-engineered system is described in Appendix C. Note that our method requires fewer assumptions than either of the two alternative methods: unlike Pinto & Gupta (2015), we do not require knowledge of the camera to hand calibration, and unlike the hand-engineered system, we do not require either the calibration or depth images. 
", "title": "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection" }, { "id": "1603.02199_all_31", "text": " We evaluated the methods using two experimental protocols. In the first protocol, the objects were placed into a bin in front of the robot, and it was allowed to grasp objects for 100 attempts, placing any grasped object back into the bin after each attempt. Grasping with replacement tests the ability of the system to pick up objects in cluttered settings, but it also allows the robot to repeatedly pick up easy objects. To address this shortcoming of the replacement condition, we also tested each system without replacement, as shown in Figure 8, by having it remove objects from a bin. For this condition, which we refer to as “without replacement,” we repeated each experiment 4 times, and we report success rates on the first 10, 20, and 30 grasp attempts. ", "title": "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection" }, { "id": "1603.02199_all_32", "text": " The results are presented in Table 1. The success rate of our continuous servoing method exceeded the baseline and prior methods in all cases. For the evaluation without replacement, our method cleared the bin completely after 30 grasps on one of the 4 attempts, and had only one object left in the other 3 attempts (which was picked up on the 31stsuperscript31st31^{\\text{st}} grasp attempt in 2 of the three cases, thus clearing the bin). The hand-engineered baseline struggled to accurately resolve graspable objects in clutter, since the camera was positioned about a meter away from the table, and its performance also dropped in the non-replacement case as the bin was emptied, leaving only small, flat objects that could not be resolved by the depth camera. Many practical grasping systems use a wrist-mounted camera to address this issue (Leeper et al., 2014). In contrast, our approach did not require any special hardware modifications. The open-loop baseline was also substantially less successful. Although it benefited from the large dataset collected by our parallelized data collection setup, which was more than an order of magnitude larger than in prior work (Pinto & Gupta, 2015), it was unable to react to perturbations, movement of objects, and variability in actuation and gripper shape.333The absolute performance of the open-loop method is lower than reported by Pinto & Gupta (2015). This can be attributed to differences in the setup: different objects, grippers, and clutter. ", "title": "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection" }, { "id": "1603.02199_all_33", "text": " In Table 2, we evaluate the performance of our model under the no replacement condition with varying amounts of data. We trained grasp prediction models using roughly the first 12%percent1212\\%, 25%percent2525\\%, and 50%percent5050\\% of the grasp attempts in our dataset, to simulate the effective performance of the model one eighth, one quarter, and one half of the way through the data collection process. Table 2 shows the size of each dataset in terms of the number of images. Note that the length of the trajectories changed over the course of data collection, increasing from T=2𝑇2T=2 at the beginning to T=10𝑇10T=10 at the end, so that the later datasets are substantially larger in terms of the total number of images. 
Furthermore, the success rate in the later grasp attempts was substantially higher, increasing from 101010 to 20%percent2020\\% in the beginning to around 70%percent7070\\% at the end (using ϵitalic-ϵ\\epsilon-greedy exploration with ϵ=0.1italic-ϵ0.1\\epsilon=0.1, meaning that one in ten decisions was taken at random). Nonetheless, these results can be informative for understanding the data requirements of the grasping task. First, the results suggest that the grasp success rate continued to improve as more data was accumulated, and a high success rate (exceeding the open-loop and hand-engineered baselines) was not observed until at least halfway through the data collection process. The results also suggest that collecting additional data could further improve the accuracy of the grasping system, and we plan to experiment with larger datasets in the future. ", "title": "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection" }, { "id": "1603.02199_all_34", "text": " Qualitatively, our method exhibited some interesting behaviors. Figure 9 shows the grasps that were chosen for soft and hard objects. Our system preferred to grasp softer objects by embedding the finger into the center of the object, while harder objects were grasped by placing the fingers on either side. Our method was also able to grasp a variety of challenging objects, some of which are shown in Figure 10. Other interesting grasp strategies, corrections, and mistakes can be seen in our supplementary video: https://youtu.be/cXaic_k80uM ", "title": "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection" }, { "id": "1603.02199_all_35", "text": " We presented a method for learning hand-eye coordination for robotic grasping, using deep learning to build a grasp success prediction network, and a continuous servoing mechanism to use this network to continuously control a robotic manipulator. By training on over 800,000 grasp attempts from 14 distinct robotic manipulators with variation in camera pose, we can achieve invariance to camera calibration and small variations in the hardware. Unlike most grasping and visual servoing methods, our approach does not require calibration of the camera to the robot, instead using continuous feedback to correct any errors resulting from discrepancies in calibration. Our experimental results demonstrate that our method can effectively grasp a wide range of different objects, including novel objects not seen during training. Our results also show that our method can use continuous feedback to correct mistakes and reposition the gripper in response to perturbation and movement of objects in the scene. ", "title": "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection" }, { "id": "1603.02199_all_36", "text": " As with all learning-based methods, our approach assumes that the data distribution during training resembles the distribution at test-time. While this assumption is reasonable for a large and diverse training set, such as the one used in this work, structural regularities during data collection can limit generalization at test time. For example, although our method exhibits some robustness to small variations in gripper shape, it would not readily generalize to new robotic platforms that differ substantially from those used during training. 
Furthermore, since all of our training grasp attempts were executed on flat surfaces, the proposed method is unlikely to generalize well to grasping on shelves, narrow cubbies, or other drastically different settings. These issues can be mitigated by increasing the diversity of the training setup, which we plan to explore as future work. ", "title": "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection" }, { "id": "1603.02199_all_37", "text": " One of the most exciting aspects of the proposed grasping method is the ability of the learning algorithm to discover unconventional and non-obvious grasping strategies. We observed, for example, that the system tended to adopt a different approach for grasping soft objects, as opposed to hard ones. For hard objects, the fingers must be placed on either side of the object for a successful grasp. However, soft objects can be grasped simply by pinching into the object, which is most easily accomplished by placing one finger into the middle, and the other to the side. We observed this strategy for objects such as paper tissues and sponges. In future work, we plan to further explore the relationship between our self-supervised continuous grasping approach and reinforcement learning, in order to allow the methods to learn a wider variety of grasp strategies from large datasets of robotic experience. ", "title": "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection" }, { "id": "1603.02199_all_38", "text": " At a more general level, our work explores the implications of large-scale data collection across multiple robotic platforms, demonstrating the value of this type of automatic large dataset construction for real-world robotic tasks. Although all of the robots in our experiments were located in a controlled laboratory environment, in the long term, this class of methods is particularly compelling for robotic systems that are deployed in the real world, and therefore are naturally exposed to a wide variety of environments, objects, lighting conditions, and wear and tear. For self-supervised tasks such as grasping, data collected and shared by robots in the real world would be the most representative of test-time inputs, and would therefore be the best possible training data for improving the real-world performance of the system. So a particularly exciting avenue for future work is to explore how our method would need to change to apply it to large-scale data collection across a large number of deployed robots engaged in real world tasks, including grasping and other manipulation skills. ", "title": "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection" }, { "id": "1603.02199_all_39", "text": " We would like to thank Kurt Konolige and Mrinal Kalakrishnan for additional engineering and insightful discussions, Jed Hewitt, Don Jordan, and Aaron Weiss for help with maintaining the robots, Max Bajracharya and Nicolas Hudson for providing us with a baseline perception pipeline, and Vincent Vanhoucke and Jeff Dean for support and organization. ", "title": "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection" } ]
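The passages above note that later data collection used ε-greedy exploration with ε = 0.1, i.e. one in ten motor commands was chosen at random while the rest followed the learned grasp-success predictor. The snippet below is a minimal, illustrative Python sketch of that selection rule only; `q_function` and `candidates` are hypothetical stand-ins for the paper's trained prediction network g(I_t, v_t) and a set of sampled motor commands, not the authors' actual code.

```python
import random

def epsilon_greedy_command(candidates, q_function, image, epsilon=0.1):
    # With probability epsilon pick a random motor command (exploration);
    # otherwise pick the candidate the learned network scores as most
    # likely to end in a successful grasp (exploitation).
    # q_function(image, command) is assumed to return that success probability.
    if random.random() < epsilon:
        return random.choice(candidates)
    return max(candidates, key=lambda v: q_function(image, v))
```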
Is fast processing the only metric that we consider in segmentation?
Other than the processing time, the performance of the segmentation task is also measured by computing the warping error, the Rand error, and the pixel error from the thresholded segmentation map, as well as by the IOU (intersection over union) metric [16].
[ 16 ]
[ { "id": "1505.04597_all_0", "text": " In the last two years, deep convolutional networks have outperformed the state of the art in many visual recognition tasks, e.g. (7, 3). While convolutional networks have already existed for a long time , their success was limited due to the size of the available training sets and the size of the considered networks. The breakthrough by Krizhevsky et al.  was due to supervised training of a large network with 8 layers and millions of parameters on the ImageNet dataset with 1 million training images. Since then, even larger and deeper networks have been trained . ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_1", "text": " The typical use of convolutional networks is on classification tasks, where the output to an image is a single class label. However, in many visual tasks, especially in biomedical image processing, the desired output should include localization, i.e., a class label is supposed to be assigned to each pixel. Moreover, thousands of training images are usually beyond reach in biomedical tasks. Hence, Ciresan et al.  trained a network in a sliding-window setup to predict the class label of each pixel by providing a local region (patch) around that pixel as input. First, this network can localize. Secondly, the training data in terms of patches is much larger than the number of training images. The resulting network won the EM segmentation challenge at ISBI 2012 by a large margin. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_2", "text": " Obviously, the strategy in Ciresan et al.  has two drawbacks. First, it is quite slow because the network must be run separately for each patch, and there is a lot of redundancy due to overlapping patches. Secondly, there is a trade-off between localization accuracy and the use of context. Larger patches require more max-pooling layers that reduce the localization accuracy, while small patches allow the network to see only little context. More recent approaches (11, 4) proposed a classifier output that takes into account the features from multiple layers. Good localization and the use of context are possible at the same time. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_3", "text": " In this paper, we build upon a more elegant architecture, the so-called “fully convolutional network” . We modify and extend this architecture such that it works with very few training images and yields more precise segmentations; see Figure 1. The main idea in is to supplement a usual contracting network by successive layers, where pooling operators are replaced by upsampling operators. Hence, these layers increase the resolution of the output. In order to localize, high resolution features from the contracting path are combined with the upsampled output. A successive convolution layer can then learn to assemble a more precise output based on this information. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_4", "text": " One important modification in our architecture is that in the upsampling part we have also a large number of feature channels, which allow the network to propagate context information to higher resolution layers. As a consequence, the expansive path is more or less symmetric to the contracting path, and yields a u-shaped architecture. 
The network does not have any fully connected layers and only uses the valid part of each convolution, i.e., the segmentation map only contains the pixels, for which the full context is available in the input image. This strategy allows the seamless segmentation of arbitrarily large images by an overlap-tile strategy (see Figure 2). To predict the pixels in the border region of the image, the missing context is extrapolated by mirroring the input image. This tiling strategy is important to apply the network to large images, since otherwise the resolution would be limited by the GPU memory. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_5", "text": " As for our tasks there is very little training data available, we use excessive data augmentation by applying elastic deformations to the available training images. This allows the network to learn invariance to such deformations, without the need to see these transformations in the annotated image corpus. This is particularly important in biomedical segmentation, since deformation used to be the most common variation in tissue and realistic deformations can be simulated efficiently. The value of data augmentation for learning invariance has been shown in Dosovitskiy et al.  in the scope of unsupervised feature learning. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_6", "text": " Another challenge in many cell segmentation tasks is the separation of touching objects of the same class; see Figure 3. To this end, we propose the use of a weighted loss, where the separating background labels between touching cells obtain a large weight in the loss function. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_7", "text": " The resulting network is applicable to various biomedical segmentation problems. In this paper, we show results on the segmentation of neuronal structures in EM stacks (an ongoing competition started at ISBI 2012), where we outperformed the network of Ciresan et al. . Furthermore, we show results for cell segmentation in light microscopy images from the ISBI cell tracking challenge 2015. Here we won with a large margin on the two most challenging 2D transmitted light datasets. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_8", "text": " The network architecture is illustrated in Figure 1. It consists of a contracting path (left side) and an expansive path (right side). The contracting path follows the typical architecture of a convolutional network. It consists of the repeated application of two 3x3 convolutions (unpadded convolutions), each followed by a rectified linear unit (ReLU) and a 2x2 max pooling operation with stride 2 for downsampling. At each downsampling step we double the number of feature channels. Every step in the expansive path consists of an upsampling of the feature map followed by a 2x2 convolution (“up-convolution”) that halves the number of feature channels, a concatenation with the correspondingly cropped feature map from the contracting path, and two 3x3 convolutions, each followed by a ReLU. The cropping is necessary due to the loss of border pixels in every convolution. At the final layer a 1x1 convolution is used to map each 64-component feature vector to the desired number of classes. In total the network has 23 convolutional layers. 
", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_9", "text": " To allow a seamless tiling of the output segmentation map (see Figure 2), it is important to select the input tile size such that all 2x2 max-pooling operations are applied to a layer with an even x- and y-size. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_10", "text": " The input images and their corresponding segmentation maps are used to train the network with the stochastic gradient descent implementation of Caffe . Due to the unpadded convolutions, the output image is smaller than the input by a constant border width. To minimize the overhead and make maximum use of the GPU memory, we favor large input tiles over a large batch size and hence reduce the batch to a single image. Accordingly we use a high momentum (0.99) such that a large number of the previously seen training samples determine the update in the current optimization step. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_11", "text": " The energy function is computed by a pixel-wise soft-max over the final feature map combined with the cross entropy loss function. The soft-max is defined as pk​(𝐱)=exp⁡(ak​(𝐱))/(∑k′=1Kexp⁡(ak′​(𝐱)))subscript𝑝𝑘𝐱subscript𝑎𝑘𝐱superscriptsubscriptsuperscript𝑘′1𝐾subscript𝑎superscript𝑘′𝐱{p}_{k}(\\boldsymbol{\\mathbf{x}})=\\exp({a_{k}(\\boldsymbol{\\mathbf{x}})})/\\left(\\sum_{k^{\\prime}=1}^{K}\\exp(a_{k^{\\prime}}(\\boldsymbol{\\mathbf{x}}))\\right) where ak​(𝐱)subscript𝑎𝑘𝐱a_{k}(\\boldsymbol{\\mathbf{x}}) denotes the activation in feature channel k𝑘k at the pixel position 𝐱∈Ω𝐱Ω\\boldsymbol{\\mathbf{x}}\\in\\Omega with Ω⊂ℤ2Ωsuperscriptℤ2\\Omega\\subset\\mathbb{Z}^{2}. K𝐾K is the number of classes and pk​(𝐱)subscript𝑝𝑘𝐱{p}_{k}(\\boldsymbol{\\mathbf{x}}) is the approximated maximum-function. I.e. pk​(𝐱)≈1subscript𝑝𝑘𝐱1{p}_{k}(\\boldsymbol{\\mathbf{x}})\\approx 1 for the k𝑘k that has the maximum activation ak​(𝐱)subscript𝑎𝑘𝐱a_{k}(\\boldsymbol{\\mathbf{x}}) and pk​(𝐱)≈0subscript𝑝𝑘𝐱0{p}_{k}(\\boldsymbol{\\mathbf{x}})\\approx 0 for all other k𝑘k. The cross entropy then penalizes at each position the deviation of pℓ​(𝐱)​(𝐱)subscript𝑝ℓ𝐱𝐱{p}_{\\ell(\\boldsymbol{\\mathbf{x}})}(\\boldsymbol{\\mathbf{x}}) from 1 using E=∑𝐱∈Ωw​(𝐱)​log⁡(pℓ​(𝐱)​(𝐱))𝐸subscript𝐱Ω𝑤𝐱subscript𝑝ℓ𝐱𝐱E=\\sum_{\\boldsymbol{\\mathbf{x}}\\in\\Omega}w(\\boldsymbol{\\mathbf{x}})\\log({p}_{\\ell(\\boldsymbol{\\mathbf{x}})}(\\boldsymbol{\\mathbf{x}})) (1) where ℓ:Ω→{1,…,K}:ℓ→Ω1…𝐾\\ell:\\Omega\\rightarrow\\{1,\\dots,K\\} is the true label of each pixel and w:Ω→ℝ:𝑤→Ωℝw:\\Omega\\rightarrow\\mathds{R} is a weight map that we introduced to give some pixels more importance in the training. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_12", "text": " We pre-compute the weight map for each ground truth segmentation to compensate the different frequency of pixels from a certain class in the training data set, and to force the network to learn the small separation borders that we introduce between touching cells (See Figure 3c and d). ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_13", "text": " The separation border is computed using morphological operations. 
The weight map is then computed as w​(𝐱)=wc​(𝐱)+w0⋅exp⁡(−(d1​(𝐱)+d2​(𝐱))22​σ2)𝑤𝐱subscript𝑤𝑐𝐱⋅subscript𝑤0superscriptsubscript𝑑1𝐱subscript𝑑2𝐱22superscript𝜎2w(\\boldsymbol{\\mathbf{x}})=w_{c}(\\boldsymbol{\\mathbf{x}})+w_{0}\\cdot\\exp\\left(-\\frac{(d_{1}(\\boldsymbol{\\mathbf{x}})+d_{2}(\\boldsymbol{\\mathbf{x}}))^{2}}{2\\sigma^{2}}\\right) (2) where wc:Ω→ℝ:subscript𝑤𝑐→Ωℝw_{c}:\\Omega\\rightarrow\\mathds{R} is the weight map to balance the class frequencies, d1:Ω→ℝ:subscript𝑑1→Ωℝd_{1}:\\Omega\\rightarrow\\mathds{R} denotes the distance to the border of the nearest cell and d2:Ω→ℝ:subscript𝑑2→Ωℝd_{2}:\\Omega\\rightarrow\\mathds{R} the distance to the border of the second nearest cell. In our experiments we set w0=10subscript𝑤010w_{0}=10 and σ≈5𝜎5\\sigma\\approx 5 pixels. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_14", "text": " In deep networks with many convolutional layers and different paths through the network, a good initialization of the weights is extremely important. Otherwise, parts of the network might give excessive activations, while other parts never contribute. Ideally the initial weights should be adapted such that each feature map in the network has approximately unit variance. For a network with our architecture (alternating convolution and ReLU layers) this can be achieved by drawing the initial weights from a Gaussian distribution with a standard deviation of 2/N2𝑁\\sqrt{2/N}, where N𝑁N denotes the number of incoming nodes of one neuron . E.g. for a 3x3 convolution and 64 feature channels in the previous layer N=9⋅64=576𝑁⋅964576N=9\\cdot 64=576. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_15", "text": " Data augmentation is essential to teach the network the desired invariance and robustness properties, when only few training samples are available. In case of microscopical images we primarily need shift and rotation invariance as well as robustness to deformations and gray value variations. Especially random elastic deformations of the training samples seem to be the key concept to train a segmentation network with very few annotated images. We generate smooth deformations using random displacement vectors on a coarse 3 by 3 grid. The displacements are sampled from a Gaussian distribution with 10 pixels standard deviation. Per-pixel displacements are then computed using bicubic interpolation. Drop-out layers at the end of the contracting path perform further implicit data augmentation. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_16", "text": " We demonstrate the application of the u-net to three different segmentation tasks. The first task is the segmentation of neuronal structures in electron microscopic recordings. An example of the data set and our obtained segmentation is displayed in Figure 2. We provide the full result as Supplementary Material. The data set is provided by the EM segmentation challenge  that was started at ISBI 2012 and is still open for new contributions. The training data is a set of 30 images (512x512 pixels) from serial section transmission electron microscopy of the Drosophila first instar larva ventral nerve cord (VNC). Each image comes with a corresponding fully annotated ground truth segmentation map for cells (white) and membranes (black). The test set is publicly available, but its segmentation maps are kept secret. 
An evaluation can be obtained by sending the predicted membrane probability map to the organizers. The evaluation is done by thresholding the map at 10 different levels and computation of the “warping error”, the “Rand error” and the “pixel error” . ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_17", "text": " The u-net (averaged over 7 rotated versions of the input data) achieves without any further pre- or postprocessing a warping error of 0.0003529 (the new best score, see Table 1) and a rand-error of 0.0382. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_18", "text": " This is significantly better than the sliding-window convolutional network result by Ciresan et al. , whose best submission had a warping error of 0.000420 and a rand error of 0.0504. In terms of rand error the only better performing algorithms on this data set use highly data set specific post-processing methods111The authors of this algorithm have submitted 78 different solutions to achieve this result. applied to the probability map of Ciresan et al. . ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_19", "text": " We also applied the u-net to a cell segmentation task in light microscopic images. This segmenation task is part of the ISBI cell tracking challenge 2014 and 2015 (10, 13). The first data set “PhC-U373”222Data set provided by Dr. Sanjay Kumar. Department of Bioengineering University of California at Berkeley. Berkeley CA (USA) contains Glioblastoma-astrocytoma U373 cells on a polyacrylimide substrate recorded by phase contrast microscopy (see Figure 4a,b and Supp. Material). It contains 35 partially annotated training images. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_20", "text": " Here we achieve an average IOU (“intersection over union”) of 92%, which is significantly better than the second best algorithm with 83% (see Table 2). ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_21", "text": " The second data set “DIC-HeLa”333Data set provided by Dr. Gert van Cappellen Erasmus Medical Center. Rotterdam. The Netherlands are HeLa cells on a flat glass recorded by differential interference contrast (DIC) microscopy (see Figure 3, Figure 4c,d and Supp. Material). It contains 20 partially annotated training images. Here we achieve an average IOU of 77.5% which is significantly better than the second best algorithm with 46%. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_22", "text": " The u-net architecture achieves very good performance on very different biomedical segmentation applications. Thanks to data augmentation with elastic deformations, it only needs very few annotated images and has a very reasonable training time of only 10 hours on a NVidia Titan GPU (6 GB). We provide the full Caffe-based implementation and the trained networks444U-net implementation, trained networks and supplementary material available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. We are sure that the u-net architecture can be applied easily to many more tasks. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" } ]
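The weight map of Eq. (2) quoted above (w_c plus a border term w_0·exp(−(d_1 + d_2)²/(2σ²)) with w_0 = 10 and σ ≈ 5 pixels) can be sketched in a few lines of NumPy/SciPy. This is a simplified illustration under those stated assumptions; `unet_weight_map` and the precomputed class-balancing map `w_c` are illustrative names, not the authors' released implementation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def unet_weight_map(label_mask, w_c, w0=10.0, sigma=5.0):
    # label_mask: int array with 0 = background and k > 0 = instance id of cell k.
    # w_c: precomputed class-frequency balancing map of the same shape.
    cell_ids = [k for k in np.unique(label_mask) if k != 0]
    if len(cell_ids) < 2:
        return np.asarray(w_c, dtype=float)  # no touching borders possible
    # Distance of every pixel to the nearest pixel of each individual cell.
    dists = np.stack([distance_transform_edt(label_mask != k) for k in cell_ids])
    dists.sort(axis=0)          # per pixel: nearest cell first, second nearest next
    d1, d2 = dists[0], dists[1]
    border = w0 * np.exp(-((d1 + d2) ** 2) / (2.0 * sigma ** 2))
    return w_c + border
```

In training, such a map would multiply the per-pixel cross-entropy of Eq. (1), up-weighting the thin background gaps between touching cells.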
Why aren't we using α=0.75, since the positive samples are our minority class?
Setting \alpha to 0.75 gives a gain of 0.9 points in AP [39]. However, for \gamma = 2.0, \alpha = 0.25 gives the best results, and \alpha = 0.5 works nearly as well, i.e. it lowers the AP by only 0.4 [41].
[ 39, 41 ]
[ { "id": "1708.02002_all_0", "text": " Current state-of-the-art object detectors are based on a two-stage, proposal-driven mechanism. As popularized in the R-CNN framework , the first stage generates a sparse set of candidate object locations and the second stage classifies each candidate location as one of the foreground classes or as background using a convolutional neural network. Through a sequence of advances (10, 28, 20, 14), this two-stage framework consistently achieves top accuracy on the challenging COCO benchmark . ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_1", "text": " Despite the success of two-stage detectors, a natural question to ask is: could a simple one-stage detector achieve similar accuracy? One stage detectors are applied over a regular, dense sampling of object locations, scales, and aspect ratios. Recent work on one-stage detectors, such as YOLO (26, 27) and SSD (22, 9), demonstrates promising results, yielding faster detectors with accuracy within 10-40% relative to state-of-the-art two-stage methods. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_2", "text": " This paper pushes the envelop further: we present a one-stage object detector that, for the first time, matches the state-of-the-art COCO AP of more complex two-stage detectors, such as the Feature Pyramid Network (FPN) or Mask R-CNN variants of Faster R-CNN . To achieve this result, we identify class imbalance during training as the main obstacle impeding one-stage detector from achieving state-of-the-art accuracy and propose a new loss function that eliminates this barrier. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_3", "text": " Class imbalance is addressed in R-CNN-like detectors by a two-stage cascade and sampling heuristics. The proposal stage (e.g., Selective Search , EdgeBoxes , DeepMask (24, 25), RPN ) rapidly narrows down the number of candidate object locations to a small number (e.g., 1-2k), filtering out most background samples. In the second classification stage, sampling heuristics, such as a fixed foreground-to-background ratio (1:3), or online hard example mining (OHEM) , are performed to maintain a manageable balance between foreground and background. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_4", "text": " In contrast, a one-stage detector must process a much larger set of candidate object locations regularly sampled across an image. In practice this often amounts to enumerating ∼similar-to\\scriptstyle\\sim100k locations that densely cover spatial positions, scales, and aspect ratios. While similar sampling heuristics may also be applied, they are inefficient as the training procedure is still dominated by easily classified background examples. This inefficiency is a classic problem in object detection that is typically addressed via techniques such as bootstrapping (33, 29) or hard example mining (37, 8, 31). ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_5", "text": " In this paper, we propose a new loss function that acts as a more effective alternative to previous approaches for dealing with class imbalance. The loss function is a dynamically scaled cross entropy loss, where the scaling factor decays to zero as confidence in the correct class increases, see Figure 1. 
Intuitively, this scaling factor can automatically down-weight the contribution of easy examples during training and rapidly focus the model on hard examples. Experiments show that our proposed Focal Loss enables us to train a high-accuracy, one-stage detector that significantly outperforms the alternatives of training with the sampling heuristics or hard example mining, the previous state-of-the-art techniques for training one-stage detectors. Finally, we note that the exact form of the focal loss is not crucial, and we show other instantiations can achieve similar results. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_6", "text": " To demonstrate the effectiveness of the proposed focal loss, we design a simple one-stage object detector called RetinaNet, named for its dense sampling of object locations in an input image. Its design features an efficient in-network feature pyramid and use of anchor boxes. It draws on a variety of recent ideas from (22, 6, 28, 20). RetinaNet is efficient and accurate; our best model, based on a ResNet-101-FPN backbone, achieves a COCO test-dev AP of 39.1 while running at 5 fps, surpassing the previously best published single-model results from both one and two-stage detectors, see Figure 2. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_7", "text": " The sliding-window paradigm, in which a classifier is applied on a dense image grid, has a long and rich history. One of the earliest successes is the classic work of LeCun et al. who applied convolutional neural networks to handwritten digit recognition (19, 36). Viola and Jones used boosted object detectors for face detection, leading to widespread adoption of such models. The introduction of HOG and integral channel features gave rise to effective methods for pedestrian detection. DPMs helped extend dense detectors to more general object categories and had top results on PASCAL for many years. While the sliding-window approach was the leading detection paradigm in classic computer vision, with the resurgence of deep learning , two-stage detectors, described next, quickly came to dominate object detection. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_8", "text": " The dominant paradigm in modern object detection is based on a two-stage approach. As pioneered in the Selective Search work , the first stage generates a sparse set of candidate proposals that should contain all objects while filtering out the majority of negative locations, and the second stage classifies the proposals into foreground classes / background. R-CNN upgraded the second-stage classifier to a convolutional network yielding large gains in accuracy and ushering in the modern era of object detection. R-CNN was improved over the years, both in terms of speed (15, 10) and by using learned object proposals (6, 24, 28). Region Proposal Networks (RPN) integrated proposal generation with the second-stage classifier into a single convolution network, forming the Faster R-CNN framework . Numerous extensions to this framework have been proposed, e.g. (20, 31, 32, 16, 14). ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_9", "text": " OverFeat was one of the first modern one-stage object detector based on deep networks. More recently SSD (22, 9) and YOLO (26, 27) have renewed interest in one-stage methods. These detectors have been tuned for speed but their accuracy trails that of two-stage methods. 
SSD has a 10-20% lower AP, while YOLO focuses on an even more extreme speed/accuracy trade-off. See Figure 2. Recent work showed that two-stage detectors can be made fast simply by reducing input image resolution and the number of proposals, but one-stage methods trailed in accuracy even with a larger compute budget . In contrast, the aim of this work is to understand if one-stage detectors can match or surpass the accuracy of two-stage detectors while running at similar or faster speeds. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_10", "text": " The design of our RetinaNet detector shares many similarities with previous dense detectors, in particular the concept of ‘anchors’ introduced by RPN and use of features pyramids as in SSD and FPN . We emphasize that our simple detector achieves top results not based on innovations in network design but due to our novel loss. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_11", "text": " Both classic one-stage object detection methods, like boosted detectors (37, 5) and DPMs , and more recent methods, like SSD , face a large class imbalance during training. These detectors evaluate 104superscript10410^{4}-105superscript10510^{5} candidate locations per image but only a few locations contain objects. This imbalance causes two problems: (1) training is inefficient as most locations are easy negatives that contribute no useful learning signal; (2) en masse, the easy negatives can overwhelm training and lead to degenerate models. A common solution is to perform some form of hard negative mining (33, 37, 8, 31, 22) that samples hard examples during training or more complex sampling/reweighing schemes . In contrast, we show that our proposed focal loss naturally handles the class imbalance faced by a one-stage detector and allows us to efficiently train on all examples without sampling and without easy negatives overwhelming the loss and computed gradients. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_12", "text": " There has been much interest in designing robust loss functions (e.g., Huber loss ) that reduce the contribution of outliers by down-weighting the loss of examples with large errors (hard examples). In contrast, rather than addressing outliers, our focal loss is designed to address class imbalance by down-weighting inliers (easy examples) such that their contribution to the total loss is small even if their number is large. In other words, the focal loss performs the opposite role of a robust loss: it focuses training on a sparse set of hard examples. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_13", "text": " The Focal Loss is designed to address the one-stage object detection scenario in which there is an extreme imbalance between foreground and background classes during training (e.g., 1:1000). We introduce the focal loss starting from the cross entropy (CE) loss for binary classification111Extending the focal loss to the multi-class case is straightforward and works well; for simplicity we focus on the binary loss in this work.: CE​(p,y)={−log⁡(p)if y=1−log⁡(1−p)otherwise.CE𝑝𝑦cases𝑝if y=11𝑝otherwise.\\textrm{CE}(p,y)=\\begin{cases}-\\log(p)&\\text{if $y=1$}\\\\ -\\log(1-p)&\\text{otherwise.}\\end{cases} (1) In the above y∈{±1}𝑦plus-or-minus1y\\in\\{\\pm 1\\} specifies the ground-truth class and p∈(0,1)𝑝01p\\in(0,1) is the model’s estimated probability for the class with label y=1𝑦1y=1. 
For notational convenience, we define ptsubscript𝑝tp_{\\textrm{t}}: pt={pif y=11−potherwise,subscript𝑝tcases𝑝if y=11𝑝otherwise,p_{\\textrm{t}}=\\begin{cases}p&\\text{if $y=1$}\\\\ 1-p&\\text{otherwise,}\\end{cases} (2) and rewrite CE​(p,y)=CE​(pt)=−log⁡(pt)CE𝑝𝑦CEsubscript𝑝tsubscript𝑝t\\textrm{CE}(p,y)=\\textrm{CE}(p_{\\textrm{t}})=-\\log(p_{\\textrm{t}}). ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_14", "text": " The CE loss can be seen as the blue (top) curve in Figure 1. One notable property of this loss, which can be easily seen in its plot, is that even examples that are easily classified (pt≫.5much-greater-thansubscript𝑝t.5p_{\\textrm{t}}\\gg.5) incur a loss with non-trivial magnitude. When summed over a large number of easy examples, these small loss values can overwhelm the rare class. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_15", "text": " A common method for addressing class imbalance is to introduce a weighting factor α∈(0,1)𝛼01\\alpha\\in(0,1) for class 111 and 1−α1𝛼1-\\alpha for class −11-1. In practice α𝛼\\alpha may be set by inverse class frequency or treated as a hyperparameter to set by cross validation. For notational convenience, we define αtsubscript𝛼t\\alpha_{\\textrm{t}} analogously to how we defined ptsubscript𝑝tp_{\\textrm{t}}. We write the α𝛼\\alpha-balanced CE loss as: CE​(pt)=−αt​log⁡(pt).CEsubscript𝑝tsubscript𝛼tsubscript𝑝t\\textrm{CE}(p_{\\textrm{t}})=-\\alpha_{\\textrm{t}}\\log(p_{\\textrm{t}}). (3) This loss is a simple extension to CE that we consider as an experimental baseline for our proposed focal loss. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_16", "text": " As our experiments will show, the large class imbalance encountered during training of dense detectors overwhelms the cross entropy loss. Easily classified negatives comprise the majority of the loss and dominate the gradient. While α𝛼\\alpha balances the importance of positive/negative examples, it does not differentiate between easy/hard examples. Instead, we propose to reshape the loss function to down-weight easy examples and thus focus training on hard negatives. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_17", "text": " More formally, we propose to add a modulating factor (1−pt)γsuperscript1subscript𝑝t𝛾(1-p_{\\textrm{t}})^{\\gamma} to the cross entropy loss, with tunable focusing parameter γ≥0𝛾0\\gamma\\geq 0. We define the focal loss as: FL​(pt)=−(1−pt)γ​log⁡(pt).FLsubscript𝑝tsuperscript1subscript𝑝t𝛾subscript𝑝t\\textrm{FL}(p_{\\textrm{t}})=-(1-p_{\\textrm{t}})^{\\gamma}\\log(p_{\\textrm{t}}). (4) ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_18", "text": " The focal loss is visualized for several values of γ∈(0,5)𝛾05\\gamma\\in(0,5) in Figure 1. We note two properties of the focal loss. (1) When an example is misclassified and ptsubscript𝑝tp_{\\textrm{t}} is small, the modulating factor is near 111 and the loss is unaffected. As pt→1→subscript𝑝t1p_{\\textrm{t}}\\rightarrow 1, the factor goes to 0 and the loss for well-classified examples is down-weighted. (2) The focusing parameter γ𝛾\\gamma smoothly adjusts the rate at which easy examples are down-weighted. When γ=0𝛾0\\gamma=0, FL is equivalent to CE, and as γ𝛾\\gamma is increased the effect of the modulating factor is likewise increased (we found γ=2𝛾2\\gamma=2 to work best in our experiments). 
", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_19", "text": " Intuitively, the modulating factor reduces the loss contribution from easy examples and extends the range in which an example receives low loss. For instance, with γ=2𝛾2\\gamma=2, an example classified with pt=0.9subscript𝑝t0.9p_{\\textrm{t}}=0.9 would have 100×100\\times lower loss compared with CE and with pt≈0.968subscript𝑝t0.968p_{\\textrm{t}}\\approx 0.968 it would have 1000×1000\\times lower loss. This in turn increases the importance of correcting misclassified examples (whose loss is scaled down by at most 4×4\\times for pt≤.5subscript𝑝t.5p_{\\textrm{t}}\\leq.5 and γ=2𝛾2\\gamma=2). ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_20", "text": " In practice we use an α𝛼\\alpha-balanced variant of the focal loss: FL​(pt)=−αt​(1−pt)γ​log⁡(pt).FLsubscript𝑝tsubscript𝛼tsuperscript1subscript𝑝t𝛾subscript𝑝t\\textrm{FL}(p_{\\textrm{t}})=-\\alpha_{\\textrm{t}}(1-p_{\\textrm{t}})^{\\gamma}\\log(p_{\\textrm{t}}). (5) We adopt this form in our experiments as it yields slightly improved accuracy over the non-α𝛼\\alpha-balanced form. Finally, we note that the implementation of the loss layer combines the sigmoid operation for computing p𝑝p with the loss computation, resulting in greater numerical stability. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_21", "text": " While in our main experimental results we use the focal loss definition above, its precise form is not crucial. In the appendix we consider other instantiations of the focal loss and demonstrate that these can be equally effective. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_22", "text": " Binary classification models are by default initialized to have equal probability of outputting either y=−1𝑦1y=-1 or 111. Under such an initialization, in the presence of class imbalance, the loss due to the frequent class can dominate total loss and cause instability in early training. To counter this, we introduce the concept of a ‘prior’ for the value of p𝑝p estimated by the model for the rare class (foreground) at the start of training. We denote the prior by π𝜋\\pi and set it so that the model’s estimated p𝑝p for examples of the rare class is low, e.g. 0.010.010.01. We note that this is a change in model initialization (see §4.1) and not of the loss function. We found this to improve training stability for both the cross entropy and focal loss in the case of heavy class imbalance. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_23", "text": " Two-stage detectors are often trained with the cross entropy loss without use of α𝛼\\alpha-balancing or our proposed loss. Instead, they address class imbalance through two mechanisms: (1) a two-stage cascade and (2) biased minibatch sampling. The first cascade stage is an object proposal mechanism (35, 24, 28) that reduces the nearly infinite set of possible object locations down to one or two thousand. Importantly, the selected proposals are not random, but are likely to correspond to true object locations, which removes the vast majority of easy negatives. When training the second stage, biased sampling is typically used to construct minibatches that contain, for instance, a 1:3 ratio of positive to negative examples. This ratio is like an implicit α𝛼\\alpha-balancing factor that is implemented via sampling. 
Our proposed focal loss is designed to address these mechanisms in a one-stage detection system directly via the loss function. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_24", "text": " RetinaNet is a single, unified network composed of a backbone network and two task-specific subnetworks. The backbone is responsible for computing a convolutional feature map over an entire input image and is an off-the-self convolutional network. The first subnet performs convolutional object classification on the backbone’s output; the second subnet performs convolutional bounding box regression. The two subnetworks feature a simple design that we propose specifically for one-stage, dense detection, see Figure 3. While there are many possible choices for the details of these components, most design parameters are not particularly sensitive to exact values as shown in the experiments. We describe each component of RetinaNet next. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_25", "text": " We adopt the Feature Pyramid Network (FPN) from as the backbone network for RetinaNet. In brief, FPN augments a standard convolutional network with a top-down pathway and lateral connections so the network efficiently constructs a rich, multi-scale feature pyramid from a single resolution input image, see Figure 3(a)-(b). Each level of the pyramid can be used for detecting objects at a different scale. FPN improves multi-scale predictions from fully convolutional networks (FCN) , as shown by its gains for RPN and DeepMask-style proposals , as well at two-stage detectors such as Fast R-CNN or Mask R-CNN . ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_26", "text": " Following , we build FPN on top of the ResNet architecture . We construct a pyramid with levels P3subscript𝑃3P_{3} through P7subscript𝑃7P_{7}, where l𝑙l indicates pyramid level (Plsubscript𝑃𝑙P_{l} has resolution 2lsuperscript2𝑙2^{l} lower than the input). As in all pyramid levels have C=256𝐶256C=256 channels. Details of the pyramid generally follow with a few modest differences.222RetinaNet uses feature pyramid levels P3subscript𝑃3P_{3} to P7subscript𝑃7P_{7}, where P3subscript𝑃3P_{3} to P5subscript𝑃5P_{5} are computed from the output of the corresponding ResNet residual stage (C3subscript𝐶3C_{3} through C5subscript𝐶5C_{5}) using top-down and lateral connections just as in , P6subscript𝑃6P_{6} is obtained via a 3×\\times3 stride-2 conv on C5subscript𝐶5C_{5}, and P7subscript𝑃7P_{7} is computed by applying ReLU followed by a 3×\\times3 stride-2 conv on P6subscript𝑃6P_{6}. This differs slightly from : (1) we don’t use the high-resolution pyramid level P2subscript𝑃2P_{2} for computational reasons, (2) P6subscript𝑃6P_{6} is computed by strided convolution instead of downsampling, and (3) we include P7subscript𝑃7P_{7} to improve large object detection. These minor modifications improve speed while maintaining accuracy. While many design choices are not crucial, we emphasize the use of the FPN backbone is; preliminary experiments using features from only the final ResNet layer yielded low AP. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_27", "text": " We use translation-invariant anchor boxes similar to those in the RPN variant in . The anchors have areas of 322superscript32232^{2} to 5122superscript5122512^{2} on pyramid levels P3subscript𝑃3P_{3} to P7subscript𝑃7P_{7}, respectively. 
As in , at each pyramid level we use anchors at three aspect ratios {1\\{1:2,22, 111:111, 222:1}1\\}. For denser scale coverage than in , at each level we add anchors of sizes {20superscript202^{0}, 21/3superscript2132^{1/3}, 22/3superscript2232^{2/3}} of the original set of 3 aspect ratio anchors. This improve AP in our setting. In total there are A=9𝐴9A=9 anchors per level and across levels they cover the scale range 32 - 813 pixels with respect to the network’s input image. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_28", "text": " Each anchor is assigned a length K𝐾K one-hot vector of classification targets, where K𝐾K is the number of object classes, and a 4-vector of box regression targets. We use the assignment rule from RPN but modified for multi-class detection and with adjusted thresholds. Specifically, anchors are assigned to ground-truth object boxes using an intersection-over-union (IoU) threshold of 0.5; and to background if their IoU is in (0, 0.4). As each anchor is assigned to at most one object box, we set the corresponding entry in its length K𝐾K label vector to 111 and all other entries to 00. If an anchor is unassigned, which may happen with overlap in (0.4, 0.5), it is ignored during training. Box regression targets are computed as the offset between each anchor and its assigned object box, or omitted if there is no assignment. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_29", "text": " The classification subnet predicts the probability of object presence at each spatial position for each of the A𝐴A anchors and K𝐾K object classes. This subnet is a small FCN attached to each FPN level; parameters of this subnet are shared across all pyramid levels. Its design is simple. Taking an input feature map with C𝐶C channels from a given pyramid level, the subnet applies four 3×\\times3 conv layers, each with C𝐶C filters and each followed by ReLU activations, followed by a 3×\\times3 conv layer with K​A𝐾𝐴KA filters. Finally sigmoid activations are attached to output the K​A𝐾𝐴KA binary predictions per spatial location, see Figure 3 (c). We use C=256𝐶256C=256 and A=9𝐴9A=9 in most experiments. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_30", "text": " In contrast to RPN , our object classification subnet is deeper, uses only 3×\\times3 convs, and does not share parameters with the box regression subnet (described next). We found these higher-level design decisions to be more important than specific values of hyperparameters. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_31", "text": " In parallel with the object classification subnet, we attach another small FCN to each pyramid level for the purpose of regressing the offset from each anchor box to a nearby ground-truth object, if one exists. The design of the box regression subnet is identical to the classification subnet except that it terminates in 4​A4𝐴4A linear outputs per spatial location, see Figure 3 (d). For each of the A𝐴A anchors per spatial location, these 444 outputs predict the relative offset between the anchor and the ground-truth box (we use the standard box parameterization from R-CNN ). We note that unlike most recent work, we use a class-agnostic bounding box regressor which uses fewer parameters and we found to be equally effective. The object classification subnet and the box regression subnet, though sharing a common structure, use separate parameters. 
", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_32", "text": " RetinaNet forms a single FCN comprised of a ResNet-FPN backbone, a classification subnet, and a box regression subnet, see Figure 3. As such, inference involves simply forwarding an image through the network. To improve speed, we only decode box predictions from at most 1k top-scoring predictions per FPN level, after thresholding detector confidence at 0.05. The top predictions from all levels are merged and non-maximum suppression with a threshold of 0.5 is applied to yield the final detections. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_33", "text": " We use the focal loss introduced in this work as the loss on the output of the classification subnet. As we will show in §5, we find that γ=2𝛾2\\gamma=2 works well in practice and the RetinaNet is relatively robust to γ∈(0.5,5)𝛾0.55\\gamma\\in(0.5,5). We emphasize that when training RetinaNet, the focal loss is applied to all ∼similar-to\\scriptstyle\\sim100k anchors in each sampled image. This stands in contrast to common practice of using heuristic sampling (RPN) or hard example mining (OHEM, SSD) to select a small set of anchors (e.g., 256) for each minibatch. The total focal loss of an image is computed as the sum of the focal loss over all ∼similar-to\\scriptstyle\\sim100k anchors, normalized by the number of anchors assigned to a ground-truth box. We perform the normalization by the number of assigned anchors, not total anchors, since the vast majority of anchors are easy negatives and receive negligible loss values under the focal loss. Finally we note that α𝛼\\alpha, the weight assigned to the rare class, also has a stable range, but it interacts with γ𝛾\\gamma making it necessary to select the two together (see Tables 1a and 1b). In general α𝛼\\alpha should be decreased slightly as γ𝛾\\gamma is increased (for γ=2𝛾2\\gamma=2, α=0.25𝛼0.25\\alpha=0.25 works best). ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_34", "text": " We experiment with ResNet-50-FPN and ResNet-101-FPN backbones . The base ResNet-50 and ResNet-101 models are pre-trained on ImageNet1k; we use the models released by . New layers added for FPN are initialized as in . All new conv layers except the final one in the RetinaNet subnets are initialized with bias b=0𝑏0b=0 and a Gaussian weight fill with σ=0.01𝜎0.01\\sigma=0.01. For the final conv layer of the classification subnet, we set the bias initialization to b=−log⁡((1−π)/π)𝑏1𝜋𝜋b=-\\log((1-\\pi)/\\pi), where π𝜋\\pi specifies that at the start of training every anchor should be labeled as foreground with confidence of ∼similar-to\\scriptstyle\\simπ𝜋\\pi. We use π=.01𝜋.01\\pi=.01 in all experiments, although results are robust to the exact value. As explained in §3.3, this initialization prevents the large number of background anchors from generating a large, destabilizing loss value in the first iteration of training. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_35", "text": " RetinaNet is trained with stochastic gradient descent (SGD). We use synchronized SGD over 8 GPUs with a total of 16 images per minibatch (2 images per GPU). Unless otherwise specified, all models are trained for 90k iterations with an initial learning rate of 0.01, which is then divided by 10 at 60k and again at 80k iterations. We use horizontal image flipping as the only form of data augmentation unless otherwise noted. 
Weight decay of 0.0001 and momentum of 0.9 are used. The training loss is the sum the focal loss and the standard smooth L1subscript𝐿1L_{1} loss used for box regression . Training time ranges between 10 and 35 hours for the models in Table 1e. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_36", "text": " We present experimental results on the bounding box detection track of the challenging COCO benchmark . For training, we follow common practice (1, 20) and use the COCO trainval35k split (union of 80k images from train and a random 35k subset of images from the 40k image val split). We report lesion and sensitivity studies by evaluating on the minival split (the remaining 5k images from val). For our main results, we report COCO AP on the test-dev split, which has no public labels and requires use of the evaluation server. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_37", "text": " We run numerous experiments to analyze the behavior of the loss function for dense detection along with various optimization strategies. For all experiments we use depth 50 or 101 ResNets with a Feature Pyramid Network (FPN)  constructed on top. For all ablation studies we use an image scale of 600 pixels for training and testing. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_38", "text": " Our first attempt to train RetinaNet uses standard cross entropy (CE) loss without any modifications to the initialization or learning strategy. This fails quickly, with the network diverging during training. However, simply initializing the last layer of our model such that the prior probability of detecting an object is π=.01𝜋.01\\pi=.01 (see §4.1) enables effective learning. Training RetinaNet with ResNet-50 and this initialization already yields a respectable AP of 30.2 on COCO. Results are insensitive to the exact value of π𝜋\\pi so we use π=.01𝜋.01\\pi=.01 for all experiments. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_39", "text": " Our next attempt to improve learning involved using the α𝛼\\alpha-balanced CE loss described in §3.1. Results for various α𝛼\\alpha are shown in Table 1a. Setting α=.75𝛼.75\\alpha=.75 gives a gain of 0.9 points AP. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_40", "text": " Results using our proposed focal loss are shown in Table 1b. The focal loss introduces one new hyperparameter, the focusing parameter γ𝛾\\gamma, that controls the strength of the modulating term. When γ=0𝛾0\\gamma=0, our loss is equivalent to the CE loss. As γ𝛾\\gamma increases, the shape of the loss changes so that “easy” examples with low loss get further discounted, see Figure 1. FL shows large gains over CE as γ𝛾\\gamma is increased. With γ=2𝛾2\\gamma=2, FL yields a 2.9 AP improvement over the α𝛼\\alpha-balanced CE loss. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_41", "text": " For the experiments in Table 1b, for a fair comparison we find the best α𝛼\\alpha for each γ𝛾\\gamma. We observe that lower α𝛼\\alpha’s are selected for higher γ𝛾\\gamma’s (as easy negatives are down-weighted, less emphasis needs to be placed on the positives). Overall, however, the benefit of changing γ𝛾\\gamma is much larger, and indeed the best α𝛼\\alpha’s ranged in just (.25,.75) (we tested α∈(.01,.999)𝛼.01.999\\alpha\\in(.01,.999)). 
We use γ=2.0 with α=.25 for all experiments, but α=.5 works nearly as well (.4 AP lower). ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_42", "text": " To understand the focal loss better, we analyze the empirical distribution of the loss of a converged model. For this, we take our default ResNet-101 600-pixel model trained with γ=2 (which has 36.0 AP). We apply this model to a large number of random images and sample the predicted probability for ∼10^7 negative windows and ∼10^5 positive windows. Next, separately for positives and negatives, we compute FL for these samples, and normalize the loss such that it sums to one. Given the normalized loss, we can sort the loss from lowest to highest and plot its cumulative distribution function (CDF) for both positive and negative samples and for different settings for γ (even though the model was trained with γ=2). ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_43", "text": " Cumulative distribution functions for positive and negative samples are shown in Figure 4. If we observe the positive samples, we see that the CDF looks fairly similar for different values of γ. For example, approximately 20% of the hardest positive samples account for roughly half of the positive loss; as γ increases, more of the loss gets concentrated in the top 20% of examples, but the effect is minor. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_44", "text": " The effect of γ on negative samples is dramatically different. For γ=0, the positive and negative CDFs are quite similar. However, as γ increases, substantially more weight becomes concentrated on the hard negative examples. In fact, with γ=2 (our default setting), the vast majority of the loss comes from a small fraction of samples. As can be seen, FL can effectively discount the effect of easy negatives, focusing all attention on the hard negative examples. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_45", "text": " OHEM was proposed to improve training of two-stage detectors by constructing minibatches using high-loss examples. Specifically, in OHEM each example is scored by its loss, non-maximum suppression (nms) is then applied, and a minibatch is constructed with the highest-loss examples. The nms threshold and batch size are tunable parameters. Like the focal loss, OHEM puts more emphasis on misclassified examples, but unlike FL, OHEM completely discards easy examples. We also implement a variant of OHEM used in SSD : after applying nms to all examples, the minibatch is constructed to enforce a 1:3 ratio between positives and negatives to help ensure each minibatch has enough positives. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_46", "text": " We test both OHEM variants in our setting of one-stage detection, which has large class imbalance. Results for the original OHEM strategy and the ‘OHEM 1:3’ strategy for selected batch sizes and nms thresholds are shown in Table 1d. These results use ResNet-101; our baseline trained with FL achieves 36.0 AP for this setting. In contrast, the best setting for OHEM (no 1:3 ratio, batch size 128, nms of .5) achieves 32.8 AP. 
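The CDF analysis described above can be mimicked on synthetic data. The sketch below is only a toy stand-in: `loss_cdf` is my name, the Beta-distributed probabilities merely imitate a pool of mostly easy negatives, and the real analysis sampled ∼10^7 negative and ∼10^5 positive windows from a converged ResNet-101 model.

```python
import numpy as np

def loss_cdf(p_t, gamma):
    """Cumulative distribution of the normalized focal loss over samples,
    given their predicted probabilities for the true class: compute FL per
    sample, normalize so the total is one, sort ascending, then cumulate."""
    fl = -((1.0 - p_t) ** gamma) * np.log(p_t + 1e-12)
    fl = np.sort(fl) / fl.sum()
    return np.cumsum(fl)

rng = np.random.default_rng(0)
neg_pt = rng.beta(20, 2, size=100_000)   # "easy" negatives: p_t close to 1
for g in (0.0, 0.5, 1.0, 2.0):
    cdf = loss_cdf(neg_pt, g)
    # fraction of the total loss carried by the easiest 80% of samples;
    # it shrinks as gamma grows, i.e. loss concentrates on hard examples
    print(g, round(float(cdf[int(0.8 * len(cdf))]), 3))
```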
This is a gap of 3.2 AP, showing FL is more effective than OHEM for training dense detectors. We note that we tried other parameter settings and variants for OHEM but did not achieve better results. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_47", "text": " Finally, in early experiments, we attempted to train with the hinge loss on p_t, which sets the loss to 0 above a certain value of p_t. However, this was unstable and we did not manage to obtain meaningful results. Results exploring alternate loss functions are in the appendix. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_48", "text": " One of the most important design factors in a one-stage detection system is how densely it covers the space of possible image boxes. Two-stage detectors can classify boxes at any position, scale, and aspect ratio using a region pooling operation . In contrast, as one-stage detectors use a fixed sampling grid, a popular approach for achieving high coverage of boxes in these approaches is to use multiple ‘anchors’ at each spatial position to cover boxes of various scales and aspect ratios. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_49", "text": " We sweep over the number of scale and aspect ratio anchors used at each spatial position and each pyramid level in FPN. We consider cases from a single square anchor at each location to 12 anchors per location spanning 4 sub-octave scales (2^(k/4), for k≤3) and 3 aspect ratios (0.5, 1, 2). Results using ResNet-50 are shown in Table 1c. A surprisingly good AP (30.3) is achieved using just one square anchor. However, the AP can be improved by nearly 4 points (to 34.0) when using 3 scales and 3 aspect ratios per location. We used this setting for all other experiments in this work. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_50", "text": " Finally, we note that increasing beyond 6-9 anchors did not show further gains. Thus while two-stage systems can classify arbitrary boxes in an image, the saturation of performance w.r.t. density implies the higher potential density of two-stage systems may not offer an advantage. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_51", "text": " Larger backbone networks yield higher accuracy, but also slower inference speeds. Likewise for input image scale (defined by the shorter image side). We show the impact of these two factors in Table 1e. In Figure 2 we plot the speed/accuracy trade-off curve for RetinaNet and compare it to recent methods using public numbers on COCO test-dev. The plot reveals that RetinaNet, enabled by our focal loss, forms an upper envelope over all existing methods, discounting the low-accuracy regime. RetinaNet with ResNet-101-FPN and a 600 pixel image scale (which we denote by RetinaNet-101-600 for simplicity) matches the accuracy of the recently published ResNet-101-FPN Faster R-CNN , while running in 122 ms per image compared to 172 ms (both measured on an Nvidia M40 GPU). Using larger scales allows RetinaNet to surpass the accuracy of all two-stage approaches, while still being faster. For faster runtimes, there is only one operating point (500 pixel input) at which using ResNet-50-FPN improves over ResNet-101-FPN. Addressing the high frame rate regime will likely require special network design, as in , and is beyond the scope of this work. 
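The anchor configurations swept above (e.g., 4 sub-octave scales 2^(k/4) for k ≤ 3 crossed with aspect ratios 0.5, 1, 2, giving 12 anchors per location) can be generated with a small helper. This is a hedged sketch of one common convention (constant area per scale, aspect ratio taken as h/w); `make_anchors` and the base size of 32 are illustrative choices, not values taken from the paper.

```python
import numpy as np

def make_anchors(base_size, scales, aspect_ratios):
    """Generate (w, h) anchor shapes for one pyramid level.

    Each anchor keeps roughly the area (base_size * scale)**2 while its
    aspect ratio (interpreted here as h / w) is varied, which is one
    common way to realize the 'scales x aspect ratios' grid above."""
    shapes = []
    for s in scales:
        area = (base_size * s) ** 2
        for ar in aspect_ratios:
            w = np.sqrt(area / ar)
            h = w * ar
            shapes.append((w, h))
    return np.array(shapes)

# The densest configuration described above: 4 sub-octave scales
# (2**(k/4) for k <= 3) and 3 aspect ratios (0.5, 1, 2) -> 12 anchors.
scales = [2 ** (k / 4) for k in range(4)]
ratios = [0.5, 1.0, 2.0]
anchors = make_anchors(base_size=32, scales=scales, aspect_ratios=ratios)
print(anchors.shape)   # (12, 2)
```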
We note that after publication, faster and more accurate results can now be obtained by a variant of Faster R-CNN from . ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_52", "text": " We evaluate RetinaNet on the challenging COCO dataset and compare test-dev results to recent state-of-the-art methods including both one-stage and two-stage models. Results are presented in Table 2 for our RetinaNet-101-800 model trained using scale jitter and for 1.5× longer than the models in Table 1e (giving a 1.3 AP gain). Compared to existing one-stage methods, our approach achieves a healthy 5.9 point AP gap (39.1 vs. 33.2) with the closest competitor, DSSD , while also being faster, see Figure 2. Compared to recent two-stage methods, RetinaNet achieves a 2.3 point gap above the top-performing Faster R-CNN model based on Inception-ResNet-v2-TDM . Plugging in ResNeXt-32x8d-101-FPN as the RetinaNet backbone further improves results another 1.7 AP, surpassing 40 AP on COCO. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_53", "text": " In this work, we identify class imbalance as the primary obstacle preventing one-stage object detectors from surpassing top-performing, two-stage methods. To address this, we propose the focal loss which applies a modulating term to the cross entropy loss in order to focus learning on hard negative examples. Our approach is simple and highly effective. We demonstrate its efficacy by designing a fully convolutional one-stage detector and report extensive experimental analysis showing that it achieves state-of-the-art accuracy and speed. Source code is available at https://github.com/facebookresearch/Detectron . ", "title": "Focal Loss for Dense Object Detection" } ]
How are the images for this challenge collected for each category?
Training images are taken directly from ImageNet [39].
[ 39 ]
[ { "id": "1409.0575_all_0", "text": " The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) has been running annually for five years (since 2010) and has become the standard benchmark for large-scale object recognition.111In this paper, we will be using the term object recognition broadly to encompass both image classification (a task requiring an algorithm to determine what object classes are present in the image) as well as object detection (a task requiring an algorithm to localize all objects present in the image). ILSVRC follows in the footsteps of the PASCAL VOC challenge (Everingham et al.,, 2012), established in 2005, which set the precedent for standardized evaluation of recognition algorithms in the form of yearly competitions. As in PASCAL VOC, ILSVRC consists of two components: (1) a publically available dataset, and (2) an annual competition and corresponding workshop. The dataset allows for the development and comparison of categorical object recognition algorithms, and the competition and workshop provide a way to track the progress and discuss the lessons learned from the most successful and innovative entries each year. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_1", "text": " The publically released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld.222 In 2010, the test annotations were later released publicly; since then the test annotation have been kept hidden. Participants train their algorithms using the training images and then automatically annotate the test images. These predicted annotations are submitted to the evaluation server. Results of the evaluation are revealed at the end of the competition period and authors are invited to share insights at the workshop held at the International Conference on Computer Vision (ICCV) or European Conference on Computer Vision (ECCV) in alternate years. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_2", "text": " ILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers,” and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_3", "text": " In creating the dataset, several challenges had to be addressed. Scaling up from 19,737 images in PASCAL VOC 2010 to 1,461,406 in ILSVRC 2010 and from 20 object classes to 1000 object classes brings with it several challenges. It is no longer feasible for a small group of annotators to annotate the data as is done for other datasets (Fei-Fei et al.,, 2004; Criminisi,, 2004; Everingham et al.,, 2012; Xiao et al.,, 2010). Instead we turn to designing novel crowdsourcing approaches for collecting large-scale annotations (Su et al.,, 2012; Deng et al.,, 2009, 2014). ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_4", "text": " Some of the 1000 object classes may not be as easy to annotate as the 20 categories of PASCAL VOC: e.g., bananas which appear in bunches may not be as easy to delineate as the basic-level categories of aeroplanes or cars. 
Having more than a million images makes it infeasible to annotate the locations of all objects (much less with object segmentations, human body parts, and other detailed annotations that subsets of PASCAL VOC contain). New evaluation criteria have to be defined to take into account the facts that obtaining perfect manual annotations in this setting may be infeasible. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_5", "text": " Once the challenge dataset was collected, its scale allowed for unprecedented opportunities both in evaluation of object recognition algorithms and in developing new techniques. Novel algorithmic innovations emerge with the availability of large-scale training data. The broad spectrum of object categories motivated the need for algorithms that are even able to distinguish classes which are visually very similar. We highlight the most successful of these algorithms in this paper, and compare their performance with human-level accuracy. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_6", "text": " Finally, the large variety of object classes in ILSVRC allows us to perform an analysis of statistical properties of objects and their impact on recognition algorithms. This type of analysis allows for a deeper understanding of object recognition, and for designing the next generation of general object recognition algorithms. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_7", "text": " This paper has three key goals: 1. To discuss the challenges of creating this large-scale object recognition benchmark dataset, 2. To highlight the developments in object classification and detection that have resulted from this effort, and 3. To take a closer look at the current state of the field of categorical object recognition. The paper may be of interest to researchers working on creating large-scale datasets, as well as to anybody interested in better understanding the history and the current state of large-scale object recognition. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_8", "text": " The collected dataset and additional information about ILSVRC can be found at: http://image-net.org/challenges/LSVRC/ ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_9", "text": " We briefly discuss some prior work in constructing benchmark image datasets. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_10", "text": " Caltech 101 (Fei-Fei et al.,, 2004) was among the first standardized datasets for multi-category image classification, with 101 object classes and commonly 15-30 training images per class. Caltech 256 (Griffin et al.,, 2007) increased the number of object classes to 256 and added images with greater scale and background variability. The TinyImages dataset (Torralba et al.,, 2008) contains 80 million 32x32 low resolution images collected from the internet using synsets in WordNet (Miller,, 1995) as queries. However, since this data has not been manually verified, there are many errors, making it less suitable for algorithm evaluation. Datasets such as 15 Scenes (Oliva and Torralba,, 2001; Fei-Fei and Perona,, 2005; Lazebnik et al.,, 2006) or recent Places (Zhou et al.,, 2014) provide a single scene category label (as opposed to an object category). 
", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_11", "text": " The ImageNet dataset (Deng et al.,, 2009) is the backbone of ILSVRC. ImageNet is an image dataset organized according to the WordNet hierarchy (Miller,, 1995). Each concept in WordNet, possibly described by multiple words or word phrases, is called a “synonym set” or “synset”. ImageNet populates 21,841 synsets of WordNet with an average of 650 manually verified and full resolution images. As a result, ImageNet contains 14,197,122 annotated images organized by the semantic hierarchy of WordNet (as of August 2014). ImageNet is larger in scale and diversity than the other image classification datasets. ILSVRC uses a subset of ImageNet images for training the algorithms and some of ImageNet’s image collection protocols for annotating additional images for testing the algorithms. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_12", "text": " Many datasets aim to provide richer image annotations beyond image-category labels. LabelMe (Russell et al.,, 2007) contains general photographs with multiple objects per image. It has bounding polygon annotations around objects, but the object names are not standardized: annotators are free to choose which objects to label and what to name each object. The SUN2012 (Xiao et al.,, 2010) dataset contains 16,873 manually cleaned up and fully annotated images more suitable for standard object detection training and evaluation. SIFT Flow (Liu et al.,, 2011) contains 2,688 images labeled using the LabelMe system. The LotusHill dataset (Yao et al.,, 2007) contains very detailed annotations of objects in 636,748 images and video frames, but it is not available for free. Several datasets provide pixel-level segmentations: for example, MSRC dataset (Criminisi,, 2004) with 591 images and 23 object classes, Stanford Background Dataset (Gould et al.,, 2009) with 715 images and 8 classes, and the Berkeley Segmentation dataset (Arbelaez et al.,, 2011) with 500 images annotated with object boundaries. OpenSurfaces segments surfaces from consumer photographs and annotates them with surface properties, including material, texture, and contextual information (Bell et al.,, 2013) . ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_13", "text": " The closest to ILSVRC is the PASCAL VOC dataset (Everingham et al.,, 2010, 2014), which provides a standardized test bed for object detection, image classification, object segmentation, person layout, and action classification. Much of the design choices in ILSVRC have been inspired by PASCAL VOC and the similarities and differences between the datasets are discussed at length throughout the paper. ILSVRC scales up PASCAL VOC’s goal of standardized training and evaluation of recognition algorithms by more than an order of magnitude in number of object classes and images: PASCAL VOC 2012 has 20 object classes and 21,738 images compared to ILSVRC2012 with 1000 object classes and 1,431,167 annotated images. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_14", "text": " The recently released COCO dataset (Lin et al., 2014b, ) contains more than 328,000 images with 2.5 million object instances manually segmented. It has fewer object categories than ILSVRC (91 in COCO versus 200 in ILSVRC object detection) but more instances per category (27K on average compared to about 1K in ILSVRC object detection). 
Further, it contains object segmentation annotations which are not currently available in ILSVRC. COCO is likely to become another important large-scale benchmark. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_15", "text": " ILSVRC makes extensive use of Amazon Mechanical Turk to obtain accurate annotations (Sorokin and Forsyth,, 2008). Works such as (Welinder et al.,, 2010; Sheng et al.,, 2008; Vittayakorn and Hays,, 2011) describe quality control mechanisms for this marketplace. (Vondrick et al.,, 2012) provides a detailed overview of crowdsourcing video annotation. A related line of work is to obtain annotations through well-designed games, e.g. (von Ahn and Dabbish,, 2005). Our novel approaches to crowdsourcing accurate image annotations are in Sections 3.1.3, 3.2.1 and 3.3.3. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_16", "text": " There are several datasets with standardized online evaluation similar to ILSVRC: the aforementioned PASCAL VOC (Everingham et al.,, 2012), Labeled Faces in the Wild (Huang et al.,, 2007) for unconstrained face recognition, Reconstruction meets Recognition (Urtasun et al.,, 2014) for 3D reconstruction and KITTI (Geiger et al.,, 2013) for computer vision in autonomous driving. These datasets along with ILSVRC help benchmark progress in different areas of computer vision. Works such as (Torralba and Efros,, 2011) emphasize the importance of examining the bias inherent in any standardized dataset. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_17", "text": " We begin with a brief overview of ILSVRC challenge tasks in Section 2. Dataset collection and annotation are described at length in Section 3. Section 4 discusses the evaluation criteria of algorithms in the large-scale recognition setting. Section 5 provides an overview of the methods developed by ILSVRC participants. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_18", "text": " Section 6 contains an in-depth analysis of ILSVRC results: Section 6.1 documents the progress of large-scale recognition over the years, Section 6.2 concludes that ILSVRC results are statistically significant, Section 6.3 thoroughly analyzes the current state of the field of object recognition, and Section 6.4 compares state-of-the-art computer vision accuracy with human accuracy. We conclude and discuss lessons learned from ILSVRC in Section 7. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_19", "text": " The goal of ILSVRC is to estimate the content of photographs for the purpose of retrieval and automatic annotation. Test images are presented with no initial annotation, and algorithms have to produce labelings specifying what objects are present in the images. New test images are collected and labeled especially for this competition and are not part of the previously published ImageNet dataset (Deng et al.,, 2009). ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_20", "text": " ILSVRC over the years has consisted of one or more of the following tasks (years in parentheses):333In addition, ILSVRC in 2012 also included a taster fine-grained classification task, where algorithms would classify dog photographs into one of 120 dog breeds (Khosla et al.,, 2011). 
Fine-grained classification has evolved into its own Fine-Grained classification challenge in 2013 (Berg et al.,, 2013), which is outside the scope of this paper. 1. Image classification (2010-2014): Algorithms produce a list of object categories present in the image. 2. Single-object localization (2011-2014): Algorithms produce a list of object categories present in the image, along with an axis-aligned bounding box indicating the position and scale of one instance of each object category. 3. Object detection (2013-2014): Algorithms produce a list of object categories present in the image along with an axis-aligned bounding box indicating the position and scale of every instance of each object category. This section provides an overview and history of each of the three tasks. Table 1 shows summary statistics. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_21", "text": " Data for the image classification task consists of photographs collected from Flickr444www.flickr.com and other search engines, manually labeled with the presence of one of 1000 object categories. Each image contains one ground truth label. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_22", "text": " For each image, algorithms produce a list of object categories present in the image. The quality of a labeling is evaluated based on the label that best matches the ground truth label for the image (see Section 4.1). ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_23", "text": " Constructing ImageNet was an effort to scale up an image classification dataset to cover most nouns in English using tens of millions of manually verified photographs (Deng et al.,, 2009). The image classification task of ILSVRC came as a direct extension of this effort. A subset of categories and images was chosen and fixed to provide a standardized benchmark while the rest of ImageNet continued to grow. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_24", "text": " The single-object localization task, introduced in 2011, built off of the image classification task to evaluate the ability of algorithms to learn the appearance of the target object itself rather than its image context. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_25", "text": " Data for the single-object localization task consists of the same photographs collected for the image classification task, hand labeled with the presence of one of 1000 object categories. Each image contains one ground truth label. Additionally, every instance of this category is annotated with an axis-aligned bounding box. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_26", "text": " For each image, algorithms produce a list of object categories present in the image, along with a bounding box indicating the position and scale of one instance of each object category. The quality of a labeling is evaluated based on the object category label that best matches the ground truth label, with the additional requirement that the location of the predicted instance is also accurate (see Section 4.2). ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_27", "text": " The object detection task went a step beyond single-object localization and tackled the problem of localizing multiple object categories in the image. 
This task has been a part of the PASCAL VOC for many years on the scale of 20 object categories and tens of thousands of images, but scaling it up by an order of magnitude in object categories and in images proved to be very challenging from a dataset collection and annotation point of view (see Section 3.3). ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_28", "text": " Data for the detection tasks consists of new photographs collected from Flickr using scene-level queries. The images are annotated with axis-aligned bounding boxes indicating the position and scale of every instance of each target object category. The training set is additionally supplemented with (a) data from the single-object localization task, which contains annotations for all instances of just one object category, and (b) negative images known not to contain any instance of some object categories. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_29", "text": " For each image, algorithms produce bounding boxes indicating the position and scale of all instances of all target object categories. The quality of labeling is evaluated by recall, or number of target object instances detected, and precision, or the number of spurious detections produced by the algorithm (see Section 4.3). ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_30", "text": " Our process of constructing large-scale object recognition image datasets consists of three key steps. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_31", "text": " The first step is defining the set of target object categories. To do this, we select from among the existing ImageNet (Deng et al.,, 2009) categories. By using WordNet as a backbone (Miller,, 1995), ImageNet already takes care of disambiguating word meanings and of combining together synonyms into the same object category. Since the selection of object categories needs to be done only once per challenge task, we use a combination of automatic heuristics and manual post-processing to create the list of target categories appropriate for each task. For example, for image classification we may include broader scene categories such as a type of beach, but for single-object localization and object detection we want to focus only on object categories which can be unambiguously localized in images (Sections 3.1.1 and 3.3.1). ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_32", "text": " The second step is collecting a diverse set of candidate images to represent the selected categories. We use both automatic and manual strategies on multiple search engines to do the image collection. The process is modified for the different ILSVRC tasks. For example, for object detection we focus our efforts on collecting scene-like images using generic queries such as “African safari” to find pictures likely to contain multiple animals in one scene (Section 3.3.2). ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_33", "text": " The third (and most challenging) step is annotating the millions of collected images to obtain a clean dataset. We carefully design crowdsourcing strategies targeted to each individual ILSVRC task. 
For example, the bounding box annotation system used for localization and detection tasks consists of three distinct parts in order to include automatic crowdsourced quality control (Section 3.2.1). Annotating images fully with all target object categories (on a reasonable budget) for object detection requires an additional hierarchical image labeling system (Section 3.3.3). ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_34", "text": " We describe the data collection and annotation procedure for each of the ILSVRC tasks in order: image classification (Section 3.1), single-object localization (Section 3.2), and object detection (Section 3.3), focusing on the three key steps for each dataset. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_35", "text": " The image classification task tests the ability of an algorithm to name the objects present in the image, without necessarily localizing them. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_36", "text": " We describe the choices we made in constructing the ILSVRC image classification dataset: selecting the target object categories from ImageNet (Section 3.1.1), collecting a diverse set of candidate images by using multiple search engines and an expanded set of queries in multiple languages (Section 3.1.2), and finally filtering the millions of collected images using the carefully designed crowdsourcing strategy of ImageNet (Deng et al.,, 2009) (Section 3.1.3). ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_37", "text": " The 1000 categories used for the image classification task were selected from the ImageNet (Deng et al.,, 2009) categories. The 1000 synsets are selected such that there is no overlap between synsets: for any synsets i𝑖i and j𝑗j, i𝑖i is not an ancestor of j𝑗j in the ImageNet hierarchy. These synsets are part of the larger hierarchy and may have children in ImageNet; however, for ILSVRC we do not consider their child subcategories. The synset hierarchy of ILSVRC can be thought of as a “trimmed” version of the complete ImageNet hierarchy. Figure 1 visualizes the diversity of the ILSVRC2012 object categories. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_38", "text": " The exact 1000 synsets used for the image classification and single-object localization tasks have changed over the years. There are 639 synsets which have been used in all five ILSVRC challenges so far. In the first year of the challenge synsets were selected randomly from the available ImageNet synsets at the time, followed by manual filtering to make sure the object categories were not too obscure. With the introduction of the object localization challenge in 2011 there were 321 synsets that changed: categories such as “New Zealand beach” which were inherently difficult to localize were removed, and some new categories from ImageNet containing object localization annotations were added. In ILSVRC2012, 90 synsets were replaced with categories corresponding to dog breeds to allow for evaluation of more fine-grained object classification, as shown in Figure 2. The synsets have remained consistent since year 2012. Appendix A provides the complete list of object categories used in ILSVRC2012-2014. 
", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_39", "text": " Image collection for ILSVRC classification task is the same as the strategy employed for constructing ImageNet (Deng et al.,, 2009). Training images are taken directly from ImageNet. Additional images are collected for the ILSVRC using this strategy and randomly partitioned into the validation and test sets. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_40", "text": " We briefly summarize the process; (Deng et al.,, 2009) contains further details. Candidate images are collected from the Internet by querying several image search engines. For each synset, the queries are the set of WordNet synonyms. Search engines typically limit the number of retrievable images (on the order of a few hundred to a thousand). To obtain as many images as possible, we expand the query set by appending the queries with the word from parent synsets, if the same word appears in the glossary of the target synset. For example, when querying “whippet”, according to WordNet’s glossary a “small slender dog of greyhound type developed in England”, we also use “whippet dog” and “whippet greyhound.” To further enlarge and diversify the candidate pool, we translate the queries into other languages, including Chinese, Spanish, Dutch and Italian. We obtain accurate translations using WordNets in those languages. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_41", "text": " Annotating images with corresponding object classes follows the strategy employed by ImageNet (Deng et al.,, 2009). We summarize it briefly here. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_42", "text": " To collect a highly accurate dataset, we rely on humans to verify each candidate image collected in the previous step for a given synset. This is achieved by using Amazon Mechanical Turk (AMT), an online platform on which one can put up tasks for users for a monetary reward. With a global user base, AMT is particularly suitable for large scale labeling. In each of our labeling tasks, we present the users with a set of candidate images and the definition of the target synset (including a link to Wikipedia). We then ask the users to verify whether each image contains objects of the synset. We encourage users to select images regardless of occlusions, number of objects and clutter in the scene to ensure diversity. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_43", "text": " While users are instructed to make accurate judgment, we need to set up a quality control system to ensure this accuracy. There are two issues to consider. First, human users make mistakes and not all users follow the instructions. Second, users do not always agree with each other, especially for more subtle or confusing synsets, typically at the deeper levels of the tree. The solution to these issues is to have multiple users independently label the same image. An image is considered positive only if it gets a convincing majority of the votes. We observe, however, that different categories require different levels of consensus among users. For example, while five users might be necessary for obtaining a good consensus on “Burmese cat” images, a much smaller number is needed for “cat” images. 
We develop a simple algorithm to dynamically determine the number of agreements needed for different categories of images. For each synset, we first randomly sample an initial subset of images. At least 10 users are asked to vote on each of these images. We then obtain a confidence score table, indicating the probability of an image being a good image given the consensus among user votes. For each of the remaining candidate images in this synset, we proceed with the AMT user labeling until a pre-determined confidence score threshold is reached. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_44", "text": " Evaluation of the accuracy of the large-scale crowdsourced image annotation system was done on the entire ImageNet (Deng et al.,, 2009). A total of 80 synsets were randomly sampled at every tree depth of the mammal and vehicle subtrees. An independent group of subjects verified the correctness of each of the images. An average of 99.7%percent99.799.7\\% precision is achieved across the synsets. We expect similar accuracy on ILSVRC image classification dataset since the image annotation pipeline has remained the same. To verify, we manually checked 1500 ILSVRC2012-2014 image classification test set images (the test set has remained unchanged in these three years). We found 5 annotation errors, corresponding as expected to 99.7%percent99.799.7\\% precision. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_45", "text": " Using the image collection and annotation procedure described in previous sections, we collected a large-scale dataset used for ILSVRC classification task. There are 1000 object classes and approximately 1.2 million training images, 50 thousand validation images and 100 thousand test images. Table 2 (top) documents the size of the dataset over the years of the challenge. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_46", "text": " The single-object localization task evaluates the ability of an algorithm to localize one instance of an object category. It was introduced as a taster task in ILSVRC 2011, and became an official part of ILSVRC in 2012. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_47", "text": " The key challenge was developing a scalable crowdsourcing method for object bounding box annotation. Our three-step self-verifying pipeline is described in Section 3.2.1. Having the dataset collected, we perform detailed analysis in Section 3.2.2 to ensure that the dataset is sufficiently varied to be suitable for evaluation of object localization algorithms. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_48", "text": " The object classes for single-object localization task are the same as the object classes for image classification task described above in Section 3.1. The training images for localization task are a subset of the training images used for image classification task, and the validation and test images are the same between both tasks. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_49", "text": " Recall that for the image classification task every image was annotated with one object class label, corresponding to one object that is present in an image. 
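The adaptive consensus procedure described above (an initial sample of at least 10 votes per synset used to estimate how confident a given vote pattern is, then further votes collected per image only until a preset confidence threshold is reached) might be sketched roughly as below. Everything here is an assumption made for illustration: `label_until_confident`, `get_vote`, the Beta-posterior stand-in for the learned confidence table, and the 0.95 threshold are mine, not the paper's.

```python
def label_until_confident(get_vote, confidence, threshold=0.95, max_votes=10):
    """Collect yes/no votes for one candidate image until the estimated
    probability that it is a true positive for the synset crosses a
    threshold (or a vote budget is exhausted)."""
    yes = no = 0
    while yes + no < max_votes:
        if get_vote():            # one crowd worker's answer for this image
            yes += 1
        else:
            no += 1
        p = confidence(yes, no)
        if p >= threshold:        # confidently positive
            return True, (yes, no)
        if p <= 1.0 - threshold:  # confidently negative
            return False, (yes, no)
    return confidence(yes, no) >= 0.5, (yes, no)

# Toy stand-in for the per-synset confidence table estimated from the
# initial >=10-vote sample (NOT the paper's table): a Beta(1,1) posterior mean.
toy_confidence = lambda y, n: (y + 1) / (y + n + 2)

votes = iter([True] * 10)
print(label_until_confident(lambda: next(votes), toy_confidence))
# (True, (10, 0)) -- the vote budget is reached and the final call is positive
```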
For the single-object localization task, every validation and test image and a subset of the training images are annotated with axis-aligned bounding boxes around every instance of this object. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_50", "text": " Every bounding box is required to be as small as possible while including all visible parts of the object instance. An alternate annotation procedure could be to annotate the full (estimated) extent of the object: e.g., if a person’s legs are occluded and only the torso is visible, the bounding box could be drawn to include the likely location of the legs. However, this alternative procedure is inherently ambiguous and ill-defined, leading to disagreement among annotators and among researchers (what is the true “most likely” extent of this object?). We follow the standard protocol of only annotating visible object parts (Russell et al.,, 2007; Everingham et al.,, 2010).555Some datasets such as PASCAL VOC (Everingham et al.,, 2010) and LabelMe (Russell et al.,, 2007) are able to provide more detailed annotations: for example, marking individual object instances as being truncated. We chose not to provide this level of detail in favor of annotating more images and more object instances. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_51", "text": " We summarize the crowdsourced bounding box annotation system described in detail in (Su et al.,, 2012). The goal is to build a system that is fully automated, highly accurate, and cost-effective. Given a collection of images where the object of interest has been verified to exist, for each image the system collects a tight bounding box for every instance of the object. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_52", "text": " There are two requirements: ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_53", "text": " • Quality Each bounding box needs to be tight, i.e. the smallest among all bounding boxes that contains all visible parts of the object. This facilitates the object detection learning algorithms by providing the precise location of each object instance; • Coverage Every object instance needs to have a bounding box. This is important for training localization algorithms because it tells the learning algorithms with certainty what is not the object. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_54", "text": " The core challenge of building such a system is effectively controlling the data quality with minimal cost. Our key observation is that drawing a bounding box is significantly more difficult and time consuming than giving answers to multiple choice questions. Thus quality control through additional verification tasks is more cost-effective than consensus-based algorithms. This leads to the following workflow with simple basic subtasks: ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_55", "text": " 1. Drawing A worker draws one bounding box around one instance of an object on the given image. 2. Quality verification A second worker checks if the bounding box is correctly drawn. 3. Coverage verification A third worker checks if all object instances have bounding boxes. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_56", "text": " The sub-tasks are designed following two principles. 
First, the tasks are made as simple as possible. For example, instead of asking the worker to draw all bounding boxes on the same image, we ask the worker to draw only one. This reduces the complexity of the task. Second, each task has a fixed and predictable amount of work. For example, assuming that the input images are clean (object presence is correctly verified) and the coverage verification tasks give correct results, the amount of work of the drawing task is always that of providing exactly one bounding box. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_57", "text": " Quality control on Tasks 2 and 3 is implemented by embedding “gold standard” images where the correct answer is known. Worker training for each of these subtasks is described in detail in (Su et al.,, 2012). ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_58", "text": " The system is evaluated on 10 categories with ImageNet (Deng et al.,, 2009): balloon, bear, bed, bench, beach, bird, bookshelf, basketball hoop, bottle, and people. A subset of 200 images are randomly sampled from each category. On the image level, our evaluation shows that 97.9%percent97.997.9\\% images are completely covered with bounding boxes. For the remaining 2.1%percent2.12.1\\%, some bounding boxes are missing. However, these are all difficult cases: the size is too small, the boundary is blurry, or there is strong shadow. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_59", "text": " On the bounding box level, 99.2%percent99.299.2\\% of all bounding boxes are accurate (the bounding boxes are visibly tight). The remaining 0.8%percent0.80.8\\% are somewhat off. No bounding boxes are found to have less than 50%percent5050\\% intersection over union overlap with ground truth. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_60", "text": " Additional evaluation of the overall cost and an analysis of quality control can be found in (Su et al.,, 2012). ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_61", "text": " Using the annotation procedure described above, we collect a large set of bounding box annotations for the ILSVRC single-object classification task. All 50 thousand images in the validation set and 100 thousand images in the test set are annotated with bounding boxes around all instances of the ground truth object class (one object class per image). In addition, in ILSVRC2011 25%percent2525\\% of training images are annotated with bounding boxes the same way, yielding more than 310 thousand annotated images with more than 340 thousand annotated object instances. In ILSVRC2012 40%percent4040\\% of training images are annotated, yielding more than 520 thousand annotated images with more than 590 thousand annotated object instances. Table 2 (bottom) documents the size of this dataset. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_62", "text": " In addition to the size of the dataset, we also analyze the level of difficulty of object localization in these images compared to the PASCAL VOC benchmark. We compute statistics on the ILSVRC2012 single-object localization validation set images compared to PASCAL VOC 2012 validation images. 
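The 50% intersection-over-union criterion used above to judge bounding-box accuracy is the standard box-overlap measure; a minimal sketch (the function name `iou` and the corner-coordinate convention are mine):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x1, y1, x2, y2) with x2 > x1 and y2 > y1."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # intersection width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))   # intersection height
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 0, 15, 10)))   # 0.333..., below the 0.5 bar
```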
", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_63", "text": " Real-world scenes are likely to contain multiple instances of some objects, and nearby object instances are particularly difficult to delineate. The average object category in ILSVRC has 1.611.611.61 target object instances on average per positive image, with each instance having on average 0.470.470.47 neighbors (adjacent instances of the same object category). This is comparable to 1.691.691.69 instances per positive image and 0.520.520.52 neighbors per instance for an average object class in PASCAL. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_64", "text": " As described in (Hoiem et al.,, 2012), smaller objects tend to be significantly more difficult to localize. In the average object category in PASCAL the object occupies 24.1%percent24.124.1\\% of the image area, and in ILSVRC 35.8%percent35.835.8\\%. However, PASCAL has only 20 object categories while ILSVRC has 1000. The 537 object categories of ILSVRC with the smallest objects on average occupy the same fraction of the image as PASCAL objects: 24.1%percent24.124.1\\%. Thus even though on average the object instances tend to be bigger in ILSVRC images, there are more than 25 times more object categories than in PASCAL VOC with the same average object scale. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_65", "text": " Appendix B and (Russakovsky et al.,, 2013) have additional comparisons. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_66", "text": " The ILSVRC task of object detection evaluates the ability of an algorithm to name and localize all instances of all target objects present in an image. It is much more challenging than object localization because some object instances may be small/occluded/difficult to accurately localize, and the algorithm is expected to locate them all, not just the one it finds easiest. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_67", "text": " There are three key challenges in collecting the object detection dataset. The first challenge is selecting the set of common objects which tend to appear in cluttered photographs and are well-suited for benchmarking object detection performance. Our approach relies on statistics of the object localization dataset and the tradition of the PASCAL VOC challenge (Section 3.3.1). ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_68", "text": " The second challenge is obtaining a much more varied set of scene images than those used for the image classification and single-object localization datasets. Section 3.3.2 describes the procedure for utilizing as much data from the single-object localization dataset as possible and supplementing it with Flickr images queried using hundreds of manually designed high-level queries. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_69", "text": " The third, and biggest, challenge is completely annotating this dataset with all the objects. This is done in two parts. Section 3.3.3 describes the first part: our hierarchical strategy for obtaining the list of all target objects which occur within every image. This is necessary since annotating in a straight-forward way by creating a task for every (image, object class) pair is no longer feasible at this scale. 
Appendix E describes the second part: annotating the bounding boxes around these objects, using the single-object localization bounding box annotation pipeline of Section 3.2.1 along with extra verification to ensure that every instance of the object is annotated with exactly one bounding box. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_70", "text": " There are 200 object classes hand-selected for the detection task, eacg corresponding to a synset within ImageNet. These were chosen to be mostly basic-level object categories that would be easy for people to identify and label. The rationale is that the object detection system developed for this task can later be combined with a fine-grained classification model to further classify the objects if a finer subdivision is desired.666Some of the training objects are actually annotated with more detailed classes: for example, one of the 200 object classes is the category “dog,” and some training instances are annotated with the specific dog breed. As with the 1000 classification classes, the synsets are selected such that there is no overlap: for any synsets i𝑖i and j𝑗j, i𝑖i is not an ancestor of j𝑗j in the ImageNet hierarchy. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_71", "text": " The selection of the 200 object detection classes in 2013 was guided by the ILSVRC 2012 classification and localization dataset. Starting with 1000 object classes and their bounding box annotations we first eliminated all object classes which tended to be too “big” in the image (on average the object area was greater than 50%percent5050\\% of the image area). These were classes such as T-shirt, spiderweb, or manhole cover. We then manually eliminated all classes which we did not feel were well-suited for detection, such as hay, barbershop, or poncho. This left 494 object classes which were merged into basic-level categories: for example, different species of birds were merged into just the “bird” class. The classes remained the same in ILSVRC2014. Appendix D contains the complete list of object categories used in ILSVRC2013-2014 (in the context of the hierarchy described in Section 3.3.3). ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_72", "text": " Staying mindful of the tradition of the PASCAL VOC dataset we also tried to ensure that the set of 200 classes contains as many of the 20 PASCAL VOC classes as possible. Table 3 shows the correspondences. The changes that were done were to ensure more accurate and consistent crowdsourced annotations. The object class with the weakest correspondence is “potted plant” in PASCAL VOC, corresponding to “flower pot” in ILSVRC. “Potted plant” was one of the most challenging object classes to annotate consistently among the PASCAL VOC classes, and in order to obtain accurate annotations using crowdsourcing we had to restrict the definition to a more concrete object. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_73", "text": " Many images for the detection task were collected differently than the images in ImageNet and the classification and single-object localization tasks. Figure 3 summarizes the types of images that were collected. Ideally all of these images would be scene images fully annotated with all target categories. 
However, given budget constraints our goal was to provide as much suitable detection data as possible, even if the images were drawn from a few different sources and distributions. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_74", "text": " The validation and test detection set images come from two sources (percent of images from each source in parentheses). The first source (77%)percent77(77\\%) is images from ILSVRC2012 single-object localization validation and test sets corresponding to the 200 detection classes (or their children in the ImageNet hierarchy). Images where the target object occupied more than 50%percent5050\\% of the image area were discarded, since they were unlikely to contain other objects of interest. The second source (23%)percent23(23\\%) is images from Flickr collected specifically for detection task. We queried Flickr using a large set of manually defined queries, such as “kitchenette” or “Australian zoo” to retrieve images of scenes likely to contain several objects of interest. Appendix C contains the full list. We also added pairwise queries, or queries with two target object names such as “tiger lion,” which also often returned cluttered scenes. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_75", "text": " Figure 4 shows a random set of both types of validation images. Images were randomly split, with 33%percent3333\\% going into the validation set and 67%percent6767\\% into the test set.777The validation/test split is consistent with ILSVRC2012: validation images of ILSVRC2012 remained in the validation set of ILSVRC2013, and ILSVRC2012 test images remained in ILSVRC2013 test set. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_76", "text": " The training set for the detection task comes from three sources of images (percent of images from each source in parentheses). The first source (63%)percent63(63\\%) is all training images from ILSVRC2012 single-object localization task corresponding to the 200 detection classes (or their children in the ImageNet hierarchy). We did not filter by object size, allowing teams to take advantage of all the positive examples available. The second source (24%)percent24(24\\%) is negative images which were part of the original ImageNet collection process but voted as negative: for example, some of the images were collected from Flickr and search engines for the ImageNet synset “animals” but during the manual verification step did not collect enough votes to be considered as containing an “animal.” These images were manually re-verified for the detection task to ensure that they did not in fact contain the target objects. The third source (13%)percent13(13\\%) is images collected from Flickr specifically for the detection task. These images were added for ILSVRC2014 following the same protocol as the second type of images in the validation and test set. This was done to bring the training and testing distributions closer together. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_77", "text": " The key challenge in annotating images for the object detection task is that all objects in all images need to be labeled. Suppose there are N inputs (images) which need to be annotated with the presence or absence of K labels (objects). A naïve approach would query humans for each combination of input and label, requiring N​K𝑁𝐾NK queries. 
However, N and K can be very large and the cost of this exhaustive approach quickly becomes prohibitive. For example, annotating 60,000 validation and test images with the presence or absence of 200 object classes for the detection task naïvely would take 80 times more effort than annotating 150,000 validation and test images with 1 object each for the classification task – and this is not even counting the additional cost of collecting bounding box annotations around each object instance. This quickly becomes infeasible. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_78", "text": " In (Deng et al., 2014) we study strategies for scalable multilabel annotation, or for efficiently acquiring multiple labels from humans for a collection of items. We exploit three key observations for labels in real world applications (illustrated in Figure LABEL:fig:chipull): ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_79", "text": " 1. Correlation. Subsets of labels are often highly correlated. Objects such as a computer keyboard, mouse and monitor frequently co-occur in images. Similarly, some labels tend to all be absent at the same time. For example, all objects that require electricity are usually absent in pictures taken outdoors. This suggests that we could potentially “fill in” the values of multiple labels by grouping them into only one query for humans. Instead of checking if dog, cat, rabbit etc. are present in the photo, we just check about the “animal” group. If the answer is no, then this implies a no for all categories in the group. 2. Hierarchy. The above example of grouping dog, cat, rabbit etc. into animal has implicitly assumed that labels can be grouped together and humans can efficiently answer queries about the group as a whole. This brings up our second key observation: humans organize semantic concepts into hierarchies and are able to efficiently categorize at higher semantic levels (Thorpe et al., 1996), e.g. humans can determine the presence of an animal in an image as fast as every type of animal individually. This leads to substantial cost savings. 3. Sparsity. The values of labels for each image tend to be sparse, i.e. an image is unlikely to contain more than a dozen types of objects, a small fraction of the hundreds of object categories. This enables rapid elimination of many objects by quickly filling in no. With a high degree of sparsity, an efficient algorithm can have a cost which grows logarithmically with the number of objects instead of linearly. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_80", "text": " We propose algorithmic strategies that exploit the above intuitions. The key is to select a sequence of queries for humans such that we achieve the same labeling results with only a fraction of the cost of the naïve approach. The main challenges include how to measure cost and utility of queries, how to construct good queries, and how to dynamically order them. A detailed description of the generic algorithm, along with theoretical analysis and empirical evaluation, is presented in (Deng et al., 2014). ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_81", "text": " The generic algorithm automatically selects the most informative queries to ask based on object label statistics learned from the training set. 
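A quick back-of-the-envelope check of the "80 times more effort" figure quoted above, using only the image and label counts given in the passage (the variable names below are my own):

```python
# Naive exhaustive labeling cost: one yes/no query per (image, label) pair.
det_images, det_labels_per_image = 60_000, 200   # detection val+test: presence/absence of 200 classes
cls_images, cls_labels_per_image = 150_000, 1    # classification val+test: a single label each

naive_detection_queries = det_images * det_labels_per_image   # 12,000,000 queries
classification_queries = cls_images * cls_labels_per_image    # 150,000 queries

print(naive_detection_queries / classification_queries)       # 80.0 -> the "80x more effort" factor
```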
In our case of 200 object classes, since obtaining the training set was by itself challenging we chose to design the queries by hand. We created a hierarchy of queries of the type “is there a… in the image?” For example, one of the high-level questions was “is there an animal in the image?” We ask the crowd workers this question about every image we want to label. The children of the “animal” question would correspond to specific examples of animals: for example, “is there a mammal in the image?” or “is there an animal with no legs?” To annotate images efficiently, these questions are asked only on images determined to contain an animal. The 200 leaf node questions correspond to the 200 target objects, e.g., “is there a cat in the image?”. A few sample iterations of the algorithm are shown in Figure 6. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_82", "text": " Algorithm 1 is the formal algorithm for labeling an image with the presence or absence of each target object category. With this algorithm in mind, the hierarchy of questions was constructed following the principle that false positives only add extra cost whereas false negatives can significantly affect the quality of the labeling. Thus, it is always better to stick with more general but less ambiguous questions, such as “is there a mammal in the image?” as opposed to asking overly specific but potentially ambiguous questions, such as “is there an animal that can climb trees?” Constructing this hierarchy was a surprisingly time-consuming process, involving multiple iterations to ensure high accuracy of labeling and avoid question ambiguity. Appendix D shows the constructed hierarchy. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_83", "text": " Once all images are labeled with the presence or absence of all object categories we use the bounding box system described in Section 3.2.1 along with some additional modifications of Appendix E to annotate the location of every instance of every present object category. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_84", "text": " Using the procedure described above, we collect a large-scale dataset for ILSVRC object detection task. There are 200 object classes and approximately 450K training images, 20K validation images and 40K test images. Table 4 documents the size of the dataset over the years of the challenge. The major change between ILSVRC2013 and ILSVRC2014 was the addition of 60,658 fully annotated training images. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_85", "text": " Prior to ILSVRC, the object detection benchmark was the PASCAL VOC challenge (Everingham et al.,, 2010). ILSVRC has 101010 times more object classes than PASCAL VOC (200 vs 20), 10.610.610.6 times more fully annotated training images (60,658 vs 5,717), 35.235.235.2 times more training objects (478,807 vs 13,609), 3.53.53.5 times more validation images (20,121 vs 5823) and 3.53.53.5 times more validation objects (55,501 vs 15,787). ILSVRC has 2.82.82.8 annotated objects per image on the validation set, compared to 2.72.72.7 in PASCAL VOC. The average object in ILSVRC takes up 17.0%percent17.017.0\\% of the image area and in PASCAL VOC takes up 20.7%percent20.720.7\\%; Table 3 contains per-class comparisons. 
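The formal Algorithm 1 referred to above is not reproduced in this excerpt, so the following is only a minimal, hypothetical sketch of the hand-designed hierarchy of yes/no questions it describes: children of a question are asked only when the parent is answered "yes", so a "no" high in the tree fills in "absent" for an entire subtree. The toy question tree, class names, and the `ask` callback are all illustrative assumptions.

```python
from collections import deque

# Hypothetical question tree: each node reads "is there a <name> in the image?";
# the leaves correspond to target object classes.
TREE = {
    "animal": ["mammal", "animal with no legs"],
    "mammal": ["dog", "cat"],
    "animal with no legs": ["snake", "fish"],
}
LEAVES = {"dog", "cat", "snake", "fish"}

def label_image(ask):
    """ask(question) -> bool (e.g. a crowd vote). Returns ({leaf: present?}, #queries)."""
    labels = {leaf: False for leaf in LEAVES}   # sparsity: default every class to "absent"
    queue = deque(["animal"])                   # start from the most general question
    queries = 0
    while queue:
        node = queue.popleft()
        queries += 1
        if not ask(node):
            continue                            # a "no" prunes the whole subtree below this node
        if node in LEAVES:
            labels[node] = True
        else:
            queue.extend(TREE[node])            # only descend into children on "yes"
    return labels, queries

# Toy image containing only a dog.
labels, n = label_image(lambda q: q in {"animal", "mammal", "dog"})
print(labels, n)  # dog=True, all others False, after 5 questions; with 200 leaf classes,
                  # a single "no" near the root skips a large block of leaf questions at once.
```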
Additionally, ILSVRC contains a wide variety of objects, including tiny objects such as sunglasses (1.3% of image area on average), ping-pong balls (1.5% of image area on average) and basketballs (2.0% of image area on average). ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_86", "text": " Once the dataset has been collected, we need to define a standardized evaluation procedure for algorithms. Some measures have already been established by datasets such as the Caltech 101 (Fei-Fei et al., 2004) for image classification and PASCAL VOC (Everingham et al., 2012) for both image classification and object detection. To adapt these procedures to the large-scale setting we had to address three key challenges. First, for the image classification and single-object localization tasks only one object category could be labeled in each image due to the scale of the dataset. This created potential ambiguity during evaluation (addressed in Section 4.1). Second, evaluating localization of object instances is inherently difficult in some images which contain a cluster of objects (addressed in Section 4.2). Third, evaluating localization of object instances which occupy few pixels in the image is challenging (addressed in Section 4.3). ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_87", "text": " In this section we describe the standardized evaluation criteria for each of the three ILSVRC tasks. We elaborate further on these and other more minor challenges with large-scale evaluation. Appendix F describes the submission protocol and other details of running the competition itself. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_88", "text": " The scale of the ILSVRC classification task (1000 categories and more than a million images) makes it very expensive to label every instance of every object in every image. Therefore, on this dataset only one object category is labeled in each image. This creates ambiguity in evaluation. For example, an image might be labeled as a “strawberry” but contain both a strawberry and an apple. Then an algorithm would not know which one of the two objects to name. For the image classification task we allowed an algorithm to identify multiple (up to 5) objects in an image and not be penalized as long as one of the objects indeed corresponded to the ground truth label. Figure 7(top row) shows some examples. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_89", "text": " Concretely, each image $i$ has a single class label $C_i$. An algorithm is allowed to return 5 labels $c_{i1},\\dots,c_{i5}$, and is considered correct if $c_{ij}=C_i$ for some $j$. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_90", "text": " Let the error of a prediction $d_{ij}=d(c_{ij},C_i)$ be $1$ if $c_{ij}\\neq C_i$ and $0$ otherwise. 
The error of an algorithm is the fraction of test images on which the algorithm makes a mistake: $\\text{error}=\\frac{1}{N}\\sum_{i=1}^{N}\\min_{j}d_{ij}$ (1) ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_91", "text": " We used two additional measures of error. First, we evaluated top-1 error. In this case algorithms were penalized if their highest-confidence output label $c_{i1}$ did not match ground truth class $C_i$. Second, we evaluated hierarchical error. The intuition is that confusing two nearby classes (such as two different breeds of dogs) is not as harmful as confusing a dog for a container ship. For the hierarchical criterion, the cost of one misclassification, $d(c_{ij},C_i)$, is defined as the height of the lowest common ancestor of $c_{ij}$ and $C_i$ in the ImageNet hierarchy. The height of a node is the length of the longest path to a leaf node (leaf nodes have height zero). ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_92", "text": " However, in practice we found that all three measures of error (top-5, top-1, and hierarchical) produced the same ordering of results. Thus, since ILSVRC2012 we have been exclusively using the top-5 metric which is the simplest and most suitable to the dataset. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_93", "text": " The evaluation for single-object localization is similar to object classification, again using a top-5 criterion to allow the algorithm to return unannotated object classes without penalty. However, now the algorithm is considered correct only if it both correctly identifies the target class $C_i$ and accurately localizes one of its instances. Figure 7(middle row) shows some examples. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_94", "text": " Concretely, an image is associated with object class $C_i$, with all instances of this object class annotated with bounding boxes $B_{ik}$. An algorithm returns a set $\\{(c_{ij},b_{ij})\\}_{j=1}^{5}$ of class labels $c_{ij}$ and associated locations $b_{ij}$. The error of a prediction $j$ is: $d_{ij}=\\max\\left(d(c_{ij},C_i),\\,\\min_{k}d(b_{ij},B_{ik})\\right)$ (2) Here $d(b_{ij},B_{ik})$ is the error of localization, defined as $0$ if the area of intersection of boxes $b_{ij}$ and $B_{ik}$ divided by the area of their union is greater than 0.5, and $1$ otherwise (Everingham et al., 2010). The error of an algorithm is computed as in Eq. 1. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_95", "text": " Evaluating localization is inherently difficult in some images. Consider a picture of a bunch of bananas or a carton of apples. It is easy to classify these images as containing bananas or apples, and even possible to localize a few instances of each fruit. However, in order for evaluation to be accurate every instance of banana or apple needs to be annotated, and that may be impossible. 
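A minimal sketch of the two criteria in Eqs. (1) and (2) above: flat top-5 classification error, and the per-image single-object localization check (class match plus intersection-over-union above 0.5 with some ground-truth instance). The function names, data layout, and the (x1, y1, x2, y2) box format are my own assumptions; the official evaluation also handles details not shown here, such as the discarded ambiguous images.

```python
def top5_error(predictions, ground_truth):
    """Eq. (1): fraction of images whose (up to) 5 guesses all miss the single label C_i.
    predictions: list of per-image label lists; ground_truth: list of class ids."""
    mistakes = sum(1 for preds, c in zip(predictions, ground_truth) if c not in preds[:5])
    return mistakes / len(ground_truth)

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def localization_error_one_image(preds, c_true, gt_boxes, thr=0.5):
    """Eq. (2): prediction j counts as correct iff its class matches C_i and its box
    overlaps some ground-truth instance B_ik with IOU > thr; the image error is the
    minimum over the 5 predictions (0 if any prediction is correct, else 1)."""
    for c_pred, box in preds[:5]:
        if c_pred == c_true and any(iou(box, gt) > thr for gt in gt_boxes):
            return 0
    return 1
```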
To handle the images where localizing individual object instances is inherently ambiguous we manually discarded 3.5% of images since ILSVRC2012. Some examples of discarded images are shown in Figure 8. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_96", "text": " The criterion for object detection was adopted from PASCAL VOC (Everingham et al., 2010). It is designed to penalize the algorithm for missing object instances, for duplicate detections of one instance, and for false positive detections. Figure 7(bottom row) shows examples. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_97", "text": " For each object class and each image $I_i$, an algorithm returns predicted detections $(b_{ij},s_{ij})$ of predicted locations $b_{ij}$ with confidence scores $s_{ij}$. These detections are greedily matched to the ground truth boxes $\\{B_{ik}\\}$ using Algorithm 2. For every detection $j$ on image $i$ the algorithm returns $z_{ij}=1$ if the detection is matched to a ground truth box according to the threshold criterion, and $0$ otherwise. For a given object class, let $N$ be the total number of ground truth instances across all images. Given a threshold $t$, define recall as the fraction of the $N$ objects detected by the algorithm, and precision as the fraction of correct detections out of the total detections returned by the algorithm. Concretely, $\\text{Recall}(t)=\\frac{\\sum_{ij}1(s_{ij}\\geq t)\\,z_{ij}}{N}$ (3) and $\\text{Precision}(t)=\\frac{\\sum_{ij}1(s_{ij}\\geq t)\\,z_{ij}}{\\sum_{ij}1(s_{ij}\\geq t)}$ (4) ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_98", "text": " The final metric for evaluating an algorithm on a given object class is average precision over the different levels of recall achieved by varying the threshold $t$. The winner of each object class is then the team with the highest average precision, and the winner of the challenge is the team that wins on the most object classes. In this paper we focus on the mean average precision across all categories as the measure of a team’s performance. This is done for simplicity and is justified since the ordering of teams by mean average precision was always the same as the ordering by object categories won. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_99", "text": " Evaluating localization of object instances which occupy very few pixels in the image is challenging. The PASCAL VOC approach was to label such instances as “difficult” and ignore them during evaluation. However, since ILSVRC contains a more diverse set of object classes including, for example, “nail” and “ping pong ball” which have many very small instances, it is important to include even very small object instances in evaluation. 
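The following is a simplified sketch of the per-class detection metric in Eqs. (3) and (4) and the resulting average precision, for a single object class with all images pooled. It assumes the greedy matching of Algorithm 2 (not shown in this excerpt) has already produced the z_ij flags, and it uses a plain area-under-the-precision-recall-curve integration, which may differ in detail from the official evaluation tool; names and the toy numbers are mine.

```python
def precision_recall_ap(scores, matched, n_ground_truth):
    """scores[k]:  confidence s_ij of detection k (one class, all images pooled).
    matched[k]:    z_ij flag, 1 if detection k was greedily matched to an unclaimed GT box.
    n_ground_truth: N, the total number of ground-truth instances of this class.
    Sweeping the threshold t down the sorted scores reproduces Recall(t) and Precision(t)."""
    order = sorted(range(len(scores)), key=lambda k: -scores[k])
    tp = fp = 0
    recalls, precisions = [], []
    for k in order:
        if matched[k]:
            tp += 1
        else:
            fp += 1
        recalls.append(tp / n_ground_truth)
        precisions.append(tp / (tp + fp))
    # Average precision: area under the precision-recall curve.
    ap, prev_r = 0.0, 0.0
    for r, p in zip(recalls, precisions):
        ap += (r - prev_r) * p
        prev_r = r
    return recalls, precisions, ap

# Tiny example: 4 detections of one class, 3 ground-truth instances in total.
r, p, ap = precision_recall_ap(scores=[0.9, 0.8, 0.6, 0.3],
                               matched=[1, 0, 1, 1],
                               n_ground_truth=3)
print(round(ap, 3))  # ~0.806
```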
", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_100", "text": " In Algorithm 2, a predicted bounding box b𝑏b is considered to have properly localized by a ground truth bounding box B𝐵B if I​O​U​(b,B)≥thr​(B)𝐼𝑂𝑈𝑏𝐵thr𝐵IOU(b,B)\\geq\\mbox{thr}(B). The PASCAL VOC metric uses the threshold thr​(B)=0.5thr𝐵0.5\\mbox{thr}(B)=0.5. However, for small objects even deviations of a few pixels would be unacceptable according to this threshold. For example, consider an object B𝐵B of size 10×10101010\\times 10 pixels, with a detection window of 20×20202020\\times 20 pixels which fully contains that object. This would be an error of approximately 555 pixels on each dimension, which is average human annotation error. However, the IOU in this case would be 100/400=0.251004000.25100/400=0.25, far below the threshold of 0.50.50.5. Thus for smaller objects we loosen the threshold in ILSVRC to allow for the annotation to extend up to 5 pixels on average in each direction around the object. Concretely, if the ground truth box B𝐵B is of dimensions w×h𝑤ℎw\\times h then thr​(B)=min⁡(0.5,w​h(w+10)​(h+10))thr𝐵0.5𝑤ℎ𝑤10ℎ10\\mbox{thr}(B)=\\min\\left(0.5,\\frac{wh}{(w+10)(h+10)}\\right) (5) In practice, this changes the threshold only on objects which are smaller than approximately 25×25252525\\times 25 pixels, and affects 5.5%percent5.55.5\\% of objects in the detection validation set. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_101", "text": " One additional practical consideration for ILSVRC detection evaluation is subtle and comes directly as a result of the scale of ILSVRC. In PASCAL, algorithms would often return many detections per class on the test set, including ones with low confidence scores. This allowed the algorithms to reach the level of high recall at least in the realm of very low precision. On ILSVRC detection test set if an algorithm returns 10 bounding boxes per object per image this would result in 10×200×40​K=801020040𝐾8010\\times 200\\times 40K=80M detections. Each detection contains an image index, a class index, 4 bounding box coordinates, and the confidence score, so it takes on the order of 28 bytes. The full set of detections would then require 2.242.242.24Gb to store and submit to the evaluation server, which is impractical. This means that algorithms are implicitly required to limit their predictions to only the most confident locations. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_102", "text": " The ILSVRC dataset and the competition has allowed significant algorithmic advances in large-scale image recognition and retrieval. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_103", "text": " This section is organized chronologically, highlighting the particularly innovative and successful methods which participated in the ILSVRC each year. Tables LABEL:table:sub10-12, LABEL:table:sub13 and LABEL:table:sub14 list all the participating teams. We see a turning point in 2012 with the development of large-scale convolutional neural networks. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_104", "text": " The first year the challenge consisted of just the classification task. 
The winning entry from NEC team (Lin et al.,, 2011) used SIFT (Lowe,, 2004) and LBP (Ahonen et al.,, 2006) features with two non-linear coding representations (Zhou et al.,, 2010; Wang et al.,, 2010) and a stochastic SVM. The honorable mention XRCE team (Perronnin et al.,, 2010) used an improved Fisher vector representation (Perronnin and Dance,, 2007) along with PCA dimensionality reduction and data compression followed by a linear SVM. Fisher vector-based methods have evolved over five years of the challenge and continued performing strongly in every ILSVRC from 2010 to 2014. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_105", "text": " The winning classification entry in 2011 was the 2010 runner-up team XRCE, applying high-dimensional image signatures (Perronnin et al.,, 2010) with compression using product quantization (Sanchez and Perronnin,, 2011) and one-vs-all linear SVMs. The single-object localization competition was held for the first time, with two brave entries. The winner was the UvA team using a selective search approach to generate class-independent object hypothesis regions (van de Sande et al., 2011b, ), followed by dense sampling and vector quantization of several color SIFT features (van de Sande et al.,, 2010), pooling with spatial pyramid matching (Lazebnik et al.,, 2006), and classifying with a histogram intersection kernel SVM (Maji and Malik,, 2009) trained on a GPU (van de Sande et al., 2011a, ). ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_106", "text": " This was a turning point for large-scale object recognition, when large-scale deep neural networks entered the scene. The undisputed winner of both the classification and localization tasks in 2012 was the SuperVision team. They trained a large, deep convolutional neural network on RGB values, with 60 million parameters using an efficient GPU implementation and a novel hidden-unit dropout trick (Krizhevsky et al.,, 2012; Hinton et al.,, 2012). The second place in image classification went to the ISI team, which used Fisher vectors (Sanchez and Perronnin,, 2011) and a streamlined version of Graphical Gaussian Vectors (Harada and Kuniyoshi,, 2012), along with linear classifiers using Passive-Aggressive (PA) algorithm (Crammer et al.,, 2006). The second place in single-object localization went to the VGG, with an image classification system including dense SIFT features and color statistics (Lowe,, 2004), a Fisher vector representation (Sanchez and Perronnin,, 2011), and a linear SVM classifier, plus additional insights from (Arandjelovic and Zisserman,, 2012; Sanchez et al.,, 2012). Both ISI and VGG used (Felzenszwalb et al.,, 2010) for object localization; SuperVision used a regression model trained to predict bounding box locations. Despite the weaker detection model, SuperVision handily won the object localization task. A detailed analysis and comparison of the SuperVision and VGG submissions on the single-object localization task can be found in (Russakovsky et al.,, 2013). The influence of the success of the SuperVision model can be clearly seen in ILSVRC2013 and ILSVRC2014. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_107", "text": " There were 24 teams participating in the ILSVRC2013 competition, compared to 21 in the previous three years combined. 
Following the success of the deep learning-based method in 2012, the vast majority of entries in 2013 used deep convolutional neural networks in their submission. The winner of the classification task was Clarifai, with several large deep convolutional networks averaged together. The network architectures were chosen using the visualization technique of (Zeiler and Fergus,, 2013), and they were trained on the GPU following (Zeiler et al.,, 2011) using the dropout technique (Krizhevsky et al.,, 2012). ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_108", "text": " The winning single-object localization OverFeat submission was based on an integrated framework for using convolutional networks for classification, localization and detection with a multiscale sliding window approach (Sermanet et al.,, 2013). They were the only team tackling all three tasks. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_109", "text": " The winner of object detection task was UvA team, which utilized a new way of efficient encoding (van de Sande et al.,, 2014) densely sampled color descriptors (van de Sande et al.,, 2010) pooled using a multi-level spatial pyramid in a selective search framework (Uijlings et al.,, 2013). The detection results were rescored using a full-image convolutional network classifier. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_110", "text": " 2014 attracted the most submissions, with 36 teams submitting 123 entries compared to just 24 teams in 2013 – a 1.5x increase in participation.999Table LABEL:table:sub14 omits 4 teams which submitted results but chose not to officially participate in the challenge. As in 2013 almost all teams used convolutional neural networks as the basis for their submission. Significant progress has been made in just one year: image classification error was almost halved since ILSVRC2013 and object detection mean average precision almost doubled compared to ILSVRC2013. Please refer to Section 6.1 for details. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_111", "text": " In 2014 teams were allowed to use outside data for training their models in the competition, so there were six tracks: provided and outside data tracks in each of image classification, single-object localization, and object detection tasks. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_112", "text": " The winning image classification with provided data team was GoogLeNet, which explored an improved convolutional neural network architecture combining the multi-scale idea with intuitions gained from the Hebbian principle. Additional dimension reduction layers allowed them to increase both the depth and the width of the network significantly without incurring significant computational overhead. In the image classification with external data track, CASIAWS won by using weakly supervised object localization from only classification labels to improve image classification. MCG region proposals (Arbeláez et al.,, 2014) pretrained on PASCAL VOC 2012 data are used to extract region proposals, regions are represented using convolutional networks, and a multiple instance learning strategy is used to learn weakly supervised object detectors to represent the image. 
", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_113", "text": " In the single-object localization with provided data track, the winning team was VGG, which explored the effect of convolutional neural network depth on its accuracy by using three different architectures with up to 19 weight layers with rectified linear unit non-linearity, building off of the implementation of Caffe (Jia,, 2013). For localization they used per-class bounding box regression similar to OverFeat (Sermanet et al.,, 2013). In the single-object localization with external data track, Adobe used 2000 additional ImageNet classes to train the classifiers in an integrated convolutional neural network framework for both classification and localization, with bounding box regression. At test time they used k-means to find bounding box clusters and rank the clusters according to the classification scores. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_114", "text": " In the object detection with provided data track, the winning team NUS used the RCNN framework (Girshick et al.,, 2013) with the network-in-network method (Lin et al., 2014a, ) and improvements of (Howard,, 2014). Global context information was incorporated following (Chen et al.,, 2014). In the object detection with external data track, the winning team was GoogLeNet (which also won image classification with provided data). It is truly remarkable that the same team was able to win at both image classification and object detection, indicating that their methods are able to not only classify the image based on scene information but also accurately localize multiple object instances. Just like most teams participating in this track, GoogLeNet used the image classification dataset as extra training data. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_115", "text": " ILSVRC over the past five years has paved the way for several breakthroughs in computer vision. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_116", "text": " The field of categorical object recognition has dramatically evolved in the large-scale setting. Section 5.1 documents the progress, starting from coded SIFT features and evolving to large-scale convolutional neural networks dominating at all three tasks of image classification, single-object localization, and object detection. With the availability of so much training data (along with an efficient algorithmic implementation and GPU computing resources) it became possible to learn neural networks directly from the image data, without needing to create multi-stage hand-tuned pipelines of extracted features and discriminative classifiers. The major breakthrough came in 2012 with the win of the SuperVision team on image classification and single-object localization tasks (Krizhevsky et al.,, 2012), and by 2014 all of the top contestants were relying heavily on convolutional neural networks. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_117", "text": " Further, over the past few years there has been a lot of focus on large-scale recognition in the computer vision community . 
Best paper awards at top vision conferences in 2013 were awarded to large-scale recognition methods: at CVPR 2013 to ”Fast, Accurate Detection of 100,000 Object Classes on a Single Machine” (Dean et al.,, 2013) and at ICCV 2013 to ”From Large Scale Image Categorization to Entry-Level Categories” (Ordonez et al.,, 2013). Additionally, several influential lines of research have emerged, such as large-scale weakly supervised localization work of (Kuettel et al.,, 2012) which was awarded the best paper award in ECCV 2012 and large-scale zero-shot learning, e.g., (Frome et al.,, 2013). ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_118", "text": " State-of-the-art accuracy has improved significantly from ILSVRC2010 to ILSVRC2014, showcasing the massive progress that has been made in large-scale object recognition over the past five years. The performance of the winning ILSVRC entries for each task and each year are shown in Figure 9. The improvement over the years is clearly visible. In this section we quantify and analyze this improvement. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_119", "text": " There has been a 4.2x reduction in image classification error (from 28.2%percent28.228.2\\% to 6.7%percent6.76.7\\%) and a 1.7x reduction in single-object localization error (from 42.5%percent42.542.5\\% to 25.3%percent25.325.3\\%) since the beginning of the challenge. For consistency, here we consider only teams that use the provided training data. Even though the exact object categories have changed (Section 3.1.1), the large scale of the dataset has remained the same (Table 2), making the results comparable across the years. The dataset has not changed since 2012, and there has been a 2.4x reduction in image classification error (from 16.4%percent16.416.4\\% to 6.7%percent6.76.7\\%) and a 1.3x in single-object localization error (from 33.5%percent33.533.5\\% to 25.3%percent25.325.3\\%) in the past three years. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_120", "text": " Object detection accuracy as measured by the mean average precision (mAP) has increased 1.9x since the introduction of this task, from 22.6%percent22.622.6\\% mAP in ILSVRC2013 to 43.9%percent43.943.9\\% mAP in ILSVRC2014. However, these results are not directly comparable for two reasons. First, the size of the object detection training data has increased significantly from 2013 to 2014 (Section 3.3). Second, the 43.9%percent43.943.9\\% mAP result was obtained with the addition of the image classification and single-object localization training data. Here we attempt to understand the relative effects of the training set size increase versus algorithmic improvements. All models are evaluated on the same ILSVRC2013-2014 object detection test set. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_121", "text": " First, we quantify the effects of increasing detection training data between the two challenges by comparing the same model trained on ILSVRC2013 detection data versus ILSVRC2014 detection data. The UvA team’s framework from 2013 achieved 22.6%percent22.622.6\\% with ILSVRC2013 data (Table LABEL:table:sub13) and 26.3%percent26.326.3\\% with ILSVRC2014 data and no other modifications.101010Personal communication with members of the UvA team. The absolute increase in mAP was 3.7%percent3.73.7\\%. 
The RCNN model achieved 31.4%percent31.431.4\\% mAP with ILSVRC2013 detection plus image classification data (Girshick et al.,, 2013) and 34.5%percent34.534.5\\% mAP with ILSVRC2014 detection plus image classification data (Berkeley team in Table LABEL:table:sub14). The absolute increase in mAP by expanding ILSVRC2013 detection data to ILSVRC2014 was 3.1%percent3.13.1\\%. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_122", "text": " Second, we quantify the effects of adding in the external data for training object detection models. The NEC model in 2013 achieved 19.6%percent19.619.6\\% mAP trained on ILSVRC2013 detection data alone and 20.9%percent20.920.9\\% mAP trained on ILSVRC2013 detection plus classification data (Table LABEL:table:sub13). The absolute increase in mAP was 1.3%percent1.31.3\\%. The UvA team’s best entry in 2014 achieved 32.0%percent32.032.0\\% mAP trained on ILSVRC2014 detection data and 35.4%percent35.435.4\\% mAP trained on ILSVRC2014 detection plus classification data. The absolute increase in mAP was 3.4%percent3.43.4\\%. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_123", "text": " Thus, we conclude based on the evidence so far that expanding the ILSVRC2013 detection set to the ILSVRC2014 set, as well as adding in additional training data from the classification task, all account for approximately 1−4%1percent41-4\\% in absolute mAP improvement for the models. For comparison, we can also attempt to quantify the effect of algorithmic innovation. The UvA team’s 2013 framework achieved 26.3%percent26.326.3\\% mAP on ILSVRC2014 data as mentioned above, and their improved method in 2014 obtained 32.0%percent32.032.0\\% mAP (Table LABEL:table:sub14). This is 5.8%percent5.85.8\\% absolute increase in mAP over just one year from algorithmic innovation alone. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_124", "text": " In summary, we conclude that the absolute 21.3%percent21.321.3\\% increase in mAP between winning entries of ILSVRC2013 (22.6%percent22.622.6\\% mAP) and of ILSVRC2014 (43.9%percent43.943.9\\% mAP) is the result of impressive algorithmic innovation and not just a consequence of increased training data. However, increasing the ISLVRC2014 object detection training dataset further is likely to produce additional improvements in detection accuracy for current algorithms. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_125", "text": " One important question to ask is whether results of different submissions to ILSVRC are statistically significantly different from each other. Given the large scale, it is no surprise that even minor differences in accuracy are statistically significant; we seek to quantify exactly how much of a difference is enough. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_126", "text": " Following the strategy employed by PASCAL VOC (Everingham et al.,, 2014), for each method we obtain a confidence interval of its score using bootstrap sampling. During each bootstrap round, we sample N𝑁N images with replacement from all the available N𝑁N test images and evaluate the performance of the algorithm on those sampled images. This can be done very efficiently by precomputing the accuracy on each image. Given the results of all the bootstrapping rounds we discard the lower and the upper α𝛼\\alpha fraction. 
The range of the remaining results represents the $1-2\\alpha$ confidence interval. We run a large number of bootstrapping rounds (from 20,000 until convergence). Table 5 shows the results of the top entries to each task of ILSVRC2012-2014. The winning methods are statistically significantly different from the other methods, even at the 99.9% level. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_127", "text": " Besides looking at just the average accuracy across hundreds of object categories and tens of thousands of images, we can also delve deeper to understand where mistakes are being made and where researchers’ efforts should be focused to expedite progress. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_128", "text": " To do so, in this section we will be analyzing an “optimistic” measurement of state-of-the-art recognition performance instead of focusing on the differences in individual algorithms. For each task and each object class, we compute the best performance of any entry submitted to any of ILSVRC2012-2014, including methods using additional training data. Since the test sets have remained the same, we can directly compare all the entries in the past three years to obtain the most “optimistic” measurement of state-of-the-art accuracy on each category. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_129", "text": " For consistency with the object detection metric (higher is better), in this section we will be using image classification and single-object localization accuracy instead of error, where $\\text{accuracy}=1-\\text{error}$. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_130", "text": " Figure 10 shows the distribution of accuracy achieved by the “optimistic” models across the object categories. The image classification model achieves 94.6% accuracy on average (or 5.4% error), but there remains a 41.0% absolute difference in accuracy between the most and least accurate object class. The single-object localization model achieves 81.5% accuracy on average (or 18.5% error), with a 77.0% range in accuracy across the object classes. The object detection model achieves 44.7% average precision, with an 84.7% range across the object classes. It is clear that the ILSVRC dataset is far from saturated: performance on many categories has remained poor despite the strong overall performance of the models. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_131", "text": " Figures 11 and 12 show the easiest and hardest classes for each task, i.e., classes with the best and worst results obtained with the “optimistic” models. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_132", "text": " For image classification, 121 out of 1000 object classes have 100% image classification accuracy according to the optimistic estimate. Figure 11 (top) shows a random set of 10 of them. They contain a variety of classes, such as mammals like “red fox” and animals with distinctive structures like “stingray”. 
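A minimal sketch of the bootstrap procedure described above, assuming per-image 0/1 correctness has already been precomputed; numpy, the seed, and the toy sample are my choices, and alpha = 0.0005 is just one value giving the 99.9% level mentioned in the text.

```python
import numpy as np

def bootstrap_interval(per_image_correct, alpha=0.0005, rounds=20_000, seed=0):
    """Resample the N test images with replacement, recompute accuracy each round,
    then discard the lower and upper alpha fractions of the rounds, leaving a
    (1 - 2*alpha) confidence interval."""
    rng = np.random.default_rng(seed)
    x = np.asarray(per_image_correct, dtype=float)   # precomputed 0/1 correctness per image
    n = len(x)
    accs = np.empty(rounds)
    for r in range(rounds):
        accs[r] = x[rng.integers(0, n, size=n)].mean()
    lo, hi = np.quantile(accs, [alpha, 1.0 - alpha])
    return lo, hi

# Example: 10,000 hypothetical test images with 94% labeled correctly.
correct = np.r_[np.ones(9_400), np.zeros(600)]
print(bootstrap_interval(correct))   # roughly (0.932, 0.948)
```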
The hardest classes in the image classification task, with accuracy as low as 59.0%percent59.059.0\\%, include metallic and see-through man-made objects, such as “hook” and “water bottle,” the material “velvet” and the highly varied scene class “restaurant.” ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_133", "text": " For single-object localization, the 10 easiest classes with 99.0−100%99.0percent10099.0-100\\% accuracy are all mammals and birds. The hardest classes include metallic man-made objects such as “letter opener” and “ladle”, plus thin structures such as “pole” and “spacebar” and highly varied classes such as “wing”. The most challenging class “spacebar” has a only 23.0%percent23.023.0\\% localization accuracy. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_134", "text": " Object detection results are shown in Figure 12. The easiest classes are living organisms such as “dog” and “tiger”, plus “basketball” and “volleyball” with distinctive shape and color, and a somewhat surprising “snowplow.” The easiest class “butterfly” is not yet perfectly detected but is very close with 92.7%percent92.792.7\\% AP. The hardest classes are as expected small thin objects such as “flute” and “nail”, and the highly varied “lamp” and “backpack” classes, with as low as 8.0%percent8.08.0\\% AP. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_135", "text": " We now take a closer look at the image properties to try to understand why current algorithms perform well on some object classes but not others. One hypothesis is that variation in accuracy comes from the fact that instances of some classes tend to be much smaller in images than instances of other classes, and smaller objects may be harder for computers to recognize. In this section we argue that while accuracy is correlated with object scale in the image, not all variation in accuracy can be accounted for by scale alone. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_136", "text": " For every object class, we compute its average scale, or the average fraction of image area occupied by an instance of the object class on the ILSVRC2012-2014 validation set. Since the images and object classes in the image classification and single-object localization tasks are the same, we use the bounding box annotations of the single-object localization dataset for both tasks. In that dataset the object classes range from “swimming trunks” with scale of 1.5%percent1.51.5\\% to “spider web” with scale of 85.6%percent85.685.6\\%. In the object detection validation dataset the object classes range from “sunglasses” with scale of 1.3%percent1.31.3\\% to “sofa” with scale of 44.4%percent44.444.4\\%. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_137", "text": " Figure 13 shows the performance of the “optimistic” method as a function of the average scale of the object in the image. Each dot corresponds to one object class. We observe a very weak positive correlation between object scale and image classification accuracy: ρ=0.14𝜌0.14\\rho=0.14. For single-object localization and object detection the correlation is stronger, at ρ=0.40𝜌0.40\\rho=0.40 and ρ=0.41𝜌0.41\\rho=0.41 respectively. It is clear that not all variation in accuracy can be accounted for by scale alone. 
Nevertheless, in the next section we will normalize for object scale to ensure that this factor is not affecting our conclusions. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_138", "text": " Besides considering image-level properties we can also observe how accuracy changes as a function of intrinsic object properties. We define three properties inspired by human vision: the real-world size of the object, whether it’s deformable within instance, and how textured it is. For each property, the object classes are assigned to one of a few bins (listed below). These properties are illustrated in Figure 1. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_139", "text": " Human subjects annotated each of the 1000 image classification and single-object localization object classes from ILSVRC2012-2014 with these properties. (Russakovsky et al.,, 2013). By construction (see Section 3.3.1), each of the 200 object detection classes is either also one of 1000 object classes or is an ancestor of one or more of the 1000 classes in the ImageNet hierarchy. To compute the values of the properties for each object detection class, we simply average the annotated values of the descendant classes. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_140", "text": " In this section we draw the following conclusions about state-of-the-art recognition accuracy as a function of these object properties: • Real-world size: XS for extra small (e.g. nail), small (e.g. fox), medium (e.g. bookcase), large (e.g. car) or XL for extra large (e.g. church) The image classification and single-object localization “optimistic” models performs better on large and extra large real-world objects than on smaller ones. The “optimistic” object detection model surprisingly performs better on extra small objects than on small or medium ones. • Deformability within instance: Rigid (e.g., mug) or deformable (e.g., water snake) The “optimistic” model on each of the three tasks performs statistically significantly better on deformable objects compared to rigid ones. However, this effect disappears when analyzing natural objects separately from man-made objects. • Amount of texture: none (e.g. punching bag), low (e.g. horse), medium (e.g. sheep) or high (e.g. honeycomb) The “optimistic” model on each of the three tasks is significantly better on objects with at least low level of texture compared to untextured objects. These and other findings are justified and discussed in detail below. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_141", "text": " We observed in Section 6.3.3 that objects that occupy a larger area in the image tend to be somewhat easier to recognize. To make sure that differences in object scale are not influencing results in this section, we normalize each bin by object scale. We discard object classes with the largest scales from each bin as needed until the average object scale of object classes in each bin across one property is the same (or as close as possible). 
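The passage above says the largest-scale classes are discarded from each bin until the bins' average scales match, but it does not spell out the exact procedure. The sketch below is one plausible, hypothetical reading (repeatedly drop the largest-scale class from whichever bin currently has the highest mean scale); the bin names, class names, scales, and tolerance are all illustrative.

```python
def normalize_bins_by_scale(bins, tol=0.05):
    """bins: {bin_name: list of (class_name, avg_scale)}. Drop the largest-scale class
    from the bin with the highest mean scale until all bin means agree to within `tol`.
    One possible reading of the normalization step, not the authors' exact procedure."""
    bins = {b: sorted(v, key=lambda cs: cs[1]) for b, v in bins.items()}
    def mean(b):
        return sum(s for _, s in bins[b]) / len(bins[b])
    while max(mean(b) for b in bins) - min(mean(b) for b in bins) > tol:
        worst = max(bins, key=mean)
        bins[worst].pop()               # classes are sorted by scale, so pop() drops the largest
        if not bins[worst]:
            raise ValueError("bin emptied before average scales matched")
    return bins

# Toy example with made-up scales: dropping "schooner" brings the two bins close together.
bins = {
    "XS": [("nail", 0.10), ("bow tie", 0.12)],
    "L":  [("car", 0.12), ("lion", 0.20), ("schooner", 0.40)],
}
print({b: [c for c, _ in v] for b, v in normalize_bins_by_scale(bins).items()})
```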
For real-world size property for example, the resulting average object scale in each of the five bins is 31.6%−31.7%percent31.6percent31.731.6\\%-31.7\\% in the image classification and single-object localization tasks, and 12.9%−13.4%percent12.9percent13.412.9\\%-13.4\\% in the object detection task.111111For rigid versus deformable objects, the average scale in each bin is 34.1%−34.2%percent34.1percent34.234.1\\%-34.2\\% for classification and localization, and 13.5%−13.7%percent13.5percent13.713.5\\%-13.7\\% for detection. For texture, the average scale in each of the four bins is 31.1%−31.3%percent31.1percent31.331.1\\%-31.3\\% for classification and localization, and 12.7%−12.8%percent12.7percent12.812.7\\%-12.8\\% for detection. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_142", "text": " Figure 14 shows the average performance of the “optimistic” model on the object classes that fall into each bin for each property. We analyze the results in detail below. Unless otherwise specified, the reported accuracies below are after the scale normalization step. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_143", "text": " To evaluate statistical significance, we compute the 95%percent9595\\% confidence interval for accuracy using bootstrapping: we repeatedly sample the object classes within the bin with replacement, discard some as needed to normalize by scale, and compute the average accuracy of the “optimistic” model on the remaining classes. We report the 95%percent9595\\% confidence intervals (CI) in parentheses. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_144", "text": " In Figure 14(top, left) we observe that in the image classification task the “optimistic” model tends to perform significantly better on objects which are larger in the real-world. The classification accuracy is 93.6%−93.9%percent93.6percent93.993.6\\%-93.9\\% on XS, S and M objects compared to 97.0%percent97.097.0\\% on L and 96.4%percent96.496.4\\% on XL objects. Since this is after normalizing for scale and thus can’t be explained by the objects’ size in the image, we conclude that either (1) larger real-world objects are easier for the model to recognize, or (2) larger real-world objects usually occur in images with very distinctive backgrounds. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_145", "text": " To distinguish between the two cases we look Figure 14(top, middle). We see that in the single-object localization task, the L objects are easy to localize at 82.4%percent82.482.4\\% localization accuracy. XL objects, however, tend to be the hardest to localize with only 73.4%percent73.473.4\\% localization accuracy. We conclude that the appearance of L objects must be easier for the model to learn, while XL objects tend to appear in distinctive backgrounds. The image background make these XL classes easier for the image-level classifier, but the individual instances are difficult to accurately localize. Some examples of L objects are “killer whale,” “schooner,” and “lion,” and some examples of XL objects are “boathouse,” “mosque,” “toyshop” and “steel arch bridge.” ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_146", "text": " In Figure 14(top,right) corresponding to the object detection task, the influence of real-world object size is not as apparent. 
One of the key reasons is that many of the XL and L object classes of the image classification and single-object localization datasets were removed in constructing the detection dataset (Section 3.3.1) since they were not basic categories well-suited for detection. There were only 3 XL object classes remaining in the dataset (“train,” “airplane” and “bus”), and none after scale normalization.We omit them from the analysis. The average precision of XS, S, M objects (44.5%percent44.544.5\\%, 39.0%percent39.039.0\\%, and 38.5%percent38.538.5\\% mAP respectively) is statistically insignificant from average precision on L objects: 95%percent9595\\% confidence interval of L objects is 37.5%−59.5%percent37.5percent59.537.5\\%-59.5\\%. This may be due to the fact that there are only 6 L object classes remaining after scale normalization; all other real-world size bins have at least 18 object classes. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_147", "text": " Finally, it is interesting that performance on XS objects of 44.5%percent44.544.5\\% mAP (CI 40.5%−47.6%percent40.5percent47.640.5\\%-47.6\\%) is statistically significantly better than performance on S or M objects with 39.0%percent39.039.0\\% mAP and 38.5%percent38.538.5\\% mAP respectively. Some examples of XS objects are “strawberry,” “bow tie” and “rugby ball.” ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_148", "text": " In Figure 14(second row) it is clear that the “optimistic” model performs statistically significantly worse on rigid objects than on deformable objects. Image classification accuracy is 93.2%percent93.293.2\\% on rigid objects (CI 92.6%−93.8%percent92.6percent93.892.6\\%-93.8\\%), much smaller than 95.7%percent95.795.7\\% on deformable ones. Single-object localization accuracy is 76.2%percent76.276.2\\% on rigid objects (CI 74.9%−77.4%percent74.9percent77.474.9\\%-77.4\\%), much smaller than 84.7%percent84.784.7\\% on deformable ones. Object detection mAP is 40.1%percent40.140.1\\% on rigid objects (CI 37.2%−42.9%percent37.2percent42.937.2\\%-42.9\\%), much smaller than 44.8%percent44.844.8\\% on deformable ones. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_149", "text": " We can further analyze the effects of deformability after separating object classes into “natural” and “man-made” bins based on the ImageNet hierarchy. Deformability is highly correlated with whether the object is natural or man-made: 0.720.720.72 correlation for image classification and single-object localization classes, and 0.610.610.61 for object detection classes. Figure 14(third row) shows the effect of deformability on performance of the model for man-made and natural objects separately. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_150", "text": " Man-made classes are significantly harder than natural classes: classification accuracy 92.8%percent92.892.8\\% (CI 92.3%−93.3%percent92.3percent93.392.3\\%-93.3\\%) for man-made versus 97.0%percent97.097.0\\% for natural, localization accuracy 75.5%percent75.575.5\\% (CI 74.3%−76.5%percent74.3percent76.574.3\\%-76.5\\%) for man-made versus 88.5%percent88.588.5\\% for natural, and detection mAP 38.7%percent38.738.7\\% (CI 35.6−41.3%35.6percent41.335.6-41.3\\%) for man-made versus 50.9%percent50.950.9\\% for natural. 
However, whether the classes are rigid or deformable within this subdivision is no longer significant in most cases. For example, the image classification accuracy is 92.3%percent92.392.3\\% (CI 91.4%−93.1%percent91.4percent93.191.4\\%-93.1\\%) on man-made rigid objects and 91.8%percent91.891.8\\% on man-made deformable objects – not statistically significantly different. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_151", "text": " There are two cases where the differences in performance are statistically significant. First, for single-object localization, natural deformable objects are easier than natural rigid objects: localization accuracy of 87.9%percent87.987.9\\% (CI 85.9%−90.1%percent85.9percent90.185.9\\%-90.1\\%) on natural deformable objects is higher than 85.8%percent85.885.8\\% on natural rigid objects – falling slightly outside the 95%percent9595\\% confidence interval. This difference in performance is likely because deformable natural animals tend to be easier to localize than rigid natural fruit. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_152", "text": " Second, for object detection, man-made rigid objects are easier than man-made deformable objects: 38.5%percent38.538.5\\% mAP (CI 35.2%−41.7%percent35.2percent41.735.2\\%-41.7\\%) on man-made rigid objects is higher than 33.0%percent33.033.0\\% mAP on man-made deformable objects. This is because man-made rigid objects include classes like “traffic light” or “car” whereas the man-made deformable objects contain challenging classes like “plastic bag,” “swimming trunks” or “stethoscope.” ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_153", "text": " Finally, we analyze the effect that object texture has on the accuracy of the “optimistic” model. Figure 14(fourth row) demonstrates that the model performs better as the amount of texture on the object increases. The most significant difference is between the performance on untextured objects and the performance on objects with low texture. Image classification accuracy is 90.5%percent90.590.5\\% on untextured objects (CI 89.3%−91.6%percent89.3percent91.689.3\\%-91.6\\%), lower than 94.6%percent94.694.6\\% on low-textured objects. Single-object localization accuracy is 71.4%percent71.471.4\\% on untextured objects (CI 69.1%−73.3%percent69.1percent73.369.1\\%-73.3\\%), lower than 80.2%percent80.280.2\\% on low-textured objects. Object detection mAP is 33.2%percent33.233.2\\% on untextured objects (CI 29.5%−35.9%percent29.5percent35.929.5\\%-35.9\\%), lower than 42.9%percent42.942.9\\% on low-textured objects. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_154", "text": " Texture is correlated with whether the object is natural or man-made, at 0.350.350.35 correlation for image classification and single-object localization, and 0.460.460.46 correlation for object detection. To determine if this is a contributing factor, in Figure 14(bottom row) we break up the object classes into natural and man-made and show the accuracy on objects with no texture versus objects with low texture. 
We observe that the model is still statistically significantly better on low-textured object classes than on untextured ones, both on man-made and natural object classes independently. (Natural object detection classes are removed from this analysis because there are only 3 and 13 natural untextured and low-textured classes respectively, and none remain after scale normalization. All other bins contain at least 9 object classes after scale normalization.) ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_155", "text": " Recent improvements in state-of-the-art accuracy on the ILSVRC dataset are easier to put in perspective when compared to human-level accuracy. In this section we compare the performance of the leading large-scale image classification method with the performance of humans on this task. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_156", "text": " To support this comparison, we developed an interface that allowed a human labeler to annotate images with up to five ILSVRC target classes. We compare human errors to those of the winning ILSVRC2014 image classification model, GoogLeNet (Section 5.1). For this analysis we use a random sample of 1500 ILSVRC2012-2014 image classification test set images. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_157", "text": " Our web-based annotation interface consists of one test set image and a list of 1000 ILSVRC categories on the side. Each category is described by its title, such as “cowboy boot.” The categories are sorted in the topological order of the ImageNet hierarchy, which places semantically similar concepts nearby in the list. For example, all motor vehicle-related classes are arranged contiguously in the list. Every class category is additionally accompanied by a row of 13 example images from the training set to allow for faster visual scanning. The user of the interface selects 5 categories from the list by clicking on the desired items. Since our interface is web-based, it allows for natural scrolling through the list, and also search by text. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_158", "text": " We found the task of annotating images with one of 1000 categories to be an extremely challenging task for an untrained annotator. The most common error that an untrained annotator is susceptible to is a failure to consider a relevant class as a possible label because they are unaware of its existence. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_159", "text": " Therefore, in evaluating the human accuracy we relied primarily on expert annotators who learned to recognize a large portion of the 1000 ILSVRC classes. During training, the annotators labeled a few hundred validation images for practice and later switched to the test set images. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_160", "text": " We report results based on experiments with two expert annotators. The first annotator (A1) trained on 500 images and annotated 1500 test images. The second annotator (A2) trained on 100 images and then annotated 258 test images. 
The average pace of labeling was approximately 1 image per minute, but the distribution is strongly bimodal: some images are quickly recognized, while some images (such as those of fine-grained breeds of dogs, birds, or monkeys) may require multiple minutes of concentrated effort. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_161", "text": " The results are reported in Table 6. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_162", "text": " Annotator A1 evaluated a total of 1500 test set images. The GoogLeNet classification error on this sample was estimated to be 6.8% (recall that the error on the full test set of 100,000 images is 6.7%, as shown in Table LABEL:table:sub14). The human error was estimated to be 5.1%. Thus, annotator A1 achieves a performance superior to GoogLeNet, by approximately 1.7%. We can analyze the statistical significance of this result under the null hypothesis that they are from the same distribution. In particular, comparing the two proportions with a z-test yields a one-sided p-value of p=0.022. Thus, we can conclude that this result is statistically significant at the 95% confidence level. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_163", "text": " Our second annotator (A2) trained on a smaller sample of only 100 images and then labeled 258 test set images. As seen in Table 6, the final classification error is significantly worse, at approximately 12.0% Top-5 error. The majority of these errors (48.8%) can be attributed to the annotator failing to spot and consider the ground truth label as an option. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_164", "text": " Thus, we conclude that a significant amount of training time is necessary for a human to achieve competitive performance on ILSVRC. However, with a sufficient amount of training, a human annotator is still able to outperform the GoogLeNet result (p=0.022) by approximately 1.7%. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_165", "text": " We also compare the prediction accuracy of the two annotators. Of a total of 204 images that both A1 and A2 labeled, 174 (85%) were correctly labeled by both A1 and A2, 19 (9%) were correctly labeled by A1 but not A2, 6 (3%) were correctly labeled by A2 but not A1, and 5 (2%) were incorrectly labeled by both. These include 2 images that we consider to be incorrectly labeled in the ground truth. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_166", "text": " In particular, our results suggest that the human annotators do not exhibit strong overlap in their predictions. We can approximate the performance of an “optimistic” human classifier by assuming an image to be correct if at least one of A1 or A2 correctly labeled the image. On this sample of 204 images, we approximate the error rate of an “optimistic” human annotator at 2.4%, compared to the GoogLeNet error rate of 4.9%. 
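The passage above reports a one-sided z-test on two error proportions (6.8% for GoogLeNet vs. 5.1% for annotator A1 on 1500 images). A minimal sketch of that test follows; the exact error counts are not given in the text, so the counts below are rounded assumptions and the resulting p-value only approximately matches the reported 0.022.

```python
from math import sqrt, erfc

def one_sided_two_proportion_ztest(err1, n1, err2, n2):
    """One-sided z-test that proportion 1 (model error) exceeds proportion 2
    (human error), using the pooled standard error under the null hypothesis."""
    p1, p2 = err1 / n1, err2 / n2
    pooled = (err1 + err2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 0.5 * erfc(z / sqrt(2))  # P(Z > z) for a standard normal
    return z, p_value

# Assumed counts on the 1500-image sample: 6.8% ~ 102 errors, 5.1% ~ 77 errors.
z, p = one_sided_two_proportion_ztest(102, 1500, 77, 1500)
print(f"z = {z:.2f}, one-sided p = {p:.3f}")  # roughly 0.02-0.03, close to the reported 0.022
```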
", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_167", "text": " We manually inspected both human and GoogLeNet errors to gain an understanding of common error types and how they compare. For purposes of this section, we only discuss results based on the larger sample of 1500 images that were labeled by annotator A1. Examples of representative mistakes are in Figure 15. The analysis and insights below were derived specifically from GoogLeNet predictions, but we suspect that many of the same errors may be present in other methods. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_168", "text": " 1. Multiple objects. Both GoogLeNet and humans struggle with images that contain multiple ILSVRC classes (usually many more than five), with little indication of which object is the focus of the image. This error is only present in the Classification setting, since every image is constrained to have exactly one correct label. In total, we attribute 24 (24%percent2424\\%) of GoogLeNet errors and 12 (16%percent1616\\%) of human errors to this category. It is worth noting that humans can have a slight advantage in this error type, since it can sometimes be easy to identify the most salient object in the image. 2. Incorrect annotations. We found that approximately 5 out of 1500 images (0.3%percent0.30.3\\%) were incorrectly annotated in the ground truth. This introduces an approximately equal number of errors for both humans and GoogLeNet. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_169", "text": " 1. Object small or thin. GoogLeNet struggles with recognizing objects that are very small or thin in the image, even if that object is the only object present. Examples of this include an image of a standing person wearing sunglasses, a person holding a quill in their hand, or a small ant on a stem of a flower. We estimate that approximately 22 (21%percent2121\\%) of GoogLeNet errors fall into this category, while none of the human errors do. In other words, in our sample of images, no image was mislabeled by a human because they were unable to identify a very small or thin object. This discrepancy can be attributed to the fact that a human can very effectively leverage context and affordances to accurately infer the identity of small objects (for example, a few barely visible feathers near person’s hand as very likely belonging to a mostly occluded quill). 2. Image filters. Many people enhance their photos with filters that distort the contrast and color distributions of the image. We found that 13 (13%percent1313\\%) of the images that GoogLeNet incorrectly classified contained a filter. Thus, we posit that GoogLeNet is not very robust to these distortions. In comparison, only one image among the human errors contained a filter, but we do not attribute the source of the error to the filter. 3. Abstract representations. GoogLeNet struggles with images that depict objects of interest in an abstract form, such as 3D-rendered images, paintings, sketches, plush toys, or statues. An example is the abstract shape of a bow drawn with a light source in night photography, a 3D-rendered robotic scorpion, or a shadow on the ground, of a child on a swing. We attribute approximately 6 (6%percent66\\%) of GoogLeNet errors to this type of error and believe that humans are significantly more robust, with no such errors seen in our sample. 4. Miscellaneous sources. 
Additional sources of error that occur relatively infrequently include extreme closeups of parts of an object, unconventional viewpoints such as a rotated image, images that can significantly benefit from the ability to read text (e.g. a featureless container identifying itself as “face powder”), objects with heavy occlusions, and images that depict a collage of multiple images. In general, we found that humans are more robust to all of these types of error. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_170", "text": " 1. Fine-grained recognition. We found that humans are noticeably worse at fine-grained recognition (e.g. dogs, monkeys, snakes, birds), even when they are in clear view. To understand the difficulty, consider that there are more than 120 species of dogs in the dataset. We estimate that 28 (37%percent3737\\%) of the human errors fall into this category, while only 7 (7%percent77\\%) of GoogLeNet errors do. 2. Class unawareness. The annotator may sometimes be unaware of the ground truth class present as a label option. When pointed out as an ILSVRC class, it is usually clear that the label applies to the image. These errors get progressively less frequent as the annotator becomes more familiar with ILSVRC classes. Approximately 18 (24%percent2424\\%) of the human errors fall into this category. 3. Insufficient training data. Recall that the annotator is only presented with 13 examples of a class under every category name. However, 13 images are not always enough to adequately convey the allowed class variations. For example, a brown dog can be incorrectly dismissed as a “Kelpie” if all examples of a “Kelpie” feature a dog with black coat. However, if more than 13 images were listed it would have become clear that a “Kelpie” may have brown coat. Approximately 4 (5%percent55\\%) of human errors fall into this category. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_171", "text": " We investigated the performance of trained human annotators on a sample of 1500 ILSVRC test set images. Our results indicate that a trained human annotator is capable of outperforming the best model (GoogLeNet) by approximately 1.7%percent1.71.7\\% (p=0.022𝑝0.022p=0.022). ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_172", "text": " We expect that some sources of error may be relatively easily eliminated (e.g. robustness to filters, rotations, collages, effectively reasoning over multiple scales), while others may prove more elusive (e.g. identifying abstract representations of objects). On the other hand, a large majority of human errors come from fine-grained categories and class unawareness. We expect that the former can be significantly reduced with fine-grained expert annotators, while the latter could be reduced with more practice and greater familiarity with ILSVRC classes. Our results also hint that human errors are not strongly correlated and that human ensembles may further reduce human error rate. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_173", "text": " It is clear that humans will soon outperform state-of-the-art ILSVRC image classification models only by use of significant effort, expertise, and time. One interesting follow-up question for future investigation is how computer-level accuracy compares with human-level accuracy on more complex image understanding tasks. 
", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_174", "text": " In this paper we described the large-scale data collection process of ILSVRC, provided a summary of the most successful algorithms on this data, and analyzed the success and failure modes of these algorithms. In this section we discuss some of the key lessons we learned over the years of ILSVRC, strive to address the key criticisms of the datasets and the challenges we encountered over the years, and conclude by looking forward into the future. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_175", "text": " The key lesson of collecting the datasets and running the challenges for five years is this: All human intelligence tasks need to be exceptionally well-designed. We learned this lesson both when annotating the dataset using Amazon Mechanical Turk workers (Section 3) and even when trying to evaluate human-level image classification accuracy using expert labelers (Section 6.4). The first iteration of the labeling interface was always bad – generally meaning completely unusable. If there was any inherent ambiguity in the questions posed (and there almost always was), workers found it and accuracy suffered. If there is one piece of advice we can offer to future research, it is to very carefully design, continuously monitor, and extensively sanity-check all crowdsourcing tasks. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_176", "text": " The other lesson, already well-known to large-scale researchers, is this: Scaling up the dataset always reveals unexpected challenges. From designing complicated multi-step annotation strategies (Section 3.2.1) to having to modify the evaluation procedure (Section 4), we had to continuously adjust to the large-scale setting. On the plus side, of course, the major breakthroughs in object recognition accuracy (Section 5) and the analysis of the strength and weaknesses of current algorithms as a function of object class properties ( Section 6.3) would never have been possible on a smaller scale. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_177", "text": " In the past five years, we encountered three major criticisms of the ILSVRC dataset and the corresponding challenge: (1) the ILSVRC dataset is insufficiently challenging, (2) the ILSVRC dataset contains annotation errors, and (3) the rules of ILSVRC competition are too restrictive. We discuss these in order. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_178", "text": " The first criticism is that the objects in the dataset tend to be large and centered in the images, making the dataset insufficiently challenging. In Sections 3.2.2 and 3.3.4 we tried to put those concerns to rest by analyzing the statistics of the ILSVRC dataset and concluding that it is comparable with, and in many cases much more challenging than, the long-standing PASCAL VOC benchmark (Everingham et al.,, 2010). ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_179", "text": " The second is regarding the errors in ground truth labeling. We went through several rounds of in-house post-processing of the annotations obtained using crowdsourcing, and corrected many common sources of errors (e.g., Appendix E). 
The major remaining source of annotation errors stem from fine-grained object classes, e.g., labelers failing to distinguish different species of birds. This is a tradeoff that had to be made: in order to annotate data at this scale on a reasonable budget, we had to rely on non-expert crowd labelers. However, overall the dataset is encouragingly clean. By our estimates, 99.7%percent99.799.7\\% precision is achieved in the image classification dataset (Sections 3.1.3 and 6.4) and 97.9%percent97.997.9\\% of images that went through the bounding box annotation system have all instances of the target object class labeled with bounding boxes (Section 3.2.1). ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_180", "text": " The third criticism we encountered is over the rules of the competition regarding using external training data. In ILSVRC2010-2013, algorithms had to only use the provided training and validation set images and annotations for training their models. With the growth of the field of large-scale unsupervised feature learning, however, questions began to arise about what exactly constitutes “outside” data: for example, are image features trained on a large pool of “outside” images in an unsupervised fashion allowed in the competition? After much discussion, in ILSVRC2014 we took the first step towards addressing this problem. We followed the PASCAL VOC strategy and created two tracks in the competition: entries using only “provided” data and entries using “outside” data, meaning any images or annotations not provided as part of ILSVRC training or validation sets. However, in the future this strategy will likely need to be further revised as the computer vision field evolves. For example, competitions can consider allowing the use of any image features which are publically available, even if these features were learned on an external source of data. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_181", "text": " Given the massive algorithmic breakthroughs over the past five years, we are very eager to see what will happen in the next five years. There are many potential directions of improvement and growth for ILSVRC and other large-scale image datasets. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_182", "text": " First, continuing the trend of moving towards richer image understanding (from image classification to single-object localization to object detection), the next challenge would be to tackle pixel-level object segmentation. The recently released large-scale COCO dataset (Lin et al., 2014b, ) is already taking a step in that direction. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_183", "text": " Second, as datasets grow even larger in scale, it may become impossible to fully annotate them manually. The scale of ILSVRC is already imposing limits on the manual annotations that are feasible to obtain: for example, we had to restrict the number of objects labeled per image in the image classification and single-object localization datasets. In the future, with billions of images, it will become impossible to obtain even one clean label for every image. Datasets such as Yahoo’s Flickr Creative Commons 100M,131313http://webscope.sandbox.yahoo.com/catalog.php?datatype=i&did=67 released with weak human tags but no centralized annotation, will become more common. 
", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_184", "text": " The growth of unlabeled or only partially labeled large-scale datasets implies two things. First, algorithms will have to rely more on weakly supervised training data. Second, even evaluation might have to be done after the algorithms make predictions, not before. This means that rather than evaluating accuracy (how many of the test images or objects did the algorithm get right) or recall (how many of the desired images or objects did the algorithm manage to find), both of which require a fully annotated test set, we will be focusing more on precision: of the predictions that the algorithm made, how many were deemed correct by humans. ", "title": "ImageNet Large Scale Visual Recognition Challenge" }, { "id": "1409.0575_all_185", "text": " We are eagerly awaiting the future development of object recognition datasets and algorithms, and are grateful that ILSVRC served as a stepping stone along this path. ", "title": "ImageNet Large Scale Visual Recognition Challenge" } ]
What does the superscript b in the sub-role s^b_i mean in the semantic structure S of a sentence?
The superscript b marks s^b_i as a sub-role in the semantic structure of the sentence [16]. Since S = (s^b_1, ..., s^b_K) is the semantic structure of the sentence, the subscript i indexes the position of that sub-role within the sequence of sub-roles that make up S [24].
[ 16, 24 ]
[ { "id": "2103.12204_all_0", "text": " Image captioning, \\ie, generating fluent and meaningful descriptions to summarize the salient contents of an image, is a classic proxy task for comprehensive scene understanding . With the release of several large scale datasets and advanced encoder-decoder frameworks, current captioning models plausibly have already achieved “super-human” performance in all accuracy-based evaluation metrics. However, many studies have indicated that these models tend to produce generic descriptions, and fail to control the caption generation process as humans, \\eg, referring to different contents of interest or descriptive patterns. In order to endow the captioning models with human-like controllability, a recent surge of efforts (16, 10, 19, 78, 48, 77, 27, 20) resort to introducing extra control signals as constraints of the generated captions, called Controllable Image Captioning (CIC). As a byproduct, the CIC models can easily generate diverse descriptions by feeding different control signals. ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_1", "text": " Early CIC works mainly focus on subjective control signals, such as sentiments , emotions (42, 22), and personality (14, 54), \\ie, the linguistic styles of sentences. Although these stylized captioning models can eventually produce style-related captions, they remain hard to control the generation process effectively and precisely. To further improve the controllability, recent CIC works gradually put a more emphasis on objective control signals. More specifically, they can be coarsely classified into two categories: 1) Content-controlled: the control signals are about the contents of interest which need to be described. As the example shown in Figure 1 (a), given the region set () as a control signal, we hope that the generated caption can cover all regions (\\ie, man, wave, and surfboard). So far, various types of content-controlled signals have been proposed, such as visual relations , object regions (16, 35), scene graphs (10, 78), and mouse trace . 2) Structure-controlled: the control signals are about the semantic structures of sentences. For instance, the length-level , part-of-speech tags , or attributes  of the sentence (cf. Figure 1 (b)) are some typical structure-controlled signals. ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_2", "text": " Nevertheless, all existing objective control signals (\\ie, both content-controlled and structure-controlled) have overlooked two indispensable characteristics of an ideal control signal towards “human-like” controllable image captioning: 1) Event-compatible: all visual contents referred to in a single sentence should be compatible with the described activity. Imaging how humans describe images — our brains always quickly structure a descriptive pattern like “sth do sth at someplace” first, and then fill in the detailed description (56, 46, 30, 71), \\ie, we have subconsciously made sure that all the mentioned entities are event-compatible (\\eg, man, wave, surfboard are all involved in activity riding in Figure 1 (a)). To further see the negative impact of dissatisfying this requirement, suppose that we deliberately utilize two more objects (hand and sky, \\ie, ) as part of the control signal, and the model generates an incoherent and illogical caption. 
2) Sample-suitable: the control signals should be suitable for the specific image sample. By “suitable”, we mean that there do exist reasonable descriptions satisfying the control signals, \\eg, a large length-level may not be suitable for an image with a very simple scene. Unfortunately, it is always very difficult to decide whether a control signal is sample-suitable in advance. For example in Figure 1 (b), although the two control signals (\\ie, length-levels 3 and 4) are quite close, the quality of respectively generated captions varies greatly. ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_3", "text": " In this paper, we propose a new event-oriented objective control signal, Verb-specific Semantic Roles (VSR), to meet both event-compatible and sample-suitable requirements simultaneously. VSR consists of a verb (\\ie, predicate ) and some user-interested semantic roles . As shown in Figure 2, the verb captures the scope of a salient activity in the image (\\eg, eating), and the corresponding semantic roles111We use PropBank-style annotations of semantic roles (\\eg, Arg0, Arg1) in all experiments (cf. Figure 1). The FrameNet-style annotations of semantic roles (\\eg, Agent) here are just for a more intuitive illustration. In the PropBank-style annotations, Arg denotes “argument”, MNR denotes “manner”, DIR denotes “directional”, and LOC denotes “location”. We leave more details in the supplementary material. (\\eg, agent, food, container, and tool) categorize how objects participate in this activity, \\ie, a child (agent) is eating (activity) a pancake (food) from a plate (container) with a fork (tool). Thus, VSR is designed to guarantee that all the mentioned entities are event-compatible. Meanwhile, unlike the existing structure-controlled signals which directly impose constraints on the generated captions, VSR only restricts the involved semantic roles, which is theoretically suitable for all the images with the activity, \\ie, sample-suitable. ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_4", "text": " In order to generate sentences with respect to the designated VSRs, we first train a grounded semantic role labeling (GSRL) model to identify and ground all entities for each role. Then, we propose a semantic structure planner (SSP) to rank the given verb and semantic roles, and output some human-like descriptive semantic structures, \\eg, Arg0readerreader{}_{\\text{reader}} – read – Arg1thingthing{}_{\\text{thing}} – LOC in Figure 1 (c). Finally, we combine the grounded entities and semantic structures, and use an RNN-based role-shift captioning model to generate the captions by sequentially focusing on different roles. ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_5", "text": " Although these are no available captioning datasets with the VSR annotations, they can be easily obtained by off-the-shelf semantic role parsing toolkits . Extensive experiments on two challenging CIC benchmarks (\\ie, COCO Entities  and Flickr30K Entities ) demonstrate that our framework can achieve better controllability given designated VSRs than several strong baselines. Moreover, our framework can also realize diverse image captioning and achieve a better trade-off between quality and diversity. 
", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_6", "text": " In summary, we make three contributions in this paper: 1. We propose a new control signal for CIC: Verb-specific Semantic Roles (VSR). To the best of our knowledge, VSR is the first control signal to consider both event-compatible and sample-suitable requirements222When using control signals extracted from GT captions, existing control signals can always meet both requirements and generate reasonable captions. However, in more general settings (\\eg, construct control signals without GT captions), the form of VSR is more human-friendly, and it is easier to construct signals which meet both requirements compared with all existing forms of control signals, which is the main advantage of VSR.. 2. We can learn human-like verb-specific semantic structures automatically, and abundant visualization examples demonstrate that these patterns are reasonable. 3. We achieve state-of-the-art controllability on two challenging benchmarks, and generate diverse captions by using different verbs, semantic roles, or structures. ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_7", "text": " Controllable Image Captioning. Compared with conventional image captioning (63, 68, 9, 25, 13), CIC is a more challenging task, which needs to consider extra constraints. Early CIC works are mostly about stylized image captioning, \\ie, constraints are the linguistic styles of sentences. According to the requirements of parallel training samples, existing solutions can be divided into two types: models using parallel stylized image-caption data (41, 11, 54, 1) or not (22, 42). Subsequently, the community gradually shifts the emphasis to controlling described contents (16, 77, 27, 10, 78, 48, 35) or structures (20, 19, 75, 76) of the sentences. In this paper, we propose a novel control signal VSR, which is the first control signal to consider both the event-compatible and sample-suitable requirements. ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_8", "text": " Diverse and Distinctive Image Captioning. Diverse image captioning, \\ie, describing the image contents with diverse wordings and rich expressions, is an essential property of human-like captioning models. Except from feeding different control signals to the CIC models, other diverse captioning methods can be coarsely grouped into four types: 1) GAN-based (17, 52, 32): they use a discriminator to force the generator to generate human-indistinguishable captions. 2) VAE-based (65, 7): the diversity obtained with them is by sampling from a learned latent space. 3) RL-based : they regard diversity as an extra reward in the RL training stage. 4) BS-based : they decode a list of diverse captions by optimizing a diversity-augmented objective. ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_9", "text": " Meanwhile, distinctive image captioning is another close research direction (18, 60, 37, 36, 64), which aims to generate discriminative and unique captions for individual images. Unfortunately, due to the subjective nature of diverse and distinctive captions, effective evaluation remains as an open problem, and several new metrics are proposed, such as SPICE-U , CIDErBtw , self-CIDEr , word recall , mBLEU . 
In this paper, we can easily generate diverse captions in both lexical-level and syntactic-level. ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_10", "text": " Semantic Roles in Images. Inspired from the semantic role labeling task  in NLP, several tasks have been proposed to label the roles of each object in an activity in an image: ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_11", "text": " Visual Semantic Role Labeling (VSRL), also called situation recognition, is a generalization of action recognition and human-object interaction, which aims to label an image with a set of verb-specific action frames . Specifically, each action frame describes details of the activity captured by the verb, and it consists of a fixed set of verb-specific semantic roles and their corresponding values. The values are the entities or objects involved in the activity and the semantic roles categorize how objects participate in the activity. The current VSRL methods (23, 73, 40, 33, 72, 57, 15) usually learn an independent action classifier first, and then model the role inter-dependency by RNNs or GNNs. ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_12", "text": " Grounded Semantic Role Labeling (GSRL), also called grounded situation recognition, builds upon the VSRL task, which requires the models not only to label a set of frames, but also to localize each role-value pair in the image (49, 55, 70, 23). In this paper, we use the GSRL model as a bridge to connect the control signals (VSR) and related regions. To the best of our knowledge, we are the first captioning work to benefit from the verb lexicon developed by linguists. ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_13", "text": " For human-like controllable image captioning, we first propose the Verb-specific Semantic Roles (VSR) as the control signal for generating customized captions. As shown in Figure 3, we formally represent a control signal VSR as: 𝒱𝒮ℛ={v,<s1,n1>,…,<sm,nm>},\\displaystyle\\begin{aligned} \\mathcal{VSR}=\\{v,<s_{1},n_{1}>,...,<s_{m},n_{m}>\\},\\\\ \\end{aligned} (1) where v𝑣v is a verb capturing the scope of a salient activity in the image (\\eg, ride), sisubscript𝑠𝑖s_{i} is a semantic role of verb v𝑣v (\\eg, LOC), and nisubscript𝑛𝑖n_{i} is the number of interested entities in the role sisubscript𝑠𝑖s_{i}. For example, for 𝒱𝒮ℛ={ride,<Arg0,1>,<Arg1,1>,<Loc,2>}\\mathcal{VSR}=\\{\\texttt{ride},<\\texttt{Arg0},\\texttt{1}>,<\\texttt{Arg1},\\texttt{1}>,<\\texttt{Loc},\\texttt{2}>\\}, we hope to generate a caption which not only focuses on describing the ride activity, but also contains one entity respectively in the role Arg0riderrider{}_{\\text{rider}} and Arg1steedsteed{}_{\\text{steed}}, and two entities in the role LOC. Thus, VSR can effectively control the amount of information carried in the whole sentence and each role, \\ie, the level of details. ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_14", "text": " It is convenient to construct VSRs automatically or manually. For the verbs, they can be accurately predicted by an off-the-shelf action recognition network with a predefined verb vocabulary. 
For the verb-specific semantic roles, they can be easily retrieved from the verb lexicon such as PropBank or FrameNet. Then, the users can easily select a subset of roles or an automatic sampling to generate a subset of roles, and randomly assign the entity number for each role. ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_15", "text": " Given an image $\bm{I}$ and a control signal $\mathcal{VSR}$, the controllable image captioning model aims to describe $\bm{I}$ by a textual sentence $\bm{y}=\{y_{1},...,y_{T}\}$, \ie, modeling the probability $p(\bm{y}|\bm{I},\mathcal{VSR})$. Inspired from the human habit of describing images, we decompose this task into two steps: structuring a descriptive pattern and filling in detailed captions: $p(\bm{y}|\bm{I},\mathcal{VSR})=p(\bm{y}|\text{pattern})\,p(\text{pattern}|\bm{I},\mathcal{VSR})$. (2) ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_16", "text": " Further, we utilize two sequences $\mathcal{S}=(s^{b}_{1},...,s^{b}_{K})$ and $\mathcal{R}=(\bm{r}_{1},...,\bm{r}_{K})$ to model the descriptive patterns. Specifically, $\mathcal{S}$ is a semantic structure of the sentence and each $s^{b}_{i}\in\mathcal{S}$ is a sub-role. By “sub-role”, we mean that each role $s_{i}\in\mathcal{VSR}$ can be divided into $n_{i}$ sub-roles, and when $n_{i}=1$, role $s_{i}$ itself is a sub-role. Thus, VSR in Figure 3 can be rewritten as Arg0, Arg1, LOC-1, and LOC-2. $\mathcal{R}$ is a sequence of visual features of the corresponding grounded entities for each sub-role in $\mathcal{S}$ (\eg, $\bm{r}_{i}$ is the features of visual regions referring to $s^{b}_{i}$). Particularly, for presentation conciseness, we regard the verb in $\mathcal{VSR}$ as a special type of sub-role, and since there are no grounded visual regions referring to the verb, we use the global image feature as the grounded region feature in $\mathcal{R}$. Meanwhile, we use $\tilde{\mathcal{R}}$ to denote a set of all elements in the sequence $\mathcal{R}$. Thus, we further decompose this task into three components: $p(\bm{y}|\bm{I},\mathcal{VSR})=\underbrace{p(\bm{y}|\mathcal{S},\mathcal{R})}_{\text{Captioner}}\,\underbrace{p(\mathcal{S},\mathcal{R}|\tilde{\mathcal{R}},\mathcal{VSR})}_{\text{SSP}}\,\underbrace{p(\tilde{\mathcal{R}}|\bm{I},\mathcal{VSR})}_{\text{GSRL}}$. (3) ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_17", "text": " In this section, we first introduce each component of the whole framework of the VSR-guided controllable image captioning model sequentially in Section 3.1 (cf. Figure 3), including a grounded semantic role labeling (GSRL) model, a semantic structure planner (SSP), and a role-shift captioning model. 
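To make the notation in Eqs. (1)-(3) concrete, here is a small sketch of how a VSR control signal could be represented and expanded into the sub-role sequence described above (e.g. turning <LOC, 2> into LOC-1 and LOC-2). The class name and the expansion helper are illustrative assumptions, not code from the paper.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class VSR:
    """A Verb-specific Semantic Role control signal: a verb plus
    (semantic role, number of interested entities) pairs, as in Eq. (1)."""
    verb: str
    roles: List[Tuple[str, int]]  # e.g. [("Arg0", 1), ("Arg1", 1), ("LOC", 2)]

def expand_sub_roles(vsr: VSR) -> List[str]:
    """Expand each role s_i with n_i entities into n_i sub-roles s^b_i.

    The verb is treated as a special sub-role (grounded to the global image
    feature in the paper); a role with n_i = 1 is itself a sub-role.
    """
    sub_roles = [vsr.verb]
    for role, n in vsr.roles:
        if n == 1:
            sub_roles.append(role)
        else:
            sub_roles.extend(f"{role}-{k}" for k in range(1, n + 1))
    return sub_roles

vsr = VSR(verb="ride", roles=[("Arg0", 1), ("Arg1", 1), ("LOC", 2)])
print(expand_sub_roles(vsr))
# ['ride', 'Arg0', 'Arg1', 'LOC-1', 'LOC-2'] -- the SSP then orders these
# sub-roles into the semantic structure S = (s^b_1, ..., s^b_K).
```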
Then, we demonstrate the details about all training objectives and the inference stage in Section 3.2, including extending from a single VSR to multiple VSRs. ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_18", "text": " Given an image 𝑰𝑰\\bm{I}, we first utilize an object detector  to extract a set of object proposals ℬℬ\\mathcal{B}. Each proposal 𝒃i∈ℬsubscript𝒃𝑖ℬ\\bm{b}_{i}\\in\\mathcal{B} is associated with a visual feature 𝒇isubscript𝒇𝑖\\bm{f}_{i} and a class label ci∈𝒞subscript𝑐𝑖𝒞c_{i}\\in\\mathcal{C}. Then, we group all these proposals into N𝑁N disjoint sets, \\ie, ℬ={ℬ1,…,ℬN}ℬsubscriptℬ1…subscriptℬ𝑁\\mathcal{B}=\\{\\mathcal{B}_{1},...,\\mathcal{B}_{N}\\}333Due to different annotation natures of specific CIC datasets, we group proposals by different principles. Details are shown in Section 4.2., and each proposal set ℬisubscriptℬ𝑖\\mathcal{B}_{i} consists of one or more proposals. In this GSRL step, we need to refer each sub-role in the 𝒱​𝒮​ℛ𝒱𝒮ℛ\\mathcal{VSR} to a proposal set in ℬℬ\\mathcal{B}. Specifically, we calculate the similarity score ai​jsubscript𝑎𝑖𝑗a_{ij} between semantic role sisubscript𝑠𝑖s_{i} and proposal set ℬjsubscriptℬ𝑗\\mathcal{B}_{j} by: 𝒒i=(𝒆vg;𝒆sig;𝒇¯),ai​j=Fa​(𝒒i,𝒇¯𝒋),formulae-sequencesubscript𝒒𝑖subscriptsuperscript𝒆𝑔𝑣subscriptsuperscript𝒆𝑔subscript𝑠𝑖bold-¯𝒇subscript𝑎𝑖𝑗subscript𝐹𝑎subscript𝒒𝑖subscriptbold-¯𝒇𝒋\\displaystyle\\bm{q}_{i}=\\left(\\bm{e}^{g}_{v};\\bm{e}^{g}_{s_{i}};\\bm{\\bar{f}}\\right),\\quad a_{ij}=F_{a}(\\bm{q}_{i},\\bm{\\bar{f}_{j}}), (4) where 𝒆vgsubscriptsuperscript𝒆𝑔𝑣\\bm{e}^{g}_{v} and 𝒆sigsubscriptsuperscript𝒆𝑔subscript𝑠𝑖\\bm{e}^{g}_{s_{i}} are the word embedding features of verb v𝑣v and semantic role sisubscript𝑠𝑖s_{i}, 𝒇¯bold-¯𝒇\\bm{\\bar{f}} and 𝒇¯𝒋subscriptbold-¯𝒇𝒋\\bm{\\bar{f}_{j}} represent the average-pooled visual features of proposal set ℬℬ\\mathcal{B} and ℬjsubscriptℬ𝑗\\mathcal{B}_{j}, (;) is a concatenation operation, and Fasubscript𝐹𝑎F_{a} is a learnable similarity function444For conciseness, we leave the details in the supplementary material. . ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_19", "text": " After obtaining the grounding similarity scores {ai​j}subscript𝑎𝑖𝑗\\{a_{ij}\\} between semantic role sisubscript𝑠𝑖s_{i} and all proposal sets {ℬj}subscriptℬ𝑗\\{\\mathcal{B}_{j}\\}, we then select the top nisubscript𝑛𝑖n_{i} proposal sets with the highest scores as the grounding results for all sub-roles of sisubscript𝑠𝑖s_{i}. ℛ~~ℛ\\mathcal{\\tilde{R}} in Eq. (3) is the set of visual features of all grounded proposal sets. ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_20", "text": " Semantic structure planner (SSP) is a hierarchical semantic structure learning model, which aims to learn a reasonable sequence of sub-roles 𝒮𝒮\\mathcal{S}. As shown in Figure 3, it consists of two subnets: an S-level SSP and an R-level SSP. ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_21", "text": " S-level SSP. The sentence-level (S-level) SSP is a coarse-grained structure learning model, which only learns a sequence of all involved general semantic roles (including the verb) in 𝒱​𝒮​ℛ𝒱𝒮ℛ\\mathcal{VSR} (\\eg, ride, Arg0riderrider{}_{\\text{rider}}, Arg1steedsteed{}_{\\text{steed}} and LOC in Figure 3). 
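A rough numpy sketch of the grounding step described above: each role builds a query from the verb and role embeddings plus the pooled proposal features, scores every proposal set, and keeps the top n_i sets for its sub-roles. The similarity below (a linear map over the concatenated query and set feature) is only a stand-in assumption; the paper says only that F_a is a learnable similarity function.

```python
import numpy as np

def ground_role(verb_emb, role_emb, set_feats, n_i, w=None):
    """Score proposal sets for one semantic role and keep the top-n_i sets.

    set_feats: (N, d) average-pooled visual features of the N proposal sets.
    """
    global_feat = set_feats.mean(axis=0)              # pooled feature over all sets
    query = np.concatenate([verb_emb, role_emb, global_feat])
    if w is None:                                     # hypothetical learned weights
        w = np.ones(query.size + set_feats.shape[1])
    scores = np.array([w @ np.concatenate([query, f]) for f in set_feats])
    top = np.argsort(-scores)[:n_i]                   # one grounded set per sub-role
    return top, scores

rng = np.random.default_rng(0)
verb_emb, role_emb = rng.normal(size=300), rng.normal(size=300)
set_feats = rng.normal(size=(12, 2048))               # 12 candidate proposal sets
top_sets, _ = ground_role(verb_emb, role_emb, set_feats, n_i=2)
print(top_sets)  # e.g. the sets assigned to the sub-roles LOC-1 and LOC-2
```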
To this end, we formulate this sentence-level structure learning as a role sequence generation task, as long as we constrain that each output role token belongs to the given role set and each role can only appear once. Specifically, we utilize a three-layer Transformer 555More comparison results between Transformer and Sinkhorn networks (43, 16) are left in supplementary material. to calucate the probability of roles p​(st|𝒱​𝒮​ℛ)𝑝conditionalsubscript𝑠𝑡𝒱𝒮ℛp(s_{t}|\\mathcal{VSR}) at each time step t𝑡t4: 𝑯𝑯\\displaystyle\\bm{H} =Transformerenc​({FCa​(𝒆vi+𝒆sii)}),absentsubscriptTransformerencsubscriptFC𝑎subscriptsuperscript𝒆𝑖𝑣subscriptsuperscript𝒆𝑖subscript𝑠𝑖\\displaystyle=\\text{Transformer}_{\\text{enc}}\\left(\\{\\text{FC}_{a}(\\bm{e}^{i}_{v}+\\bm{e}^{i}_{s_{i}})\\}\\right), (5) p​(st|𝒱​𝒮​ℛ)𝑝conditionalsubscript𝑠𝑡𝒱𝒮ℛ\\displaystyle p(s_{t}|\\mathcal{VSR}) =Transformerdec​(𝑯,𝒆s<to),absentsubscriptTransformerdec𝑯subscriptsuperscript𝒆𝑜subscript𝑠absent𝑡\\displaystyle=\\text{Transformer}_{\\text{dec}}\\left(\\bm{H},\\bm{e}^{o}_{s_{<t}}\\right), where Transformer∗ are the encoder (enc) and decoder (dec) of the standard multi-head transformer. 𝒆visubscriptsuperscript𝒆𝑖𝑣\\bm{e}^{i}_{v} and 𝒆siisubscriptsuperscript𝒆𝑖subscript𝑠𝑖\\bm{e}^{i}_{s_{i}} are the word embedding features of verb v𝑣v and semantic role sjsubscript𝑠𝑗s_{j}, respectively. FCasubscriptFC𝑎\\text{FC}_{a} is a learnable fc-layer to obtain the embedding of each input token. 𝒆s<tosubscriptsuperscript𝒆𝑜subscript𝑠absent𝑡\\bm{e}^{o}_{s_{<t}} is the sequence of embeddings of previous roles. Based on p​(st|𝒱​𝒮​ℛ)𝑝conditionalsubscript𝑠𝑡𝒱𝒮ℛp(s_{t}|\\mathcal{VSR}), we can predict a role at time step t𝑡t and obtain an initial role sequence, \\eg, Arg0riderrider{}_{\\text{rider}} – ride – Arg1steedsteed{}_{\\text{steed}} – LOC in Figure 3. ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_22", "text": " R-level SSP. The role-level (R-level) SSP is a fine-grained structure model which aims to rank all sub-roles within the same semantic role (\\eg, LOC-1 and LOC-2 are two sub-roles of role Loc in Figure 3). Since the only differences among these sub-roles are the grounded visual regions, we borrow ideas from the Sinkhorn networks (43, 16), which use a differentiable Sinkhorn operation to learn a soft permutation matrix 𝑷𝑷\\bm{P}. Specifically, for each role sisubscript𝑠𝑖s_{i} with multiple sub-roles (\\ie, ni>1subscript𝑛𝑖1n_{i}>1), we first select all the corresponding grounded proposal sets for these sub-roles, denoted as ℬ^={ℬ^1,…,ℬ^ni}^ℬsubscript^ℬ1…subscript^ℬsubscript𝑛𝑖\\mathcal{\\hat{B}}=\\{\\mathcal{\\hat{B}}_{1},...,\\mathcal{\\hat{B}}_{n_{i}}\\}. And for each proposal 𝒃∗∈ℬ^subscript𝒃^ℬ\\bm{b}_{*}\\in\\mathcal{\\hat{B}}, we encode a feature vector 𝒛∗=(𝒛∗v;𝒛∗si;𝒛∗l)subscript𝒛subscriptsuperscript𝒛𝑣subscriptsuperscript𝒛subscript𝑠𝑖subscriptsuperscript𝒛𝑙\\bm{z}_{*}=(\\bm{z}^{v}_{*};\\bm{z}^{s_{i}}_{*};\\bm{z}^{l}_{*}), where 𝒛∗vsubscriptsuperscript𝒛𝑣\\bm{z}^{v}_{*} is a transformation of its visual feature 𝒇∗subscript𝒇\\bm{f}_{*}, 𝒛∗sisubscriptsuperscript𝒛subscript𝑠𝑖\\bm{z}^{s_{i}}_{*} is the word embedding feature of the semantic role sisubscript𝑠𝑖s_{i}, and 𝒛∗lsubscriptsuperscript𝒛𝑙\\bm{z}^{l}_{*} is a 4-d encoding of the spatial position of proposal 𝒃∗subscript𝒃\\bm{b}_{*}. 
Then, we transform each feature $\bm{z}_{*}$ into $n_{i}$-d, and average-pool all features among the same proposal set, \ie, we can obtain an $n_{i}$-d feature for each $\mathcal{\hat{B}}_{i}$. We concatenate all these features to get an $n_{i}\times n_{i}$ matrix $\bm{Z}$. Finally, we use the Sinkhorn operation to obtain the soft permutation matrix $\bm{P}$: $\bm{P}=\text{Sinkhorn}(\bm{Z})$. (6) ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_23", "text": " After the two SSP subnets (\ie, S-level and R-level), we can obtain the semantic structure $\mathcal{S}$ (cf. Eq. (3)). Based on the sequence of $\mathcal{S}$ and the set of proposal features $\tilde{\mathcal{R}}$ from the GSRL model, we re-rank $\tilde{\mathcal{R}}$ based on $\mathcal{S}$ and obtain $\mathcal{R}$. ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_24", "text": " Given the semantic structure sequence $\mathcal{S}=(s^{b}_{1},...,s^{b}_{K})$ and corresponding proposal feature sequence $\mathcal{R}=(\bm{r}_{1},...,\bm{r}_{K})$, we utilize a two-layer LSTM to generate the final caption $\bm{y}$. At each time step, the model focuses on one specific sub-role $s^{b}_{t}$ and its grounded region set $\bm{r}_{t}$, and then generates the word $y_{t}$. Therefore, we take inspirations from previous CIC methods (16, 10), and predict two distributions simultaneously: $p(g_{t}|\mathcal{S},\mathcal{R})$ for controlling the shift of sub-roles, and $p(y_{t}|\mathcal{S},\mathcal{R})$ to predict the distribution of a word. ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_25", "text": " As for the role-shift, we use an adaptive attention mechanism to predict the probability of shifting: $\alpha^{g}_{t},\bm{\alpha}^{r}_{t},\bm{sr}^{g}_{t}=\text{AdaptiveAttn}_{a}(\bm{x}_{t},\bm{r}_{t})$, (7) where $\text{AdaptiveAttn}_{a}$ is an adaptive attention network, $\bm{x}_{t}$ is the input query for attention, $\bm{sr}^{g}_{t}$ is a sential vector, and $\alpha^{g}_{t}$ and $\bm{\alpha}^{r}_{t}$ are the attention weights for the sential vector and region features, respectively. We directly use attention weight $\alpha^{g}_{t}$ as the probability of shifting sub-roles, \ie, $p(g_{t}|\mathcal{S},\mathcal{R})=\alpha^{g}_{t}$. 
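For reference, the Sinkhorn operation used in Eq. (6) above can be sketched as repeated row and column normalization of the exponentiated score matrix. The iteration count and temperature are hyperparameters not specified in this passage, so the values below are assumptions.

```python
import numpy as np

def sinkhorn(Z, n_iters=20, tau=1.0):
    """Approximate a doubly-stochastic (soft permutation) matrix P from scores Z
    by alternately normalizing the rows and columns of exp(Z / tau)."""
    P = np.exp(Z / tau)
    for _ in range(n_iters):
        P = P / P.sum(axis=1, keepdims=True)  # row normalization
        P = P / P.sum(axis=0, keepdims=True)  # column normalization
    return P

Z = np.array([[2.0, 0.1, 0.3],
              [0.2, 1.5, 0.4],
              [0.1, 0.3, 1.8]])
P = sinkhorn(Z)
print(np.round(P, 2))    # rows and columns each sum to ~1
print(P.argmax(axis=1))  # a hard ordering of the sub-roles can be read off via argmax
```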
Based on probability p​(gt|𝒮,ℛ)𝑝conditionalsubscript𝑔𝑡𝒮ℛp(g_{t}|\\mathcal{S},\\mathcal{R}), we can sample a gate value gj∈{0,1}subscript𝑔𝑗01g_{j}\\in\\{0,1\\}, and the focused sub-role at time step t𝑡t is: stb←𝒮​(i),where​i=min⁡(1+∑j=1t−1gj,K).formulae-sequence←subscriptsuperscript𝑠𝑏𝑡𝒮delimited-()𝑖where𝑖1subscriptsuperscript𝑡1𝑗1subscript𝑔𝑗𝐾\\displaystyle s^{b}_{t}\\leftarrow\\mathcal{S}(i),\\text{where}\\;i=\\min\\left(1+\\textstyle{\\sum}^{t-1}_{j=1}g_{j},K\\right). (8) Due to the special nature of sub-role “verb”, we fix gt+1=1subscript𝑔𝑡11g_{t+1}=1 when stbsubscriptsuperscript𝑠𝑏𝑡s^{b}_{t} is the verb. ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_26", "text": " For each sub-role stbsubscriptsuperscript𝑠𝑏𝑡s^{b}_{t}, we use the corresponding proposal set features 𝒓tsubscript𝒓𝑡\\bm{r}_{t} and a two-layer LSTM to generate word ytsubscript𝑦𝑡y_{t}: 𝒉t1subscriptsuperscript𝒉1𝑡\\displaystyle\\bm{h}^{1}_{t} =LSTM1​(𝒉t−11,{yt−1,𝒇¯,𝒉t−12}),absentsubscriptLSTM1subscriptsuperscript𝒉1𝑡1subscript𝑦𝑡1bold-¯𝒇subscriptsuperscript𝒉2𝑡1\\displaystyle=\\text{LSTM}_{1}\\left(\\bm{h}^{1}_{t-1},\\{y_{t-1},\\bm{\\bar{f}},\\bm{h}^{2}_{t-1}\\}\\right), (9) 𝒉t2subscriptsuperscript𝒉2𝑡\\displaystyle\\bm{h}^{2}_{t} =LSTM2​(𝒉t−12,{𝒉t1,𝒄t}),absentsubscriptLSTM2subscriptsuperscript𝒉2𝑡1subscriptsuperscript𝒉1𝑡subscript𝒄𝑡\\displaystyle=\\text{LSTM}_{2}\\left(\\bm{h}^{2}_{t-1},\\{\\bm{h}^{1}_{t},\\bm{c}_{t}\\}\\right), ytsubscript𝑦𝑡\\displaystyle y_{t} ∼p​(yt|𝒮,ℛ)=FCb​(𝒉t2),similar-toabsent𝑝conditionalsubscript𝑦𝑡𝒮ℛsubscriptFC𝑏subscriptsuperscript𝒉2𝑡\\displaystyle\\sim p(y_{t}|\\mathcal{S},\\mathcal{R})=\\text{FC}_{b}(\\bm{h}^{2}_{t}), where 𝒉t1subscriptsuperscript𝒉1𝑡\\bm{h}^{1}_{t} and 𝒉t2subscriptsuperscript𝒉2𝑡\\bm{h}^{2}_{t} are hidden states of the first- and second-layer LSTM (\\ie, LSTM1 and LSTM2), FCbsubscriptFC𝑏\\text{FC}_{b} is a learnable fc-layer, and 𝒄tsubscript𝒄𝑡\\bm{c}_{t} is a context vector. To further distinguish the textual and visual words, we use another adaptive attention network to obtain the context vector 𝒄tsubscript𝒄𝑡\\bm{c}_{t}4: αtv,𝜶tr,𝒔​𝒓tvsubscriptsuperscript𝛼𝑣𝑡subscriptsuperscript𝜶𝑟𝑡𝒔subscriptsuperscript𝒓𝑣𝑡\\displaystyle\\alpha^{v}_{t},\\bm{\\alpha}^{r}_{t},\\bm{sr}^{v}_{t} =AdaptiveAttnb​(𝒙t,𝒓t),absentsubscriptAdaptiveAttn𝑏subscript𝒙𝑡subscript𝒓𝑡\\displaystyle=\\text{AdaptiveAttn}_{b}(\\bm{x}_{t},\\bm{r}_{t}), (10) 𝒄tsubscript𝒄𝑡\\displaystyle\\bm{c}_{t} =αtv⋅𝒔​𝒓tv+∑i𝜶t,ir⋅𝒓t,i,absent⋅subscriptsuperscript𝛼𝑣𝑡𝒔subscriptsuperscript𝒓𝑣𝑡subscript𝑖⋅subscriptsuperscript𝜶𝑟𝑡𝑖subscript𝒓𝑡𝑖\\displaystyle=\\alpha^{v}_{t}\\cdot\\bm{sr}^{v}_{t}+\\textstyle{\\sum}_{i}\\bm{\\alpha}^{r}_{t,i}\\cdot\\bm{r}_{t,i}, where 𝒙tsubscript𝒙𝑡\\bm{x}_{t} is the query for adaptive attention (\\ie, the input of the LSTM1subscriptLSTM1\\text{LSTM}_{1}), 𝒔​𝒓tv𝒔subscriptsuperscript𝒓𝑣𝑡\\bm{sr}^{v}_{t} is a sential vector, and αtvsubscriptsuperscript𝛼𝑣𝑡\\alpha^{v}_{t} and 𝜶trsubscriptsuperscript𝜶𝑟𝑡\\bm{\\alpha}^{r}_{t} are the attention weights for the sential vector and region features. ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_27", "text": " Training Stage. In the training stage, we train the three components (GSRL, SSP and captioning model) separately: ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_28", "text": " Training objective of GSRL. 
For the GSRL model, we use a binary cross-entropy (BCE) loss between the predicted similarity scores a^i​jsubscript^𝑎𝑖𝑗\\hat{a}_{ij} and its ground truth ai​j∗subscriptsuperscript𝑎𝑖𝑗a^{*}_{ij} as the training loss: LGSRL=∑i​jBCE​(a^i​j,ai​j∗).subscript𝐿GSRLsubscript𝑖𝑗BCEsubscript^𝑎𝑖𝑗subscriptsuperscript𝑎𝑖𝑗\\displaystyle L_{\\text{GSRL}}=\\textstyle{\\sum}_{ij}\\text{BCE}(\\hat{a}_{ij},a^{*}_{ij}). (11) ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_29", "text": " Training objective of SSP. For S-level SSP, we use a cross-entropy (XE) loss between prediction s^tsubscript^𝑠𝑡\\hat{s}_{t} and its ground truth st∗subscriptsuperscript𝑠𝑡s^{*}_{t} as the training objective. For R-level SSP, we use a mean square (MSE) loss between prediction 𝑷^tsubscriptbold-^𝑷𝑡\\bm{\\hat{P}}_{t} and its ground truth 𝑷∗tsubscriptsuperscript𝑷𝑡\\bm{P^{*}}_{t} as the training objective: LSSPS=∑tXE​(s^t,st∗),LSSPR=∑t𝟏(nt>1)​MSE​(𝑷^t,𝑷∗t),formulae-sequencesubscriptsuperscript𝐿𝑆SSPsubscript𝑡XEsubscript^𝑠𝑡subscriptsuperscript𝑠𝑡subscriptsuperscript𝐿𝑅SSPsubscript𝑡subscript1subscript𝑛𝑡1MSEsubscriptbold-^𝑷𝑡subscriptsuperscript𝑷𝑡\\displaystyle L^{S}_{\\text{SSP}}=\\textstyle{\\sum}_{t}\\text{XE}(\\hat{s}_{t},s^{*}_{t}),L^{R}_{\\text{SSP}}=\\textstyle{\\sum}_{t}\\mathbf{1}_{(n_{t}>1)}\\text{MSE}(\\bm{\\hat{P}}_{t},\\bm{P^{*}}_{t}), (12) where 𝟏(nt>1)subscript1subscript𝑛𝑡1\\mathbf{1}_{(n_{t}>1)} is an indicator function, being 1 if nt>1subscript𝑛𝑡1n_{t}>1 and 0 otherwise. ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_30", "text": " Training objective of captioning model. We follow the conventions of previous captioning works and use a two-stage training scheme: XE and RL stages. In the XE stage, we use an XE loss between predicted words and ground truth words as the training loss. In the RL stage, we use a self-critical baseline . At each step, we sample from p​(yt|𝒮,ℛ)𝑝conditionalsubscript𝑦𝑡𝒮ℛp(y_{t}|\\mathcal{S},\\mathcal{R}) and p​(gt|𝒮,ℛ)𝑝conditionalsubscript𝑔𝑡𝒮ℛp(g_{t}|\\mathcal{S},\\mathcal{R}) to obtain the next word yt+1subscript𝑦𝑡1y_{t+1} and sub-role st+1bsubscriptsuperscript𝑠𝑏𝑡1s^{b}_{t+1}. Then we calcuate the reward r​(𝒚s)𝑟superscript𝒚𝑠r(\\bm{y}^{s}) of the sampled sentence 𝒚ssuperscript𝒚𝑠\\bm{y}^{s}. Baseline b𝑏b is the reward of the greedily generated sentence. Thus, the gradient expression of the training loss is: ∇θL=−(r​(𝒚s)−b)​(∇θlog⁡p​(𝒚s)+∇θlog⁡p​(𝒈s)),subscript∇𝜃𝐿𝑟superscript𝒚𝑠𝑏subscript∇𝜃𝑝superscript𝒚𝑠subscript∇𝜃𝑝superscript𝒈𝑠\\nabla_{\\theta}L=-(r(\\bm{y}^{s})-b)(\\nabla_{\\theta}\\log p(\\bm{y}^{s})+\\nabla_{\\theta}\\log p(\\bm{g}^{s})), (13) where 𝒈ssuperscript𝒈𝑠\\bm{g}^{s} is the sequence of role-shift gates. ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_31", "text": " Inference. In testing stage, given an image and one 𝒱​𝒮​ℛ𝒱𝒮ℛ\\mathcal{VSR}, we sequentially use the GSRL, SSP, and captioning model to generate the final captions. Meanwhile, our framework can be easily extended from one 𝒱​𝒮​ℛ𝒱𝒮ℛ\\mathcal{VSR} to multiple 𝒱​𝒮​ℛ​s𝒱𝒮ℛ𝑠\\mathcal{VSR}s as the control signal. Taking an example of two 𝒱​𝒮​ℛ​s𝒱𝒮ℛ𝑠\\mathcal{VSR}s, we first use GSRL and SSP to obtain semantic structures and grounded regions features: (𝒮a,ℛa)superscript𝒮𝑎superscriptℛ𝑎(\\mathcal{S}^{a},\\mathcal{R}^{a}) and (𝒮b,ℛb)superscript𝒮𝑏superscriptℛ𝑏(\\mathcal{S}^{b},\\mathcal{R}^{b}). 
Then, as shown in Figure 4, we merge them by two steps4: (a) find the sub-roles in both 𝒮asuperscript𝒮𝑎\\mathcal{S}^{a} and 𝒮bsuperscript𝒮𝑏\\mathcal{S}^{b} which refer to the same visual regions (\\eg, s1asubscriptsuperscript𝑠𝑎1s^{a}_{1} and s1bsubscriptsuperscript𝑠𝑏1s^{b}_{1} refer to the same proposal set); (b) insert all other sub-roles between the nearest two selected sub-roles (\\eg, s2∗subscriptsuperscript𝑠2s^{*}_{2} are still between s1∗subscriptsuperscript𝑠1s^{*}_{1} and s3∗subscriptsuperscript𝑠3s^{*}_{3}). Concerning the order of sub-roles from different verbs, we follow the rank of two verbs (\\eg, s2asubscriptsuperscript𝑠𝑎2s^{a}_{2} is in front of s2bsubscriptsuperscript𝑠𝑏2s^{b}_{2}). ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_32", "text": " Flickr30K Entities . It builds upon the Flickr30K  dataset, by manually grounding each noun phrase in the descriptions with one or more visual regions. It consists of 31,000 images, and each image is associated with five captions. We use the same splits as  in our experiments. ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_33", "text": " COCO Entities . It builds upon the COCO  dataset which consists of 120,000 images and each image is annotated with five captions. Different from Flickr30K Entities where all grounding entities are annotated by humans, all annotations in COCO Entities are detected automatically. Especially, they align each entity to all the detected proposals with the same object class. ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_34", "text": " Although we only assume that there exists at least one verb (\\ie, activity) in each image; unfortunately, there are still a few samples (\\ie, 3.26% in COCO Entities and 0.04% in Flickr30K Entities) having no verbs in their captions. We use the same split as  and further drop the those samples with no verb in the training and testing stages4. We will try to cover these extreme cases and leave it for future work. ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_35", "text": " Proposal Generation and Grouping. We utilize a Faster R-CNN  with ResNet-101  to obtain all proposals for each image. Especially, we use the model released by , which is finetuned on VG dataset . For COCO Entities, since the “ground truth” annotations for each noun phrase are the proposals with the same class, we group the proposals by their detected class labels. But for Flickr30K Entities, we directly regard each proposal as a proposal set. ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_36", "text": " VSR Annotations. Since there are no ground truth semantic role annotations for CIC datasets, we use a pretrained SRL tool  to annotate verbs and semantic roles for each caption, and regard them as ground truth annotations. For each detected verb, we convert it into its base form and build a verb dictionary for each dataset. The dictionary sizes for COCO and Flickr30K are 2,662 and 2,926, respectively. There are a total of 24 types of semantic roles for all verbs. ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_37", "text": " Experimental Settings. 
For the S-level SSP, the head number of multi-head attention is set to 8, and the hidden size of the transformer is set to 512. The length of the transformer is set to 10. For the R-level SSP, we set the maximum number of entities for each role to 10. For the RL training of the captioning model, we use CIDEr-D  score as the training reward. Due to the limited space, we leave more detailed parameter settings in the supplementary material. ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_38", "text": " Settings. To evaluate the controllability of proposed framework, we followed the conventions of prior CIC works (16, 10, 78), and utilized the VSR aligned with ground truth captions as the control signals. Specifically, we compared the proposed framework with several carefully designed baselines666All baselines use the same visual regions as models with VSRs.: 1) C-LSTM: It is a Controllable LSTM model . Given the features of all grounded visual regions, it first averages all region features, and then uses an LSTM to generate the captions. 2) C-UpDn: It is a Controllable UpDn model , which uses an adaptive attention to generate the captions. 3) SCT : It regards the set of visual regions as a control signal, and utilizes a chunk-shift captioning model to generate the captions. 4) Ours w/o verb: We ablate our model by removing the verb information in both the SSP and captioning model. 5) Ours (oracle verb): It is an ideal situation, where the captioning model directly outputs the oracle format of the verb when the attending role is the verb. ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_39", "text": " Evaluation Metrics. To evaluate the quality of the generated captions, we use five accuracy-based metrics, including BLEU-4 (B4) , METEOR (M) , ROUGE (R) , CIDEr-D (C) , and SPICE (S) . Particularly, we evaluate the generated captions against the single ground truth caption. We also propose a new recall-based metric to evaluate whether the roles of the generated sentence are consistent with the ground truth caption (\\ie, VSR). It measures the recall rate of the verb, semantic roles, and ordered role pairs, which are denoted as RVV{}_{\\text{V}}, RSR1SR1{}_{\\text{SR1}} and RSR2SR2{}_{\\text{SR2}}, respectively. ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_40", "text": " Quantitative Results. The quantitative results are reported in Table 1. From Table 1, we can observe that our framework can achieve the best performance over almost all metrics and benchmarks. By comparing the two different proposal settings (\\ie, GSRL and GT), we can find that the accuracy of GSRL is a major bottleneck of the whole framework. Meanwhile, the ablative model (Ours w/o verb) can only achieve slightly better performance than baseline SCT and much worse performance than our full model, which reflects the importance of the verb in semantic structure learning and caption generation. ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_41", "text": " Visualizations. In Figure 6, we illustrate some examples of the generated captions. 
We can observe that our framework always learns a human-like semantic structure based on the VSR and grounded visual regions (\\eg, Arg1thingthing{}_{\\text{thing}} – sit – Arg2positionposition{}_{\\text{position}} – LOC – MNR). According to the semantic structures, the captioning model can generate near-perfect descriptions. As a by-product, a well-trained SSP can automatically produce several verb-specific semantic structures for a set of user-interested roles, and we show some examples in Figure 6. For each verb and role set, we illustrate the top two structures by using beam search. Particularly, we are surprised to find that we can even learn some structures that never appear in original datasets (the blue tick ones). ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_42", "text": " One of the well-known advantages of controllable image captioning is the ability to generate diverse image captions by feeding different control signals. Thus, we also evaluate the diversity of the captions generated by our framework. ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_43", "text": " Settings. We evaluated the quality of diverse captions in two settings: 1) Given a VSR and grounded visual regions of each role aligned with the ground truth caption, we first use an SSP to select two semantic structures, and then respectively generate two diverse captions. For fair comparisons, we utilize the same set of visual regions on two strong baselines: a) BS: an UpDn model uses beam search to produce two captions, and b) SCT: an SCT model takes a permutation of all region sets to generate two captions. 2) For each verb, we can randomly sample a subset of all semantic roles to construct new VSRs. Specifically, we sample two more sets of semantic roles, and generate two diverse captions for each role set following the same manner. ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_44", "text": " Evaluation Metrics. We used two types of metrics to evaluate the diverse captions: 1) Accuracy-based: we followed the conventions of the previous works (16, 20, 65) and reported the best-1 accuracy, \\ie, the generated caption with the maximum score for each metric is chosen. Analogously, we evaluate the generated captions against the single ground truth caption. 2) Diversity-based: we followed  and used two metrics which only focus on the language similarity: Div-n (D-n) (4, 20) and self-CIDEr (s-C) . ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_45", "text": " Quantitative Results. The quantitative results are reported in Table 2. From Table 2, we can observe that the diverse captions generated by our framework in both two settings have much higher accuracy (\\eg, CIDEr 267.3 vs. 222.5 in SCT), and that the diversity is slightly behind SCT (\\eg, self-CIDEr 67.0 vs. 69.1 in SCT). This is because SCT generates captions by randomly shuffling regions. Instead, we tend to learn more reasonable structures. Thus, we can achieve much higher results on accuracy, \\ie, our method can achieve a better trade-off between quality and diversity on diverse image captioning than the two strong baselines. ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_46", "text": " Visualizations. 
We further illustrate the generated captions of two images with different VSRs in Figure 7. The captions are generated effectively according to the given VSR, and the diversity of VSR leads to significant diverse captions. ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_47", "text": " In this paper, we argued that all existing objective control signals for CIC have overlooked two indispensable characteristics: event-compatible and sample-suitable. To this end, we proposed a novel control signal called VSR. VSR consists of a verb and several semantic roles, \\ie, all components are guaranteed to be event-compatible. Meanwhile, VSR only restricts the involved semantic roles, which is also sample-suitable for all the images containing the activity. We have validated the effectiveness of VSR through extensive experiments. Moving forward, we will plan to 1) design a more effective captioning model to benefit more from the VSR signals; 2) extend VSR to other controllable text generation tasks, \\eg, video captioning ; 3) design a more general framework to cover the images without verbs. ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" }, { "id": "2103.12204_all_48", "text": " Acknowledgements. This work was supported by the National Natural Science Foundation of China (U19B2043,61976185), Zhejiang Natural Science Foundation (LR19F020002), Zhejiang Innovation Foundation (2019R52002), and Fundamental Research Funds for Central Universities. ", "title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles" } ]
What value of the momentum coefficient (τ) makes the BUIR model perform best?
The model performs best when the momentum coefficient τ is greater than or equal to 0.9 and smaller than 1 [32].
[ 32 ]
[ { "id": "2105.06323_all_0", "text": " Over the past decade, one-class collaborative filtering (OCCF) problems (Pan et al., 2008; Hu et al., 2008) have been extensively researched to accurately infer a user’s preferred items, particularly for the recommender systems where only the users’ implicit feedback on items are observed (e.g., click, purchase, or browsing history). This problem has remained challenging due to an extreme sparseness of such implicit feedback (i.e., most users have interacted with only a few items among numerous items), and also the non-existence of the negative labels for user-item interactions (i.e., observed feedback is expressions of positive interactions). Precisely, the goal of OCCF is to identify the most likely positive user-item interactions among a huge amount of unobserved interactions, by using only a small number of observed (positively-labeled) interactions. ", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_1", "text": " The most dominant approach to the OCCF problem is discriminative modeling (Rendle et al., 2009; Hsieh et al., 2017; He et al., 2017; Li et al., 2020; Kim et al., 2019; Wang et al., 2019), which explicitly aims to distinguish positive user-item interactions from the negative counterparts. They define the interaction score indicating how likely each user interacts with each item, based on the similarity (e.g., inner product) between the representation of a user and an item. From matrix factorization (Hu et al., 2008; Rendle et al., 2009) to deep neural networks (He et al., 2017; Wang et al., 2019), a variety of techniques have been studied to effectively model this score. Then, they optimize the scores by using the pointwise prediction loss (Hu et al., 2008; He et al., 2017) or the pairwise ranking loss (Rendle et al., 2009; Hsieh et al., 2017) to discriminate between positive and negative interactions. ", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_2", "text": " However, since the negative interactions are not available in the OCCF problem, previous discriminative methods assume that all unobserved interactions are negative. In other words, for each user, the items that have not been interacted yet are regarded to be less preferred to positive items. In this sense, they either use all unobserved user-item interactions as negative or adopt a negative sampling, which randomly samples unobserved user-item interactions in a stochastic manner to alleviate the computational burden. For better recommendation performance and faster convergence, advanced negative sampling strategies (Rendle and Freudenthaler, 2014; Ding et al., 2019) are also proposed to sample from non-uniform distributions. ", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_3", "text": " Nevertheless, the negative sampling approach has critical limitations in the following aspects. First, the underlying assumption about negative interactions becomes less valid as user-item interactions get sparser. This is because as fewer positive interactions are observed, the number of ”positive but unobserved” interactions increases, which consequently makes it even harder to sample correct negative ones. Such uncertainty of supervision eventually degrades the performance for top-K𝐾K recommendation. 
Second, the convergence speed and the final performance depend on the specific choice of distributions for negative sampling. For example, sampling negative pairs from a non-uniform distribution (Rendle and Freudenthaler, 2014; Ding et al., 2019) (e.g., the multinomial distribution which models the probability of each interaction being actually negative) can improve the final performance, but inevitably incurs high computational costs, especially when a lot of users and items should be considered. ", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_4", "text": " As a solution to the aforementioned limitations, this paper proposes a novel OCCF framework, named as BUIR, which does not require the negative sampling at all for training the model. The main idea is, given a positive user-item interaction (u𝑢u, v𝑣v), to make representations for u𝑢u and v𝑣v similar to each other, in order to encode the preference information into the representations. However, a naive end-to-end learning framework that guides positive user-item pairs to be similar to each other without any negative supervision can easily converge to a collapsed solution – the encoder network outputs the same representations for all the users and items. ", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_5", "text": " We argue that the above collapsed solution is incurred by the simultaneous optimization of u𝑢u and v𝑣v within the end-to-end learning framework of a single encoder. Hence, we instead adopt the student-teacher-like network (Tarvainen and Valpola, 2017; Grill et al., 2020) in which only the student’s output u𝑢u (and v𝑣v) is optimized to predict the target v𝑣v (and u𝑢u) presented by the teacher. Specifically, BUIR directly bootstraps111In this paper, the term “bootstrapping” is not used in the statistical meaning, but in the idiomatic meaning (Grill et al., 2020). Strictly speaking, it refers to using estimated values (i.e., the output of networks) for estimating its target values, which serve as supervision for the update. For instance, semi-supervised learning based on predicted pseudo-labels (Tarvainen and Valpola, 2017) also can be thought as a bootstrapping method. the representations of users and items by employing two distinct encoder networks, referred to as online encoder and target encoder. The high-level idea is training only the online encoder for the prediction task between u𝑢u and v𝑣v, where the target for its prediction is provided by the target encoder. That is, the online encoder is optimized so that its user (and item) vectors get closer to the item (and user) vectors computed by the target encoder. At the same time, the target encoder is updated based on momentum-based moving average (Tarvainen and Valpola, 2017; He et al., 2020b; Grill et al., 2020) to slowly approximate the online encoder, which encourages to provide enhanced representations as the target for the online encoder. By doing so, the online encoder can capture the positive relationship between u𝑢u and v𝑣v into the representations, while preventing the model from collapsing to the trivial solution without explicitly using any negative interactions for the optimization. 
", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_6", "text": " Furthermore, we introduce a stochastic data augmentation technique to relieve the data sparsity problem in our framework. Motivated by the recent success of self-supervised learning in various domains (Chen et al., 2020; Devlin et al., 2019), we exploit augmented views of an input interaction, which are generated based on the neighborhood information of each user and item (i.e., the set of the items interacted with a user, and the users interacted with an item). The stochastic augmentation is applied to positive user-item pairs when they are passed to the encoder, so as to produce the different views of the pairs. To be precise, by making our encoder use a random subset of a user’s (and item’s) neighbors for the input features, it produces a similar effect to increasing the number of positive pairs from the data itself without any human intervention. In the end, BUIR is allowed to learn various views of each positive user-item pair. ", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_7", "text": " Our extensive evaluation on real-world implicit feedback datasets shows that BUIR consistently performs the best for top-K𝐾K recommendation among a wide range of OCCF methods. In particular, the performance improvement becomes more significant in sparser datasets, with the help of utilizing augmented views of positive interactions as well as eliminating the effect of uncertain negative interactions. In addition, comparison results on a downstream task, which classifies the items into their category, support that BUIR learns more effective representations than other OCCF baselines. ", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_8", "text": " One-class collaborative filtering (OCCF) was firstly introduced to handle the real-world recommendation scenario where only positive user-item interaction can be labeled (Pan et al., 2008; Hu et al., 2008) as a form of users’ implicit feedback on items. That is, only the set of positive user-item pairs, denoted by ℛℛ\\mathcal{R}, is given for training the model. The main challenge of OCCF is to find out the most likely positive interactions among a large number of unobserved user-item pairs in which both positive and negative interactions are mixed together. To handle the absence of negatively-labeled interactions, most existing methods have either treated all unobserved user-item pairs as negative, or sampled some of them (He et al., 2017), assuming that the items that have not been interacted yet are less preferred to positive items. ", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_9", "text": " To be specific, discriminative methods (Rendle et al., 2009; Wu et al., 2016; Hsieh et al., 2017; He et al., 2017; Li et al., 2020; Kim et al., 2019; Wang et al., 2019) train their model so that it can differentiate the scores between positive and negative interactions. Pairwise learning, which is the most popular approach to personalized ranking, explicitly utilizes the pairs of positive and negative interactions for training. Formally, the pairwise ranking loss optimizes the similarity for a positive interaction to become larger than that for a negative one as follows. 
(1) $\\mathcal{L}=-\\sum_{(u,v^{p},v^{n})\\in\\mathcal{O}}\\phi\\big(sim(u,v^{p})>sim(u,v^{n})\\big),$ where $\\mathcal{O}=\\{(u,v^{p},v^{n})\\,|\\,(u,v^{p})\\in\\mathcal{R},(u,v^{n})\\notin\\mathcal{R}\\}$, and $\\phi$ is a scoring function to facilitate the optimization. For example, Bayesian personalized ranking (He and McAuley, 2016; Kim et al., 2019; Wang et al., 2019) defines the similarity of a user and an item by the inner product of their representations, and collaborative metric learning (Hsieh et al., 2017; Park et al., 2018; Li et al., 2020) directly learns the latent space by modeling their similarity as the Euclidean distance. However, all these methods obtain the negative interactions from unobserved user-item pairs, thus the convergence speed and final performance largely depend on the negative sampling distribution (Rendle and Freudenthaler, 2014). ", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_10", "text": " On the other hand, generative methods (Liang et al., 2018; Wang et al., 2017; Chae et al., 2018; Liu et al., 2019) aim to learn the underlying latent distribution of users, usually represented by binary vectors indicating their interacted items. They employ the architecture of variational autoencoder (VAE) (Liang et al., 2018) or generative adversarial networks (GAN) (Wang et al., 2017; Chae et al., 2018; Liu et al., 2019), in order to infer the users’ preference on each item based on the reconstructed (or generated) user vectors. Rather than exploiting the negative sampling, most of the generative methods implicitly assume that all unobserved user-item pairs are negative in that they learn the partially-observed binary vectors as their inputs. We remark that this assumption is not strictly valid, which eventually leads to limited performance. ", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_11", "text": " Recently, a self-supervised learning approach has achieved a great success in computer vision and natural language understanding (Chen et al., 2020; Devlin et al., 2019; He et al., 2020b). Most of them basically adopt contrastive learning, which optimizes the representations of positively-related (similar) instances to be close, while those of negatively-related (dissimilar) ones far from each other. Given an unlabeled dataset $\\mathcal{D}=\\{x_{1},\\ldots,x_{N}\\}$, positive pairs for each instance $(x,x^{p})$ is usually obtained from the data itself (i.e., data augmentation), such as geometric transformations on a target image. Note that it does not require any human annotations or additional labels, thus this approach falls into the category of self-supervised learning. The noise contrastive estimator (NCE) loss (Gutmann and Hyvärinen, 2010; Oord et al., 2018) mainly used for contrastive learning is defined by using all the other instances except for $x$ as negative: (2) $\\mathcal{L}=-\\sum_{x\\in\\mathcal{D}}\\log\\frac{\\exp(sim(x,x^{p}))}{\\exp(sim(x,x^{p}))+\\sum_{x^{n}\\in\\mathcal{D}\\backslash\\{x\\}}\\exp(sim(x,x^{n}))}.$ 
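To contrast the two loss families above, here is a minimal PyTorch-style sketch of the pairwise ranking loss in Equation (1) (with φ chosen as a log-sigmoid and sim as an inner product, a common BPR-style choice) and an in-batch approximation of the NCE loss in Equation (2). The batch-as-negatives simplification and the optional temperature are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def bpr_pairwise_loss(user_emb, pos_item_emb, neg_item_emb):
    """Eq. (1) with phi = log-sigmoid and sim = inner product (a common BPR choice)."""
    pos_score = (user_emb * pos_item_emb).sum(dim=-1)   # sim(u, v^p)
    neg_score = (user_emb * neg_item_emb).sum(dim=-1)   # sim(u, v^n)
    return -F.logsigmoid(pos_score - neg_score).sum()

def nce_contrastive_loss(anchor, positive, temperature=1.0):
    """Eq. (2), approximated in-batch: every other instance in the batch acts as a negative."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    logits = anchor @ positive.t() / temperature        # (B, B) pairwise similarity matrix
    labels = torch.arange(anchor.size(0), device=anchor.device)  # diagonal = positives
    return F.cross_entropy(logits, labels, reduction="sum")
```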
In case of large-scale datasets, the predefined number of negative instances can be selected (i.e., negative sampling). For contrastive learning, negative pairs must be considered for its optimization so as to prevent the representations of all instances from being similar, which is known as the problem of collapsed solutions. ", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_12", "text": " Pointing out that the contrastive methods need to carefully treat the negative instances during the training for effectiveness and efficiency, the most recent work proposed a bootstrapping-based self-supervised learning framework (Grill et al., 2020; Chen and He, 2021), which is capable of avoiding the collapsed solution without the help of negative instances. Inspired by bootstrapping methods in deep reinforcement learning (Mnih et al., 2015; Mnih et al., 2016), it directly bootstraps the representation of images by using two neural networks that iteratively learn from each other. This approach achieves the state-of-the-art performance for various downstream tasks in computer vision, and also shows better robustness to the choice of data augmentations used for self-supervision. ", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_13", "text": " In this section, we present our OCCF framework, named as BUIR, which learns the representations of users and items without any assumptions about negative interactions. We first describe the overall learning process with a simple encoder that takes the user-id and item-id as its input (Section 3.2) and how to infer the interaction score using the representations (Section 3.3). We also introduce a stochastic data augmentation technique with an extended encoder to further exploit the neighborhood information (Section 3.4). ", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_14", "text": " Let $\\mathcal{U}=\\{u_{1},\\ldots,u_{M}\\}$ and $\\mathcal{V}=\\{v_{1},\\ldots,v_{N}\\}$ be the set of $M$ users and $N$ items, respectively. Given a set of observed user-item interactions $\\mathcal{R}=\\{(u,v)\\,|\\,\\text{user }u\\text{ is interacted with item }v\\}$, the goal of OCCF is to obtain the interaction (or preference) score $s(u,v)\\in\\mathbb{R}$ indicating how likely the user $u$ interacts with (or prefers to) the item $v$. Based on the interaction scores, we can recommend $K$ items with the highest scores for each user, called as top-$K$ recommendation. To define the interaction score by using the representations of users and items, we focus on training the encoder network that maps each user and item into a low-dimensional latent space where the users’ preferences on the items are effectively captured. ", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_15", "text": " Let $f$ be the encoder network to produce the representations of users and items. The simplest architecture of the encoder is a single embedding layer (i.e., embedding matrix); this maps each user-id (or item-id) into a $D$-dimensional embedding vector that represents the latent factors of the user (or item). 
Specifically, each encoder consists of a user encoder and an item encoder, and they take a one-hot vector indicating the user-id and item-id as their input. ", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_16", "text": " BUIR makes use of two distinct encoder networks that have the same structure: online encoder $f_{\\theta}$ and target encoder $f_{\\xi}$. They are parameterized by $\\theta$ and $\\xi$, respectively. The key idea of BUIR is to train the online encoder by using outputs of the target encoder as its target, while gradually improving the target encoder as well. The main difference of BUIR from existing end-to-end learning frameworks is that $f_{\\theta}$ and $f_{\\xi}$ are updated in different ways. The online encoder is trained to minimize the error between its output and the target, whereas the target network is slowly updated based on the momentum update (He et al., 2020b) so as to keep its output consistent. ", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_17", "text": " To be precise, for each observed interaction $(u,v)\\in\\mathcal{R}$, the BUIR loss is defined based on the mean squared error of the prediction against each other (i.e., representations of $u$ and $v$) using the predictor $q_{\\theta}:\\mathbb{R}^{D}\\rightarrow\\mathbb{R}^{D}$ on top of the online encoder. It includes two error terms: one is for updating the online user vector $f_{\\theta}(u)$ to accurately predict the target item vector $f_{\\xi}(v)$, and the other is for updating the online item vector $f_{\\theta}(v)$ to make its prediction as the target user vector $f_{\\xi}(u)$. Finally, the loss is described as follows: (3) $\\mathcal{L}_{\\theta,\\xi}(u,v)=l_{2}\\left(q_{\\theta}(f_{\\theta}(u)),f_{\\xi}(v)\\right)+l_{2}\\left(q_{\\theta}(f_{\\theta}(v)),f_{\\xi}(u)\\right)\\approx-\\frac{q_{\\theta}(f_{\\theta}(u))^{\\top}f_{\\xi}(v)}{\\lVert q_{\\theta}(f_{\\theta}(u))\\rVert_{2}\\lVert f_{\\xi}(v)\\rVert_{2}}-\\frac{q_{\\theta}(f_{\\theta}(v))^{\\top}f_{\\xi}(u)}{\\lVert q_{\\theta}(f_{\\theta}(v))\\rVert_{2}\\lVert f_{\\xi}(u)\\rVert_{2}},$ where $l_{2}(\\mathbf{x},\\mathbf{y})$ is the $l_{2}$ distance between two normalized vectors $\\overline{\\mathbf{x}}$ and $\\overline{\\mathbf{y}}$; i.e., $\\overline{\\mathbf{x}}=\\mathbf{x}/\\|\\mathbf{x}\\|_{2}$ and $\\overline{\\mathbf{y}}=\\mathbf{y}/\\|\\mathbf{y}\\|_{2}$. 
Since the mean squared errors between two normalized vectors are equivalent to the negative value of their inner product (Equation (3)), we simply use the inner product for the optimization. Note that BUIR updates $f_{\\theta}(u)$ to be similar with $f_{\\xi}(v)$ instead of $f_{\\theta}(v)$ through the predictor, and vice versa. This is because directly reducing the error between $f_{\\theta}(u)$ and $f_{\\theta}(v)$ leads to the collapsed representations when negative interactions are not considered at all for training the encoder. ", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_18", "text": " To sum up, the parameters of the online encoder and target encoder are optimized by (4) $\\theta\\leftarrow\\theta-\\eta\\cdot\\nabla_{\\theta}\\mathcal{L}_{\\theta,\\xi},\\qquad\\xi\\leftarrow\\tau\\cdot\\xi+(1-\\tau)\\cdot\\theta.$ $\\eta$ is the learning rate for stochastic optimization, and $\\tau\\in(0,1)$ is a momentum coefficient (also called as target decay) for momentum-based moving average. The online encoder $f_{\\theta}$ (and the predictor $q_{\\theta}$) is effectively optimized by the gradients back-propagated from the loss (Equation (3)), while the target encoder $f_{\\xi}$ is updated as the moving average of the online encoder. By taking a large value of $\\tau$, the target encoder slowly approximates the online encoder. This momentum-based update makes $\\xi$ evolve more slowly than $\\theta$, which enables to bootstrap the representations by providing enhanced but consistent targets to the online encoders (He et al., 2020b; Grill et al., 2020). Figure 1 illustrates the overall framework of BUIR with the simple one-hot encoders. ", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_19", "text": " Bypassing the collapsed solution. It is obvious that the loss in Equation (3) admits the collapsed solution with respect to $\\theta$ and $\\xi$, which means both the encoders generate the same representations for all users and items. For this reason, the conventional end-to-end learning strategy, which optimizes both $f_{\\theta}$ and $f_{\\xi}$ to minimize the loss (i.e., cross-prediction error), may easily lead to such collapsed solution. In contrast, our proposed framework updates each of the encoders in different ways. From Equation (4), the online encoder is optimized to minimize the loss, while the target encoder is updated to slowly approximate the online encoder. That is, the direction of updating the target encoder ($\\theta-\\xi$) totally differs from that of updating the online encoder ($-\\nabla_{\\theta}\\mathcal{L}_{\\theta,\\xi}$), and this effectively keeps both the encoders from converging to the collapsed solution. Several recent work on bootstrapping-based representation learning (Grill et al., 2020; Chen and He, 2021) empirically demonstrated that this kind of dynamics (i.e., updating two networks differently) allows to avoid the collapsed solution without any explicit term to prevent it. 
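Putting Equations (3) and (4) together, a single BUIR training step could look roughly like the sketch below: the online encoders and the predictor receive gradients from the symmetric cross-prediction loss, while the target encoders are updated only through the momentum-based moving average. The module interfaces (separate user/item encoders that map ids to D-dimensional vectors) are assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def momentum_update(online, target, tau=0.995):
    """Eq. (4), second line: xi <- tau * xi + (1 - tau) * theta (no gradients involved)."""
    for p_t, p_o in zip(target.parameters(), online.parameters()):
        p_t.data.mul_(tau).add_(p_o.data, alpha=1.0 - tau)

def buir_train_step(online_user, online_item, target_user, target_item,
                    predictor, optimizer, u_ids, v_ids, tau=0.995):
    """One BUIR update for a batch of positive (u, v) pairs; encoders map ids to D-dim vectors."""
    u_on, v_on = online_user(u_ids), online_item(v_ids)
    with torch.no_grad():                      # target representations provide no gradient
        u_tg, v_tg = target_user(u_ids), target_item(v_ids)

    # Eq. (3): symmetric negative cosine similarity between predictions and targets
    loss = (-F.cosine_similarity(predictor(u_on), v_tg, dim=-1)
            - F.cosine_similarity(predictor(v_on), u_tg, dim=-1)).sum()

    optimizer.zero_grad()
    loss.backward()                            # Eq. (4), first line: gradient step on theta
    optimizer.step()
    momentum_update(online_user, target_user, tau)   # Eq. (4), second line: EMA step on xi
    momentum_update(online_item, target_item, tau)
    return loss.item()
```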
", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_20", "text": " To retrieve K𝐾K most preferred items for each user (i.e., user-item interactions that are most likely to happen), we define the interaction score s​(u,v)𝑠𝑢𝑣s(u,v) by using the representations of users and items. As we minimize the prediction error between u𝑢u and v𝑣v for each positive interaction (u,v)𝑢𝑣(u,v), their positive relationship is encoded into the l2subscript𝑙2l_{2} distance between their representations (Equation (3)). In other words, a smaller value of ℒθ,ξ​(u,v)subscriptℒ𝜃𝜉𝑢𝑣\\mathcal{L}_{\\theta,\\xi}(u,v) indicates that the user-item pair (u,v)𝑢𝑣(u,v) is more likely to be interacted, which means the loss becomes inversely proportional to the interaction score. To consider the symmetric relationship between u𝑢u and v𝑣v, the interaction score is defined based on the cross-prediction task; the prediction of v𝑣v by u𝑢u, and the prediction of u𝑢u by v𝑣v.222We empirically found that the normalized representations cannot take into account the popularity of users and items, thus simply use the output of the online encoder. (5) s​(u,v)=qθ​(fθ​(u))⊤​fθ​(v)+fθ​(u)⊤​qθ​(fθ​(v)).𝑠𝑢𝑣subscript𝑞𝜃superscriptsubscript𝑓𝜃𝑢topsubscript𝑓𝜃𝑣subscript𝑓𝜃superscript𝑢topsubscript𝑞𝜃subscript𝑓𝜃𝑣s(u,v)=q_{\\theta}\\left(f_{\\theta}({u})\\right)^{\\top}f_{\\theta}({v})+f_{\\theta}({u})^{\\top}q_{\\theta}\\left(f_{\\theta}({v})\\right). For the computation of the interaction scores, we use only the representations obtained from the online encoder, with the target encoder discarded. Since the online encoder and the target encoder finally converge to equilibrium by the slow-moving average, it is possible to effectively infer the interaction score only with the online encoder. Considering the purpose of the target network, which generates the target for training the online network, it does make sense to leave the online encoder in the end. ", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_21", "text": " Existing discriminative OCCF methods (Rendle et al., 2009; Hsieh et al., 2017) have tried to optimize the latent space where the user-item interactions are directly encoded into their inner product (or Euclidean distance). On the contrary, BUIR additionally uses the predictor to model their interaction, which results in the capability of encoding the high-level relationship between users and items into the representations. In conclusion, with the help of the predictor, BUIR accurately computes the user-item interaction scores as well as optimizes the representation without explicitly using negative samples. ", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_22", "text": " The another available source for OCCF is the neighborhood information of users and items. The neighbors of user u𝑢u and item v𝑣v, denoted by 𝒱usubscript𝒱𝑢\\mathcal{V}_{u} and 𝒰vsubscript𝒰𝑣\\mathcal{U}_{v}, refer to the set of the items interacted with u𝑢u, and the users interacted with v𝑣v, respectively. From the perspective that user-item interactions can be considered as a bipartite graph between user nodes and item nodes, each node’s neighbors (or its local graph structure) can be a good feature to encode the similarity among the nodes. 
To take advantage of these neighbors as input features of users and items, we use a neighbor-based encoder (Kim et al., 2019; Wang et al., 2019; He et al., 2020a) which additionally takes a given set of users (or items) as its input. Namely, this encoder is able to learn such set-featured inputs, represented as multi-hot vectors, capturing both the co-occurrence of users (or items) and their relationship. Adding the multi-hot inputs $\\mathcal{V}_{u}$ and $\\mathcal{U}_{v}$ to the one-hot inputs $u$ and $v$ within our framework, the neighbor-based user/item representations, denoted by $f_{\\theta}(u,\\mathcal{V}_{u})$ and $f_{\\theta}(v,\\mathcal{U}_{v})$, can be effectively optimized and utilized, instead of $f_{\\theta}(u)$ and $f_{\\theta}(v)$. In this case, the online encoder parameters related to user $u$ (or item $v$) are shared for computing $f_{\\theta}(u,\\mathcal{V}_{u})$ and $f_{\\theta}(v,\\mathcal{U}_{v})$, thus they are updated by two types of supervision (i.e., optimized not only as a target but also as one of the neighbors), which brings an effect of regularization. ", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_23", "text": " For acquisition and exploitation of richer supervision, we extend our framework to consider much more user-item interactions that are augmented based on their neighborhood information in a self-supervised manner. To this end, we introduce a new augmentation technique specifically designed for positive user-item interactions; it does not statically increase the number of interactions as a pre-processing step, rather it is stochastically applied to each input interaction during the training. This stochastic data augmentation allows the encoder to learn slightly perturbed interactions, referred to as augmented views of an interaction. By doing so, BUIR can effectively learn the representations even in the case that only a few positive user-item interactions are available for training (i.e., highly sparse dataset). To this end, we first represent each user and item as the pair of its identity and neighbors: $(u,\\mathcal{V}_{u})$ and $(v,\\mathcal{U}_{v})$. Then, we apply the following augmentation function $\\psi$ to the user and item before passing them to the neighbor encoder: (6) $\\psi(u,\\mathcal{V}_{u})=(u,\\mathcal{V}_{u}'),\\text{ where }\\mathcal{V}_{u}'\\sim\\{\\mathcal{S}\\,|\\,\\mathcal{S}\\subseteq\\mathcal{V}_{u}\\}$, and $\\psi(v,\\mathcal{U}_{v})=(v,\\mathcal{U}_{v}'),\\text{ where }\\mathcal{U}_{v}'\\sim\\{\\mathcal{S}\\,|\\,\\mathcal{S}\\subseteq\\mathcal{U}_{v}\\}.$ This augmentation function chooses one of the subsets of the user’s neighbors (i.e., $\\mathcal{V}_{u}'$) for an input user, and works in a similar way for an input item. For each input interaction $(u,v)$, we can make a variety of interactions containing small perturbations $(\\psi(u,\\mathcal{V}_{u}),\\psi(v,\\mathcal{U}_{v}))$, and they produce a similar effect to increasing the number of positive pairs from the data itself. 
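A minimal sketch of the stochastic augmentation ψ in Equation (6): each call replaces a user's (or item's) neighbor set with a random subset, so every epoch sees a slightly different view of the same positive pair. Drawing the drop probability uniformly per call follows the implementation details quoted later in this record; the plain-list data layout is an assumption.

```python
import random

def augment(entity_id, neighbors):
    """Eq. (6): psi(u, V_u) = (u, V_u') where V_u' is a random subset of V_u.

    Each neighbor is independently dropped with probability p ~ U(0, 1),
    drawn once per call, so repeated calls yield different views of the pair.
    """
    p = random.random()
    subset = [n for n in neighbors if random.random() > p]
    return entity_id, subset

# During training, both sides of a positive pair (u, v) are augmented independently:
# (u_view, Vu_view) = augment(u, V_u)
# (v_view, Uv_view) = augment(v, U_v)
```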
", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_24", "text": " Similarly to Section 3.2, the online encoder is trained by minimizing ℒθ,ξ​(ψ​(u,𝒱u),ψ​(v,𝒰v))subscriptℒ𝜃𝜉𝜓𝑢subscript𝒱𝑢𝜓𝑣subscript𝒰𝑣\\mathcal{L}_{\\theta,\\xi}(\\psi(u,\\mathcal{V}_{u}),\\psi(v,\\mathcal{U}_{v})), and the target encoder is slowly updated by the momentum mechanism. After the optimization is finished, the interaction score is inferred by fθ​(u,𝒱u)subscript𝑓𝜃𝑢subscript𝒱𝑢f_{\\theta}(u,\\mathcal{V}_{u}) and fθ​(v,𝒰v)subscript𝑓𝜃𝑣subscript𝒰𝑣f_{\\theta}(v,\\mathcal{U}_{v}) (Equation (5)). Figure 2 shows an example of our data augmentation which injects a certain level of perturbations to the neighbors. ", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_25", "text": " In this section, we describe the experimental results that support the superiority of our proposed framework. We first present comparison results with other OCCF methods for top-K𝐾K recommendation (Section 4.2), then validate the effectiveness of each component through an ablation study (Section 4.3 and 4.4). We also evaluate the quality of obtained representations for a downstream task (Section 4.5) and finally provide the hyperparameter analysis (Section 4.6). ", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_26", "text": " Datasets.   In our experiments, we use three real-world datasets: CiteULike (Wang et al., 2013), Ciao (Tang et al., 2012), and FourSquare (Liu et al., 2017). For preprocessing the datasets, we follow previous work (Wang et al., 2019; He et al., 2017; Rendle et al., 2009; Kang et al., 2020) which provide the minimum count of user-item interactions for filtering long-tail users/items, considering the property of each dataset (e.g., the statistics or the domain where the implicit feedback is collected).333We remove users having fewer than 5 (CiteULike, Ciao) & 20 interactions (FourSquare), and remove items having fewer than 5 (Ciao) & 10 interactions (FourSquare). Table 1 summarizes the statistics of the datasets. ", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_27", "text": " Baselines.   We compare the performance of BUIR with that of baseline OCCF methods, including both discriminative and generative methods. They are re-categorized as either 1) the methods using only the user-id/item-id or 2) the ones additionally using the neighborhood information. Most of the methods in the first category directly optimize the embedding vectors of users and items. • BPR (Rendle et al., 2009): The Bayesian personalized ranking method for OCCF. It optimizes matrix factorization (MF) based on the pairwise ranking loss. • NeuMF (He et al., 2017): The neural network-based method that uses the pointwise prediction loss. It combines MF and multi-layer perceptron (MLP) to model the user-item interaction. • CML (Hsieh et al., 2017): A metric learning approach to the OCCF problem. It optimizes the Euclidean distance between a user and an item based on the pairwise hinge loss. • SML (Li et al., 2020): The state-of-the-art OCCF method based on metric learning. For symmetrization, it considers the Euclidean distance among items as well as between a user and an item. 
", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_28", "text": " Next, the neighbor-based OCCF methods exploit the neighborhood information of users and items to compute the representations. • NGCF (Wang et al., 2019): A neighbor-based method which encodes a user’s (and item’s) neighbors by using graph convolutional networks (GCN). It can consider multi-hop neighbors as well based on a stack of GCN layers. • LGCN (He et al., 2020a): The state-of-the-art method that further tailors the GCN-based user (and item) encoder for the OCCF task. It simplifies the GCN by using the light graph convolution. • M-VAE (Liang et al., 2018): The OCCF method based on a variational autoencoder that reconstructs partially-observed user vectors. It enforces the latent distribution to approximate the prior, assumed to be the normal distribution. • CFGAN (Chae et al., 2018): The state-of-the-art GAN-based OCCF method. The discriminator is trained to distinguish between input (real) user vectors and generated (fake) ones, while the generator is optimized to deceive the discriminator. Among them, NGCF and LGCN are the discriminative methods that optimize their model by using the pairwise loss based on the BPR framework. On the contrary, M-VAE and CFGAN are the generative methods that focus on learning the latent distribution of users, represented by binary vectors indicating their interacted items. ", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_29", "text": " We build two variants of BUIR using different encoder networks. • BUIRid: The BUIR framework using a single embedding layer as its encoder. It simply takes the user/item vectors from the embedding matrix (Section 3.2). • BUIRnb: The BUIR framework based on the LGCN encoder. It computes the user/item representations by using the lightweight GCN (He et al., 2020a) that adopts the proposed neighbor augmentation technique (Section 3.4). Note that any types of user/item encoder networks, which are originally optimized in a discriminative framework (e.g., BPR), can be easily embedded into our framework. ", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_30", "text": " Evaluation Protocols.   For each dataset, we randomly split each user’s interaction history into training/validation/test sets, with various split ratios. In detail, to verify the effectiveness of BUIR with varying levels of data sparsity, we build three training sets that include a certain proportion of interactions for each user, i.e., β∈{10%,20%,50%}𝛽percent10percent20percent50\\beta\\in\\{10\\%,20\\%,50\\%\\},444This setting (high sparsity) is more difficult and practical than the traditional setting. then equally divide the rest into the validation set and the test set. We report the average value of five independent runs, each of which uses different random seeds for the split. 
", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_31", "text": " As we focus on the top-K𝐾K recommendation task for implicit feedback, we evaluate the performance of each method by using two widely-used ranking metrics (Chae et al., 2018; Liang et al., 2018; Li et al., 2020): Precision (P@K𝐾K) and Normalized Discounted Cumulative Gain (N@K𝐾K).555As pointed out in (Krichene and Rendle, 2020), a sampled metric where only a smaller set of random items and the relevant items are ranked (e.g., leave-one-out evaluation protocol (He et al., 2017)) cannot correctly indicate the true performance of recommender systems. For this reason, we instead consider the ranked list of all the items with no interaction. P@K𝐾K measures how many test items are included in the list of top-K𝐾K items and N@K𝐾K assigns higher scores on the upper-ranked test items. ", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_32", "text": " Implementation Details.   We implement the proposed framework and all the baselines by using PyTorch, and use the Adam optimizer to train them. For BUIR, we fix the momentum coefficient τ𝜏\\tau to 0.995, and adopt a single linear layer for the predictor qθsubscript𝑞𝜃q_{\\theta}.666We empirically found that these hyperparameters hardly affect the final performance of BUIR, and the sensitivity analysis on the parameters is provided in Section 4.6. The augmentation function ψ𝜓\\psi simply uses a uniform distribution for drawing a drop probability p∼𝒰​(0,1)similar-to𝑝𝒰01p\\sim\\mathcal{U}(0,1), where each user’s (item’s) neighbor is independently deleted with the probability p𝑝p. ", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_33", "text": " For each dataset and baseline, we tune the hyperparameters using a grid search, which finds their optimal values that achieve the best performance on the validation set: the dimension size of representations D∈{50,100,150,200,250}𝐷50100150200250D\\in\\{50,100,150,200,250\\}, the weight decay (i.e., coefficient for L2subscript𝐿2L_{2} regularization) λ∈{10−1,10−2,10−3,10−4,10−5}𝜆superscript101superscript102superscript103superscript104superscript105\\lambda\\in\\{10^{-1},10^{-2},10^{-3},10^{-4},10^{-5}\\}, the initial learning rate η∈{10−1,10−2,10−3​10−4}𝜂superscript101superscript102superscript103superscript104\\eta\\in\\{10^{-1},10^{-2},10^{-3}10^{-4}\\}, and the number of negative pairs for each positive pair (particularly for discriminative baselines) n∈{1,2,5,10,20}𝑛1251020n\\in\\{1,2,5,10,20\\}. In case of baseline-specific hyperparameters, we tune them in the ranges suggested by their original papers. We set the maximum number of epochs to 500 and adopt the early stopping strategy; it terminates when P@10 on the validation set does not increase for 50 successive epochs. ", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_34", "text": " We first measure the top-K𝐾K recommendation performance of BUIR and the baseline methods. Table 2 presents the comparison results on three different sparsity levels of datasets. In summary, BUIR achieves the best performance among all the baselines, and especially shows the significant improvements in highly sparse datasets. We analyze the results from various perspectives. 
", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_35", "text": " For all the datasets, BUIRid shows the substantially higher performance than the discriminative methods taking only user-id/item-id (i.e., BPR, NeuMF, CML, and SML). In particular, the sparser the training set becomes, the larger the performance improvement of BUIRid is achieved over the best baseline (denoted by Improvid). It is obvious that BUIRid is more robust to the extreme sparsity compared to the other baselines that are more likely to explicitly use “positive but unobserved” interactions as negative interactions when positive user-item interactions are more rarely observed. BUIRid is not affected by such inconsistent supervision from uncertain negative interactions because it directly optimizes the representations of users and items by using only positive interactions. ", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_36", "text": " Furthermore, in terms of the number of retrieved items (denoted by K𝐾K), BUIR shows much larger performance improvements for P@10 and N@10 compared to P@50 and N@50, respectively. In other words, BUIR performs much better at predicting the top-ranked items than the other baselines, which makes it practically advantageous for real-world recommender systems that aim to accurately provide the most preferred items to their users. ", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_37", "text": " We also observe that BUIRnb significantly outperforms all the other neighbor-based competitors, including discriminative (i.e., NGCF and LGCN) and generative methods (i.e., M-VAE and CFGAN). Similar to Section 4.2.1, there exist a consistent trend on its performance gain (denoted by Improvnb), which becomes more significant as fewer interactions are given for training. Specifically, the neighbor-based baselines improve the recommendation performance over the methods not using the neighborhood information, as they are able to cope with the high sparsity to some degree by leveraging the neighbors of users and items. Nevertheless, most of them, except for LGCN, perform worse than even BUIRid; this strongly indicates that their imperfect assumption on negative interactions severely limits the capability of capturing users’ preference on items even though they utilize rich information sources as well as employ advanced neural architectures. In short, for the OCCF problem where only a small number of positive interactions are given, our BUIR framework is effective regardless of the information sources used for training, in that any assumption on negative interactions is not required. ", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_38", "text": " In addition, the critical drawback of the generative methods is the difficulty of stable optimization. For example, M-VAE should carefully treat the annealing technique for minimizing Kullback-Leibler (KL) divergence, and CFGAN needs to balance the adversarial updates between the discriminator and generator for their convergence to the equilibrium. In contrast, BUIR can easily train the encoder without any advanced techniques for stable optimization, which makes our framework much practical. 
", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_39", "text": " To examine how much the choice of a negative sampling strategy affects the recommendation performance, we measure P@10 and N@10 of two discriminative methods (i.e., BPR and CML) that adopt different strategies. We vary the number of negative pairs (sampled for each positive pair) in the range of {20,21,22,23,24}superscript20superscript21superscript22superscript23superscript24\\{2^{0},2^{1},2^{2},2^{3},2^{4}\\}, and consider three different distributions for negative sampling (Rendle and Freudenthaler, 2014): 1) uniform sampling, 2) static-and-global sampling which draws a pair based on the item popularity, and 3) adaptive-and-contextual sampling that uses the probability proportional to the interaction score. ", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_40", "text": " In Figure 3, we observe that the performance of the discriminative methods largely depends on the sampling strategy, whereas BUIRid consistently performs the best. To be specific, the sampling strategies show different tendencies or have different optimal hyperparameter values, depending on each dataset or each method. For instance, CML achieves marginal performance gains from the adaptive-and-contextual sampling compared to the uniform sampling, whereas BPR does not take any benefits from it. This is because CML optimizes its model by the hinge loss, which cannot produce the gradients to update the model parameters for too easily-distinguishable negative pairs. In this case, the adaptive-and-contextual sampling strategy can effectively select the hard-negative pairs for training, which accelerates the convergence and its final performance. We remark that this kind of sampling techniques can improve the performance of the discriminative methods to some extent, but the sampling operation requires a high computational cost itself as well as the process of hyperparameter tuning for each dataset (and method) takes huge efforts. On the contrary, as BUIRid does not rely on negative sampling, it always shows the greater performance (plotted as a solid black line) compared to any of the discriminative methods using various sampling techniques. This result clearly validates the superiority of BUIR in that it is not affected by the choice of the negative sampling strategy any longer. ", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_41", "text": " To validate the effectiveness of each component in our framework, we measure the performance of the methods that ablate the following components: 1) modeling the interaction score based on the predictor (i.e., cross-prediction score defined in Equation (5)), 2) the neighbor-based encoder that is able to capture the user’s (item’s) neighborhood information, and 3) the stochastic neighbor augmentation that produces various views of an input interaction. In Table 3, we report P@10 on the CiteULike dataset (β𝛽\\beta=50%). 
", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_42", "text": " First of all, the BPR framework that optimizes the cross-prediction score, q​(f​(u))⊤​f​(v)+f​(u)⊤​q​(f​(v))𝑞superscript𝑓𝑢top𝑓𝑣𝑓superscript𝑢top𝑞𝑓𝑣q\\left(f(u)\\right)^{\\top}f(v)+f(u)^{\\top}q\\left(f(v)\\right), is not as effective as ours; it is even worse compared to the conventional BPR, which optimizes the inner-product score f​(u)⊤​f​(v)𝑓superscript𝑢top𝑓𝑣f(u)^{\\top}f(v). This implies that the performance improvement of BUIR is mainly caused by our learning framework rather than its score modeling based on the predictor. In addition, even without the stochastic augmentation, the neighbor-based encoder (i.e., LGCN) based on the BUIR framework beats LGCN based on the BPR framework, which demonstrates that BUIR successfully addresses the issue of incorrect negative sampling. Lastly, our framework with the stochastic neighbor augmentation further improves the performance by taking benefits from various views of the positive user-item interactions for the optimization. ", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_43", "text": " For an in-depth analysis on the effect of our stochastic data augmentation function ψ𝜓\\psi, we measure the performance of BUIRnb on the CiteULike and Ciao datasets (β𝛽\\beta=20%), with various magnitudes of the perturbation added to the neighbors of users and items. We modify the augmentation function to randomly select the drop probability from a predefined interval, i.e., p∼𝒰​(0,P)similar-to𝑝𝒰0𝑃p\\sim\\mathcal{U}(0,P) where P𝑃P is the maximum drop probability, then increase P𝑃P from 0.0 to 1.0. ", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_44", "text": " In Figure 4, our stochastic data augmentation (i.e., P>0𝑃0P>0) brings a significant improvement compared to the case of using the fixed neighborhood information (i.e., P=0𝑃0P=0) as encoder inputs. This result shows that the augmented views of positive interactions encourage BUIR to effectively learn users’ preference on items even in much sparse dataset. Interestingly, in case of the Ciao dataset which is less sparse than CiteULike, the benefit of our augmentation linearly increases with the maximum drop probability. This is because there is room for producing more various views (i.e., larger perturbation) based on a relatively more number of neighbors, and it eventually helps to boost the recommendation performance. To sum up, our framework that adopts the neighbor augmentation function successfully relieves the data sparsity issue of the OCCF problem, by leveraging the augmented views of few positive interactions. ", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_45", "text": " To evaluate the quality of the obtained representations, we compare the performance for a downstream task by using the representations optimized by BUIR and the other baselines.777In this comparison, we exclude the generative OCCF methods as our baselines, because they do not explicitly output the item representations. We consider an item classification task to evaluate how well each method encodes the items’ characteristics or latent semantics into the representations. We choose two datasets that offer the side information on items, which are Ciao and FourSquare. 
Ciao provides the 28-category label of each item (i.e., the products), and FourSquare contains the GPS coordinates for each item (i.e., point-of-interest). In case of FourSquare, we first perform k𝑘k-means clustering on the coordinates with k𝑘k=100, and use the clustering results as the class labels. We train a linear and non-linear classifier (i.e., a single-layer perceptron and three-layer perceptron, respectively) to predict the class label of each item by using the fixed item representations as the input. Finally, we perform 10-fold cross-validation and report the average result and standard deviation. ", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_46", "text": " In Figure 5, BUIRid and BUIRnb achieve significantly higher classification accuracy than the others in each category. This shows that the latent space induced by BUIR more accurately captures the item’s characteristics (or their relationship) compared to the space induced by the baseline methods. Another observation is that the rank of each method for the downstream tasks is consistent with that for top-K𝐾K recommendation (in Table 2). It implies that the observed user-item interactions are positively-correlated with the latent semantic of the items, for this reason, effectively learning the users’ implicit feedback eventually results in a good performance in the downstream tasks as well. ", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_47", "text": " For the guidance of hyperparameter selection, we provide analyses on the sensitivity of BUIR to its several hyperparameters. We investigate the performance changes of BUIRid on the FourSquare dataset (β𝛽\\beta=50%) with respect to the dimension size D𝐷D, the momentum coefficient τ𝜏\\tau,888Considering that the target encoder should be slowly approximate the online encoder, we investigate τ𝜏\\tau in the range of (0.9, 1.0), as done in previous work (He et al., 2020b; Grill et al., 2020). and the number of layers in the predictor. ", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_48", "text": " Figure 6 clearly shows that the performance is hardly affected by τ𝜏\\tau in the range of (0.9, 1.0). In other words, any values of τ𝜏\\tau larger than 0.9 allow the target encoder to successfully provide the target representations to the online encoder, by slowly approximating the online encoder; on the contrary, BUIR cannot learn the effective representations at all in case that the target encoder is fixed (i.e., τ=1𝜏1\\tau=1). This observation is consistent with previous work on momentum-based moving average (Tarvainen and Valpola, 2017; He et al., 2020b; Grill et al., 2020) that showed all values of τ𝜏\\tau between 0.9 and 0.999 can yield the best performance. Furthermore, BUIR performs the best with a single-layer predictor, because a multi-layer predictor makes it difficult to optimize the relationship between outputs of the two encoder networks. In conclusion, BUIR is more powerful even with fewer hyperparameters, compared to existing OCCF methods that include a variety of regularization terms or modeling components. 
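As a rough illustration of the two components just discussed, the sketch below computes the cross-prediction score q(f(u))^T f(v) + f(u)^T q(f(v)) with a single-layer (here linear) predictor q, and applies the momentum update theta_target <- tau * theta_target + (1 - tau) * theta_online that lets the target encoder slowly track the online encoder. The embedding size, the linear predictor and the parameter dictionary are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8                                    # embedding size (illustrative)

# online user/item embeddings f(u), f(v) and a single-layer (linear) predictor q
f_u, f_v = rng.normal(size=D), rng.normal(size=D)
W_pred = rng.normal(size=(D, D))         # weights of the predictor q (toy values)
q = lambda x: W_pred @ x

# cross-prediction score used for ranking: q(f(u))^T f(v) + f(u)^T q(f(v))
score = q(f_u) @ f_v + f_u @ q(f_v)
print(score)

# momentum (exponential moving average) update of the target encoder parameters,
# with tau chosen in (0.9, 1.0) so the target slowly approximates the online encoder
tau = 0.995
theta_online = {"emb": rng.normal(size=(100, D))}
theta_target = {k: v.copy() for k, v in theta_online.items()}
for k in theta_target:
    theta_target[k] = tau * theta_target[k] + (1.0 - tau) * theta_online[k]
```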
", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" }, { "id": "2105.06323_all_49", "text": " This paper proposes a novel framework for learning the representations of users and items, termed as BUIR, to address the main challenges of the OCCF problem: the implicit assumption about negative interactions, and high sparsity of observed (positively-labeled) interactions. First, BUIR directly bootstraps the representations of users and items by minimizing their cross-prediction error. This makes BUIR use only partially-observed positive interactions for training the model, and accordingly, it can eliminate the need for negative sampling. In addition, BUIR is able to learn the augmented views of each positive interaction obtained from the neighborhood information, which further relieves the data sparsity issue of the OCCF problem. Through the extensive comparison with a wide range of OCCF methods, we demonstrate that BUIR consistently outperforms all the other baselines in terms of top-K𝐾K recommendation. In particular, the effectiveness of BUIR becomes more significant for much sparse datasets in which the positively-labeled interactions are not enough to optimize the model as well as the assumption about negative interactions becomes less valid. Based on its great compatibility with existing user/item encoder networks, we expect that our BUIR framework can be a major solution for the OCCF problem, replacing the conventional BPR framework. ", "title": "Bootstrapping User and Item Representations for One-Class Collaborative Filtering" } ]
Who was recruited to annotate the visible lesions, and what did they base their annotations on?
It is implied that the annotations were done by experts at the Neurosciences Critical Care Unit at Addenbrooke's Hospital, Cambridge, UK; all visible lesions were manually annotated on the FLAIR and GE (Gradient-Echo) sequences, with separate labeling for each lesion type [50].
[ 50 ]
[ { "id": "1603.05959_all_0", "text": " Segmentation and the subsequent quantitative assessment of lesions in medical images provide valuable information for the analysis of neuropathologies and are important for planning of treatment strategies, monitoring of disease progression and prediction of patient outcome. For a better understanding of the pathophysiology of diseases, quantitative imaging can reveal clues about the disease characteristics and effects on particular anatomical structures. For example, the associations of different lesion types, their spatial distribution and extent with acute and chronic sequelae after traumatic brain injury (TBI) are still poorly understood (Maas et al. (2015)). However, there is growing evidence that quantification of lesion burden may add insight into the functional outcome of patients (Ding et al. (2008); Moen et al. (2012)). Additionally, exact locations of injuries relate to particular deficits depending on the brain structure that is affected (Lehtonen et al. (2005); Warner et al. (2010); Sharp et al. (2011)). This is in line with estimates that functional deficits caused by stroke are associated with the extent of damage to particular parts of the brain (Carey et al. (2013)). Lesion burden is commonly quantified by means of volume and number of lesions, biomarkers that have been shown to be related to cognitive deficits. For example, volume of white matter lesions (WML) correlates with cognitive decline and increased risk of dementia (Ikram et al. (2010)). In clinical research on multiple sclerosis (MS), lesion count and volume are used to analyse disease progression and effectiveness of pharmaceutical treatment (Rovira and León (2008); Kappos et al. (2007)). Finally, accurate delineation of the pathology is important in the case of brain tumors, where estimation of the relative volume of a tumor’s sub-components is required for planning radiotherapy and treatment follow-up (Wen et al. (2010)). ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_1", "text": " The quantitative analysis of lesions requires accurate lesion segmentation in multi-modal, three-dimensional images which is a challenging task for a number of reasons. The heterogeneous appearance of lesions including the large variability in location, size, shape and frequency make it difficult to devise effective segmentation rules. It is thus highly non-trivial to delineate contusions, edema and haemorrhages in TBI (Irimia et al. (2012)), or sub-components of brain tumors such as proliferating cells and necrotic core (Menze et al. (2015)). The arguably most accurate segmentation results can be obtained through manual delineation by a human expert which is tedious, expensive, time-consuming, impractical in larger studies, and introduces inter-observer variability. Additionally, for deciding whether a particular region is part of a lesion multiple image sequences with varying contrasts need to be considered, and the level of expert knowledge and experience are important factors that impact segmentation accuracy. Hence, in clinical routine often only qualitative, visual inspection, or at best crude measures like approximate lesion volume and number of lesions are used (Yuh et al. (2012); Wen et al. (2010)). 
In order to capture and better understand the complexity of brain pathologies it is important to conduct large studies with many subjects to gain the statistical power for drawing conclusions across a whole patient population. The development of accurate, automatic segmentation algorithms has therefore become a major research focus in medical image computing with the potential to offer objective, reproducible, and scalable approaches to quantitative assessment of brain lesions. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_2", "text": " Figure 1 illustrates some of the challenges that arise when devising a computational approach for the task of automatic lesion segmentation. The figure summarizes statistics and shows examples of brain lesions in the case of TBI, but is representative of other pathologies such as brain tumors and ischemic stroke. Lesions can occur at multiple sites, with varying shapes and sizes, and their image intensity profiles largely overlap with non-affected, healthy parts of the brain or lesions which are not in the focus of interest. For example, stroke and MS lesions have a similar hyper-intense appearance in FLAIR sequences as other WMLs (Mitra et al. (2014); Schmidt et al. (2012)). It is generally difficult to derive statistical prior information about lesion shape and appearance. On the other hand, in some applications there is an expectation on the spatial configuration of segmentation labels, for example there is a hierarchical layout of sub-components in brain tumors. Ideally, a computational approach is able to adjust itself to application specific characteristics by learning from a set of a few example images. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_3", "text": " A multitude of automatic lesion segmentation methods have been proposed over the last decade, and several main categories of approaches can be identified. One group of methods poses the lesion segmentation task as an abnormality detection problem, for example by employing image registration. The early work of Prastawa et al. (2004) and more recent ones by Schmidt et al. (2012) and Doyle et al. (2013) align the pathological scan to a healthy atlas and lesions are detected based on deviations in tissue appearance between the patient and the atlas image. Lesions, however, may cause large structural deformations that may lead to incorrect segmentation due to incorrect registration. Gooya et al. (2011); Parisot et al. (2012) alleviate this problem by jointly solving the segmentation and registration tasks. Liu et al. (2014) showed that registration together with a low-rank decomposition gives as a by-product the abnormal structures in the sparse components, although, this may not be precise enough for detection of small lesions. Abnormality detection has also been proposed within image synthesis works. Representative approaches are those of Weiss et al. (2013) using dictionary learning and Ye et al. (2013) using a patch-based approach. The idea is to synthesize pseudo-healthy images that when compared to the patient scan allow to highlight abnormal regions. In this context, Cardoso et al. (2015) present a generative model for image synthesis that yields a probabilistic segmentation of abnormalities. Another unsupervised technique is proposed by Erihov et al. 
(2015), a saliency-based method that exploits brain asymmetry in pathological cases. A common advantage of the above methods is that they do not require a training dataset with corresponding manual annotations. In general, these approaches are more suitable for detecting lesions rather than accurately segmenting them. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_4", "text": " Some of the most successful, supervised segmentation methods for brain lesions are based on voxel-wise classifiers, such as Random Forests. Representative work is that of Geremia et al. (2010) on MS lesions, employing intensity features to capture the appearance of the region around each voxel. Zikic et al. (2012) combine this with a generative Gaussian Mixture Model (GMM) to obtain tissue-specific probabilistic priors (Van Leemput et al. (1999)). This framework was adopted in multiple works, with representative pipelines for brain tumors by Tustison et al. (2013) and TBI by Rao et al. (2014). Both works incorporate morphological and contextual features to better capture the heterogeneity of lesions. Rao et al. (2014) also incorporate brain structure segmentation results obtained from a multi-atlas label propagation approach (Ledig et al. (2015)) to provide strong tissue-class priors to the Random Forests. Tustison et al. (2013) additionally use a Markov Random Field (MRF) to incorporate spatial regularization. MRFs are commonly used to encourage spatial continuity of the segmentation (Schmidt et al. (2012); Mitra et al. (2014)). Although those methods have been very successful, it appears that their modeling capabilities still have significant limitations. This is confirmed by the results of the most recent challenges 111links: http://braintumorsegmentation.org/, www.isles-challenge.org, and also by our own experience and experimentation with such approaches. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_5", "text": " At the same time, deep learning techniques have emerged as a powerful alternative for supervised learning with great model capacity and the ability to learn highly discriminative features for the task at hand. These features often outperform hand-crafted and pre-defined feature sets. In particular, Convolutional Neural Networks (CNNs) (LeCun et al. (1998); Krizhevsky et al. (2012)) have been applied with promising results on a variety of biomedical imaging problems. Ciresan et al. (2012) presented the first GPU implementation of a two-dimensional CNN for the segmentation of neural membranes. From the CNN based work that followed, related to our approach are the methods of Zikic et al. (2014); Havaei et al. (2015); Pereira et al. (2015), with the latter being the best performing automatic approach in the BRATS 2015 challenge (Menze et al. (2015)). These methods are based on 2D CNNs, which have been used extensively in computer vision applications on natural images. Here, the segmentation of a 3D brain scan is achieved by processing each 2D slice independently, which is arguably a non-optimal use of the volumetric medical image data. Despite the simplicity in the architecture, the promising results obtained by these methods indicate the potential of CNNs. 
", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_6", "text": " Fully 3D CNNs come with an increased number of parameters and significant memory and computational requirements. Previous work discusses problems and apparent limitations when employing a 3D CNN on medical imaging data (Prasoon et al. (2013); Li et al. (2014); Roth et al. (2014)). To incorporate 3D contextual information, multiple works used 2D CNNs on three orthogonal 2D patches (Prasoon et al. (2013); Roth et al. (2014); Lyksborg et al. (2015)). In their work for structural brain segmentation, Brebisson and Montana (2015) extracted large 2D patches from multiple scales of the image and combined them with small single-scale 3D patches, in order to avoid the memory requirements of fully 3D networks. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_7", "text": " One of the reasons that discouraged the use of 3D CNNs is the slow inference due to the computationally expensive 3D convolutions. In contrast to the 2D/3D hybrid variants (Roth et al. (2014); Brebisson and Montana (2015)), 3D CNNs can fully exploit dense-inference (LeCun et al. (1998); Sermanet et al. (2014)), a technique that greatly decreases inference times and which we will further discuss in section 2.1. By employing dense-inference with 3D CNNs, Brosch et al. (2015) and Urban et al. (2014) reported computation times of a few seconds and approximately a minute respectively for the processing of a single brain scan. Even though the size of their developed networks was limited, a factor that is directly related to a network’s representational power, their results on MS and brain tumor segmentation respectively were very promising. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_8", "text": " Performance of CNNs is significantly influenced by the strategy for extracting training samples. A commonly adopted approach is training on image patches that are equally sampled from each class. This, however, biases the classifier towards rare classes and may result in over-segmentation. To counter this, Cireşan et al. (2013) proposes to train a second CNN on samples with a class distribution close to the real one, but oversample pixels that were incorrectly classified in the first stage. A secondary training stage was also suggested by Havaei et al. (2015), who retrain the classification layer on patches extracted uniformly from the image. In practice, two stage training schemes can be prone to overfitting and sensitive to the state of the first classifier. Alternatively, dense training (Long et al. (2015)) has been used to train a network on multiple or all voxels of a single image per optimisation step (Urban et al. (2014); Brosch et al. (2015); Ronneberger et al. (2015)). This can introduce severe class imbalance, similarly to uniform sampling. Weighted cost functions have been proposed in the two latter works to alleviate this problem. Brosch et al. (2015) manually adjusted the sensitivity of the network, but the method can become difficult to calibrate for multi-class problems. Ronneberger et al. (2015) first balance the cost from each class, which has an effect similar to equal sampling, and further adjust it for the specific task by estimating the difficulty of segmenting each pixel. 
", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_9", "text": " We present a fully automatic approach for lesion segmentation in multi-modal brain MRI based on an 11-layers deep, multi-scale, 3D CNN with the following main contributions: ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_10", "text": " 1. We propose an efficient hybrid training scheme, utilizing dense training (Long et al. (2015)) on sampled image segments, and analyze its behaviour in adapting to class imbalance of the segmentation problem at hand. 2. We analyze in depth the development of deeper, thus more discriminative, yet computationally efficient 3D CNNs. We exploit the utilization of small kernels, a design approach previously found beneficial in 2D networks (Simonyan and Zisserman (2014)) that impacts 3D CNNs even more, and present adopted solutions that enable training deeper networks. 3. We employ parallel convolutional pathways for multi-scale processing, a solution to efficiently incorporate both local and contextual information which greatly improves segmentation results. 4. We demonstrate the generalization capabilities of our system, which without significant modifications outperforms the state-of-the-art on a variety of challenging segmentation tasks, with top ranking results in two MICCAI challenges, ISLES and BRATS. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_11", "text": " Furthermore, a detailed analysis of the network reveals valuable insights into the powerful black box of deep learning with CNNs. For example, we have found that our network is capable of learning very complex, high level features that separate gray matter (GM), cerebrospinal fluid (CSF) and other anatomical structures to identify the image regions corresponding to lesions. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_12", "text": " Additionally, we have extended the fully-connected Conditional Random Field (CRF) model by Krähenbühl and Koltun (2011) to 3D which we use for final post-processing of the CNN’s soft segmentation maps. This CRF overcomes limitations of previous models as it can handle arbitrarily large neighborhoods while preserving fast inference times. To the best of our knowledge, this is the first use of a fully connected CRF on medical data. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_13", "text": " To facilitate further research and encourage other researchers to build upon our results, the source code of our lesion segmentation method including the CNN and the 3D fully connected CRF is made publicly available on https://biomedia.doc.ic.ac.uk/software/deepmedic/. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_14", "text": " Our proposed lesion segmentation method consists of two main components, a 3D CNN that produces highly accurate, soft segmentation maps, and a fully connected 3D CRF that imposes regularization constraints on the CNN output and produces the final hard segmentation labels. The main contributions of our work are within the CNN component which we describe first in the following. 
", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_15", "text": " CNNs produce estimates for the voxel-wise segmentation labels by classifying each voxel in an image independently taking the neighborhood, i.e. local and contextual image information, into account. This is achieved by sequential convolutions of the input with multiple filters at the cascaded layers of the network. Each layer l∈(1,L)𝑙1𝐿l\\in(1,L) consists of Clsubscript𝐶𝑙C_{l} feature maps (FMs), also referred to as channels. Every FM is a group of neurons that detects a particular pattern, i.e. a feature, in the channels of the previous layer. The pattern is defined by the kernel weights associated with the FM. If the neurons of the m𝑚m-th FM in the l𝑙l-th layer are arranged in a 3D grid, their activations constitute the image 𝐲lm=f​(∑n=1Cl−1𝐤lm,n⋆𝐲l−1n+blm)subscriptsuperscript𝐲𝑚𝑙𝑓superscriptsubscript𝑛1subscript𝐶𝑙1⋆subscriptsuperscript𝐤𝑚𝑛𝑙subscriptsuperscript𝐲𝑛𝑙1subscriptsuperscript𝑏𝑚𝑙\\mathbf{y}^{m}_{l}=f(\\sum_{n=1}^{C_{l-1}}{\\mathbf{k}^{m,n}_{l}\\star\\mathbf{y}^{n}_{l-1}}+b^{m}_{l}). This is the result of convolving each of the previous layer’s channels with a 3-dimensional kernel 𝐤lm,nsubscriptsuperscript𝐤𝑚𝑛𝑙\\mathbf{k}^{m,n}_{l}, adding a learned bias blmsubscriptsuperscript𝑏𝑚𝑙b^{m}_{l} and applying a non-linearity f𝑓f. Each kernel is a matrix of learned hidden weights 𝐖lm,nsubscriptsuperscript𝐖𝑚𝑛𝑙\\mathbf{W}^{m,n}_{l}. The images 𝐲0nsubscriptsuperscript𝐲𝑛0\\mathbf{y}^{n}_{0}, input to the first layer, correspond to the channels of the original input image, for instance a multi-sequence 3D MRI scan of the brain. The concatenation of the kernels 𝐤l=(𝐤lm,1,…,𝐤lm,Cl−1)subscript𝐤𝑙subscriptsuperscript𝐤𝑚1𝑙…subscriptsuperscript𝐤𝑚subscript𝐶𝑙1𝑙\\mathbf{k}_{l}=(\\mathbf{k}^{m,1}_{l},...,\\mathbf{k}^{m,C_{l-1}}_{l}) can be viewed as a 4-dimensional kernel convolving the concatenated channels 𝐲l−1=(𝐲l−11,…,𝐲l−1Cl−1)subscript𝐲𝑙1subscriptsuperscript𝐲1𝑙1…subscriptsuperscript𝐲subscript𝐶𝑙1𝑙1\\mathbf{y}_{l-1}=(\\mathbf{y}^{1}_{l-1},...,\\mathbf{y}^{C_{l-1}}_{l-1}), which then intuitively expresses that the neurons of higher layers combine the patterns extracted in previous layers, which results in the detection of increasingly more complex patterns. The activations of the neurons in the last layer L𝐿L correspond to particular segmentation class labels, hence this layer is also referred to as the classification layer. The neurons are thus grouped in CLsubscript𝐶𝐿C_{L} FMs, one for each of the segmentation classes. Their activations are fed into a position-wise softmax function that produces the predicted posterior pc​(𝐱)=exp⁡(𝐲Lc​(𝐱))/∑c=1CLexp⁡(𝐲Lc​(𝐱))subscript𝑝𝑐𝐱superscriptsubscript𝐲𝐿𝑐𝐱superscriptsubscript𝑐1subscript𝐶𝐿superscriptsubscript𝐲𝐿𝑐𝐱p_{c}(\\mathbf{x})=\\exp(\\mathbf{y}_{L}^{c}(\\mathbf{x}))/\\sum_{c=1}^{C_{L}}\\exp(\\mathbf{y}_{L}^{c}(\\mathbf{x})) for each class c𝑐c, which form soft segmentation maps with (pseudo-)probabilities. 𝐲Lc​(𝐱)superscriptsubscript𝐲𝐿𝑐𝐱\\mathbf{y}_{L}^{c}(\\mathbf{x}) is the activation of the c𝑐c-th classification FM at position 𝐱∈ℕ3𝐱superscriptℕ3\\mathbf{x}\\in\\mathbb{N}^{3}. This baseline network is depicted in Fig. 2. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_16", "text": " The neighborhood of voxels in the input that influence the activation of a neuron is its receptive field. 
Its size, \\bm{\\varphi}_{l}, increases at each subsequent layer l and is given by the 3-dimensional vector: \\bm{\\varphi}_{l}^{\\{x,y,z\\}}=\\bm{\\varphi}_{l-1}^{\\{x,y,z\\}}+(\\bm{\\kappa}_{l}^{\\{x,y,z\\}}-1)\\bm{\\tau}_{l}^{\\{x,y,z\\}} (1), where \\bm{\\kappa}_{l},\\bm{\\tau}_{l}\\in\\mathbb{N}^{3} are vectors expressing the size of the kernels and stride of the receptive field at layer l. \\bm{\\tau}_{l} is given by the product of the strides of kernels in layers preceding l. In this work only unary strides are used, as larger strides downsample the FMs (Springenberg et al. (2014)), which is unwanted behaviour for accurate segmentation. Thus in our system \\bm{\\tau}_{l}=(1,1,1). The receptive field of a neuron in the classification layer corresponds to the image patch that influences the prediction for its central voxel. This is called the CNN’s receptive field, with \\bm{\\varphi}_{CNN}=\\bm{\\varphi}_{L}. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_17", "text": " If input of size \\bm{\\delta}_{in} is provided, the dimensions of the FMs in layer l are given by: \\bm{\\delta}_{l}^{\\{x,y,z\\}}=\\lfloor(\\bm{\\delta}_{in}^{\\{x,y,z\\}}-\\bm{\\varphi}_{l}^{\\{x,y,z\\}})/\\bm{\\tau}_{l}^{\\{x,y,z\\}}+1\\rfloor (2) ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_18", "text": " In the common patch-wise classification setting, an input patch of size \\bm{\\delta}_{in}=\\bm{\\varphi}_{CNN} is provided and the network outputs a single prediction for its central voxel. In this case the classification layer consists of FMs with size 1^{3}. Networks that are implemented as fully-convolutionals are capable of dense-inference, which is performed when input of size greater than \\bm{\\varphi}_{CNN} is provided (Sermanet et al. (2014)). In this case, the dimensions of FMs increase according to Eq. (2). This includes the classification FMs which then output multiple predictions simultaneously, one for each stride of the CNN’s receptive field on the input (Fig. 2). All predictions are equally trustworthy, as long as the receptive field is fully contained within the input and captures only original content, i.e. no padding is used. This strategy significantly reduces the computational costs and memory loads since the otherwise repeated computations of convolutions on the same voxels in overlapping patches are avoided. Optimal performance is achieved if the whole image is scanned in one forward pass. If GPU memory constraints do not allow it, such as in the case of large 3D networks where a large number of FMs need to be cached, the volume is tiled in multiple image-segments, which are larger than individual patches, but small enough to fit into memory. 
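The receptive-field and output-size bookkeeping of Eqs. (1) and (2) can be checked with a few lines of arithmetic. The sketch below assumes unit strides throughout, as in the paper; the specific layer configurations (four 5^3 layers versus eight 3^3 layers, a 25^3 segment) follow values mentioned elsewhere in the text and are only used as examples.

```python
def receptive_field(kernel_sizes):
    """Eq. (1) with unit strides: phi_l = phi_{l-1} + (kappa_l - 1), starting from phi_0 = 1."""
    phi = 1
    for kappa in kernel_sizes:
        phi += kappa - 1
    return phi

def output_size(input_size, kernel_sizes):
    """Eq. (2) with unit strides: delta_L = input_size - phi_CNN + 1 (valid convolutions only)."""
    return input_size - receptive_field(kernel_sizes) + 1

print(receptive_field([5, 5, 5, 5]))   # 17 -> a 17^3 patch yields a single prediction
print(receptive_field([3] * 8))        # 17 -> the same receptive field from eight 3^3 layers
print(output_size(25, [3] * 8))        # 9  -> dense inference on a 25^3 segment gives 9^3 predictions
```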
", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_19", "text": " Before analyzing how we exploit the above dense-inference technique for training, which is the first main contribution of our work, we present the commonly used setting in which CNNs are trained patch-by-patch. Random patches of size 𝝋C​N​Nsubscript𝝋𝐶𝑁𝑁\\bm{\\varphi}_{CNN} are extracted from the training images. A batch is formed out of B𝐵B of these samples, which is then processed by the network for one training iteration of Stochastic Gradient Descent (SGD). This step aims to alter the network’s parameters 𝚯𝚯\\mathbf{\\Theta}, such as weights and biases, in order to maximize the log likelihood of the data or, equally, minimize the Cross Entropy via the cost function: ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_20", "text": " J​(𝚯;𝐈i,ci)=−1B​∑i=1Blog⁡(P​(Y=ci|𝐈i,𝚯))=−1B​∑i=1Blog⁡(pci)​ ,𝐽𝚯superscript𝐈𝑖superscript𝑐𝑖1𝐵superscriptsubscript𝑖1𝐵𝑃𝑌conditionalsuperscript𝑐𝑖superscript𝐈𝑖𝚯1𝐵superscriptsubscript𝑖1𝐵subscript𝑝superscript𝑐𝑖 ,J(\\mathbf{\\Theta};\\mathbf{I}^{i},c^{i})=-\\frac{1}{B}\\sum_{i=1}^{B}\\log\\left(P(Y=c^{i}|\\mathbf{I}^{i},\\mathbf{\\Theta})\\right)=-\\frac{1}{B}\\sum_{i=1}^{B}\\log(p_{c^{i}})\\textrm{ ,} (3) where the pair (𝐈i,ci),∀i∈(1,B)superscript𝐈𝑖superscript𝑐𝑖for-all𝑖1𝐵(\\mathbf{I}^{i},c^{i}),\\forall{i}\\in{(1,B)} is the i𝑖i-th patch in the batch and the true label of its central voxel, while the scalar value pcisubscript𝑝superscript𝑐𝑖p_{c^{i}} is the predicted posterior for class cisuperscript𝑐𝑖c^{i}. Regularization terms were omitted for simplicity. Multiple sequential optimization steps over different batches gradually lead to convergence. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_21", "text": " Larger training batch sizes B𝐵B are preferred as they approximate the overall data more accurately and lead to better estimation of the true gradient by SGD. However, the memory requirement and computation time increase with the batch size. This limitation is especially relevant for 3D CNNs, where only a few dozens of patches can be processed within reasonable time on modern GPUs. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_22", "text": " To overcome this problem, we devise a training strategy that exploits the dense inference technique on image segments. Following from Eq. (2), if an image segment of size greater than 𝝋C​N​Nsubscript𝝋𝐶𝑁𝑁\\bm{\\varphi}_{CNN} is given as input to our network, the output is a posterior probability for multiple voxels V=∏i={x,y,z}𝜹L(i)𝑉subscriptproduct𝑖𝑥𝑦𝑧superscriptsubscript𝜹𝐿𝑖V=\\prod_{i=\\{x,y,z\\}}{\\bm{\\delta}_{L}^{(i)}}. 
If the training batches are formed of B segments extracted from the training images, the cost function (3) in the case of dense-training becomes: ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_23", "text": " J_{D}(\\mathbf{\\Theta};\\mathbf{I}_{s},\\mathbf{c}_{s})=-\\frac{1}{B\\cdot V}\\sum_{s=1}^{B}\\sum_{v=1}^{V}\\log(p_{c_{s}^{v}}(\\mathbf{x}^{v})), (4) ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_24", "text": " where \\mathbf{I}_{s} and \\mathbf{c}_{s} are the s-th segment of the batch and the true labels of its V predicted voxels respectively. c_{s}^{v} is the true label of the v-th voxel, \\mathbf{x}^{v} the corresponding position in the classification FMs and p_{c_{s}^{v}} the output of the softmax function. The effective batch size is increased by a factor of V without a corresponding increase in computational and memory requirements, as earlier discussed in Sec. 2.1. Notice that this is a hybrid scheme between the commonly used training on individual patches and the dense training scheme on a whole image (Long et al. (2015)), with the latter being problematic to apply for training large 3D CNNs on volumes of high resolution due to memory limitations. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_25", "text": " An appealing consequence of this scheme is that the sampling of input segments provides a flexible and automatic way to balance the distribution of training samples from different segmentation classes which is an important issue that directly impacts the segmentation accuracy. Specifically, we build the training batches by extracting segments from the training images with 50% probability being centred on a foreground or background voxel, alleviating class-imbalance. Note that the predicted voxels V in a segment do not have to be of the same class, something that occurs when a segment is sampled from a region near class boundaries (Fig. 3). Hence, the sampling rate of the proposed hybrid method adjusts to the true distribution of the segmentation task’s classes. Specifically, the smaller a labelled object, the more background voxels will be captured within segments centred on the foreground voxel. Implicitly, this yields a balance between sensitivity and specificity in the case of binary segmentation tasks. In multi-class problems, the rate at which different classes are captured within a segment centred on foreground reflects the real relative distribution of the foreground classes, while adjusting their frequency relatively to the background. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_26", "text": " Deeper networks have greater discriminative power due to the additional non-linearities and better quality of local optima (Choromanska et al. (2015)). However, convolutions with 3D kernels are computationally expensive in comparison to the 2D variants, which hampers the addition of more layers. 
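A minimal numpy sketch of the dense-training loss of Eq. (4) and of the class-balanced segment sampling described above: segment centres are drawn with equal probability from lesion and background voxels, and the cross-entropy is averaged over the B segments and the V voxels predicted in each. The array shapes and helper names are illustrative; no actual network is involved.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_segment_centre(labels, p_foreground=0.5):
    """Pick a segment centre: with probability 0.5 on a lesion (foreground) voxel,
    otherwise on a background voxel, as in the hybrid training scheme."""
    cls = 1 if rng.random() < p_foreground else 0
    candidates = np.argwhere(labels == cls)
    return tuple(candidates[rng.integers(len(candidates))])

def dense_cross_entropy(probs, targets):
    """Eq. (4): mean negative log-likelihood over B segments and V predicted voxels.
    probs: (B, V, C) softmax outputs; targets: (B, V) integer class labels."""
    B, V, _ = probs.shape
    picked = probs[np.arange(B)[:, None], np.arange(V)[None, :], targets]
    return -np.log(picked).mean()

# toy volume with a small lesion, and a toy batch of B=2 segments with V=3 voxels each
labels = np.zeros((8, 8, 8), dtype=int); labels[3:5, 3:5, 3:5] = 1
print(sample_segment_centre(labels))
probs = np.array([[[.9, .1], [.2, .8], [.6, .4]],
                  [[.3, .7], [.5, .5], [.8, .2]]])
targets = np.array([[0, 1, 0], [1, 1, 0]])
print(dense_cross_entropy(probs, targets))
```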
Additionally, 3D architectures have a larger number of trainable parameters, with each layer adding Cl​Cl−1​∏i={x,y,z}𝜿l(i)subscript𝐶𝑙subscript𝐶𝑙1subscriptproduct𝑖𝑥𝑦𝑧superscriptsubscript𝜿𝑙𝑖C_{l}C_{l-1}\\prod_{i=\\{x,y,z\\}}{\\bm{\\kappa}_{l}^{(i)}} weights to the model. Clsubscript𝐶𝑙C_{l} is the number of FMs in layer l𝑙l and 𝜿l{x,y,z}superscriptsubscript𝜿𝑙𝑥𝑦𝑧\\bm{\\kappa}_{l}^{\\{x,y,z\\}} the size of its kernel in the respective spatial dimension. Overall this makes the network increasingly prone to over-fitting. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_27", "text": " In order to build a deeper 3D architecture, we adopt the sole use of small 33superscript333^{3} kernels that are faster to convolve with and contain less weights. This design approach was previously found beneficial for classification of natural images (Simonyan and Zisserman (2014)) but its effect is even more drastic on 3D networks. When compared to common kernel choices of 53superscript535^{3} (Zikic et al. (2014); Urban et al. (2014); Prasoon et al. (2013)) and in our baseline CNN, the smaller 33superscript333^{3} kernels reduce the element-wise multiplications by a factor of approximately 53/33≈4.6superscript53superscript334.65^{3}/3^{3}\\approx 4.6 while reducing the number of trainable parameters by the same factor. Thus deeper network variants that are implicitly regularised and more efficient can be designed by simply replacing each layer of common architectures with more layers that use smaller kernels (Fig. 4). ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_28", "text": " However, deeper networks are more difficult to train. It has been shown that the forward (neuron activations) and backwards (gradients) propagated signal may explode or vanish if care is not given to retain its variance (Glorot and Bengio (2010)). This occurs because at every successive layer l𝑙l, the variance of the signal is multiplied by nli​n⋅v​a​r​(𝐖l)⋅subscriptsuperscript𝑛𝑖𝑛𝑙𝑣𝑎𝑟subscript𝐖𝑙n^{in}_{l}\\cdot var(\\mathbf{W}_{l}), where nli​n=Cl−1​∏i={x,y,z}𝜿l(i)subscriptsuperscript𝑛𝑖𝑛𝑙subscript𝐶𝑙1subscriptproduct𝑖𝑥𝑦𝑧superscriptsubscript𝜿𝑙𝑖n^{in}_{l}=C_{l-1}\\prod_{i=\\{x,y,z\\}}{\\bm{\\kappa}_{l}^{(i)}} is the number of weights through which a neuron of layer l𝑙l is connected to its input and v​a​r​(𝐖l)𝑣𝑎𝑟subscript𝐖𝑙var(\\mathbf{W}_{l}) is the variance of the layer’s weights. To better preserve the signal in the initial training stage we adopt a scheme recently derived for ReLu-based networks by He et al. (2015) and initialize the kernel weights of our system by sampling from the normal distribution 𝒩​(0,2/nli​n)𝒩02subscriptsuperscript𝑛𝑖𝑛𝑙\\mathcal{N}(0,\\sqrt{2/n^{in}_{l}}). ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_29", "text": " A phenomenon of similar nature that hinders the network’s performance is the “internal covariate shift” (Ioffe and Szegedy (2015)). It occurs throughout training, because the weight updates to deeper layers result in a continuously changing distribution of signal at higher layers, which hinders the convergence of their weights. Specifically, at training iteration t𝑡t the weight updates may cause deviation ϵl,tsubscriptitalic-ϵ𝑙𝑡\\epsilon_{l,t} to the variance of the weights. 
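To make the kernel-size and initialisation numbers above concrete, the sketch below counts the weights of a single 3D convolutional layer (recovering the 5^3 / 3^3 ≈ 4.6 reduction factor) and draws weights from the He et al. (2015) initialisation N(0, sqrt(2/n_in)). The channel counts 30 and 40 are purely illustrative and not the paper's configuration.

```python
import numpy as np

def conv3d_weights(c_in, c_out, k):
    """Trainable weights of one 3D convolutional layer: C_out * C_in * k^3 (biases ignored)."""
    return c_out * c_in * k ** 3

print(conv3d_weights(30, 40, 5) / conv3d_weights(30, 40, 3))   # 125 / 27 ~ 4.63

def he_init(c_in, c_out, k, rng=None):
    """He et al. (2015): W ~ N(0, sqrt(2 / n_in)), with fan-in n_in = C_in * k^3."""
    rng = rng or np.random.default_rng(0)
    n_in = c_in * k ** 3
    return rng.normal(0.0, np.sqrt(2.0 / n_in), size=(c_out, c_in, k, k, k))

print(he_init(30, 40, 3).std())   # close to sqrt(2 / (30 * 27)) ~ 0.0497
```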
At the next iteration the signal will be amplified by nli​n⋅v​a​r​(𝐖l,t+1)=nli​n⋅(v​a​r​(𝐖l,t)+ϵl,t)⋅subscriptsuperscript𝑛𝑖𝑛𝑙𝑣𝑎𝑟subscript𝐖𝑙𝑡1⋅subscriptsuperscript𝑛𝑖𝑛𝑙𝑣𝑎𝑟subscript𝐖𝑙𝑡subscriptitalic-ϵ𝑙𝑡n^{in}_{l}\\cdot var(\\mathbf{W}_{l,t+1})=n^{in}_{l}\\cdot(var(\\mathbf{W}_{l,t})+\\epsilon_{l,t}). Thus before influencing the signal, any deviation ϵl,tsubscriptitalic-ϵ𝑙𝑡\\epsilon_{l,t} is amplified by nli​nsubscriptsuperscript𝑛𝑖𝑛𝑙n^{in}_{l} which is exponential in the number of dimensions. For this reason the problem affects training of 3D CNNs more severely than conventional 2D systems. For countering it, we adopt the recently proposed Batch Normalisation (BN) technique to all hidden layers (Ioffe and Szegedy (2015)), which allows normalization of the FM activations at every optimization step in order to better preserve the signal. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_30", "text": " The segmentation of each voxel is performed by taking into account the contextual information that is captured by the receptive field of the CNN when it is centred on the voxel. The spatial context is providing important information for being able to discriminate voxels that otherwise appear very similar when considering only local appearance. From Eq. (1) follows that an increase of the CNN’s receptive field requires bigger kernels or more convolutional layers, which increases computation and memory requirements. An alternative would be the use of pooling (LeCun et al. (1998)), which however leads to loss of the exact position of the segmented voxel and thus can negatively impact accuracy. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_31", "text": " In order to incorporate both local and larger contextual information into our 3D CNN, we add a second pathway that operates on down-sampled images. Thus, our dual pathway 3D CNN simultaneously processes the input image at multiple scales (Fig. 5). Higher level features such as the location within the brain are learned in the second pathway, while the detailed local appearance of structures is captured in the first. As the two pathways are decoupled in this architecture, arbitrarily large context can be processed by the second pathway by simply adjusting the down-sampling factor FDsubscript𝐹𝐷F_{D}. The size of the pathways can be independently adjusted according to the computational capacity and the task at hand, which may require relatively more or less filters focused on the down-sampled context. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_32", "text": " To preserve the capability of dense inference, spatial correspondence of the activations in the FMs of the last convolutional layers of the two pathways, L​1𝐿1L1 and L​2𝐿2L2, should be ensured. In networks where only unary kernel strides are used, such as the proposed architecture, this requires that for every FDsubscript𝐹𝐷F_{D} shifts of the receptive field 𝝋L​1subscript𝝋𝐿1\\bm{\\varphi}_{L1} over the normal resolution input, only one shift is performed by 𝝋L​2subscript𝝋𝐿2\\bm{\\varphi}_{L2} over the down-sampled input. 
Hence it is required that the dimensions of the FMs in L​2𝐿2L2 are 𝜹L​2{x,y,z}=⌈𝜹L​1{x,y,z}/FD⌉superscriptsubscript𝜹𝐿2𝑥𝑦𝑧superscriptsubscript𝜹𝐿1𝑥𝑦𝑧subscript𝐹𝐷\\bm{\\delta}_{L2}^{\\{x,y,z\\}}=\\lceil\\bm{\\delta}_{L1}^{\\{x,y,z\\}}/F_{D}\\rceil. From Eq. (2), the size of the input to the second pathway is 𝜹i​n​2{x,y,z}=𝝋L​2{x,y,z}+𝜹L​2{x,y,z}−1superscriptsubscript𝜹𝑖𝑛2𝑥𝑦𝑧superscriptsubscript𝝋𝐿2𝑥𝑦𝑧superscriptsubscript𝜹𝐿2𝑥𝑦𝑧1\\bm{\\delta}_{in2}^{\\{x,y,z\\}}=\\bm{\\varphi}_{L2}^{\\{x,y,z\\}}+\\bm{\\delta}_{L2}^{\\{x,y,z\\}}-1 and similar is the relation between 𝜹i​n​1subscript𝜹𝑖𝑛1\\bm{\\delta}_{in1} and 𝜹L​1subscript𝜹𝐿1\\bm{\\delta}_{L1}. These establish the relation between the required dimensions of the input segments from the two resolutions, which can then be extracted centered on the same image location. The FMs of L​2𝐿2L2 are up-sampled to match the dimensions of L​1𝐿1L1’s FMs and are then concatenated together. We add two more hidden layers for combining the multi-scale features before the final classification, as shown in Fig. 5. Integration of the multi-scale parallel pathways in architectures with non-unary strides is discussed in A. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_33", "text": " Combining multi-scale features has been found beneficial in other recent works (Long et al. (2015); Ronneberger et al. (2015)), in which whole 2D images are processed in the network by applying a few number of convolutions and then down-sampling the FMs for further processing at various scales. Our decoupled pathways allow arbitrarily large context to be provided while avoiding the need to load large parts of the 3D volume into memory. Additionally, our architecture extracts features completely independently from the multiple resolutions. This way, the features learned by the first pathway retain finest details, as they are not involved in processing low resolution context. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_34", "text": " Because neighboring voxels share substantial spatial context, the soft segmentation maps produced by the CNN tend to be smooth, even though neighborhood dependencies are not modeled directly. However, local minima in training and noise in the input images can still result in some spurious outputs, with small isolated regions or holes in the predictions. We employ a fully connected CRF (Krähenbühl and Koltun (2011)) as a post-processing step to achieve more structured predictions. As we describe below, this CRF is capable of modeling arbitrarily large voxel-neighborhoods but is also computationally efficient, making it ideal for processing 3D multi-modal medical scans. 
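A short sketch of the input-size relations for the dual-pathway architecture stated a few passages above (unit strides): delta_L2 = ceil(delta_L1 / F_D) and delta_in2 = phi_L2 + delta_L2 - 1. The values used below (receptive fields of 17^3 in both pathways, F_D = 3, nine predictions per axis) follow the configuration described in the text, but the helper itself is only illustrative.

```python
import math

def pathway_input_sizes(delta_L1, phi_L1=17, phi_L2=17, F_D=3):
    """Segment sizes for the normal- and low-resolution pathways (unit strides)."""
    delta_in1 = phi_L1 + delta_L1 - 1          # normal-resolution input segment
    delta_L2 = math.ceil(delta_L1 / F_D)       # last-layer FM size of the low-res pathway
    delta_in2 = phi_L2 + delta_L2 - 1          # low-resolution input segment
    return delta_in1, delta_L2, delta_in2

# 9 predictions per axis -> a 25^3 normal-resolution segment paired with a 19^3 segment
# extracted from the image down-sampled by F_D = 3, both centred on the same location
print(pathway_input_sizes(delta_L1=9))         # (25, 3, 19)
```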
", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_35", "text": " For an input image 𝐈𝐈\\mathbf{I} and the label configuration (segmentation) 𝐳𝐳\\mathbf{z}, the Gibbs energy in a CRF model is given by ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_36", "text": " E​(𝐳)=∑iψu​(zi)+∑i​j,i≠jψp​(zi,zj)​ .𝐸𝐳subscript𝑖subscript𝜓𝑢subscript𝑧𝑖subscript𝑖𝑗𝑖𝑗subscript𝜓𝑝subscript𝑧𝑖subscript𝑧𝑗 .E(\\mathbf{z})=\\sum_{i}{\\psi_{u}(z_{i})}+\\sum_{ij,i\\neq j}{\\psi_{p}(z_{i},z_{j})}\\textrm{ .} (5) ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_37", "text": " The unary potential is the negative log-likelihood ψu​(zi)=−l​o​g​P​(zi|𝐈)subscript𝜓𝑢subscript𝑧𝑖𝑙𝑜𝑔𝑃conditionalsubscript𝑧𝑖𝐈\\psi_{u}(z_{i})=-logP(z_{i}|\\mathbf{I}), where in our case P​(zi|𝐈)𝑃conditionalsubscript𝑧𝑖𝐈P(z_{i}|\\mathbf{I}) is the CNN’s output for voxel i𝑖i. In a fully connected CRF, the pairwise potential is of form ψp​(zi,zj)=μ​(zi,zj)​k​(𝐟𝐢,𝐟𝐣)subscript𝜓𝑝subscript𝑧𝑖subscript𝑧𝑗𝜇subscript𝑧𝑖subscript𝑧𝑗𝑘subscript𝐟𝐢subscript𝐟𝐣\\psi_{p}(z_{i},z_{j})=\\mu(z_{i},z_{j})k(\\mathbf{f_{i}},\\mathbf{f_{j}}) between any pair of voxels, regardless of their spatial distance. The Pott’s Model is commonly used as the label compatibility function, giving μ​(zi,zj)=(zi≠zj)𝜇subscript𝑧𝑖subscript𝑧𝑗delimited-()subscript𝑧𝑖subscript𝑧𝑗\\mu(z_{i},z_{j})=(z_{i}\\neq z_{j}). The corresponding energy penalty is given by the function k𝑘k, which is defined over an arbitrary feature space, with 𝐟𝐢,𝐟𝐣subscript𝐟𝐢subscript𝐟𝐣\\mathbf{f_{i}},\\mathbf{f_{j}} being the feature vectors of the pair of voxels. Krähenbühl and Koltun (2011) observed that if the penalty function is defined as a linear combination of Gaussian kernels, k​(𝐟𝐢,𝐟𝐣)=∑m=1Mw(m)​k(m)​(𝐟𝐢,𝐟𝐣)𝑘subscript𝐟𝐢subscript𝐟𝐣superscriptsubscript𝑚1𝑀superscript𝑤𝑚superscript𝑘𝑚subscript𝐟𝐢subscript𝐟𝐣k(\\mathbf{f_{i}},\\mathbf{f_{j}})=\\sum_{m=1}^{M}{w^{(m)}k^{(m)}(\\mathbf{f_{i}},\\mathbf{f_{j}})}, the model lends itself for very efficient inference with mean field approximation, after expressing message passing as convolutions with the Gaussian kernels in the space of the feature vectors 𝐟𝐢,𝐟𝐣subscript𝐟𝐢subscript𝐟𝐣\\mathbf{f_{i}},\\mathbf{f_{j}}. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_38", "text": " We extended the work of the original authors and implemented a 3D version of the CRF for processing multi-modal scans. We make use of two Gaussian kernels, which operate in the feature space defined by the voxel coordinates pi,dsubscript𝑝𝑖𝑑p_{i,d} and the intensities of the c𝑐c-th modality-channel Ii,csubscript𝐼𝑖𝑐I_{i,c} for voxel i𝑖i. The smoothness kernel, k(1)​(𝐟𝐢,𝐟𝐣)=e​x​p​(−∑d={x,y,z}|pi,d−pj,d|22​σα,d2)superscript𝑘1subscript𝐟𝐢subscript𝐟𝐣𝑒𝑥𝑝subscript𝑑𝑥𝑦𝑧superscriptsubscript𝑝𝑖𝑑subscript𝑝𝑗𝑑22superscriptsubscript𝜎𝛼𝑑2k^{(1)}(\\mathbf{f_{i}},\\mathbf{f_{j}})=exp\\Big{(}-\\sum_{d=\\{x,y,z\\}}{\\frac{|p_{i,d}-p_{j,d}|^{2}}{2\\sigma_{\\alpha,d}^{2}}}\\Big{)}, is defined by a diagonal covariance matrix with elements the configurable parameters σα,dsubscript𝜎𝛼𝑑\\sigma_{\\alpha,d}, one for each axis. These parameters express the size and shape of neighborhoods that homogeneous labels are encouraged. 
The appearance kernel k(2)​(𝐟𝐢,𝐟𝐣)=e​x​p​(−∑d={x,y,z}|pi,d−pj,d|22​σβ,d2−∑c=1C|Ii,c−Ij,c|22​σγ,c2)superscript𝑘2subscript𝐟𝐢subscript𝐟𝐣𝑒𝑥𝑝subscript𝑑𝑥𝑦𝑧superscriptsubscript𝑝𝑖𝑑subscript𝑝𝑗𝑑22superscriptsubscript𝜎𝛽𝑑2superscriptsubscript𝑐1𝐶superscriptsubscript𝐼𝑖𝑐subscript𝐼𝑗𝑐22superscriptsubscript𝜎𝛾𝑐2k^{(2)}(\\mathbf{f_{i}},\\mathbf{f_{j}})=exp\\Big{(}-\\sum_{d=\\{x,y,z\\}}{\\frac{|p_{i,d}-p_{j,d}|^{2}}{2\\sigma_{\\beta,d}^{2}}}-\\sum_{c=1}^{C}{\\frac{|I_{i,c}-I_{j,c}|^{2}}{2\\sigma_{\\gamma,c}^{2}}}\\Big{)} is defined similarly. The additional parameters σγ,csubscript𝜎𝛾𝑐\\sigma_{\\gamma,c} can be interpreted as how strongly to enforce homogeneous appearance in the C𝐶C input channels, when voxels in an area spatially defined by σβ,dsubscript𝜎𝛽𝑑\\sigma_{\\beta,d} are identically labelled. Finally, the configurable weights w(1),w(2)superscript𝑤1superscript𝑤2w^{(1)},w^{(2)} define the relative strength of the two factors. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_39", "text": " In this section we present a series of experiments in order to analyze the impact of each of the main contributions and to justify the choices made in the design of the proposed 11-layers, multi-scale 3D CNN architecture, referred to as the DeepMedic. Starting from the CNN baseline as discussed in Sec. 2.1, we first explore the benefit of our proposed dense training scheme (cf. Sec. 2.2), then investigate the use of deeper models (cf. Sec. 2.3) and then evaluate the influence of the multi-scale dual pathway (cf. Sec. 2.4). Finally, we compare our method with corresponding 2D variants to assess the benefit of processing 3D context. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_40", "text": " The following experiments are conducted using the TBI dataset with 61 multi-channel MRIs which is described in more detail later in Sec. 4.1. Here, the images are randomly split into a validation and training set, with 15 and 46 images each. The same sets are used in all analyses. To monitor the progress of segmentation accuracy during training, we extract 10k random patches at regular intervals, with equal numbers extracted from each of the validation images. The patches are uniformly sampled from the brain region in order to approximate the true distribution of lesions and healthy tissue. Full segmentation of the validation datasets is performed every five epochs and the mean Dice similarity coefficient (DSC) is determined. Details on the configuration of the networks are provided in B. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_41", "text": " We compare our proposed dense training method with two other commonly used training schemes on the 5-layers baseline CNN (see Fig. 2). The first common scheme trains on 173superscript17317^{3} patches extracted uniformly from the brain region, and the second scheme samples patches equally from the lesion and background class. We refer to these schemes as Puniuni{}_{\\text{uni}} and Peqeq{}_{\\text{eq}}. The results shown in Fig. 6 show a correlation of sensitivity and specificity with the percentage of training samples that come from the lesion class. Peqeq{}_{\\text{eq}} performs poorly because of over-segmentation (high sensitivity, low specificity). 
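Returning briefly to the fully connected CRF defined just before this experimental section: its two Gaussian kernels can be written down directly, as in the sketch below. The sigma values, the kernel weights and the toy intensities are illustrative placeholders, and the efficient mean-field inference with high-dimensional filtering used in the paper is not reproduced here.

```python
import numpy as np

def smoothness_kernel(p_i, p_j, sigma_alpha):
    """k1(fi, fj) = exp(-sum_d |p_i,d - p_j,d|^2 / (2 sigma_alpha,d^2))."""
    return np.exp(-np.sum((p_i - p_j) ** 2 / (2.0 * sigma_alpha ** 2)))

def appearance_kernel(p_i, p_j, I_i, I_j, sigma_beta, sigma_gamma):
    """k2 adds a per-modality intensity term |I_i,c - I_j,c|^2 / (2 sigma_gamma,c^2)."""
    pos = np.sum((p_i - p_j) ** 2 / (2.0 * sigma_beta ** 2))
    app = np.sum((I_i - I_j) ** 2 / (2.0 * sigma_gamma ** 2))
    return np.exp(-(pos + app))

# pairwise energy penalty for two differently labelled voxels (Potts compatibility mu = 1)
p_i, p_j = np.array([10.0, 10.0, 10.0]), np.array([12.0, 10.0, 9.0])
I_i, I_j = np.array([0.30, -0.10]), np.array([0.25, -0.05])      # two MR channels (toy values)
sigma_alpha = np.array([3.0, 3.0, 3.0])
sigma_beta, sigma_gamma = np.array([15.0, 15.0, 15.0]), np.array([0.1, 0.1])
w1, w2 = 3.0, 1.0                                                # relative kernel weights (toy values)
penalty = w1 * smoothness_kernel(p_i, p_j, sigma_alpha) \
        + w2 * appearance_kernel(p_i, p_j, I_i, I_j, sigma_beta, sigma_gamma)
print(penalty)
```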
Puniuni{}_{\\text{uni}} has better classification on the background class (high specificity), which leads to high mean voxel-wise accuracy since the majority corresponds to background, but not particularly high DSC scores due to under-segmentation (low sensitivity). ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_42", "text": " To evaluate our dense training scheme, we train multiple models with varying sized image segments, equally sampled from lesions and background. The tested sizes of the segments go from 193superscript19319^{3} upwards to 293superscript29329^{3}. The models are referred to as “S-d𝑑d”, where d𝑑d is the side length of the cubic segments. For fair comparison, the batch sizes in all the experiments are adjusted to have a similar memory footprint and lead to similar training times as compared to training on Puni and Peq222Dense training on a whole volume was inapplicable in these experimental settings due to memory limitations but was previously shown to give similar results as training on uniformly sampled patches (Long et al. (2015)).. We observe a great performance increase for model S-1919{19} over Peqeq{}_{\\text{eq}}. We account this partly to the efficient increase of the effective batch size (B⋅V⋅𝐵𝑉B\\cdot V in Eq. (4)), but also to the altered distribution of training samples. As we increase the size of the training segments further, we quickly reach a balance between the sensitivity of Peq and the specificity of Puni, which results in improved segmentation as expressed by the DSC. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_43", "text": " The segment size is a hyper-parameter in our model. We observe that the increase in performance with increasing segment size quickly levels off, and similar performance is obtained for a wide range of segment sizes, which allows for easy configuration. For the remaining experiments, all models were trained on segments of size 253superscript25325^{3}. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_44", "text": " The 5-layers baseline CNN (Fig. 2), here referred to as the “Shallow” model, is extended to 9-layers by replacing each convolutional layer that uses 53superscript535^{3} kernels with two layers that use 33superscript333^{3} kernels (Fig. 4). This model is referred to as “Deep”. Training the latter, however, utterly fails with the model making only predictions corresponding to the background class. This problem is related to the challenge of preserving the signal as it propagates through deep networks and its variance gets multiplied with the variance of the weights, as previously discussed in Sec. 2.3. One of the causes is that the weights of both models have been initialized with the commonly used scheme of sampling from the normal distribution 𝒩​(0,0.01)𝒩00.01\\mathcal{N}(0,0.01) (cf. Krizhevsky et al. (2012)). In comparison, the initialization scheme by He et al. (2015), derived for preserving the signal in the initial stage of training, results in higher values and overcomes this problem. Further preservation of the signal is obtained by employing Batch Normalization. This results in an enhanced 9-layers model which we refer to as “Deep+”, and using the same enhancements on the Shallow model yields “Shallow+”. 
The significant performance improvement of Deep+ over Shallow+, as shown in Fig. 7, is the result of the greater representational power of the deeper network. The two models need similar computational times, which highlights the benefits of utilizing small kernels in the design of 3D CNNs. Although the deeper model requires more sequential (layer by layer) computations on the GPU, those are faster due to the smaller kernel size. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_45", "text": " The final version of the proposed network architecture, referred to as “DeepMedic”, is built by extending the Deep+ model with a second convolutional pathway that is identical to the first one. Two hidden layers are added for combining the multi-scale features before the classification layer, resulting in a deep network of 11-layers (cf. Fig. 5). The input segments to the second pathway are extracted from the images down-sampled by a factor of three. Thus, the network is capable of capturing context in a 513superscript51351^{3} area of the original image through the 173superscript17317^{3} receptive field of the lower-resolution pathway, while only doubling the computational and memory requirements over the single pathway CNN. In comparison, the most recent 2D CNN systems proposed for lesion segmentation (Havaei et al. (2015); Pereira et al. (2015)) have a receptive field limited to 332superscript33233^{2} voxels. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_46", "text": " Figure 8 shows the improvement DeepMedic achieves over the single pathway model Deep+. In Fig. 9 we show two representative visual examples of this improvement when using the multi-scale CNN. Finally, we confirm that the performance increase can be accounted to the additional context and not the additional capacity of DeepMedic. To this end, we build a big single-scale model by doubling the FMs at each of the 9-layers of Deep+ and adding two hidden layers. This 11-layers deep and wide model, referred to as “BigDeep+”, has the same number of parameters as DeepMedic. The performance of the model is not improved, while showing signs of over-fitting. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_47", "text": " Acquired brain MRI scans are often anisotropic. Such is the case for most sequences in our TBI dataset, which have been acquired with lower axial resolution, except for the isotropic MPRAGE. We perform a series of experiments to investigate the behaviour of 2D networks and assess the benefit of processing 3D context in this setting. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_48", "text": " DeepMedic can be converted to 2D by setting the third dimension of each kernel to one. This way only information from the surrounding context on the axial plane influences the classification of each voxel. If 2D segments are given as input, the dimensionality of the feature maps decreases and so does the memory required. This allows developing 2D variants with increased width, depth and size of training batch with similar requirements as the 3D version, which are valid candidates for model selection in practical scenarios. 
We assess various configurations and present some representatives in Table 1(b) along with their performance. Best segmentation among investigated 2D variants is achieved by a 19-layers, multi-scale network, reaching 61.5% average DSC on the validation fold. The decline from the 66.6% DSC achieved by the 3D version of DeepMedic indicates the importance of processing 3D context even in settings where most acquired sequences have low resolution along a certain axis. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_49", "text": " The proposed system consisting of the DeepMedic CNN architecture, optionally coupled with a fully connected CRF, is evaluated on three lesion segmentation tasks including challenging clinical data from patients with traumatic brain injuries, brain tumors, and ischemic stroke. Quantitative evaluation and comparisons with state-of-the-art are reported for each of the tasks. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_50", "text": " Sixty-six patients with moderate-to-severe TBI who required admission to the Neurosciences Critical Care Unit at Addenbrooke’s Hospital, Cambridge, UK, underwent imaging using a 3-Tesla Siemens Magnetom TIM Trio within the first week of injury. Ethical approval was obtained from the Local Research Ethics Committee (LREC 97/290) and written assent via consultee agreement was obtained for all patients. The structural MRI sequences that are used in this work are isotropic MPRAGE (1mm×mm\\times1mm×mm\\times1m​m𝑚𝑚mm), axial FLAIR, T2 and Proton Density (PD) (0.7mm×mm\\times0.7mm×mm\\times5m​m𝑚𝑚mm), and Gradient-Echo (GE) (0.86mm×mm\\times0.86mm×mm\\times5m​m𝑚𝑚mm). All visible lesions were manually annotated on the FLAIR and GE sequences with separate labeling for each lesion type. In nine patients the presence of hyperintense white matter lesions that were felt to be chronic in nature were also annotated. Artifacts, for example, signal loss secondary to intraparenchymal pressure probes, were also noted. For the purpose of this study we focus on binary segmentation of all abnormalities within the brain tissue. Thus, we merged all classes that correspond to intra-cerebral abnormalities into a single “lesion” label. Extra-cerebral pathologies such as epidural and subdural hematoma were treated as background. We excluded two datasets because of corrupted FLAIR images, two cases because no lesions were found and one case because of a major scanning artifact corrupting the images. This results in a total of 61 cases used for quantitative evaluation. Brain masks were obtained using the ROBEX tool (Iglesias et al. (2011)). All images were resampled to an isotropic 1​m​m31𝑚superscript𝑚31mm^{3} resolution, with dimensions 193×\\times229×\\times193 and affinely registered (Studholme et al. (1999)) to MNI space using the atlas by Grabner et al. (2006). No bias field correction was used as preliminary results showed that this can negatively affect lesion appearance. Image intensities were normalized to have zero-mean and unit variance, as it has been reported that this improves CNN results (Jarrett et al. (2009)). ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_51", "text": " Network configuration and training: The network architecture corresponds to the one described in Sec. 3.4, i.e. 
a dual-pathway, 11-layers deep CNN. The training data is augmented by adding images reflected along the sagittal axis. To make the network invariant to absolute intensities we also shift the intensities of each MR channel c𝑐c of every training segment by ic=rc​σcsubscript𝑖𝑐subscript𝑟𝑐subscript𝜎𝑐i_{c}=r_{c}\\sigma_{c}. rcsubscript𝑟𝑐r_{c} is sampled for every segment from 𝒩​(0,0.1)𝒩00.1\\mathcal{N}(0,0.1) and σcsubscript𝜎𝑐\\sigma_{c} is the standard deviation of intensities under the brain mask in the corresponding image. The network is regularized using dropout (Hinton et al. (2012)) with a rate of 2% on all convolutional layers, which is in addition to a 50% rate used on the last two layers. The network is evaluated with 5-fold cross-validation on the 61 subjects. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_52", "text": " CRF configuration: The parameters of the fully connected CRF are determined in a configuration experiment using random-search and 15 randomly selected subjects from the TBI database with predictions from a preliminary version of the corresponding model. The 15 subjects are reshuffled into the 5-folds used for subsequent evaluation. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_53", "text": " Random Forest baseline: We have done our best to set up a competitive baseline for comparison. We employ a context-sensitive Random Forest, similar to the model presented by Zikic et al. (2012) for brain tumors except that we apply the forest to the MR images without additional tissue specific priors. We train a forest with 50 trees and maximum depth of 30. Larger size did not improve results. Training data points are approximately equally sampled from lesion and background classes, with the optimal balance empirically chosen. Two hundred randomized cross-channel box features are evaluated at each split node with maximum offsets and box sizes of 20mm. The same folds of training and test sets are used as for our CNN approach. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_54", "text": " Table 1 summarizes the results on TBI. Our CNN significantly outperforms the Random Forest baseline, while the relatively overall low DSC values indicate the difficulty of the task. Due to randomness during training the local minima where a network converges are different between training sessions and some errors they produce differ (Choromanska et al. (2015)). To clear the unbiased errors of the network we form an ensemble of three similar networks, aggregating their output by averaging. This ensemble yields better performance in all metrics but also allows us to investigate the behaviour of our network focusing only on the biased errors. Fig. 10 shows the DSC obtained by the ensemble on each subject in relation to the manually segmented and predicted lesion volume. The network is capable of segmenting cases with very small lesions, although, performance is less robust in these cases as even small errors have large influence on the DSC metric. Investigation of the predicted lesion volume, which is an important biomarker for prognostication, shows that the network is neither biased towards the lesion nor background class, with promising results even on cases with very small lesions. 
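A minimal numpy sketch of the per-channel intensity augmentation i_c = r_c * sigma_c mentioned in the training configuration earlier in this passage. Whether 0.1 parameterizes the variance or the standard deviation of the sampled shift is not spelled out here, so using it as the standard deviation is an assumption, and all array shapes are illustrative.

```python
import numpy as np

def augment_intensity(segment, brain_sigmas, rng=None):
    """Shift each MR channel of a training segment by i_c = r_c * sigma_c,
    with r_c drawn per segment from a zero-mean normal with scale 0.1.
    `segment` has shape (channels, x, y, z); `brain_sigmas` holds the
    per-channel intensity standard deviation under the brain mask."""
    rng = rng or np.random.default_rng()
    out = segment.copy()
    for c in range(segment.shape[0]):
        r_c = rng.normal(loc=0.0, scale=0.1)   # assumed: 0.1 is the std. dev.
        out[c] += r_c * brain_sigmas[c]
    return out

# Toy usage with random data standing in for a real multi-channel segment.
segment = np.random.randn(4, 25, 25, 25).astype(np.float32)  # 4 MR channels
sigmas = np.ones(4, dtype=np.float32)                         # placeholder stds
augmented = augment_intensity(segment, sigmas)
```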
Furthermore, we separately evaluate the influence of the post-processing with the fully connected CRF. As shown in Table 1, the CRF yields improvements over all classifiers. Effects are more prominent when the performance of the primary segmenter degrades, which shows the robustness of this regulariser. Fig. 11 shows three representative cases. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_55", "text": " For brain tumors, we evaluate our system on the data from the 2015 Brain Tumor Segmentation Challenge (BRATS) (Menze et al. (2015)). The training set consists of 220 cases with high grade (HG) and 54 cases with low grade (LG) glioma for which corresponding reference segmentations are provided. The segmentations include the following tumor tissue classes: 1) necrotic core, 2) edema, 3) non-enhancing and 4) enhancing core. The test set consists of 110 cases of both HG and LG but the grade is not revealed. Reference segmentations for the test set are hidden and evaluation is carried out via an online system. For evaluation, the four predicted labels are merged into different sets of whole tumor (all four classes), the core (classes 1,3,4), and the enhancing tumor (class 4)333For interpretation of the results note that, to the best of our knowledge, cases where the “enhancing tumor” class is not present in the manual segmentation are considered as zeros for the calculation of average performance by the evaluation platform, lowering the upper bound for this class.. For each subject, four MRI sequences are available, FLAIR, T1, T1-contrast and T2. The datasets are pre-processed by the organizers and provided as skull-stripped, registered to a common space and resampled to isotropic 1​m​m31𝑚superscript𝑚31mm^{3} resolution. Dimensions of each volume are 240×\\times240×\\times155. We add minimal pre-processing of normalizing the brain-tissue intensities of each sequence to have zero-mean and unit variance. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_56", "text": " Network configuration and training: We modify the DeepMedic architecture to handle multi-class problems by extending the classification layer to five feature maps (four tumor classes plus background). The rest of the configuration remains unchanged. We enrich the dataset with sagittal reflections. Opposite to the experiments on TBI, we do not employ the intensity perturbation and dropout on convolutional layers, because the network should not require as much regularisation with this large database. The network is trained on image segments extracted with equal probability centred on the whole tumor and healthy tissue. The distribution of the classes captured by our training scheme is provided in C. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_57", "text": " To examine our network’s behaviour, we first evaluate it on the training data of the challenge. For this, we run a 5-fold cross validation where each fold contains both HG and LG images. We then retrain the network using all training images, before applying it on the test data. 
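The two pre-processing conventions quoted above, zero-mean/unit-variance intensity normalization and merging the four BRATS labels into whole-tumor, core and enhancing regions, are simple to express in code. The sketch below runs on synthetic arrays and is an illustration, not the authors' pipeline.

```python
import numpy as np

def normalize_channel(volume, mask):
    """Zero-mean, unit-variance normalization of one MR sequence, computed
    over the voxels inside the (brain) mask."""
    vals = volume[mask > 0]
    return (volume - vals.mean()) / (vals.std() + 1e-8)

def brats_regions(label_map):
    """Merge the four BRATS tissue labels into the evaluation regions quoted
    above: whole tumor (classes 1-4), core (1, 3, 4), enhancing (4)."""
    whole = np.isin(label_map, [1, 2, 3, 4])
    core = np.isin(label_map, [1, 3, 4])
    enhancing = label_map == 4
    return whole, core, enhancing

# Toy usage on synthetic data with the stated 240 x 240 x 155 dimensions.
vol = np.random.randn(240, 240, 155).astype(np.float32) * 50 + 300
mask = np.ones_like(vol, dtype=np.uint8)
vol_norm = normalize_channel(vol, mask)
labels = np.random.randint(0, 5, size=vol.shape)
whole, core, enh = brats_regions(labels)
```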
", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_58", "text": " CRF configuration: For the multi-class problem it is challenging to find a global set of parameters for the CRF which can consistently improve the segmentation of all classes. So instead we merge the four predicted probability maps into a single “whole tumor” map for CRF post-processing. The CRF then only refines the boundaries between tumor and background and additionally removes isolated false positives. Similarly to the experiments on TBI, the CRF is configured on a random subset of 44 HG and 18 LG training images, which are then reshuffled into the subsequent 5-fold cross validation. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_59", "text": " Quantitative results from the application of the DeepMedic, the CRF and an ensemble of three similar networks on the training data are presented in Table 2. The latter two offer an improvement, albeit fairly small since the performance of DeepMedic is already rather high in this task. Also shown are results from previous works, as reported on the online evaluation platform. Various settings may vary among submissions, such as the pre-processing pipeline or the number of folds used for cross-validation. Still it appears that our system performs favourably compared to previous state-of-the-art, including the semi-automatic system of Bakas et al. (2015) (bakas1) who won the latest challenge and the method of Pereira et al. (2015) (peres1), which is based on grade-specific 2D CNNs and requires visual inspection of the tumor and identification of the grade by the user prior to segmentation. Examples of segmentations obtained with our method are shown in Fig. 12. DeepMedic behaves very well in preserving the hierarchical structure of the tumor, which we account to the large context processed by our multi-scale network. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_60", "text": " Table 3 shows the results of our method on the BRATS test data. Results of other submissions are not accessible. The decrease in performance is possibly due to the the inclusion of test images that vary significantly from the training data, such as cases acquired in clinical centers that did not provide any of the training images, something that was confirmed by the organisers. Note that performance gains obtained with the CRF are larger in this case. This indicates not only that its configuration has not overfitted to the training database but also that the CRF is robust to factors of variation between acquisition sites, which complements nicely the more sensitive CNN. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_61", "text": " We participated in the 2015 Ischemic Stroke Lesion Segmentation (ISLES) challenge, where our system achieved the best results among all participants on sub-acute ischemic stroke lesions (Maier et al. (2017)). In the training phase of the challenge, 28 datasets have been made available, along with manual segmentations. Each dataset included T1, T1-contrast, FLAIR and DWI sequences. All images were provided as skull-stripped and resampled to isotropic 1​m​m31𝑚superscript𝑚31mm^{3} voxel resolution. Each volume is of size 230×\\times230×\\times154. 
In the testing stage, teams were provided with 36 datasets for evaluation. The test data were acquired in two clinical centers, with one of them being the same that provided all training images. Corresponding expert segmentations were hidden and results had to be submitted to an online evaluation platform. Similar to BRATS, the only pre-processing that we applied is the normalization of each image to the zero-mean and unit variance. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_62", "text": " Network Configuration and Training: The configuration of the network employed is described in Kamnitsas et al. (2015). The main difference with the configuration used for TBI and tumors as employed above is the relatively smaller number of FMs in the low-resolution pathway. This choice should not significantly influence accuracy on the generally small SISS lesions but it allowed us to lower the computational cost. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_63", "text": " Similar to the other experiments, we evaluate our network with a 5-fold cross validation on the training datasets. We use data augmentation with sagittal reflections. For the testing phase of the challenge, we trained an ensemble of three networks on all training cases and aggregate their predictions by averaging. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_64", "text": " CRF configuration: The parameters of the CRF were configured via a random search on the whole training dataset. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_65", "text": " The performance of our system on the training data is shown in Table 4. Significant improvement is achieved by the structural regularisation offered by the CRF, although it could be partially accounted for by overfitting the training data during the CRF’s configuration. Examples for visual inspection are shown in Fig. 13. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_66", "text": " For the testing phase of the challenge we formed an ensemble of three networks, coupled with the fully connected CRF. Our submission ranked first, indicating superior performance on this challenging task among 14 submissions. Table 5 shows our results, along with the other two top entries (Feng et al. (2015); Halme et al. (2015)). Among the other participating methods was the CNN of Havaei et al. (2015) with 3 layers of 2D convolutions. That method perfomed less well on this challenging task (Maier et al. (2017)). This points out the advantage offered by 3D context, the large field of view of DeepMedic thanks to multi-scale processing and the representational power of deeper networks. It is important to note the decrease of performance in comparison to the training set. All methods performed worse on the data coming from the second clinical center, including the method of Feng et al. (2015) that is not machine-learning based. This highlights a general difficulty with current approaches when applied on multi-center data. 
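For reference, the Dice similarity coefficient (DSC) used as the headline overlap metric in these evaluations can be computed as below. This is a generic sketch; treating two empty masks as a perfect match is an assumed convention.

```python
import numpy as np

def dice_score(pred, ref):
    """Dice similarity coefficient between two binary segmentations:
    DSC = 2 |P ∩ R| / (|P| + |R|)."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # assumed convention when both masks are empty
    return 2.0 * np.logical_and(pred, ref).sum() / denom

# Toy usage: two overlapping synthetic lesion masks.
a = np.zeros((50, 50, 50), dtype=np.uint8); a[10:30, 10:30, 10:30] = 1
b = np.zeros((50, 50, 50), dtype=np.uint8); b[15:35, 10:30, 10:30] = 1
print(f"DSC = {dice_score(a, b):.3f}")   # 0.750 for this toy example
```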
", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_67", "text": " Our CNN is implemented using the Theano library (Bastien et al. (2012)). Each training session requires approximately one day on an NVIDIA GTX Titan X GPU using cuDNN v5.0. The efficient architecture of DeepMedic also allows models to be trained on GPUs with only 3GB of memory. Note that although dimensions of the volumes in the processed databases do not allow dense training on whole volumes for this size of network, dense inference on a whole volume is still possible, as it requires only a forward-pass and thus less memory. In this fashion segmentation of a volume takes less than 30 seconds but requires 12 GB of GPU memory. Tiling the volume into multiple segments of size 353superscript35335^{3} allows inference on 3 GB GPUs in less than three minutes. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_68", "text": " Our 3D fully connected CRF is implemented by extending the original source code by Krähenbühl and Koltun (2011). A CPU implementation is fast, capable of processing a five-channel brain scan in under three minutes. Further speed-up could be achieved with a GPU implementation, but was not found necessary in the scope of this work. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_69", "text": " We have presented DeepMedic, a 3D CNN architecture for automatic lesion segmentation that surpasses state-of-the-art on challenging data. The proposed novel training scheme is not only computationally efficient but also offers an adaptive way of partially alleviating the inherent class-imbalance of segmentation problems. We analyzed the benefits of using small convolutional kernels in 3D CNNs, which allowed us to develop a deeper and thus more discriminative network, without increasing the computational cost and number of trainable parameters. We discussed the challenges of training deep neural networks and the adopted solutions from the latest advances in deep learning. Furthermore, we proposed an efficient solution for processing large image context by the use of parallel convolutional pathways for multi-scale processing, alleviating one of the main computational limitations of previous 3D CNNs. Finally, we presented the first application of a 3D fully connected CRF on medical data, employed as a post-processing step to refine the network’s output, a method that has also been shown promising for processing 2D natural images (Chen et al. (2014)). The design of the proposed system is well suited for processing medical volumes thanks to its generic 3D nature. The capabilities of DeepMedic and the employed CRF for capturing 3D patterns exceed those of 2D networks and locally connected random fields, models that have been commonly used in previous work. At the same time, our system is very efficient at inference time, which allows its adoption in a variety of research and clinical settings. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_70", "text": " The generic nature of our system allows its straightforward application for different lesion segmentation tasks without major adaptations. 
To the best of our knowledge, our system achieved the highest reported accuracy on a cohort of patients with severe TBI. As a comparison, we improved over the reported performance of the pipeline in Rao et al. (2014). Important to note is that the latter work focused only on segmentation of contusions, while our system has been shown capable of segmenting even small and diffused pathologies. Additionally, our pipeline achieved state-of-the-art performance on both public benchmarks of brain tumors (BRATS 2015) and stroke lesions (SISS ISLES 2015). We believe performance can be further improved with task- and data-specific adjustments, for instance in the pre-processing, but our results show the potential of this generically designed segmentation system. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_71", "text": " When applying our pipeline to new tasks, a laborious process is the reconfiguration of the CRF. The model improved our system’s performance with statistical significance in all investigated tasks, most profoundly when the performance of the underlying classifier degrades, proving its flexibility and robustness. Finding optimal parameters for each task, however, can be challenging. This became most obvious on the task of multi-class tumor segmentation. Because the tumor’s substructures vary significantly in appearance, finding a global set of parameters that yields improvements on all classes proved difficult. Instead, we applied the CRF in a binary fashion. This CRF model can be configured with a separate set of parameters for each class. However the larger parameter space would complicate its configuration further. Recent work from Zheng et al. (2015) showed that this particular CRF can be casted as a neural network and its parameters can be learned with regular gradient descent. Training it in an end-to-end fashion on top of a neural network would alleviate the discussed problems. This will be explored as part of future work. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_72", "text": " The discriminative power of the learned features is indicated by the success of recent CNN-based systems in matching human performance in domains where it was previously considered too ambitious (He et al. (2015); Silver et al. (2016)). Analysis of the automatically extracted information could potentially provide novel insights and facilitate research on pathologies for which little prior knowledge is currently available. In an attempt to illustrate this, we explore what patterns have been learned automatically for the lesion segmentation tasks. We visualize the activations of DeepMedic’s FMs when processing a subject from our TBI database. Many appearing patterns are difficult to interpret, especially in deeper layers. In Fig. 14 we provide some examples that have an intuitive explanation. One of the most interesting findings is that the network learns to identify the ventricles, CSF, white and gray matter. This reveals that differentiation of tissue type is beneficial for lesion segmentation. This is in line with findings in the literature, where segmentation performance of traditional classifiers was significantly improved by incorporation of tissue priors (Van Leemput et al. (1999); Zikic et al. (2012)). 
It is intuitive that different types of lesions affect different parts of the brain depending on the underlying mechanisms of the pathology. A rigorous analysis of spatial cues extracted by the network may reveal correlations that are not well defined yet. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_73", "text": " Similarly intriguing is the information extracted in the low-resolution pathway. As they process greater context, these neurons gain additional localization capabilities. The activations of certain FMs form fields in the surrounding areas of the brain. These patterns are preserved in the deepest hidden layers, which indicates they are beneficial for the final segmentation (see two last rows of Fig. 14). We believe these cues provide a spatial bias to the system, for instance that large TBI contusions tend to occur towards the front and sides of the brain (see Fig. 1(c)). Furthermore, the interaction of the multi-resolution features can be observed in FMs of the hidden layer that follows the concatenation of the pathways. The network learns to weight the output of the two pathways, preserving low resolution in certain parts and show fine details in others (bottom row of Fig. 14, first three FMs). Our assumption is that the low-resolution pathway provides a rough localization of large pathologies and brain areas that are challenging to segment, which reserves the rest of the network’s capacity for learning detailed patterns associated with the detection of smaller lesions, fine structures and ambiguous areas. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_74", "text": " The findings of the above exploration lead us to believe that great potential lies into fusing the discriminative power of the “deep black box” with the knowledge acquired over years of targeted biomedical research. Clinical knowledge is available for certain pathologies, such as spatial priors for white matter lesions. Previously engineered models have been proven effective in tackling fundamental imaging problems, such as brain extraction, tissue segmentation and bias field correction. We show that a network is capable of automatically extracting some of this information. It would be interesting, however, to investigate structured ways for incorporating such existing information as priors into the network’s feature space, which should simplify the optimization problem while letting a specialist guide the network towards an optimal solution. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_75", "text": " Although neural networks seem promising for medical image analysis, making the inference process more interpretable is required. This would allow understanding when the network fails, an important aspect in biomedical applications. Although the output is bounded in the (0,1)01(0,1) range and commonly referred to as probability for convenience, it is not a true probability in a Bayesian sense. Research towards Bayesian networks aims to alleviate this limitation. An example is the recent work of Gal and Ghahramani (2015) who show that model confidence can be estimated via sampling the dropout mask. 
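The Monte Carlo dropout idea referenced at the end of the passage above, estimating confidence by sampling the dropout mask at test time, can be illustrated on a toy model. The single dropout-plus-linear "network" and all shapes below are assumptions for demonstration only, not the segmentation CNN.

```python
import numpy as np

def mc_dropout_predict(x, weights, n_samples=20, drop_rate=0.5, rng=None):
    """Run several stochastic forward passes with different dropout masks and
    report the mean prediction and its spread as a rough confidence proxy."""
    rng = rng or np.random.default_rng(0)
    preds = []
    for _ in range(n_samples):
        # Inverted dropout: zero a fraction of inputs and rescale the rest.
        mask = rng.binomial(1, 1.0 - drop_rate, size=x.shape) / (1.0 - drop_rate)
        logits = (x * mask) @ weights
        preds.append(1.0 / (1.0 + np.exp(-logits)))   # sigmoid "probability"
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.std(axis=0)

# Toy usage on synthetic features.
x = np.random.randn(100, 16)
w = np.random.randn(16)
mean_p, std_p = mc_dropout_predict(x, w)
print(f"average predictive std (uncertainty proxy): {std_p.mean():.3f}")
```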
", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_76", "text": " A general point should be made about the performance drop observed when our system is applied on test datasets of BRATS and ISLES in comparison to its cross-validated performance on the training data. In both cases, subsets of the test images were acquired in clinical centers different from the ones of training datasets. Differences in scanner type and acquisition protocols have significant impact on the appearance of the images. The issue of multi-center data heterogeneity is considered a major bottleneck for enabling large-scale imaging studies. This is not specific to our approach, but a general problem in medical image analysis. One possible way of making the CNN invariant to the data heterogeneity is to learn a generative model for the data acquisition process, and use this model in the data augmentation step. This is a direction we explore as part of future work. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_77", "text": " In order to facilitate further research in this area and to provide a baseline for future evaluations, we make the source code of the entire system publicly available. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" } ]
What is the definition of BLEU?
BLEU is the metric used to measure translation quality on language translation tasks [48]. On the WMT 2014 English-to-German translation task, the big Transformer model establishes a new state-of-the-art BLEU score of 28.4 [49]. The BLEU score also drops with a single attention head or with too many heads [53].
[ 48, 49, 53 ]
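To make the metric in the answer above concrete, here is a minimal single-reference BLEU sketch with uniform n-gram weights (n ≤ 4) and a brevity penalty. Real evaluations operate on tokenized corpora with tools such as sacreBLEU; this toy version deliberately omits smoothing.

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=4):
    """Minimal single-reference BLEU: geometric mean of modified n-gram
    precisions (n <= max_n) multiplied by a brevity penalty."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    cand, ref = candidate.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        c_ngrams, r_ngrams = ngrams(cand, n), ngrams(ref, n)
        overlap = sum(min(cnt, r_ngrams[g]) for g, cnt in c_ngrams.items())
        total = max(sum(c_ngrams.values()), 1)
        if overlap == 0:
            return 0.0                      # unsmoothed BLEU collapses to 0
        log_precisions.append(math.log(overlap / total))
    # Brevity penalty: penalize candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(log_precisions) / max_n)

print(bleu("the cat sat on the mat", "the cat sat on the red mat"))  # ~0.67
```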
[ { "id": "1706.03762_all_0", "text": " Recurrent neural networks, long short-term memory and gated recurrent neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation (35, 2, 5). Numerous efforts have since continued to push the boundaries of recurrent language models and encoder-decoder architectures (38, 24, 15). ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_1", "text": " Recurrent models typically factor computation along the symbol positions of the input and output sequences. Aligning the positions to steps in computation time, they generate a sequence of hidden states htsubscriptℎ𝑡h_{t}, as a function of the previous hidden state ht−1subscriptℎ𝑡1h_{t-1} and the input for position t𝑡t. This inherently sequential nature precludes parallelization within training examples, which becomes critical at longer sequence lengths, as memory constraints limit batching across examples. Recent work has achieved significant improvements in computational efficiency through factorization tricks and conditional computation , while also improving model performance in case of the latter. The fundamental constraint of sequential computation, however, remains. ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_2", "text": " Attention mechanisms have become an integral part of compelling sequence modeling and transduction models in various tasks, allowing modeling of dependencies without regard to their distance in the input or output sequences (2, 19). In all but a few cases , however, such attention mechanisms are used in conjunction with a recurrent network. ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_3", "text": " In this work we propose the Transformer, a model architecture eschewing recurrence and instead relying entirely on an attention mechanism to draw global dependencies between input and output. The Transformer allows for significantly more parallelization and can reach a new state of the art in translation quality after being trained for as little as twelve hours on eight P100 GPUs. ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_4", "text": " The goal of reducing sequential computation also forms the foundation of the Extended Neural GPU , ByteNet and ConvS2S , all of which use convolutional neural networks as basic building block, computing hidden representations in parallel for all input and output positions. In these models, the number of operations required to relate signals from two arbitrary input or output positions grows in the distance between positions, linearly for ConvS2S and logarithmically for ByteNet. This makes it more difficult to learn dependencies between distant positions . In the Transformer this is reduced to a constant number of operations, albeit at the cost of reduced effective resolution due to averaging attention-weighted positions, an effect we counteract with Multi-Head Attention as described in section 3.2. ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_5", "text": " Self-attention, sometimes called intra-attention is an attention mechanism relating different positions of a single sequence in order to compute a representation of the sequence. 
Self-attention has been used successfully in a variety of tasks including reading comprehension, abstractive summarization, textual entailment and learning task-independent sentence representations (4, 27, 28, 22). ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_6", "text": " End-to-end memory networks are based on a recurrent attention mechanism instead of sequence-aligned recurrence and have been shown to perform well on simple-language question answering and language modeling tasks . ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_7", "text": " To the best of our knowledge, however, the Transformer is the first transduction model relying entirely on self-attention to compute representations of its input and output without using sequence-aligned RNNs or convolution. In the following sections, we will describe the Transformer, motivate self-attention and discuss its advantages over models such as (17, 18) and . ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_8", "text": " Most competitive neural sequence transduction models have an encoder-decoder structure (5, 2, 35). Here, the encoder maps an input sequence of symbol representations (x1,…,xn)subscript𝑥1…subscript𝑥𝑛(x_{1},...,x_{n}) to a sequence of continuous representations 𝐳=(z1,…,zn)𝐳subscript𝑧1…subscript𝑧𝑛\\mathbf{z}=(z_{1},...,z_{n}). Given 𝐳𝐳\\mathbf{z}, the decoder then generates an output sequence (y1,…,ym)subscript𝑦1…subscript𝑦𝑚(y_{1},...,y_{m}) of symbols one element at a time. At each step the model is auto-regressive , consuming the previously generated symbols as additional input when generating the next. ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_9", "text": " The Transformer follows this overall architecture using stacked self-attention and point-wise, fully connected layers for both the encoder and decoder, shown in the left and right halves of Figure 1, respectively. ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_10", "text": " The encoder is composed of a stack of N=6𝑁6N=6 identical layers. Each layer has two sub-layers. The first is a multi-head self-attention mechanism, and the second is a simple, position-wise fully connected feed-forward network. We employ a residual connection around each of the two sub-layers, followed by layer normalization . That is, the output of each sub-layer is LayerNorm​(x+Sublayer​(x))LayerNorm𝑥Sublayer𝑥\\mathrm{LayerNorm}(x+\\mathrm{Sublayer}(x)), where Sublayer​(x)Sublayer𝑥\\mathrm{Sublayer}(x) is the function implemented by the sub-layer itself. To facilitate these residual connections, all sub-layers in the model, as well as the embedding layers, produce outputs of dimension dmodel=512subscript𝑑model512d_{\\text{model}}=512. ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_11", "text": " The decoder is also composed of a stack of N=6𝑁6N=6 identical layers. In addition to the two sub-layers in each encoder layer, the decoder inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack. Similar to the encoder, we employ residual connections around each of the sub-layers, followed by layer normalization. We also modify the self-attention sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This masking, combined with fact that the output embeddings are offset by one position, ensures that the predictions for position i𝑖i can depend only on the known outputs at positions less than i𝑖i. 
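A small numpy sketch of the residual sub-layer pattern LayerNorm(x + Sublayer(x)) described above. The learned gain/bias of layer normalization is omitted and the toy linear "sub-layer" is a placeholder, so this is a simplification rather than the full model.

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    """Normalize each position's feature vector to zero mean / unit variance
    (no learned gain or bias here, for brevity)."""
    mean = x.mean(axis=-1, keepdims=True)
    std = x.std(axis=-1, keepdims=True)
    return (x - mean) / (std + eps)

def residual_sublayer(x, sublayer):
    """The pattern quoted above: LayerNorm(x + Sublayer(x))."""
    return layer_norm(x + sublayer(x))

# Toy usage: 10 positions with d_model = 512 and a random linear "sub-layer".
d_model = 512
x = np.random.randn(10, d_model)
W = np.random.randn(d_model, d_model) * 0.02
out = residual_sublayer(x, lambda h: h @ W)
print(out.shape)   # (10, 512)
```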
", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_12", "text": " An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key. ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_13", "text": " We call our particular attention \"Scaled Dot-Product Attention\" (Figure 2). The input consists of queries and keys of dimension dksubscript𝑑𝑘d_{k}, and values of dimension dvsubscript𝑑𝑣d_{v}. We compute the dot products of the query with all keys, divide each by dksubscript𝑑𝑘\\sqrt{d_{k}}, and apply a softmax function to obtain the weights on the values. ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_14", "text": " In practice, we compute the attention function on a set of queries simultaneously, packed together into a matrix Q𝑄Q. The keys and values are also packed together into matrices K𝐾K and V𝑉V. We compute the matrix of outputs as: ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_15", "text": " Attention​(Q,K,V)=softmax​(Q​KTdk)​VAttention𝑄𝐾𝑉softmax𝑄superscript𝐾𝑇subscript𝑑𝑘𝑉\\mathrm{Attention}(Q,K,V)=\\mathrm{softmax}(\\frac{QK^{T}}{\\sqrt{d_{k}}})V (1) ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_16", "text": " The two most commonly used attention functions are additive attention , and dot-product (multiplicative) attention. Dot-product attention is identical to our algorithm, except for the scaling factor of 1dk1subscript𝑑𝑘\\frac{1}{\\sqrt{d_{k}}}. Additive attention computes the compatibility function using a feed-forward network with a single hidden layer. While the two are similar in theoretical complexity, dot-product attention is much faster and more space-efficient in practice, since it can be implemented using highly optimized matrix multiplication code. ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_17", "text": " While for small values of dksubscript𝑑𝑘d_{k} the two mechanisms perform similarly, additive attention outperforms dot product attention without scaling for larger values of dksubscript𝑑𝑘d_{k} . We suspect that for large values of dksubscript𝑑𝑘d_{k}, the dot products grow large in magnitude, pushing the softmax function into regions where it has extremely small gradients 111To illustrate why the dot products get large, assume that the components of q𝑞q and k𝑘k are independent random variables with mean 00 and variance 111. Then their dot product, q⋅k=∑i=1dkqi​ki⋅𝑞𝑘superscriptsubscript𝑖1subscript𝑑𝑘subscript𝑞𝑖subscript𝑘𝑖q\\cdot k=\\sum_{i=1}^{d_{k}}q_{i}k_{i}, has mean 00 and variance dksubscript𝑑𝑘d_{k}.. To counteract this effect, we scale the dot products by 1dk1subscript𝑑𝑘\\frac{1}{\\sqrt{d_{k}}}. ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_18", "text": " Instead of performing a single attention function with dmodelsubscript𝑑modeld_{\\text{model}}-dimensional keys, values and queries, we found it beneficial to linearly project the queries, keys and values hℎh times with different, learned linear projections to dksubscript𝑑𝑘d_{k}, dksubscript𝑑𝑘d_{k} and dvsubscript𝑑𝑣d_{v} dimensions, respectively. On each of these projected versions of queries, keys and values we then perform the attention function in parallel, yielding dvsubscript𝑑𝑣d_{v}-dimensional output values. 
These are concatenated and once again projected, resulting in the final values, as depicted in Figure 2. ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_19", "text": " Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions. With a single attention head, averaging inhibits this. ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_20", "text": " $\\mathrm{MultiHead}(Q,K,V)=\\mathrm{Concat}(\\mathrm{head_{1}},...,\\mathrm{head_{h}})W^{O}$, where $\\mathrm{head_{i}}=\\mathrm{Attention}(QW^{Q}_{i},KW^{K}_{i},VW^{V}_{i})$ ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_21", "text": " Where the projections are parameter matrices $W^{Q}_{i}\\in\\mathbb{R}^{d_{\\text{model}}\\times d_{k}}$, $W^{K}_{i}\\in\\mathbb{R}^{d_{\\text{model}}\\times d_{k}}$, $W^{V}_{i}\\in\\mathbb{R}^{d_{\\text{model}}\\times d_{v}}$ and $W^{O}\\in\\mathbb{R}^{hd_{v}\\times d_{\\text{model}}}$. ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_22", "text": " In this work we employ $h=8$ parallel attention layers, or heads. For each of these we use $d_{k}=d_{v}=d_{\\text{model}}/h=64$. Due to the reduced dimension of each head, the total computational cost is similar to that of single-head attention with full dimensionality. ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_23", "text": " The Transformer uses multi-head attention in three different ways: • In \"encoder-decoder attention\" layers, the queries come from the previous decoder layer, and the memory keys and values come from the output of the encoder. This allows every position in the decoder to attend over all positions in the input sequence. This mimics the typical encoder-decoder attention mechanisms in sequence-to-sequence models such as (38, 2, 9). • The encoder contains self-attention layers. In a self-attention layer all of the keys, values and queries come from the same place, in this case, the output of the previous layer in the encoder. Each position in the encoder can attend to all positions in the previous layer of the encoder. • Similarly, self-attention layers in the decoder allow each position in the decoder to attend to all positions in the decoder up to and including that position. We need to prevent leftward information flow in the decoder to preserve the auto-regressive property. We implement this inside of scaled dot-product attention by masking out (setting to $-\\infty$) all values in the input of the softmax which correspond to illegal connections. See Figure 2. 
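The scaled dot-product and multi-head attention equations above translate almost directly into numpy. The sketch below uses randomly drawn projection matrices in place of learned weights and a large negative constant in place of -infinity for masking; it is an illustration, not the tensor2tensor implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V, mask=None):
    """Scaled dot-product attention, Eq. (1): softmax(QK^T / sqrt(d_k)) V.
    `mask` marks illegal connections, which are pushed to ~ -infinity."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)
    if mask is not None:
        scores = np.where(mask, -1e9, scores)
    return softmax(scores) @ V

def multi_head_attention(X, params, h=8):
    """Multi-head attention per the equations above: project, attend per head,
    concatenate, and project back with W^O."""
    WQ, WK, WV, WO = params
    heads = [attention(X @ WQ[i], X @ WK[i], X @ WV[i]) for i in range(h)]
    return np.concatenate(heads, axis=-1) @ WO

# Toy usage: n = 5 positions, d_model = 512, h = 8 heads, d_k = d_v = 64.
n, d_model, h = 5, 512, 8
d_k = d_model // h
rng = np.random.default_rng(0)
params = (rng.normal(size=(h, d_model, d_k)) * 0.02,
          rng.normal(size=(h, d_model, d_k)) * 0.02,
          rng.normal(size=(h, d_model, d_k)) * 0.02,
          rng.normal(size=(h * d_k, d_model)) * 0.02)
X = rng.normal(size=(n, d_model))
print(multi_head_attention(X, params, h).shape)   # (5, 512)
```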
", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_24", "text": " In addition to attention sub-layers, each of the layers in our encoder and decoder contains a fully connected feed-forward network, which is applied to each position separately and identically. This consists of two linear transformations with a ReLU activation in between. ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_25", "text": " FFN​(x)=max⁡(0,x​W1+b1)​W2+b2FFN𝑥0𝑥subscript𝑊1subscript𝑏1subscript𝑊2subscript𝑏2\\mathrm{FFN}(x)=\\max(0,xW_{1}+b_{1})W_{2}+b_{2} (2) ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_26", "text": " While the linear transformations are the same across different positions, they use different parameters from layer to layer. Another way of describing this is as two convolutions with kernel size 1. The dimensionality of input and output is dmodel=512subscript𝑑model512d_{\\text{model}}=512, and the inner-layer has dimensionality df​f=2048subscript𝑑𝑓𝑓2048d_{ff}=2048. ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_27", "text": " Similarly to other sequence transduction models, we use learned embeddings to convert the input tokens and output tokens to vectors of dimension dmodelsubscript𝑑modeld_{\\text{model}}. We also use the usual learned linear transformation and softmax function to convert the decoder output to predicted next-token probabilities. In our model, we share the same weight matrix between the two embedding layers and the pre-softmax linear transformation, similar to . In the embedding layers, we multiply those weights by dmodelsubscript𝑑model\\sqrt{d_{\\text{model}}}. ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_28", "text": " Since our model contains no recurrence and no convolution, in order for the model to make use of the order of the sequence, we must inject some information about the relative or absolute position of the tokens in the sequence. To this end, we add \"positional encodings\" to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension dmodelsubscript𝑑modeld_{\\text{model}} as the embeddings, so that the two can be summed. There are many choices of positional encodings, learned and fixed . ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_29", "text": " In this work, we use sine and cosine functions of different frequencies: ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_30", "text": " P​E(p​o​s,2​i)=s​i​n​(p​o​s/100002​i/dmodel)𝑃subscript𝐸𝑝𝑜𝑠2𝑖𝑠𝑖𝑛𝑝𝑜𝑠superscript100002𝑖subscript𝑑model\\displaystyle PE_{(pos,2i)}=sin(pos/10000^{2i/d_{\\text{model}}}) P​E(p​o​s,2​i+1)=c​o​s​(p​o​s/100002​i/dmodel)𝑃subscript𝐸𝑝𝑜𝑠2𝑖1𝑐𝑜𝑠𝑝𝑜𝑠superscript100002𝑖subscript𝑑model\\displaystyle PE_{(pos,2i+1)}=cos(pos/10000^{2i/d_{\\text{model}}}) ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_31", "text": " where p​o​s𝑝𝑜𝑠pos is the position and i𝑖i is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from 2​π2𝜋2\\pi to 10000⋅2​π⋅100002𝜋10000\\cdot 2\\pi. We chose this function because we hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset k𝑘k, P​Ep​o​s+k𝑃subscript𝐸𝑝𝑜𝑠𝑘PE_{pos+k} can be represented as a linear function of P​Ep​o​s𝑃subscript𝐸𝑝𝑜𝑠PE_{pos}. 
", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_32", "text": " We also experimented with using learned positional embeddings instead, and found that the two versions produced nearly identical results (see Table 3 row (E)). We chose the sinusoidal version because it may allow the model to extrapolate to sequence lengths longer than the ones encountered during training. ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_33", "text": " In this section we compare various aspects of self-attention layers to the recurrent and convolutional layers commonly used for mapping one variable-length sequence of symbol representations (x1,…,xn)subscript𝑥1…subscript𝑥𝑛(x_{1},...,x_{n}) to another sequence of equal length (z1,…,zn)subscript𝑧1…subscript𝑧𝑛(z_{1},...,z_{n}), with xi,zi∈ℝdsubscript𝑥𝑖subscript𝑧𝑖superscriptℝ𝑑x_{i},z_{i}\\in\\mathbb{R}^{d}, such as a hidden layer in a typical sequence transduction encoder or decoder. Motivating our use of self-attention we consider three desiderata. ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_34", "text": " One is the total computational complexity per layer. Another is the amount of computation that can be parallelized, as measured by the minimum number of sequential operations required. ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_35", "text": " The third is the path length between long-range dependencies in the network. Learning long-range dependencies is a key challenge in many sequence transduction tasks. One key factor affecting the ability to learn such dependencies is the length of the paths forward and backward signals have to traverse in the network. The shorter these paths between any combination of positions in the input and output sequences, the easier it is to learn long-range dependencies . Hence we also compare the maximum path length between any two input and output positions in networks composed of the different layer types. ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_36", "text": " As noted in Table 1, a self-attention layer connects all positions with a constant number of sequentially executed operations, whereas a recurrent layer requires O​(n)𝑂𝑛O(n) sequential operations. In terms of computational complexity, self-attention layers are faster than recurrent layers when the sequence length n𝑛n is smaller than the representation dimensionality d𝑑d, which is most often the case with sentence representations used by state-of-the-art models in machine translations, such as word-piece and byte-pair representations. To improve computational performance for tasks involving very long sequences, self-attention could be restricted to considering only a neighborhood of size r𝑟r in the input sequence centered around the respective output position. This would increase the maximum path length to O​(n/r)𝑂𝑛𝑟O(n/r). We plan to investigate this approach further in future work. ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_37", "text": " A single convolutional layer with kernel width k<n𝑘𝑛k<n does not connect all pairs of input and output positions. Doing so requires a stack of O​(n/k)𝑂𝑛𝑘O(n/k) convolutional layers in the case of contiguous kernels, or O​(l​o​gk​(n))𝑂𝑙𝑜subscript𝑔𝑘𝑛O(log_{k}(n)) in the case of dilated convolutions , increasing the length of the longest paths between any two positions in the network. Convolutional layers are generally more expensive than recurrent layers, by a factor of k𝑘k. 
Separable convolutions , however, decrease the complexity considerably, to O​(k⋅n⋅d+n⋅d2)𝑂⋅𝑘𝑛𝑑⋅𝑛superscript𝑑2O(k\\cdot n\\cdot d+n\\cdot d^{2}). Even with k=n𝑘𝑛k=n, however, the complexity of a separable convolution is equal to the combination of a self-attention layer and a point-wise feed-forward layer, the approach we take in our model. ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_38", "text": " As side benefit, self-attention could yield more interpretable models. We inspect attention distributions from our models and present and discuss examples in the appendix. Not only do individual attention heads clearly learn to perform different tasks, many appear to exhibit behavior related to the syntactic and semantic structure of the sentences. ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_39", "text": " This section describes the training regime for our models. ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_40", "text": " We trained on the standard WMT 2014 English-German dataset consisting of about 4.5 million sentence pairs. Sentences were encoded using byte-pair encoding , which has a shared source-target vocabulary of about 37000 tokens. For English-French, we used the significantly larger WMT 2014 English-French dataset consisting of 36M sentences and split tokens into a 32000 word-piece vocabulary . Sentence pairs were batched together by approximate sequence length. Each training batch contained a set of sentence pairs containing approximately 25000 source tokens and 25000 target tokens. ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_41", "text": " We trained our models on one machine with 8 NVIDIA P100 GPUs. For our base models using the hyperparameters described throughout the paper, each training step took about 0.4 seconds. We trained the base models for a total of 100,000 steps or 12 hours. For our big models,(described on the bottom line of table 3), step time was 1.0 seconds. The big models were trained for 300,000 steps (3.5 days). ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_42", "text": " We used the Adam optimizer  with β1=0.9subscript𝛽10.9\\beta_{1}=0.9, β2=0.98subscript𝛽20.98\\beta_{2}=0.98 and ϵ=10−9italic-ϵsuperscript109\\epsilon=10^{-9}. We varied the learning rate over the course of training, according to the formula: ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_43", "text": " l​r​a​t​e=dmodel−0.5⋅min⁡(s​t​e​p​_​n​u​m−0.5,s​t​e​p​_​n​u​m⋅w​a​r​m​u​p​_​s​t​e​p​s−1.5)𝑙𝑟𝑎𝑡𝑒⋅superscriptsubscript𝑑model0.5𝑠𝑡𝑒𝑝_𝑛𝑢superscript𝑚0.5⋅𝑠𝑡𝑒𝑝_𝑛𝑢𝑚𝑤𝑎𝑟𝑚𝑢𝑝_𝑠𝑡𝑒𝑝superscript𝑠1.5lrate=d_{\\text{model}}^{-0.5}\\cdot\\min({step\\_num}^{-0.5},{step\\_num}\\cdot{warmup\\_steps}^{-1.5}) (3) ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_44", "text": " This corresponds to increasing the learning rate linearly for the first w​a​r​m​u​p​_​s​t​e​p​s𝑤𝑎𝑟𝑚𝑢𝑝_𝑠𝑡𝑒𝑝𝑠warmup\\_steps training steps, and decreasing it thereafter proportionally to the inverse square root of the step number. We used w​a​r​m​u​p​_​s​t​e​p​s=4000𝑤𝑎𝑟𝑚𝑢𝑝_𝑠𝑡𝑒𝑝𝑠4000warmup\\_steps=4000. ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_45", "text": " We employ three types of regularization during training: ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_46", "text": " We apply dropout to the output of each sub-layer, before it is added to the sub-layer input and normalized. 
In addition, we apply dropout to the sums of the embeddings and the positional encodings in both the encoder and decoder stacks. For the base model, we use a rate of $P_{drop}=0.1$. ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_47", "text": " During training, we employed label smoothing of value $\\epsilon_{ls}=0.1$. This hurts perplexity, as the model learns to be more unsure, but improves accuracy and BLEU score. ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_48", "text": " On the WMT 2014 English-to-German translation task, the big transformer model (Transformer (big) in Table 2) outperforms the best previously reported models (including ensembles) by more than 2.0 BLEU, establishing a new state-of-the-art BLEU score of 28.4. The configuration of this model is listed in the bottom line of Table 3. Training took 3.5 days on 8 P100 GPUs. Even our base model surpasses all previously published models and ensembles, at a fraction of the training cost of any of the competitive models. ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_49", "text": " On the WMT 2014 English-to-French translation task, our big model achieves a BLEU score of 41.0, outperforming all of the previously published single models, at less than 1/4 the training cost of the previous state-of-the-art model. The Transformer (big) model trained for English-to-French used dropout rate $P_{drop}=0.1$, instead of 0.3. ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_50", "text": " For the base models, we used a single model obtained by averaging the last 5 checkpoints, which were written at 10-minute intervals. For the big models, we averaged the last 20 checkpoints. We used beam search with a beam size of 4 and length penalty $\\alpha=0.6$. These hyperparameters were chosen after experimentation on the development set. We set the maximum output length during inference to input length + 50, but terminate early when possible. ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_51", "text": " Table 2 summarizes our results and compares our translation quality and training costs to other model architectures from the literature. We estimate the number of floating point operations used to train a model by multiplying the training time, the number of GPUs used, and an estimate of the sustained single-precision floating-point capacity of each GPU. (We used values of 2.8, 3.7, 6.0 and 9.5 TFLOPS for K80, K40, M40 and P100, respectively.) ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_52", "text": " To evaluate the importance of different components of the Transformer, we varied our base model in different ways, measuring the change in performance on English-to-German translation on the development set, newstest2013. We used beam search as described in the previous section, but no checkpoint averaging. We present these results in Table 3. ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_53", "text": " In Table 3 rows (A), we vary the number of attention heads and the attention key and value dimensions, keeping the amount of computation constant, as described in Section 3.2.2. While single-head attention is 0.9 BLEU worse than the best setting, quality also drops off with too many heads. 
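The warm-up learning-rate schedule given in the training details above (Eq. 3) is easy to reproduce. The sketch below simply evaluates the formula at a few steps and is not tied to any particular framework.

```python
def transformer_lrate(step, d_model=512, warmup_steps=4000):
    """lrate = d_model^-0.5 * min(step^-0.5, step * warmup_steps^-1.5):
    linear warm-up for the first warmup_steps steps, then inverse-square-root
    decay."""
    step = max(step, 1)   # avoid 0 ** -0.5 at step 0
    return (d_model ** -0.5) * min(step ** -0.5, step * warmup_steps ** -1.5)

# A few sample points: the rate rises until warmup_steps, then decays.
for s in (100, 1000, 4000, 10000, 100000):
    print(s, f"{transformer_lrate(s):.6f}")
```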
", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_54", "text": " In Table 3 rows (B), we observe that reducing the attention key size dksubscript𝑑𝑘d_{k} hurts model quality. This suggests that determining compatibility is not easy and that a more sophisticated compatibility function than dot product may be beneficial. We further observe in rows (C) and (D) that, as expected, bigger models are better, and dropout is very helpful in avoiding over-fitting. In row (E) we replace our sinusoidal positional encoding with learned positional embeddings , and observe nearly identical results to the base model. ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_55", "text": " To evaluate if the Transformer can generalize to other tasks we performed experiments on English constituency parsing. This task presents specific challenges: the output is subject to strong structural constraints and is significantly longer than the input. Furthermore, RNN sequence-to-sequence models have not been able to attain state-of-the-art results in small-data regimes . ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_56", "text": " We trained a 4-layer transformer with dm​o​d​e​l=1024subscript𝑑𝑚𝑜𝑑𝑒𝑙1024d_{model}=1024 on the Wall Street Journal (WSJ) portion of the Penn Treebank , about 40K training sentences. We also trained it in a semi-supervised setting, using the larger high-confidence and BerkleyParser corpora from with approximately 17M sentences . We used a vocabulary of 16K tokens for the WSJ only setting and a vocabulary of 32K tokens for the semi-supervised setting. ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_57", "text": " We performed only a small number of experiments to select the dropout, both attention and residual (section 5.4), learning rates and beam size on the Section 22 development set, all other parameters remained unchanged from the English-to-German base translation model. During inference, we increased the maximum output length to input length + 300300300. We used a beam size of 212121 and α=0.3𝛼0.3\\alpha=0.3 for both WSJ only and the semi-supervised setting. ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_58", "text": " Our results in Table 4 show that despite the lack of task-specific tuning our model performs surprisingly well, yielding better results than all previously reported models with the exception of the Recurrent Neural Network Grammar . ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_59", "text": " In contrast to RNN sequence-to-sequence models , the Transformer outperforms the BerkeleyParser even when training only on the WSJ training set of 40K sentences. ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_60", "text": " In this work, we presented the Transformer, the first sequence transduction model based entirely on attention, replacing the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention. ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_61", "text": " For translation tasks, the Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers. On both WMT 2014 English-to-German and WMT 2014 English-to-French translation tasks, we achieve a new state of the art. In the former task our best model outperforms even all previously reported ensembles. 
", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_62", "text": " We are excited about the future of attention-based models and plan to apply them to other tasks. We plan to extend the Transformer to problems involving input and output modalities other than text and to investigate local, restricted attention mechanisms to efficiently handle large inputs and outputs such as images, audio and video. Making generation less sequential is another research goals of ours. ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_63", "text": " The code we used to train and evaluate our models is available at https://github.com/tensorflow/tensor2tensor. ", "title": "Attention Is All You Need" }, { "id": "1706.03762_all_64", "text": " We are grateful to Nal Kalchbrenner and Stephan Gouws for their fruitful comments, corrections and inspiration. ", "title": "Attention Is All You Need" } ]
How is the complexity calculated from the scale factor of a ShuffleNet model, given a scale factor of 0.25 and a ShuffleNet 1× complexity of 140 MFLOPs?
Scaling the number of filters by s makes the overall complexity roughly s² times that of ShuffleNet 1×; as shown in Table 2, the resulting complexity of ShuffleNet 0.25× is 13 MFLOPs [17].
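For illustration, a minimal Python sketch of the s² scaling rule from the cited context; because the rule is only approximate, the quadratic estimate comes out below the 13 MFLOPs reported in Table 2.

```python
def shufflenet_complexity(scale, base_mflops=140):
    """Rough estimate: scaling the number of filters by s scales FLOPs by ~s**2."""
    return base_mflops * scale ** 2

# quadratic estimate for ShuffleNet 0.25x; Table 2 reports 13 MFLOPs
print(shufflenet_complexity(0.25))  # 8.75
```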
[ 17 ]
[ { "id": "1707.01083_all_0", "text": " Building deeper and larger convolutional neural networks (CNNs) is a primary trend for solving major visual recognition tasks (21, 9, 33, 5, 28, 24). The most accurate CNNs usually have hundreds of layers and thousands of channels (9, 34, 32, 40), thus requiring computation at billions of FLOPs. This report examines the opposite extreme: pursuing the best accuracy in very limited computational budgets at tens or hundreds of MFLOPs, focusing on common mobile platforms such as drones, robots, and smartphones. Note that many existing works (16, 22, 43, 42, 38, 27) focus on pruning, compressing, or low-bit representing a “basic” network architecture. Here we aim to explore a highly efficient basic architecture specially designed for our desired computing ranges. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_1", "text": " We notice that state-of-the-art basic architectures such as Xception  and ResNeXt  become less efficient in extremely small networks because of the costly dense 1×1111\\times 1 convolutions. We propose using pointwise group convolutions to reduce computation complexity of 1×1111\\times 1 convolutions. To overcome the side effects brought by group convolutions, we come up with a novel channel shuffle operation to help the information flowing across feature channels. Based on the two techniques, we build a highly efficient architecture called ShuffleNet. Compared with popular structures like  (30, 9, 40), for a given computation complexity budget, our ShuffleNet allows more feature map channels, which helps to encode more information and is especially critical to the performance of very small networks. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_2", "text": " We evaluate our models on the challenging ImageNet classification (4, 29) and MS COCO object detection  tasks. A series of controlled experiments shows the effectiveness of our design principles and the better performance over other structures. Compared with the state-of-the-art architecture MobileNet , ShuffleNet achieves superior performance by a significant margin, e.g. absolute 7.8% lower ImageNet top-1 error at level of 40 MFLOPs. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_3", "text": " We also examine the speedup on real hardware, i.e. an off-the-shelf ARM-based computing core. The ShuffleNet model achieves ∼similar-to\\sim13×\\times actual speedup (theoretical speedup is 18×\\times) over AlexNet  while maintaining comparable accuracy. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_4", "text": " The last few years have seen the success of deep neural networks in computer vision tasks (21, 36, 28), in which model designs play an important role. The increasing needs of running high quality deep neural networks on embedded devices encourage the study on efficient model designs . For example, GoogLeNet  increases the depth of networks with much lower complexity compared to simply stacking convolution layers. SqueezeNet  reduces parameters and computation significantly while maintaining accuracy. ResNet (9, 10) utilizes the efficient bottleneck structure to achieve impressive performance. 
SENet  introduces an architectural unit that boosts performance at slight computation cost. Concurrent with us, a very recent work  employs reinforcement learning and model search to explore efficient model designs. The proposed mobile NASNet model achieves comparable performance with our counterpart ShuffleNet model (26.0% @ 564 MFLOPs vs. 26.3% @ 524 MFLOPs for ImageNet classification error). But  do not report results on extremely tiny models (e.g. complexity less than 150 MFLOPs), nor evaluate the actual inference time on mobile devices. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_5", "text": " The concept of group convolution, which was first introduced in AlexNet  for distributing the model over two GPUs, has been well demonstrated its effectiveness in ResNeXt . Depthwise separable convolution proposed in Xception  generalizes the ideas of separable convolutions in Inception series (34, 32). Recently, MobileNet  utilizes the depthwise separable convolutions and gains state-of-the-art results among lightweight models. Our work generalizes group convolution and depthwise separable convolution in a novel form. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_6", "text": " To the best of our knowledge, the idea of channel shuffle operation is rarely mentioned in previous work on efficient model design, although CNN library cuda-convnet  supports “random sparse convolution” layer, which is equivalent to random channel shuffle followed by a group convolutional layer. Such “random shuffle” operation has different purpose and been seldom exploited later. Very recently, another concurrent work   also adopt this idea for a two-stage convolution. However,   did not specially investigate the effectiveness of channel shuffle itself and its usage in tiny model design. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_7", "text": " This direction aims to accelerate inference while preserving accuracy of a pre-trained model. Pruning network connections (6, 7) or channels  reduces redundant connections in a pre-trained model while maintaining performance. Quantization (31, 27, 39, 45, 44) and factorization (22, 16, 18, 37) are proposed in literature to reduce redundancy in calculations to speed up inference. Without modifying the parameters, optimized convolution algorithms implemented by FFT (25, 35) and other methods  decrease time consumption in practice. Distilling  transfers knowledge from large models into small ones, which makes training small models easier. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_8", "text": " Modern convolutional neural networks (30, 33, 34, 32, 9, 10) usually consist of repeated building blocks with the same structure. Among them, state-of-the-art networks such as Xception  and ResNeXt  introduce efficient depthwise separable convolutions or group convolutions into the building blocks to strike an excellent trade-off between representation capability and computational cost. However, we notice that both designs do not fully take the 1×1111\\times 1 convolutions (also called pointwise convolutions in  ) into account, which require considerable complexity. For example, in ResNeXt  only 3×3333\\times 3 layers are equipped with group convolutions. 
As a result, for each residual unit in ResNeXt the pointwise convolutions occupy 93.4% multiplication-adds (cardinality = 32 as suggested in  ). In tiny networks, expensive pointwise convolutions result in limited number of channels to meet the complexity constraint, which might significantly damage the accuracy. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_9", "text": " To address the issue, a straightforward solution is to apply channel sparse connections, for example group convolutions, also on 1×1111\\times 1 layers. By ensuring that each convolution operates only on the corresponding input channel group, group convolution significantly reduces computation cost. However, if multiple group convolutions stack together, there is one side effect: outputs from a certain channel are only derived from a small fraction of input channels. Fig 1 (a) illustrates a situation of two stacked group convolution layers. It is clear that outputs from a certain group only relate to the inputs within the group. This property blocks information flow between channel groups and weakens representation. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_10", "text": " If we allow group convolution to obtain input data from different groups (as shown in Fig 1 (b)), the input and output channels will be fully related. Specifically, for the feature map generated from the previous group layer, we can first divide the channels in each group into several subgroups, then feed each group in the next layer with different subgroups. This can be efficiently and elegantly implemented by a channel shuffle operation (Fig 1 (c)): suppose a convolutional layer with g𝑔g groups whose output has g×n𝑔𝑛g\\times n channels; we first reshape the output channel dimension into (g,n)𝑔𝑛(g,n), transposing and then flattening it back as the input of next layer. Note that the operation still takes effect even if the two convolutions have different numbers of groups. Moreover, channel shuffle is also differentiable, which means it can be embedded into network structures for end-to-end training. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_11", "text": " Channel shuffle operation makes it possible to build more powerful structures with multiple group convolutional layers. In the next subsection we will introduce an efficient network unit with channel shuffle and group convolution. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_12", "text": " Taking advantage of the channel shuffle operation, we propose a novel ShuffleNet unit specially designed for small networks. We start from the design principle of bottleneck unit  in Fig 2 (a). It is a residual block. In its residual branch, for the 3×3333\\times 3 layer, we apply a computational economical 3×3333\\times 3 depthwise convolution  on the bottleneck feature map. Then, we replace the first 1×1111\\times 1 layer with pointwise group convolution followed by a channel shuffle operation, to form a ShuffleNet unit, as shown in Fig 2 (b). The purpose of the second pointwise group convolution is to recover the channel dimension to match the shortcut path. For simplicity, we do not apply an extra channel shuffle operation after the second pointwise layer as it results in comparable scores. 
The usage of batch normalization (BN)  and nonlinearity is similar to  (9, 40), except that we do not use ReLU after depthwise convolution as suggested by  . As for the case where ShuffleNet is applied with stride, we simply make two modifications (see Fig 2 (c)): (i) add a 3×3333\\times 3 average pooling on the shortcut path; (ii) replace the element-wise addition with channel concatenation, which makes it easy to enlarge channel dimension with little extra computation cost. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_13", "text": " Thanks to pointwise group convolution with channel shuffle, all components in ShuffleNet unit can be computed efficiently. Compared with ResNet  (bottleneck design) and ResNeXt , our structure has less complexity under the same settings. For example, given the input size c×h×w𝑐ℎ𝑤c\\times h\\times w and the bottleneck channels m𝑚m, ResNet unit requires h​w​(2​c​m+9​m2)ℎ𝑤2𝑐𝑚9superscript𝑚2hw(2cm+9m^{2}) FLOPs and ResNeXt has h​w​(2​c​m+9​m2/g)ℎ𝑤2𝑐𝑚9superscript𝑚2𝑔hw(2cm+9m^{2}/g) FLOPs, while our ShuffleNet unit requires only h​w​(2​c​m/g+9​m)ℎ𝑤2𝑐𝑚𝑔9𝑚hw(2cm/g+9m) FLOPs, where g𝑔g means the number of groups for convolutions. In other words, given a computational budget, ShuffleNet can use wider feature maps. We find this is critical for small networks, as tiny networks usually have an insufficient number of channels to process the information. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_14", "text": " In addition, in ShuffleNet depthwise convolution only performs on bottleneck feature maps. Even though depthwise convolution usually has very low theoretical complexity, we find it difficult to efficiently implement on low-power mobile devices, which may result from a worse computation/memory access ratio compared with other dense operations. Such drawback is also referred in  , which has a runtime library based on TensorFlow . In ShuffleNet units, we intentionally use depthwise convolution only on bottleneck in order to prevent overhead as much as possible. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_15", "text": " Built on ShuffleNet units, we present the overall ShuffleNet architecture in Table 1. The proposed network is mainly composed of a stack of ShuffleNet units grouped into three stages. The first building block in each stage is applied with stride = 2. Other hyper-parameters within a stage stay the same, and for the next stage the output channels are doubled. Similar to  , we set the number of bottleneck channels to 1/4 of the output channels for each ShuffleNet unit. Our intent is to provide a reference design as simple as possible, although we find that further hyper-parameter tunning might generate better results. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_16", "text": " In ShuffleNet units, group number g𝑔g controls the connection sparsity of pointwise convolutions. Table 1 explores different group numbers and we adapt the output channels to ensure overall computation cost roughly unchanged (∼similar-to\\sim140 MFLOPs). 
Obviously, larger group numbers result in more output channels (thus more convolutional filters) for a given complexity constraint, which helps to encode more information, though it might also lead to degradation for an individual convolutional filter due to limited corresponding input channels. In Sec 4.1.1 we will study the impact of this number subject to different computational constrains. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_17", "text": " To customize the network to a desired complexity, we can simply apply a scale factor s𝑠s on the number of channels. For example, we denote the networks in Table 1 as ”ShuffleNet 1×\\times”, then ”ShuffleNet s×s\\times” means scaling the number of filters in ShuffleNet 1×\\times by s𝑠s times thus overall complexity will be roughly s2superscript𝑠2s^{2} times of ShuffleNet 1×\\times. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_18", "text": " We mainly evaluate our models on the ImageNet 2012 classification dataset (29, 4). We follow most of the training settings and hyper-parameters used in  , with two exceptions: (i) we set the weight decay to 4e-5 instead of 1e-4 and use linear-decay learning rate policy (decreased from 0.5 to 0); (ii) we use slightly less aggressive scale augmentation for data preprocessing. Similar modifications are also referenced in   because such small networks usually suffer from underfitting rather than overfitting. It takes 1 or 2 days to train a model for 3×1053superscript1053\\times 10^{5} iterations on 4 GPUs, whose batch size is set to 1024. To benchmark, we compare single crop top-1 performance on ImageNet validation set, i.e. cropping 224×224224224224\\times 224 center view from 256×256\\times input image and evaluating classification accuracy. We use exactly the same settings for all models to ensure fair comparisons. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_19", "text": " The core idea of ShuffleNet lies in pointwise group convolution and channel shuffle operation. In this subsection we evaluate them respectively. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_20", "text": " To evaluate the importance of pointwise group convolutions, we compare ShuffleNet models of the same complexity whose numbers of groups range from 1 to 8. If the group number equals 1, no pointwise group convolution is involved and then the ShuffleNet unit becomes an ”Xception-like”  structure. For better understanding, we also scale the width of the networks to 3 different complexities and compare their classification performance respectively. Results are shown in Table 2. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_21", "text": " From the results, we see that models with group convolutions (g>1𝑔1g>1) consistently perform better than the counterparts without pointwise group convolutions (g=1𝑔1g=1). Smaller models tend to benefit more from groups. For example, for ShuffleNet 1×\\times the best entry (g=8𝑔8g=8) is 1.2% better than the counterpart, while for ShuffleNet 0.5×\\times and 0.25×\\times the gaps become 3.5% and 4.4% respectively. 
Note that group convolution allows more feature map channels for a given complexity constraint, so we hypothesize that the performance gain comes from wider feature maps which help to encode more information. In addition, a smaller network involves thinner feature maps, meaning it benefits more from enlarged feature maps. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_22", "text": " Table 2 also shows that for some models (e.g. ShuffleNet 0.5×\\times) when group numbers become relatively large (e.g. g=8𝑔8g=8), the classification score saturates or even drops. With an increase in group number (thus wider feature maps), input channels for each convolutional filter become fewer, which may harm representation capability. Interestingly, we also notice that for smaller models such as ShuffleNet 0.25×\\times larger group numbers tend to better results consistently, which suggests wider feature maps bring more benefits for smaller models. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_23", "text": " The purpose of shuffle operation is to enable cross-group information flow for multiple group convolution layers. Table 3 compares the performance of ShuffleNet structures (group number is set to 3 or 8 for instance) with/without channel shuffle. The evaluations are performed under three different scales of complexity. It is clear that channel shuffle consistently boosts classification scores for different settings. Especially, when group number is relatively large (e.g. g=8𝑔8g=8), models with channel shuffle outperform the counterparts by a significant margin, which shows the importance of cross-group information interchange. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_24", "text": " Recent leading convolutional units in VGG , ResNet , GoogleNet , ResNeXt  and Xception  have pursued state-of-the-art results with large models (e.g. ≥1absent1\\geq 1GFLOPs), but do not fully explore low-complexity conditions. In this section we survey a variety of building blocks and make comparisons with ShuffleNet under the same complexity constraint. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_25", "text": " For fair comparison, we use the overall network architecture as shown in Table 1. We replace the ShuffleNet units in Stage 2-4 with other structures, then adapt the number of channels to ensure the complexity remains unchanged. The structures we explored include: ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_26", "text": " • VGG-like. Following the design principle of VGG net , we use a two-layer 3×\\times3 convolutions as the basic building block. Different from  , we add a Batch Normalization layer  after each of the convolutions to make end-to-end training easier. • ResNet. We adopt the ”bottleneck” design in our experiment, which has been demonstrated more efficient in   . Same as  , the bottleneck ratio111In the bottleneck-like units (like ResNet, ResNeXt or ShuffleNet) bottleneck ratio implies the ratio of bottleneck channels to output channels. For example, bottleneck ratio = 1:4:141:4 means the output feature map is 4 times the width of the bottleneck feature map. is also 1:4:141:4. • Xception-like. 
The original structure proposed in   involves fancy designs or hyper-parameters for different stages, which we find difficult for fair comparison on small models. Instead, we remove the pointwise group convolutions and channel shuffle operation from ShuffleNet (also equivalent to ShuffleNet with g=1𝑔1g=1). The derived structure shares the same idea of “depthwise separable convolution” as in  , which is called an Xception-like structure here. • ResNeXt. We use the settings of cardinality =16absent16=16 and bottleneck ratio =1:2:absent12=1:2 as suggested in  . We also explore other settings, e.g. bottleneck ratio =1:4:absent14=1:4, and get similar results. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_27", "text": " We use exactly the same settings to train these models. Results are shown in Table 4. Our ShuffleNet models outperform most others by a significant margin under different complexities. Interestingly, we find an empirical relationship between feature map channels and classification accuracy. For example, under the complexity of 38 MFLOPs, output channels of Stage 4 (see Table 1) for VGG-like, ResNet, ResNeXt, Xception-like, ShuffleNet models are 50, 192, 192, 288, 576 respectively, which is consistent with the increase of accuracy. Since the efficient design of ShuffleNet, we can use more channels for a given computation budget, thus usually resulting in better performance. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_28", "text": " Note that the above comparisons do not include GoogleNet or Inception series (33, 34, 32). We find it nontrivial to generate such Inception structures to small networks because the original design of Inception module involves too many hyper-parameters. As a reference, the first GoogleNet version  has 31.3% top-1 error at the cost of 1.5 GFLOPs (See Table 6). More sophisticated Inception versions (34, 32) are more accurate, however, involve significantly increased complexity. Recently, Kim et al. propose a lightweight network structure named PVANET  which adopts Inception units. Our reimplemented PVANET (with 224×\\times224 input size) has 29.7% classification error with a computation complexity of 557 MFLOPs, while our ShuffleNet 2x model (g=3𝑔3g=3) gets 26.3% with 524 MFLOPs (see Table 6). ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_29", "text": " Recently Howard et al. have proposed MobileNets  which mainly focus on efficient network architecture for mobile devices. MobileNet takes the idea of depthwise separable convolution from   and achieves state-of-the-art results on small models. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_30", "text": " Table 5 compares classification scores under a variety of complexity levels. It is clear that our ShuffleNet models are superior to MobileNet for all the complexities. Though our ShuffleNet network is specially designed for small models (<150absent150<150 MFLOPs), we find it is still better than MobileNet for higher computation cost, e.g. 3.1% more accurate than MobileNet 1×\\times at the cost of 500 MFLOPs. For smaller networks (∼similar-to\\sim40 MFLOPs) ShuffleNet surpasses MobileNet by 7.8%. Note that our ShuffleNet architecture contains 50 layers while MobileNet only has 28 layers. 
For better understanding, we also try ShuffleNet on a 26-layer architecture by removing half of the blocks in Stage 2-4 (see ”ShuffleNet 0.5×\\times shallow (g=3𝑔3g=3)” in Table 5). Results show that the shallower model is still significantly better than the corresponding MobileNet, which implies that the effectiveness of ShuffleNet mainly results from its efficient structure, not the depth. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_31", "text": " Table 6 compares our ShuffleNet with a few popular models. Results show that with similar accuracy ShuffleNet is much more efficient than others. For example, ShuffleNet 0.5×\\times is theoretically 18×\\times faster than AlexNet  with comparable classification score. We will evaluate the actual running time in Sec 4.5. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_32", "text": " It is also worth noting that the simple architecture design makes it easy to equip ShuffeNets with the latest advances such as (13, 26). For example, in the authors propose Squeeze-and-Excitation (SE) blocks which achieve state-of-the-art results on large ImageNet models. We find SE modules also take effect in combination with the backbone ShuffleNets, for instance, boosting the top-1 error of ShuffleNet 2×\\times to 24.7% (shown in Table 5). Interestingly, though negligible increase of theoretical complexity, we find ShuffleNets with SE modules are usually 25∼40%similar-to25percent4025\\sim 40\\% slower than the “raw” ShuffleNets on mobile devices, which implies that actual speedup evaluation is critical on low-cost architecture design. In Sec 4.5 we will make further discussion. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_33", "text": " To evaluate the generalization ability for transfer learning, we test our ShuffleNet model on the task of MS COCO object detection . We adopt Faster-RCNN  as the detection framework and use the publicly released Caffe code (28, 17) for training with default settings. Similar to  , the models are trained on the COCO train+val dataset excluding 5000 minival images and we conduct testing on the minival set. Table 7 shows the comparison of results trained and evaluated on two input resolutions. Comparing ShuffleNet 2×\\times with MobileNet whose complexity are comparable (524 vs. 569 MFLOPs), our ShuffleNet 2×\\times surpasses MobileNet by a significant margin on both resolutions; our ShuffleNet 1×\\times also achieves comparable results with MobileNet on 600×\\times resolution, but has ∼similar-to\\sim4×\\times complexity reduction. We conjecture that this significant gain is partly due to ShuffleNet’s simple design of architecture without bells and whistles. ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" }, { "id": "1707.01083_all_34", "text": " Finally, we evaluate the actual inference speed of ShuffleNet models on a mobile device with an ARM platform. Though ShuffleNets with larger group numbers (e.g. g=4𝑔4g=4 or g=8𝑔8g=8) usually have better performance, we find it less efficient in our current implementation. Empirically g=3𝑔3g=3 usually has a proper trade-off between accuracy and actual inference time. As shown in Table 8, three input resolutions are exploited for the test. 
Due to memory access and other overheads, we find every 4×\\times theoretical complexity reduction usually results in ∼similar-to\\sim2.6×\\times actual speedup in our implementation. Nevertheless, compared with AlexNet  our ShuffleNet 0.5×\\times model still achieves ∼similar-to\\sim13×\\times actual speedup under comparable classification accuracy (the theoretical speedup is 18×\\times), which is much faster than previous AlexNet-level models or speedup approaches such as  (14, 16, 22, 42, 43, 38). ", "title": "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices" } ]
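As an illustration of the channel shuffle operation described in the contexts above (reshape the g·n output channels to (g, n), transpose, and flatten back), here is a minimal NumPy sketch; the (N, C, H, W) layout is an assumption made for the example.

```python
import numpy as np

def channel_shuffle(x, groups):
    """Reshape channels to (groups, channels_per_group), transpose, flatten back."""
    n, c, h, w = x.shape
    assert c % groups == 0
    x = x.reshape(n, groups, c // groups, h, w)
    x = x.transpose(0, 2, 1, 3, 4)   # swap the group axis and the per-group axis
    return x.reshape(n, c, h, w)

# toy example: 6 channels in 3 groups of 2 are interleaved across groups
x = np.arange(6).reshape(1, 6, 1, 1)
print(channel_shuffle(x, 3).ravel())  # [0 2 4 1 3 5]
```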
Why do they divide the domain classification loss into two terms, one for real images and one for fake images?
To translate x into an output image y that is properly classified to the target domain c, the authors decompose the domain classification loss into two terms: a classification loss on real images used to optimize D, and a classification loss on fake images used to optimize G [13].
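For illustration, a minimal NumPy sketch of that decomposition: the same negative log-likelihood is evaluated on two different batches, real images with their original labels c′ (used to update D) and generated images with the sampled target labels c (used to update G); the arrays below are toy placeholders, not the paper's implementation.

```python
import numpy as np

def domain_cls_loss(logits, labels):
    """Mean negative log-likelihood of the given domain labels."""
    z = logits - logits.max(axis=1, keepdims=True)                 # stabilise softmax
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

rng = np.random.default_rng(0)
logits_real = rng.normal(size=(4, 3))   # stand-in for D_cls(x) on real images
logits_fake = rng.normal(size=(4, 3))   # stand-in for D_cls(G(x, c)) on fake images
c_orig = np.array([0, 1, 2, 1])         # original labels c'  -> L_cls^r, optimizes D
c_target = np.array([2, 0, 1, 0])       # target labels c     -> L_cls^f, optimizes G
print(domain_cls_loss(logits_real, c_orig), domain_cls_loss(logits_fake, c_target))
```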
[ 13 ]
[ { "id": "1711.09020_all_0", "text": " The task of image-to-image translation is to change a particular aspect of a given image to another, e.g., changing the facial expression of a person from smiling to frowning (see Fig. StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation). This task has experienced significant improvements following the introduction of generative adversarial networks (GANs), with results ranging from changing hair color , reconstructing photos from edge maps , and changing the seasons of scenery images . ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_1", "text": " Given training data from two different domains, these models learn to translate images from one domain to the other. We denote the terms attribute as a meaningful feature inherent in an image such as hair color, gender or age, and attribute value as a particular value of an attribute, e.g., black/blond/brown for hair color or male/female for gender. We further denote domain as a set of images sharing the same attribute value. For example, images of women can represent one domain while those of men represent another. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_2", "text": " Several image datasets come with a number of labeled attributes. For instance, the CelebA dataset contains 40 labels related to facial attributes such as hair color, gender, and age, and the RaFD dataset has 8 labels for facial expressions such as ‘happy’, ‘angry’ and ‘sad’. These settings enable us to perform more interesting tasks, namely multi-domain image-to-image translation, where we change images according to attributes from multiple domains. The first five columns in Fig. StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation show how a CelebA image can be translated according to any of the four domains, ‘blond hair’, ‘gender’, ‘aged’, and ‘pale skin’. We can further extend to training multiple domains from different datasets, such as jointly training CelebA and RaFD images to change a CelebA image’s facial expression using features learned by training on RaFD, as in the rightmost columns of Fig. StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_3", "text": " However, existing models are both inefficient and ineffective in such multi-domain image translation tasks. Their inefficiency results from the fact that in order to learn all mappings among k𝑘k domains, k​(k−1)𝑘𝑘1k(k\\mathbb{-}1) generators have to be trained. Fig. 2 (a) illustrates how twelve distinct generator networks have to be trained to translate images among four different domains. Meanwhile, they are ineffective that even though there exist global features that can be learned from images of all domains such as face shapes, each generator cannot fully utilize the entire training data and only can learn from two domains out of k𝑘k. Failure to fully utilize training data is likely to limit the quality of generated images. Furthermore, they are incapable of jointly training domains from different datasets because each dataset is partially labeled, which we further discuss in Section 3.2. 
", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_4", "text": " As a solution to such problems we propose StarGAN, a novel and scalable approach capable of learning mappings among multiple domains. As demonstrated in Fig. 2 (b), our model takes in training data of multiple domains, and learns the mappings between all available domains using only a single generator. The idea is simple. Instead of learning a fixed translation (e.g., black-to-blond hair), our generator takes in as inputs both image and domain information, and learns to flexibly translate the image into the corresponding domain. We use a label (e.g., binary or one-hot vector) to represent domain information. During training, we randomly generate a target domain label and train the model to flexibly translate an input image into the target domain. By doing so, we can control the domain label and translate the image into any desired domain at testing phase. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_5", "text": " We also introduce a simple but effective approach that enables joint training between domains of different datasets by adding a mask vector to the domain label. Our proposed method ensures that the model can ignore unknown labels and focus on the label provided by a particular dataset. In this manner, our model can perform well on tasks such as synthesizing facial expressions of CelebA images using features learned from RaFD, as shown in the rightmost columns of Fig. StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation. As far as our knowledge goes, our work is the first to successfully perform multi-domain image translation across different datasets. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_6", "text": " Overall, our contributions are as follows: ∙∙\\bullet We propose StarGAN, a novel generative adversarial network that learns the mappings among multiple domains using only a single generator and a discriminator, training effectively from images of all domains. ∙∙\\bullet We demonstrate how we can successfully learn multi-domain image translation between multiple datasets by utilizing a mask vector method that enables StarGAN to control all available domain labels. ∙∙\\bullet We provide both qualitative and quantitative results on facial attribute transfer and facial expression synthesis tasks using StarGAN, showing its superiority over baseline models. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_7", "text": " Generative Adversarial Networks. Generative adversarial networks (GANs) have shown remarkable results in various computer vision tasks such as image generation (6, 24, 32, 8), image translation (7, 9, 33), super-resolution imaging , and face image synthesis (10, 16, 26, 31). A typical GAN model consists of two modules: a discriminator and a generator. The discriminator learns to distinguish between real and fake samples, while the generator learns to generate fake samples that are indistinguishable from real samples. Our approach also leverages the adversarial loss to make the generated images as realistic as possible. 
", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_8", "text": " Conditional GANs. GAN-based conditional image generation has also been actively studied. Prior studies have provided both the discriminator and generator with class information in order to generate samples conditioned on the class  (20, 21, 22). Other recent approaches focused on generating particular images highly relevant to a given text description  (25, 30). The idea of conditional image generation has also been successfully applied to domain transfer (9, 28), super-resolution imaging, and photo editing (2, 27). In this paper, we propose a scalable GAN framework that can flexibly steer the image translation to various target domains, by providing conditional domain information. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_9", "text": " Image-to-Image Translation. Recent work have achieved impressive results in image-to-image translation (7, 9, 17, 33). For instance, pix2pix learns this task in a supervised manner using cGANs. It combines an adversarial loss with a L1 loss, thus requires paired data samples. To alleviate the problem of obtaining data pairs, unpaired image-to-image translation frameworks (9, 17, 33) have been proposed. UNIT combines variational autoencoders (VAEs) with CoGAN , a GAN framework where two generators share weights to learn the joint distribution of images in cross domains. CycleGAN and DiscoGAN preserve key attributes between the input and the translated image by utilizing a cycle consistency loss. However, all these frameworks are only capable of learning the relations between two different domains at a time. Their approaches have limited scalability in handling multiple domains since different models should be trained for each pair of domains. Unlike the aforementioned approaches, our framework can learn the relations among multiple domains using only a single model. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_10", "text": " We first describe our proposed StarGAN, a framework to address multi-domain image-to-image translation within a single dataset. Then, we discuss how StarGAN incorporates multiple datasets containing different label sets to flexibly perform image translations using any of these labels. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_11", "text": " Our goal is to train a single generator G𝐺G that learns mappings among multiple domains. To achieve this, we train G𝐺G to translate an input image x𝑥x into an output image y𝑦y conditioned on the target domain label c𝑐c, G​(x,c)→y→𝐺𝑥𝑐𝑦G(x,c)\\rightarrow y. We randomly generate the target domain label c𝑐c so that G𝐺G learns to flexibly translate the input image. We also introduce an auxiliary classifier that allows a single discriminator to control multiple domains. That is, our discriminator produces probability distributions over both sources and domain labels, D:x→{Ds​r​c​(x),Dc​l​s​(x)}:𝐷→𝑥subscript𝐷𝑠𝑟𝑐𝑥subscript𝐷𝑐𝑙𝑠𝑥D:x\\rightarrow\\{{D}_{src}(x),{D}_{cls}(x)\\}. Fig. 3 illustrates the training process of our proposed approach. 
", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_12", "text": " Adversarial Loss. To make the generated images indistinguishable from real images, we adopt an adversarial loss ℒa​d​v=𝔼x​(log⁡Ds​r​c​(x))+𝔼x,c​(log⁡(1−Ds​r​c​(G​(x,c)))),subscriptℒ𝑎𝑑𝑣subscript𝔼𝑥delimited-()subscript𝐷𝑠𝑟𝑐𝑥subscript𝔼𝑥𝑐delimited-()1subscript𝐷𝑠𝑟𝑐𝐺𝑥𝑐\\begin{split}\\mathcal{L}_{adv}=&\\thinspace{\\mathbb{E}}_{x}\\left(\\log{{D}_{src}(x)}\\right)\\>\\>+\\\\ &\\thinspace{\\mathbb{E}}_{x,c}(\\log{(1-{D}_{src}(G(x,c)))}),\\end{split} (1) where G𝐺G generates an image G​(x,c)𝐺𝑥𝑐G(x,c) conditioned on both the input image x𝑥x and the target domain label c𝑐c, while D𝐷D tries to distinguish between real and fake images. In this paper, we refer to the term Ds​r​c​(x)subscript𝐷𝑠𝑟𝑐𝑥{D}_{src}(x) as a probability distribution over sources given by D𝐷D. The generator G𝐺G tries to minimize this objective, while the discriminator D𝐷D tries to maximize it. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_13", "text": " Domain Classification Loss. For a given input image x𝑥x and a target domain label c𝑐c, our goal is to translate x𝑥x into an output image y𝑦y, which is properly classified to the target domain c𝑐c. To achieve this condition, we add an auxiliary classifier on top of D𝐷D and impose the domain classification loss when optimizing both D𝐷D and G𝐺G. That is, we decompose the objective into two terms: a domain classification loss of real images used to optimize D𝐷D, and a domain classification loss of fake images used to optimize G𝐺G. In detail, the former is defined as ℒc​l​sr=𝔼x,c′​(−log⁡Dc​l​s​(c′|x)),superscriptsubscriptℒ𝑐𝑙𝑠𝑟subscript𝔼𝑥superscript𝑐′delimited-()subscript𝐷𝑐𝑙𝑠conditionalsuperscript𝑐′𝑥\\mathcal{L}_{cls}^{r}={\\mathbb{E}}_{x,c^{\\prime}}(-\\log{{D}_{cls}(c^{\\prime}|x)}), (2) where the term Dc​l​s​(c′|x)subscript𝐷𝑐𝑙𝑠conditionalsuperscript𝑐′𝑥{D}_{cls}(c^{\\prime}|x) represents a probability distribution over domain labels computed by D𝐷D. By minimizing this objective, D𝐷D learns to classify a real image x𝑥x to its corresponding original domain c′superscript𝑐′c^{\\prime}. We assume that the input image and domain label pair (x,c′)𝑥superscript𝑐′(x,c^{\\prime}) is given by the training data. On the other hand, the loss function for the domain classification of fake images is defined as ℒc​l​sf=𝔼x,c​(−log⁡Dc​l​s​(c|G​(x,c))).superscriptsubscriptℒ𝑐𝑙𝑠𝑓subscript𝔼𝑥𝑐delimited-()subscript𝐷𝑐𝑙𝑠conditional𝑐𝐺𝑥𝑐\\mathcal{L}_{cls}^{f}={\\mathbb{E}}_{x,c}(-\\log{{D}_{cls}(c|G(x,c))}). (3) In other words, G𝐺G tries to minimize this objective to generate images that can be classified as the target domain c𝑐c. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_14", "text": " Reconstruction Loss. By minimizing the adversarial and classification losses, G𝐺G is trained to generate images that are realistic and classified to its correct target domain. However, minimizing the losses (Eqs. (1) and (3)) does not guarantee that translated images preserve the content of its input images while changing only the domain-related part of the inputs. 
To alleviate this problem, we apply a cycle consistency loss (9, 33) to the generator, defined as ℒr​e​c=𝔼x,c,c′​(‖x−G​(G​(x,c),c′)‖1),subscriptℒ𝑟𝑒𝑐subscript𝔼𝑥𝑐superscript𝑐′delimited-()subscriptnorm𝑥𝐺𝐺𝑥𝑐superscript𝑐′1\\mathcal{L}_{rec}={\\mathbb{E}}_{x,c,c^{\\prime}}({||x-G(G(x,c),c^{\\prime})||}_{1}), (4) where G𝐺G takes in the translated image G​(x,c)𝐺𝑥𝑐G(x,c) and the original domain label c′superscript𝑐′c^{\\prime} as input and tries to reconstruct the original image x𝑥x. We adopt the L1 norm as our reconstruction loss. Note that we use a single generator twice, first to translate an original image into an image in the target domain and then to reconstruct the original image from the translated image. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_15", "text": " Full Objective. Finally, the objective functions to optimize G𝐺G and D𝐷D are written, respectively, as ℒD=−ℒa​d​v+λc​l​s​ℒc​l​sr,subscriptℒ𝐷subscriptℒ𝑎𝑑𝑣subscript𝜆𝑐𝑙𝑠superscriptsubscriptℒ𝑐𝑙𝑠𝑟\\mathcal{L}_{D}=-\\mathcal{L}_{adv}+{\\lambda}_{cls}\\thinspace\\mathcal{L}_{cls}^{r}, (5) ℒG=ℒa​d​v+λc​l​s​ℒc​l​sf+λr​e​c​ℒr​e​c,subscriptℒ𝐺subscriptℒ𝑎𝑑𝑣subscript𝜆𝑐𝑙𝑠superscriptsubscriptℒ𝑐𝑙𝑠𝑓subscript𝜆𝑟𝑒𝑐subscriptℒ𝑟𝑒𝑐\\mathcal{L}_{G}=\\mathcal{L}_{adv}+{\\lambda}_{cls}\\thinspace\\mathcal{L}_{cls}^{f}+{\\lambda}_{rec}\\thinspace\\mathcal{L}_{rec}, (6) where λc​l​ssubscript𝜆𝑐𝑙𝑠{\\lambda}_{cls} and λr​e​csubscript𝜆𝑟𝑒𝑐{\\lambda}_{rec} are hyper-parameters that control the relative importance of domain classification and reconstruction losses, respectively, compared to the adversarial loss. We use λc​l​s=1subscript𝜆𝑐𝑙𝑠1{\\lambda}_{cls}=1 and λr​e​c=10subscript𝜆𝑟𝑒𝑐10{\\lambda}_{rec}=10 in all of our experiments. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_16", "text": " An important advantage of StarGAN is that it simultaneously incorporates multiple datasets containing different types of labels, so that StarGAN can control all the labels at the test phase. An issue when learning from multiple datasets, however, is that the label information is only partially known to each dataset. In the case of CelebA  and RaFD , while the former contains labels for attributes such as hair color and gender, it does not have any labels for facial expressions such as ‘happy’ and ‘angry’, and vice versa for the latter. This is problematic because the complete information on the label vector c′superscript𝑐′c^{\\prime} is required when reconstructing the input image x𝑥x from the translated image G​(x,c)𝐺𝑥𝑐G(x,c) (See Eq. (4)). ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_17", "text": " Mask Vector. To alleviate this problem, we introduce a mask vector m𝑚m that allows StarGAN to ignore unspecified labels and focus on the explicitly known label provided by a particular dataset. In StarGAN, we use an n𝑛n-dimensional one-hot vector to represent m𝑚m, with n𝑛n being the number of datasets. In addition, we define a unified version of the label as a vector c~=(c1,…,cn,m),~𝑐subscript𝑐1…subscript𝑐𝑛𝑚\\tilde{c}=({c}_{1},...,{c}_{n},m), (7) where (⋅)delimited-()⋅(\\cdot) refers to concatenation, and cisubscript𝑐𝑖{c}_{i} represents a vector for the labels of the i𝑖i-th dataset. 
The vector of the known label cisubscript𝑐𝑖{c}_{i} can be represented as either a binary vector for binary attributes or a one-hot vector for categorical attributes. For the remaining n−1𝑛1n\\mathbb{-}1 unknown labels we simply assign zero values. In our experiments, we utilize the CelebA and RaFD datasets, where n𝑛n is two. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_18", "text": " Training Strategy. When training StarGAN with multiple datasets, we use the domain label c~~𝑐\\tilde{c} defined in Eq. (7) as input to the generator. By doing so, the generator learns to ignore the unspecified labels, which are zero vectors, and focus on the explicitly given label. The structure of the generator is exactly the same as in training with a single dataset, except for the dimension of the input label c~~𝑐\\tilde{c}. On the other hand, we extend the auxiliary classifier of the discriminator to generate probability distributions over labels for all datasets. Then, we train the model in a multi-task learning setting, where the discriminator tries to minimize only the classification error associated to the known label. For example, when training with images in CelebA, the discriminator minimizes only classification errors for labels related to CelebA attributes, and not facial expressions related to RaFD. Under these settings, by alternating between CelebA and RaFD the discriminator learns all of the discriminative features for both datasets, and the generator learns to control all the labels in both datasets. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_19", "text": " Improved GAN Training. To stabilize the training process and generate higher quality images, we replace Eq. (1) with Wasserstein GAN objective with gradient penalty (1, 4) defined as ℒa​d​v=𝔼x​(Ds​r​c​(x))−𝔼x,c​(Ds​r​c​(G​(x,c)))−λg​p​𝔼x^​((‖▽x^​Ds​r​c​(x^)‖2−1)2),subscriptℒ𝑎𝑑𝑣subscript𝔼𝑥delimited-()subscript𝐷𝑠𝑟𝑐𝑥subscript𝔼𝑥𝑐delimited-()subscript𝐷𝑠𝑟𝑐𝐺𝑥𝑐subscript𝜆𝑔𝑝subscript𝔼^𝑥delimited-()superscriptsubscriptnormsubscript▽^𝑥subscript𝐷𝑠𝑟𝑐^𝑥212\\begin{split}\\mathcal{L}_{adv}=\\thinspace&{\\mathbb{E}}_{x}({D}_{src}(x))-{\\mathbb{E}}_{x,c}({D}_{src}(G(x,c)))\\thinspace\\thinspace\\\\ &-{\\lambda}_{gp}\\thinspace{\\mathbb{E}}_{\\hat{x}}({{(||{\\triangledown}_{\\hat{x}}{D}_{src}(\\hat{x})||}_{2}-1)}^{2})\\thinspace,\\end{split} (8) where x^^𝑥\\hat{x} is sampled uniformly along a straight line between a pair of a real and a generated images. We use λg​p=10subscript𝜆𝑔𝑝10{\\lambda}_{gp}=10 for all experiments. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_20", "text": " Network Architecture. Adapted from CycleGAN , StarGAN has the generator network composed of two convolutional layers with the stride size of two for downsampling, six residual blocks , and two transposed convolutional layers with the stride size of two for upsampling. We use instance normalization for the generator but no normalization for the discriminator. We leverage PatchGANs (7, 15, 33) for the discriminator network, which classifies whether local image patches are real or fake. See the appendix (Section 7.2) for more details about the network architecture. 
", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_21", "text": " In this section, we first compare StarGAN against recent methods on facial attribute transfer by conducting user studies. Next, we perform a classification experiment on facial expression synthesis. Lastly, we demonstrate empirical results that StarGAN can learn image-to-image translation from multiple datasets. All our experiments were conducted by using the model output from unseen images during the training phase. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_22", "text": " As our baseline models, we adopt DIAT and CycleGAN , both of which performs image-to-image translation between two different domains. For comparison, we trained these models multiple times for every pair of two different domains. We also adopt IcGAN as a baseline which can perform attribute transfer using a cGAN . ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_23", "text": " DIAT uses an adversarial loss to learn the mapping from x∈X𝑥𝑋x\\in X to y∈Y𝑦𝑌y\\in Y, where x𝑥x and y𝑦y are face images in two different domains X𝑋X and Y𝑌Y, respectively. This method has a regularization term on the mapping as ‖x−F​(G​(x))‖1subscriptnorm𝑥𝐹𝐺𝑥1{||x-F(G(x))||}_{1} to preserve identity features of the source image, where F𝐹F is a feature extractor pretrained on a face recognition task. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_24", "text": " CycleGAN also uses an adversarial loss to learn the mapping between two different domains X𝑋X and Y𝑌Y. This method regularizes the mapping via cycle consistency losses, ‖x−(GY​X​(GX​Y​(x)))‖1subscriptnorm𝑥subscript𝐺𝑌𝑋subscript𝐺𝑋𝑌𝑥1{||x-({G}_{YX}({G}_{XY}(x)))||}_{1} and ‖y−(GX​Y​(GY​X​(y)))‖1subscriptnorm𝑦subscript𝐺𝑋𝑌subscript𝐺𝑌𝑋𝑦1{||y-({G}_{XY}({G}_{YX}(y)))||}_{1}. This method requires two generators and discriminators for each pair of two different domains. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_25", "text": " IcGAN combines an encoder with a cGAN model. cGAN learns the mapping G:{z,c}→x:𝐺→𝑧𝑐𝑥G:\\{z,c\\}\\rightarrow x that generates an image x𝑥x conditioned on both the latent vector z𝑧z and the conditional vector c𝑐c. In addition, IcGAN introduces an encoder to learn the inverse mappings of cGAN, Ez:x→z:subscript𝐸𝑧→𝑥𝑧{E}_{z}:x\\rightarrow z and Ec:x→c:subscript𝐸𝑐→𝑥𝑐{E}_{c}:x\\rightarrow c. This allows IcGAN to synthesis images by only changing the conditional vector and preserving the latent vector. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_26", "text": " CelebA. The CelebFaces Attributes (CelebA) dataset contains 202,599 face images of celebrities, each annotated with 40 binary attributes. We crop the initial 178×218178218178\\times 218 size images to 178×178178178178\\times 178, then resize them as 128×128128128128\\times 128. We randomly select 2,000 images as test set and use all remaining images for training data. We construct seven domains using the following attributes: hair color (black, blond, brown), gender (male/female), and age (young/old). 
", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_27", "text": " RaFD. The Radboud Faces Database (RaFD) consists of 4,824 images collected from 67 participants. Each participant makes eight facial expressions in three different gaze directions, which are captured from three different angles. We crop the images to 256×256256256256\\times 256, where the faces are centered, and then resize them to 128×128128128128\\times 128. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_28", "text": " All models are trained using Adam with β1=0.5subscript𝛽10.5{\\beta}_{1}=0.5 and β2=0.999subscript𝛽20.999{\\beta}_{2}=0.999. For data augmentation we flip the images horizontally with a probability of 0.5. We perform one generator update after five discriminator updates as in . The batch size is set to 16 for all experiments. For experiments on CelebA, we train all models with a learning rate of 0.0001 for the first 10 epochs and linearly decay the learning rate to 0 over the next 10 epochs. To compensate for the lack of data, when training with RaFD we train all models for 100 epochs with a learning rate of 0.0001 and apply the same decaying strategy over the next 100 epochs. Training takes about one day on a single NVIDIA Tesla M40 GPU. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_29", "text": " We first compare our proposed method to the baseline models on a single and multi-attribute transfer tasks. We train the cross-domain models such as DIAT and CycleGAN multiple times considering all possible attribute value pairs. In the case of DIAT and CycleGAN, we perform multi-step translations to synthesize multiple attributes (e.g. transferring a gender attribute after changing a hair color). ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_30", "text": " Qualitative evaluation. Fig. 4 shows the facial attribute transfer results on CelebA. We observed that our method provides a higher visual quality of translation results on test data compared to the cross-domain models. One possible reason is the regularization effect of StarGAN through a multi-task learning framework. In other words, rather than training a model to perform a fixed translation (e.g., brown-to-blond hair), which is prone to overfitting, we train our model to flexibly translate images according to the labels of the target domain. This allows our model to learn reliable features universally applicable to multiple domains of images with different facial attribute values. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_31", "text": " Furthermore, compared to IcGAN, our model demonstrates an advantage in preserving the facial identity feature of an input. We conjecture that this is because our method maintains the spatial information by using activation maps from the convolutional layer as latent representation, rather than just a low-dimensional latent vector as in IcGAN. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_32", "text": " Quantitative evaluation protocol. 
For quantitative evaluations, we performed two user studies in a survey format using Amazon Mechanical Turk (AMT) to assess single and multiple attribute transfer tasks. Given an input image, the Turkers were instructed to choose the best generated image based on perceptual realism, quality of transfer in attribute(s), and preservation of a figure’s original identity. The options were four randomly shuffled images generated from four different methods. The generated images in one study have a single attribute transfer in either hair color (black, blond, brown), gender, or age. In another study, the generated images involve a combination of attribute transfers. Each Turker was asked 30 to 40 questions with a few simple yet logical questions for validating human effort. The number of validated Turkers in each user study is 146 and 100 in single and multiple transfer tasks, respectively. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_33", "text": " Quantitative results. Tables 1 and 2 show the results of our AMT experiment on single- and multi-attribute transfer tasks, respectively. StarGAN obtained the majority of votes for best transferring attributes in all cases. In the case of gender changes in Table 1, the voting difference between our model and other models was marginal, e.g., 39.1% for StarGAN vs. 31.4% for DIAT. However, in multi-attribute changes, e.g., the ‘G+A’ case in Table 2, the performance difference becomes significant, e.g., 49.8% for StarGAN vs. 20.3% for IcGAN), clearly showing the advantages of StarGAN in more complicated, multi-attribute transfer tasks. This is because unlike the other methods, StarGAN can handle image translation involving multiple attribute changes by randomly generating a target domain label in the training phase. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_34", "text": " We next train our model on the RaFD dataset to learn the task of synthesizing facial expressions. To compare StarGAN and baseline models, we fix the input domain as the ‘neutral’ expression, but the target domain varies among the seven remaining expressions. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_35", "text": " Qualitative evaluation. As seen in Fig. 5, StarGAN clearly generates the most natural-looking expressions while properly maintaining the personal identity and facial features of the input. While DIAT and CycleGAN mostly preserve the identity of the input, many of their results are shown blurry and do not maintain the degree of sharpness as seen in the input. IcGAN even fails to preserve the personal identity in the image by generating male images. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_36", "text": " We believe that the superiority of StarGAN in the image quality is due to its implicit data augmentation effect from a multi-task learning setting. RaFD images contain a relatively small size of samples, e.g., 500 images per domain. When trained on two domains, DIAT and CycleGAN can only use 1,000 training images at a time, but StarGAN can use 4,000 images in total from all the available domains for its training. This allows StarGAN to properly learn how to maintain the quality and sharpness of the generated output. 
", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_37", "text": " Quantitative evaluation. For a quantitative evaluation, we compute the classification error of a facial expression on synthesized images. We trained a facial expression classifier on the RaFD dataset (90%/10% splitting for training and test sets) using a ResNet-18 architecture , resulting in a near-perfect accuracy of 99.55%. We then trained each of image translation models using the same training set and performed image translation on the same, unseen test set. Finally, we classified the expression of these translated images using the above-mentioned classifier. As can be seen in Table 3, our model achieves the lowest classification error, indicating that our model produces the most realistic facial expressions among all the methods compared. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_38", "text": " Another important advantage of our model is the scalability in terms of the number of parameters required. The last column in Table 3 shows that the number of parameters required to learn all translations by StarGAN is seven times smaller than that of DIAT and fourteen times smaller than that of CycleGAN. This is because StarGAN requires only a single generator and discriminator pair, regardless of the number of domains, while in the case of cross-domain models such as CycleGAN, a completely different model should be trained for each source-target domain pair. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_39", "text": " Finally, we empirically demonstrate that our model can learn not only from multiple domains within a single dataset, but also from multiple datasets. We train our model jointly on the CelebA and RaFD datasets using the mask vector (see Section 3.2). To distinguish between the model trained only on RaFD and the model trained on both CelebA and RaFD, we denote the former as StarGAN-SNG (single) and the latter as StarGAN-JNT (joint). ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_40", "text": " Effects of joint training. Fig. 6 shows qualitative comparisons between StarGAN-SNG and StarGAN-JNT, where the task is to synthesize facial expressions of images in CelebA. StarGAN-JNT exhibits emotional expressions with high visual quality, while StarGAN-SNG generates reasonable but blurry images with gray backgrounds. This difference is due to the fact that StarGAN-JNT learns to translate CelebA images during training but not StarGAN-SNG. In other words, StarGAN-JNT can leverage both datasets to improve shared low-level tasks such facial keypoint detection and segmentation. By utilizing both CelebA and RaFD, StarGAN-JNT can improve these low-level tasks, which is beneficial to learning facial expression synthesis. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_41", "text": " Learned role of mask vector. In this experiment, we gave a one-hot vector c𝑐c by setting the dimension of a particular facial expression (available from the second dataset, RaFD) to one. In this case, since the label associated with the second data set is explicitly given, the proper mask vector would be (0,1)01(0,1). Fig. 
7 shows the case where this proper mask vector was given and the opposite case where a wrong mask vector of (1,0)10(1,0) was given. When the wrong mask vector was used, StarGAN-JNT fails to synthesize facial expressions, and it manipulates the age of the input image. This is because the model ignores the facial expression label as unknown and treats the facial attribute label as valid by the mask vector. Note that since one of the facial attributes is ‘young’, the model translates the image from young to old when it takes in a zero vector as input. From this behavior, we can confirm that StarGAN properly learned the intended role of a mask vector in image-to-image translations when involving all the labels from multiple datasets altogether. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_42", "text": " In this paper, we proposed StarGAN, a scalable image-to-image translation model among multiple domains using a single generator and a discriminator. Besides the advantages in scalability, StarGAN generated images of higher visual quality compared to existing methods (16, 23, 33), owing to the generalization capability behind the multi-task learning setting. In addition, the use of the proposed simple mask vector enables StarGAN to utilize multiple datasets with different sets of domain labels, thus handling all available labels from them. We hope our work to enable users to develop interesting image translation applications across multiple domains. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_43", "text": " Acknowledgements. This work was mainly done while the first author did a research internship at Clova AI Research, NAVER. We thank all the researchers at NAVER, especially Donghyun Kwak, for insightful discussions. This work was partially supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIP) (No. NRF2016R1C1B2015924). Jaegul Choo is the corresponding author. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_44", "text": " Fig. 8 shows an overview of StarGAN when learning from both the CelebA and RaFD datasets. As can be seen at the top of the figure, the label for CelebA contains binary attributes (Black, Blond, Brown, Male, and Young), while the label for RaFD provides information on categorical attributes (Angry, Fearful, Happy, Sad, and Disgusted). The mask vector is a two-dimensional one-hot vector which indicates whether the CelebA or RaFD label is valid. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_45", "text": " The network architectures of StarGAN are shown in Table 4 and 5. For the generator network, we use instance normalization in all layers except the last output layer. For the discriminator network, we use Leaky ReLU with a negative slope of 0.01. There are some notations; ndsubscript𝑛𝑑{n}_{d}: the number of domain, ncsubscript𝑛𝑐{n}_{c}: the dimension of domain labels (nd+2subscript𝑛𝑑2{n}_{d}+2 when training with both the CelebA and RaFD datasets, otherwise same as ndsubscript𝑛𝑑{n}_{d}), N: the number of output channels, K: kernel size, S: stride size, P: padding size, IN: instance normalization. 
", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_46", "text": " Figs. 9, 10, 11, and 12 show additional images with 256×256256256256\\times 256 resolutions generated by StarGAN. All images were generated by a single generator trained on both the CelebA and RaFD datasets. We trained StarGAN on a single NVIDIA Pascal M40 GPU for seven days. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" } ]
How can one explain the statement "inferior detection accuracy that does not match the network’s superior classification accuracy" mentioned by the authors?
The authors show in Table 2 that a naïve implementation of Faster R-CNN, which applies RoI pooling after the last convolutional layer as the statement describes, does indeed have inferior detection accuracy (68.9% mAP versus 76.4% for the standard Faster R-CNN) [1].
[ 1 ]
[ { "id": "1605.06409_all_0", "text": " A prevalent family (8, 6, 18) of deep networks for object detection can be divided into two subnetworks by the Region-of-Interest (RoI) pooling layer : (i) a shared, “fully convolutional” subnetwork independent of RoIs, and (ii) an RoI-wise subnetwork that does not share computation. This decomposition was historically resulted from the pioneering classification architectures, such as AlexNet and VGG Nets , that consist of two subnetworks by design — a convolutional subnetwork ending with a spatial pooling layer, followed by several fully-connected (fc) layers. Thus the (last) spatial pooling layer in image classification networks is naturally turned into the RoI pooling layer in object detection networks (8, 6, 18). ", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" }, { "id": "1605.06409_all_1", "text": " But recent state-of-the-art image classification networks such as Residual Nets (ResNets) and GoogLeNets (24, 26) are by design fully convolutional111Only the last layer is fully-connected, which is removed and replaced when fine-tuning for object detection.. By analogy, it appears natural to use all convolutional layers to construct the shared, convolutional subnetwork in the object detection architecture, leaving the RoI-wise subnetwork no hidden layer. However, as empirically investigated in this work, this naïve solution turns out to have considerably inferior detection accuracy that does not match the network’s superior classification accuracy. To remedy this issue, in the ResNet paper the RoI pooling layer of the Faster R-CNN detector is unnaturally inserted between two sets of convolutional layers — this creates a deeper RoI-wise subnetwork that improves accuracy, at the cost of lower speed due to the unshared per-RoI computation. ", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" }, { "id": "1605.06409_all_2", "text": " We argue that the aforementioned unnatural design is caused by a dilemma of increasing translation invariance for image classification vs. respecting translation variance for object detection. On one hand, the image-level classification task favors translation invariance — shift of an object inside an image should be indiscriminative. Thus, deep (fully) convolutional architectures that are as translation-invariant as possible are preferable as evidenced by the leading results on ImageNet classification (9, 24, 26). On the other hand, the object detection task needs localization representations that are translation-variant to an extent. For example, translation of an object inside a candidate box should produce meaningful responses for describing how good the candidate box overlaps the object. We hypothesize that deeper convolutional layers in an image classification network are less sensitive to translation. To address this dilemma, the ResNet paper’s detection pipeline inserts the RoI pooling layer into convolutions — this region-specific operation breaks down translation invariance, and the post-RoI convolutional layers are no longer translation-invariant when evaluated across different regions. However, this design sacrifices training and testing efficiency since it introduces a considerable number of region-wise layers (Table 1). 
", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" }, { "id": "1605.06409_all_3", "text": " In this paper, we develop a framework called Region-based Fully Convolutional Network (R-FCN) for object detection. Our network consists of shared, fully convolutional architectures as is the case of FCN . To incorporate translation variance into FCN, we construct a set of position-sensitive score maps by using a bank of specialized convolutional layers as the FCN output. Each of these score maps encodes the position information with respect to a relative spatial position (e.g., “to the left of an object”). On top of this FCN, we append a position-sensitive RoI pooling layer that shepherds information from these score maps, with no weight (convolutional/fc) layers following. The entire architecture is learned end-to-end. All learnable layers are convolutional and shared on the entire image, yet encode spatial information required for object detection. Figure 1 illustrates the key idea and Table 1 compares the methodologies among region-based detectors. ", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" }, { "id": "1605.06409_all_4", "text": " Using the 101-layer Residual Net (ResNet-101) as the backbone, our R-FCN yields competitive results of 83.6% mAP on the PASCAL VOC 2007 set and 82.0% the 2012 set. Meanwhile, our results are achieved at a test-time speed of 170ms per image using ResNet-101, which is 2.5×\\times to 20×\\times faster than the Faster R-CNN + ResNet-101 counterpart in . These experiments demonstrate that our method manages to address the dilemma between invariance/variance on translation, and fully convolutional image-level classifiers such as ResNets can be effectively converted to fully convolutional object detectors. Code is made publicly available at: https://github.com/daijifeng001/r-fcn. ", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" }, { "id": "1605.06409_all_5", "text": " Overview. Following R-CNN , we adopt the popular two-stage object detection strategy (7, 8, 6, 18, 1, 22) that consists of: (i) region proposal, and (ii) region classification. Although methods that do not rely on region proposal do exist (e.g., (17, 14)), region-based systems still possess leading accuracy on several benchmarks (5, 13, 20). We extract candidate regions by the Region Proposal Network (RPN) , which is a fully convolutional architecture in itself. Following , we share the features between RPN and R-FCN. Figure 2 shows an overview of the system. ", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" }, { "id": "1605.06409_all_6", "text": " Given the proposal regions (RoIs), the R-FCN architecture is designed to classify the RoIs into object categories and background. In R-FCN, all learnable weight layers are convolutional and are computed on the entire image. The last convolutional layer produces a bank of k2superscript𝑘2k^{2} position-sensitive score maps for each category, and thus has a k2​(C+1)superscript𝑘2𝐶1k^{2}(C+1)-channel output layer with C𝐶C object categories (+11+1 for background). The bank of k2superscript𝑘2k^{2} score maps correspond to a k×k𝑘𝑘k\\times k spatial grid describing relative positions. For example, with k×k=3×3𝑘𝑘33k\\times k=3\\times 3, the 9 score maps encode the cases of {top-left, top-center, top-right, …, bottom-right} of an object category. 
", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" }, { "id": "1605.06409_all_7", "text": " R-FCN ends with a position-sensitive RoI pooling layer. This layer aggregates the outputs of the last convolutional layer and generates scores for each RoI. Unlike (8, 6), our position-sensitive RoI layer conducts selective pooling, and each of the k×k𝑘𝑘k\\times k bin aggregates responses from only one score map out of the bank of k×k𝑘𝑘k\\times k score maps. With end-to-end training, this RoI layer shepherds the last convolutional layer to learn specialized position-sensitive score maps. Figure 1 illustrates this idea. Figure 4 and 4 visualize an example. The details are introduced as follows. ", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" }, { "id": "1605.06409_all_8", "text": " Backbone architecture. The incarnation of R-FCN in this paper is based on ResNet-101 , though other networks (10, 23) are applicable. ResNet-101 has 100 convolutional layers followed by global average pooling and a 1000-class fc layer. We remove the average pooling layer and the fc layer and only use the convolutional layers to compute feature maps. We use the ResNet-101 released by the authors of , pre-trained on ImageNet . The last convolutional block in ResNet-101 is 2048-d, and we attach a randomly initialized 1024-d 1×\\times1 convolutional layer for reducing dimension (to be precise, this increases the depth in Table 1 by 1). Then we apply the k2​(C+1)superscript𝑘2𝐶1k^{2}(C+1)-channel convolutional layer to generate score maps, as introduced next. ", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" }, { "id": "1605.06409_all_9", "text": " Position-sensitive score maps & Position-sensitive RoI pooling. To explicitly encode position information into each RoI, we divide each RoI rectangle into k×k𝑘𝑘k\\times k bins by a regular grid. For an RoI rectangle of a size w×h𝑤ℎw\\times h, a bin is of a size ≈wk×hkabsent𝑤𝑘ℎ𝑘\\approx\\frac{w}{k}\\times\\frac{h}{k} (8, 6). In our method, the last convolutional layer is constructed to produce k2superscript𝑘2k^{2} score maps for each category. Inside the (i,j)𝑖𝑗(i,j)-th bin (0≤i,j≤k−1formulae-sequence0𝑖𝑗𝑘10\\leq i,j\\leq k-1), we define a position-sensitive RoI pooling operation that pools only over the (i,j)𝑖𝑗(i,j)-th score map: rc​(i,j|Θ)=∑(x,y)∈bin​(i,j)zi,j,c​(x+x0,y+y0|Θ)/n.subscript𝑟𝑐𝑖conditional𝑗Θsubscript𝑥𝑦bin𝑖𝑗subscript𝑧𝑖𝑗𝑐𝑥subscript𝑥0𝑦conditionalsubscript𝑦0Θ𝑛r_{c}(i,j\\leavevmode\\nobreak\\ |\\leavevmode\\nobreak\\ \\Theta)=\\sum_{(x,y)\\in\\text{bin}(i,j)}z_{i,j,c}(x+x_{0},y+y_{0}\\leavevmode\\nobreak\\ |\\leavevmode\\nobreak\\ \\Theta)/n. (1) Here rc​(i,j)subscript𝑟𝑐𝑖𝑗r_{c}(i,j) is the pooled response in the (i,j)𝑖𝑗(i,j)-th bin for the c𝑐c-th category, zi,j,csubscript𝑧𝑖𝑗𝑐z_{i,j,c} is one score map out of the k2​(C+1)superscript𝑘2𝐶1k^{2}(C+1) score maps, (x0,y0)subscript𝑥0subscript𝑦0(x_{0},y_{0}) denotes the top-left corner of an RoI, n𝑛n is the number of pixels in the bin, and ΘΘ\\Theta denotes all learnable parameters of the network. The (i,j)𝑖𝑗(i,j)-th bin spans ⌊i​wk⌋≤x<⌈(i+1)​wk⌉𝑖𝑤𝑘𝑥𝑖1𝑤𝑘\\lfloor i\\frac{w}{k}\\rfloor\\leq x<\\lceil(i+1)\\frac{w}{k}\\rceil and ⌊j​hk⌋≤y<⌈(j+1)​hk⌉𝑗ℎ𝑘𝑦𝑗1ℎ𝑘\\lfloor j\\frac{h}{k}\\rfloor\\leq y<\\lceil(j+1)\\frac{h}{k}\\rceil. The operation of Eqn.(1) is illustrated in Figure 1, where a color represents a pair of (i,j)𝑖𝑗(i,j). Eqn.(1) performs average pooling (as we use throughout this paper), but max pooling can be conducted as well. 
", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" }, { "id": "1605.06409_all_10", "text": " The k2superscript𝑘2k^{2} position-sensitive scores then vote on the RoI. In this paper we simply vote by averaging the scores, producing a (C+1)𝐶1(C+1)-dimensional vector for each RoI: rc​(Θ)=∑i,jrc​(i,j|Θ)subscript𝑟𝑐Θsubscript𝑖𝑗subscript𝑟𝑐𝑖conditional𝑗Θr_{c}(\\Theta)=\\sum_{i,j}r_{c}(i,j\\leavevmode\\nobreak\\ |\\leavevmode\\nobreak\\ \\Theta). Then we compute the softmax responses across categories: sc​(Θ)=erc​(Θ)/∑c′=0Cerc′​(Θ)subscript𝑠𝑐Θsuperscript𝑒subscript𝑟𝑐Θsuperscriptsubscriptsuperscript𝑐′0𝐶superscript𝑒subscript𝑟superscript𝑐′Θs_{c}(\\Theta)=e^{r_{c}(\\Theta)}/\\sum_{c^{\\prime}=0}^{C}e^{r_{c^{\\prime}}(\\Theta)}. They are used for evaluating the cross-entropy loss during training and for ranking the RoIs during inference. ", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" }, { "id": "1605.06409_all_11", "text": " We further address bounding box regression (7, 6) in a similar way. Aside from the above k2​(C+1)superscript𝑘2𝐶1k^{2}(C+1)-d convolutional layer, we append a sibling 4​k24superscript𝑘24k^{2}-d convolutional layer for bounding box regression. The position-sensitive RoI pooling is performed on this bank of 4​k24superscript𝑘24k^{2} maps, producing a 4​k24superscript𝑘24k^{2}-d vector for each RoI. Then it is aggregated into a 444-d vector by average voting. This 444-d vector parameterizes a bounding box as t=(tx,ty,tw,th)𝑡subscript𝑡𝑥subscript𝑡𝑦subscript𝑡𝑤subscript𝑡ℎt=(t_{x},t_{y},t_{w},t_{h}) following the parameterization in . We note that we perform class-agnostic bounding box regression for simplicity, but the class-specific counterpart (i.e., with a 4​k2​C4superscript𝑘2𝐶4k^{2}C-d output layer) is applicable. ", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" }, { "id": "1605.06409_all_12", "text": " The concept of position-sensitive score maps is partially inspired by that develops FCNs for instance-level semantic segmentation. We further introduce the position-sensitive RoI pooling layer that shepherds learning of the score maps for object detection. There is no learnable layer after the RoI layer, enabling nearly cost-free region-wise computation and speeding up both training and inference. ", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" }, { "id": "1605.06409_all_13", "text": " Training. With pre-computed region proposals, it is easy to end-to-end train the R-FCN architecture. Following , our loss function defined on each RoI is the summation of the cross-entropy loss and the box regression loss: L​(s,tx,y,w,h)=Lc​l​s​(sc∗)+λ​(c∗>0)​Lr​e​g​(t,t∗)𝐿𝑠subscript𝑡𝑥𝑦𝑤ℎsubscript𝐿𝑐𝑙𝑠subscript𝑠superscript𝑐𝜆delimited-()superscript𝑐0subscript𝐿𝑟𝑒𝑔𝑡superscript𝑡L(s,t_{x,y,w,h})=L_{cls}(s_{c^{*}})+\\lambda(c^{*}>0)L_{reg}(t,t^{*}). Here c∗superscript𝑐c^{*} is the RoI’s ground-truth label (c∗=0superscript𝑐0c^{*}=0 means background). Lc​l​s​(sc∗)=−log⁡(sc∗)subscript𝐿𝑐𝑙𝑠subscript𝑠superscript𝑐subscript𝑠superscript𝑐L_{cls}(s_{c^{*}})=-\\log(s_{c^{*}}) is the cross-entropy loss for classification, Lr​e​gsubscript𝐿𝑟𝑒𝑔L_{reg} is the bounding box regression loss as defined in , and t∗superscript𝑡t^{*} represents the ground truth box. (c∗>0)delimited-()superscript𝑐0(c^{*}>0) is an indicator which equals to 1 if the argument is true and 0 otherwise. We set the balance weight λ=1𝜆1\\lambda=1 as in . 
We define positive examples as the RoIs that have intersection-over-union (IoU) overlap with a ground-truth box of at least 0.5, and negative otherwise. ", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" }, { "id": "1605.06409_all_14", "text": " It is easy for our method to adopt online hard example mining (OHEM) during training. Our negligible per-RoI computation enables nearly cost-free example mining. Assuming N𝑁N proposals per image, in the forward pass, we evaluate the loss of all N𝑁N proposals. Then we sort all RoIs (positive and negative) by loss and select B𝐵B RoIs that have the highest loss. Backpropagation is performed based on the selected examples. Because our per-RoI computation is negligible, the forward time is nearly not affected by N𝑁N, in contrast to OHEM Fast R-CNN in that may double training time. We provide comprehensive timing statistics in Table 5 in the next section. ", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" }, { "id": "1605.06409_all_15", "text": " We use a weight decay of 0.0005 and a momentum of 0.9. By default we use single-scale training: images are resized such that the scale (shorter side of image) is 600 pixels (6, 18). Each GPU holds 1 image and selects B=128𝐵128B=128 RoIs for backprop. We train the model with 8 GPUs (so the effective mini-batch size is 8×8\\times). We fine-tune R-FCN using a learning rate of 0.001 for 20k mini-batches and 0.0001 for 10k mini-batches on VOC. To have R-FCN share features with RPN (Figure 2), we adopt the 4-step alternating training222Although joint training is applicable, it is not straightforward to perform example mining jointly. in , alternating between training RPN and training R-FCN. ", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" }, { "id": "1605.06409_all_16", "text": " Inference. As illustrated in Figure 2, the feature maps shared between RPN and R-FCN are computed (on an image with a single scale of 600). Then the RPN part proposes RoIs, on which the R-FCN part evaluates category-wise scores and regresses bounding boxes. During inference we evaluate 300 RoIs as in for fair comparisons. The results are post-processed by non-maximum suppression (NMS) using a threshold of 0.3 IoU , as standard practice. ", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" }, { "id": "1605.06409_all_17", "text": " À trous and stride. Our fully convolutional architecture enjoys the benefits of the network modifications that are widely used by FCNs for semantic segmentation (15, 2). Particularly, we reduce ResNet-101’s effective stride from 32 pixels to 16 pixels, increasing the score map resolution. All layers before and on the conv444 stage (stride=16) are unchanged; the stride=2 operations in the first conv555 block is modified to have stride=1, and all convolutional filters on the conv555 stage are modified by the “hole algorithm” (15, 2) (“Algorithme à trous” ) to compensate for the reduced stride. For fair comparisons, the RPN is computed on top of the conv444 stage (that are shared with R-FCN), as is the case in with Faster R-CNN, so the RPN is not affected by the à trous trick. The following table shows the ablation results of R-FCN (k×k=7×7𝑘𝑘77k\\times k=7\\times 7, no hard example mining). The à trous trick improves mAP by 2.6 points. ", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" }, { "id": "1605.06409_all_18", "text": " Visualization. 
In Figure 4 and 4 we visualize the position-sensitive score maps learned by R-FCN when k×k=3×3𝑘𝑘33k\\times k=3\\times 3. These specialized maps are expected to be strongly activated at a specific relative position of an object. For example, the “top-center-sensitive” score map exhibits high scores roughly near the top-center position of an object. If a candidate box precisely overlaps with a true object (Figure 4), most of the k2superscript𝑘2k^{2} bins in the RoI are strongly activated, and their voting leads to a high score. On the contrary, if a candidate box does not correctly overlaps with a true object (Figure 4), some of the k2superscript𝑘2k^{2} bins in the RoI are not activated, and the voting score is low. ", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" }, { "id": "1605.06409_all_19", "text": " R-CNN has demonstrated the effectiveness of using region proposals (27, 28) with deep networks. R-CNN evaluates convolutional networks on cropped and warped regions, and computation is not shared among regions (Table 1). SPPnet , Fast R-CNN , and Faster R-CNN are “semi-convolutional”, in which a convolutional subnetwork performs shared computation on the entire image and another subnetwork evaluates individual regions. ", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" }, { "id": "1605.06409_all_20", "text": " There have been object detectors that can be thought of as “fully convolutional” models. OverFeat detects objects by sliding multi-scale windows on the shared convolutional feature maps; similarly, in Fast R-CNN and , sliding windows that replace region proposals are investigated. In these cases, one can recast a sliding window of a single scale as a single convolutional layer. The RPN component in Faster R-CNN is a fully convolutional detector that predicts bounding boxes with respect to reference boxes (anchors) of multiple sizes. The original RPN is class-agnostic in , but its class-specific counterpart is applicable (see also ) as we evaluate in the following. ", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" }, { "id": "1605.06409_all_21", "text": " Another family of object detectors resort to fully-connected (fc) layers for generating holistic object detection results on an entire image, such as (25, 4, 17). ", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" }, { "id": "1605.06409_all_22", "text": " We perform experiments on PASCAL VOC that has 20 object categories. We train the models on the union set of VOC 2007 trainval and VOC 2012 trainval (“07+12”) following , and evaluate on VOC 2007 test set. Object detection accuracy is measured by mean Average Precision (mAP). ", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" }, { "id": "1605.06409_all_23", "text": " Comparisons with Other Fully Convolutional Strategies ", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" }, { "id": "1605.06409_all_24", "text": " Though fully convolutional detectors are available, experiments show that it is nontrivial for them to achieve good accuracy. We investigate the following fully convolutional strategies (or “almost” fully convolutional strategies that have only one classifier fc layer per RoI), using ResNet-101: ", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" }, { "id": "1605.06409_all_25", "text": " Naïve Faster R-CNN. 
As discussed in the introduction, one may use all convolutional layers in ResNet-101 to compute the shared feature maps, and adopt RoI pooling after the last convolutional layer (after conv555). An inexpensive 21-class fc layer is evaluated on each RoI (so this variant is “almost” fully convolutional). The à trous trick is used for fair comparisons. ", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" }, { "id": "1605.06409_all_26", "text": " Class-specific RPN. This RPN is trained following , except that the 2-class (object or not) convolutional classifier layer is replaced with a 21-class convolutional classifier layer. For fair comparisons, for this class-specific RPN we use ResNet-101’s conv555 layers with the à trous trick. ", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" }, { "id": "1605.06409_all_27", "text": " R-FCN without position-sensitivity. By setting k=1𝑘1k=1 we remove the position-sensitivity of the R-FCN. This is equivalent to global pooling within each RoI. ", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" }, { "id": "1605.06409_all_28", "text": " Analysis. Table 2 shows the results. We note that the standard (not naïve) Faster R-CNN in the ResNet paper achieves 76.4% mAP with ResNet-101 (see also Table 5), which inserts the RoI pooling layer between conv444 and conv555 . As a comparison, the naïve Faster R-CNN (that applies RoI pooling after conv555) has a drastically lower mAP of 68.9% (Table 2). This comparison empirically justifies the importance of respecting spatial information by inserting RoI pooling between layers for the Faster R-CNN system. Similar observations are reported in . ", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" }, { "id": "1605.06409_all_29", "text": " The class-specific RPN has an mAP of 67.6% (Table 2), about 9 points lower than the standard Faster R-CNN’s 76.4%. This comparison is in line with the observations in (6, 12) — in fact, the class-specific RPN is similar to a special form of Fast R-CNN that uses dense sliding windows as proposals, which shows inferior results as reported in (6, 12). ", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" }, { "id": "1605.06409_all_30", "text": " On the other hand, our R-FCN system has significantly better accuracy (Table 2). Its mAP (76.6%) is on par with the standard Faster R-CNN’s (76.4%, Table 5). These results indicate that our position-sensitive strategy manages to encode useful spatial information for locating objects, without using any learnable layer after RoI pooling. ", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" }, { "id": "1605.06409_all_31", "text": " The importance of position-sensitivity is further demonstrated by setting k=1𝑘1k=1, for which R-FCN is unable to converge. In this degraded case, no spatial information can be explicitly captured within an RoI. Moreover, we report that naïve Faster R-CNN is able to converge if its RoI pooling output resolution is 1×1111\\times 1, but the mAP further drops by a large margin to 61.7% (Table 2). 
", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" }, { "id": "1605.06409_all_32", "text": " Comparisons with Faster R-CNN Using ResNet-101 ", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" }, { "id": "1605.06409_all_33", "text": " Next we compare with standard “Faster R-CNN + ResNet-101” which is the strongest competitor and the top-performer on the PASCAL VOC, MS COCO, and ImageNet benchmarks. We use k×k=7×7𝑘𝑘77k\\times k=7\\times 7 in the following. Table 5 shows the comparisons. Faster R-CNN evaluates a 10-layer subnetwork for each region to achieve good accuracy, but R-FCN has negligible per-region cost. With 300 RoIs at test time, Faster R-CNN takes 0.42s per image, 2.5×\\times slower than our R-FCN that takes 0.17s per image (on a K40 GPU; this number is 0.11s on a Titan X GPU). R-FCN also trains faster than Faster R-CNN. Moreover, hard example mining adds no cost to R-FCN training (Table 5). It is feasible to train R-FCN when mining from 2000 RoIs, in which case Faster R-CNN is 6×\\times slower (2.9s vs. 0.46s). But experiments show that mining from a larger set of candidates (e.g., 2000) has no benefit (Table 5). So we use 300 RoIs for both training and inference in other parts of this paper. ", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" }, { "id": "1605.06409_all_34", "text": " Table 5 shows more comparisons. Following the multi-scale training in , we resize the image in each training iteration such that the scale is randomly sampled from {400,500,600,700,800} pixels. We still test a single scale of 600 pixels, so add no test-time cost. The mAP is 80.5%. In addition, we train our model on the MS COCO trainval set and then fine-tune it on the PASCAL VOC set. R-FCN achieves 83.6% mAP (Table 5), close to the “Faster R-CNN +++” system in that uses ResNet-101 as well. We note that our competitive result is obtained at a test speed of 0.17 seconds per image, 20×\\times faster than Faster R-CNN +++ that takes 3.36 seconds as it further incorporates iterative box regression, context, and multi-scale testing . These comparisons are also observed on the PASCAL VOC 2012 test set (Table 5). ", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" }, { "id": "1605.06409_all_35", "text": " On the Impact of Depth ", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" }, { "id": "1605.06409_all_36", "text": " The following table shows the R-FCN results using ResNets of different depth . Our detection accuracy increases when the depth is increased from 50 to 101, but gets saturated with a depth of 152. ", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" }, { "id": "1605.06409_all_37", "text": " training data test data ResNet-50 ResNet-101 ResNet-152 R-FCN 07+12 07 77.0 79.5 79.6 R-FCN multi-sc train 07+12 07 78.7 80.5 80.4 ", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" }, { "id": "1605.06409_all_38", "text": " On the Impact of Region Proposals ", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" }, { "id": "1605.06409_all_39", "text": " R-FCN can be easily applied with other region proposal methods, such as Selective Search (SS) and Edge Boxes (EB) . The following table shows the results (using ResNet-101) with different proposals. R-FCN performs competitively using SS or EB, showing the generality of our method. 
", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" }, { "id": "1605.06409_all_40", "text": " training data test data RPN SS EB R-FCN 07+12 07 79.5 77.2 77.8 ", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" }, { "id": "1605.06409_all_41", "text": " Next we evaluate on the MS COCO dataset that has 80 object categories. Our experiments involve the 80k train set, 40k val set, and 20k test-dev set. We set the learning rate as 0.001 for 90k iterations and 0.0001 for next 30k iterations, with an effective mini-batch size of 8. We extend the alternating training from 4-step to 5-step (i.e., stopping after one more RPN training step), which slightly improves accuracy on this dataset when the features are shared; we also report that 2-step training is sufficient to achieve comparably good accuracy but the features are not shared. ", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" }, { "id": "1605.06409_all_42", "text": " The results are in Table 6. Our single-scale trained R-FCN baseline has a val result of 48.9%/27.6%. This is comparable to the Faster R-CNN baseline (48.4%/27.2%), but ours is 2.5×\\times faster testing. It is noteworthy that our method performs better on objects of small sizes (defined by ). Our multi-scale trained (yet single-scale tested) R-FCN has a result of 49.1%/27.8% on the val set and 51.5%/29.2% on the test-dev set. Considering COCO’s wide range of object scales, we further evaluate a multi-scale testing variant following , and use testing scales of {200,400,600,800,1000}. The mAP is 53.2%/31.5%. This result is close to the 1st-place result (Faster R-CNN +++ with ResNet-101, 55.7%/34.9%) in the MS COCO 2015 competition. Nevertheless, our method is simpler and adds no bells and whistles such as context or iterative box regression that were used by , and is faster for both training and testing. ", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" }, { "id": "1605.06409_all_43", "text": " We presented Region-based Fully Convolutional Networks, a simple but accurate and efficient framework for object detection. Our system naturally adopts the state-of-the-art image classification backbones, such as ResNets, that are by design fully convolutional. Our method achieves accuracy competitive with the Faster R-CNN counterpart, but is much faster during both training and inference. ", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" }, { "id": "1605.06409_all_44", "text": " We intentionally keep the R-FCN system presented in the paper simple. There have been a series of orthogonal extensions of FCNs that were developed for semantic segmentation (e.g., see ), as well as extensions of region-based methods for object detection (e.g., see (9, 1, 22)). We expect our system will easily enjoy the benefits of the progress in the field. ", "title": "R-FCN: Object Detection via Region-based Fully Convolutional Networks" } ]
Why did the authors choose to test the proposed graph pooling method specifically on molecular property prediction tasks?
The proposed graph pooling method was tested on molecular property prediction tasks because predefining the number of clusters is especially detrimental in that setting, where no single number of clusters is suitable across all graphs [3]. The number of functional groups, which often determines useful characteristics and chemical behaviors, can vary significantly across different molecules [9].
[ 3, 9 ]
[ { "id": "2209.02939_all_0", "text": " Graph Neural Networks (GNNs) learn representations of individual nodes based on the connectivity structure of an input graph. For graph-level prediction tasks, the standard procedure globally pools all the node features into a single graph representation without weight difference, then feeds the representation to a final prediction layer. This process implies that information only propagates through node-to-node edges, rendering the model unable to hierarchically aggregate information efficiently beyond local convolution. ", "title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters" }, { "id": "2209.02939_all_1", "text": " However, a hierarchical structure can encode the global topology of graphs that is useful for effective learning of long range interactions. Therefore, designing a pooling architecture which respects the graph structure is crucial for downstream tasks such as social network analyses https://doi.org/10.48550/arxiv.1609.02907 ; https://doi.org/10.48550/arxiv.1706.02216 and molecule property predictions https://doi.org/10.48550/arxiv.1509.09292 ; https://doi.org/10.48550/arxiv.1606.09375 ; https://doi.org/10.48550/arxiv.1312.6203 ; doi:10.1021/acs.jcim.6b00601 ; 4700287 ; https://doi.org/10.48550/arxiv.1812.01070 . ", "title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters" }, { "id": "2209.02939_all_2", "text": " As an alternative to global pooling, DiffPool first proposed an end-to-end differentiable pooling by soft-classifying each node into a smaller number of clusters ying2018 . Later gPool gao2019 and SAGpool lee2019 incorporated the attention mechanism into pooling, while MinCutPool proposed grouping the nodes into clusters by minimizing the relaxed K𝐾K-way normalized minimum cut objective bianchi2019 . ", "title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters" }, { "id": "2209.02939_all_3", "text": " In most inductive settings, there is no single number of clusters that is suitable across all graphs in the dataset. Particularly in molecular graphs, the number of functional groups often determines useful characteristics and chemical behaviors, while varying significantly across different molecules. Nonetheless, existing pooling methods require the number of clusters as a hyperparameter, then operates under the assumption that all graphs share the same number of clusters ranjan2020asap . This is often undesirable as it not only requires additional hyperparameter tuning, but also imposes a strong inductive bias that deteriorates downstream performance. ", "title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters" }, { "id": "2209.02939_all_4", "text": " To overcome this challenge, we propose GMPool, a general pooling framework that does not require an universal number of clusters as a user hyperparameter. Figure 1 depicts the overall framework of GMPool. The core intuition is that the product of a pooling matrix with itself forms a grouping matrix, where each (i,j)𝑖𝑗(i,j)-th entry indicates the pairwise clustering similarity: whether the nodes i𝑖i and j𝑗j are pooled to the same clusters. For each graph, GMPool parameterizes the clustering similarities in its grouping matrix via a classification layer. Finally, we perform SVD on the grouping matrix to obtain the pooling matrix such that the overall rank represents the suitable number of clusters. 
We also test a single-pooling variant NGMPool that does not perform any decomposition, but rather uses the grouping matrix as is. In real-world molecular property prediction tasks, we show that our approach outperforms previous baselines, while successfully learning suitable clusters. ", "title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters" }, { "id": "2209.02939_all_5", "text": " The main contributions of this paper are as follows: • We design a grouping matrix-based pooling operator that does not require users to specify the number of clusters a priori. • We propose GMPool and NGMPool. GMPool performs SVD on the grouping matrix to obtain the pooling matrix, whereas NGMPool utilizes the grouping matrix as is. • We demonstrate the power of our methods both quantitatively and qualitatively on a wide range of real molecular property prediction tasks. ", "title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters" }, { "id": "2209.02939_all_6", "text": " GNN architectures have shown great performance in various fields such as social network data, authorship/citation networks, and molecular data that can naturally be interpreted as graphs. For graph convolution, several work have utilized the graph Laplacian in the spectral domain. However, sheer convolution in the spectral domain suffers from the non-locality problem, and various approaches have been introduced to overcome this limitation. https://doi.org/10.48550/arxiv.1706.02216 ; https://doi.org/10.48550/arxiv.1810.00826 ; https://doi.org/10.48550/arxiv.1710.10903 ; https://doi.org/10.48550/arxiv.1704.01212 One stream of work has embedded the attention architecture into GNN, inferring the interaction between nodes without using a diffusion-like picture. https://doi.org/10.48550/arxiv.1710.10903 Another line of work considered message passing networks, which ensures the signal to be localized and non-linearly weighted. https://doi.org/10.48550/arxiv.1704.01212 This architecture has been proven to be highly effective in molecular property prediction fields. doi:10.1021/acs.jcim.9b00237 ", "title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters" }, { "id": "2209.02939_all_7", "text": " Graph pooling aims to utilize the hierarchical nature of graphs. Early work mainly focused on fixed axiomatic pooling methods such as minimum cut, k-means, and spectral clustering without any gradient-based optimization. https://doi.org/10.48550/arxiv.1312.6203 ; NIPS2011_6c1da886 ; 10.1016/j.patcog.2006.04.007 ; https://doi.org/10.48550/arxiv.0711.0189 ; 4302760 Although these pooling methods are effective on graphs without noise, the same heuristic often fails to work well on real datasets and tasks, especially due to a lack of differentiability that prohibits training under supervised signals. Since node representations and pooling strategies mutually affect each other during the training process, simultaneous optimization of whole components is crucial for avoiding local minima. Among many solutions, Diffpoolying2018 is the first to propose an end-to-end learnable pooling mechanism that learns an assignment matrix in which each entry represents the probability of a node being assigned to a cluster. ", "title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters" }, { "id": "2209.02939_all_8", "text": " gPool gao2019 and SAGPool lee2019 are ranking-based pooling methods that coarsen the input graph by ranking and downsampling a small subset of nodes. 
MinCutPool bianchi2019 leverages a continuous relaxation of the minimum-cut objective, enabling spectral clustering under full differentiability. ", "title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters" }, { "id": "2209.02939_all_9", "text": " However, the pooling methods above all share a common limitation: the number of clusters must be predefined for each layer as hyperparameters. This limitation is especially detrimental in inductive settings such as molecular property prediction, where each graph can have varying numbers of useful sub-structures. https://doi.org/10.1111/cbdd.12952 ; doi:10.1021/acs.jmedchem.0c00754 ; GUVENCH20161928 Allowing the model to pool towards varying number of clusters based on data is expected to enhance performance, and our proposed GMPool allows such variation through the rank of the grouping matrix. To the best of our knowledge, GMPool is the first to achieve high performance without the need to manually adjust the number of clusters through additional hyperparameter tuning. ", "title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters" }, { "id": "2209.02939_all_10", "text": " In this section, we propose a novel differentiable pooling layer, GMPool, which obtains the pooling matrix by first building a grouping matrix that contains clustering similarities of pairwise nodes and then decomposing the matrix into its square-root form. We start the section with preliminary information, then outline the details of GMPool in later sections. ", "title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters" }, { "id": "2209.02939_all_11", "text": " We assume an inductive graph-level prediction setting where our aim is to learn a function fθ:𝒢→𝒴:subscript𝑓𝜃→𝒢𝒴f_{\\theta}:\\mathcal{G}\\to\\mathcal{Y} that maps a graph G∈𝒢𝐺𝒢G\\in\\mathcal{G} to a property label y∈𝒴𝑦𝒴y\\in\\mathcal{Y}. Each graph G𝐺G with n𝑛n nodes is represented as a triplet G=(A,X,E)𝐺𝐴𝑋𝐸G=(A,X,E) with graph adjacency A∈{0,1}n×n𝐴superscript01𝑛𝑛A\\in\\{0,1\\}^{n\\times n}, node features X∈ℝn×dn𝑋superscriptℝ𝑛subscript𝑑𝑛X\\in\\mathbb{R}^{n\\times d_{n}}, and edge features E∈ℝn×n×de𝐸superscriptℝ𝑛𝑛subscript𝑑𝑒E\\in\\mathbb{R}^{n\\times n\\times d_{e}}. We use Xisubscript𝑋𝑖X_{i} and Ei​jsubscript𝐸𝑖𝑗E_{ij} to denote the features of node i𝑖i and edge (i,j)𝑖𝑗(i,j), respectively. ", "title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters" }, { "id": "2209.02939_all_12", "text": " As our backbone GNN, we adopt the Directed Message Passing Neural Network (DMPNN) doi:10.1021/acs.jcim.9b00237 which aggregates messages through directed edges. Note that while we chose DMPNN due to its superior performance over GNN architectures, our pooling layer is module-agnostic and can be combined with any GNN as long as node representations are returned as output. Given a graph, DMPNN first initializes the hidden state of each edge (i,j)𝑖𝑗(i,j) based on its feature Ei​jsubscript𝐸𝑖𝑗E_{ij} and the source-node’s feature Xisubscript𝑋𝑖X_{i}. 
At each timestep t𝑡t, each directional edge gathers hidden states from incident edges into a message mi​jt+1superscriptsubscript𝑚𝑖𝑗𝑡1m_{ij}^{t+1} and updates its own hidden state to hi​jt+1superscriptsubscriptℎ𝑖𝑗𝑡1h_{ij}^{t+1} as follows mi​jt+1=∑k∈𝒩​(i)∖jhk​itsuperscriptsubscript𝑚𝑖𝑗𝑡1subscript𝑘𝒩𝑖𝑗superscriptsubscriptℎ𝑘𝑖𝑡\\displaystyle m_{ij}^{t+1}=\\sum_{k\\in\\mathcal{N}(i)\\setminus j}h_{ki}^{t} (1) hi​jt+1=ReLU​(hi​j0+We​mi​jt+1)superscriptsubscriptℎ𝑖𝑗𝑡1ReLUsuperscriptsubscriptℎ𝑖𝑗0subscript𝑊𝑒superscriptsubscript𝑚𝑖𝑗𝑡1\\displaystyle h_{ij}^{t+1}=\\texttt{ReLU}(h_{ij}^{0}+W_{e}m_{ij}^{t+1}) (2) Here, 𝒩​(i)𝒩𝑖\\mathcal{N}(i) denotes the set of neighboring nodes of node i𝑖i and Wesubscript𝑊𝑒W_{e} a learnable weight. The hidden states of nodes are updated by aggregating the hidden states of incident edges into message mit+1superscriptsubscript𝑚𝑖𝑡1m_{i}^{t+1}, and passing its concatenation with the node feature Xisubscript𝑋𝑖X_{i} into a linear layer followed by ReLU non-linearity mit+1=∑j∈𝒩​(i)hi​jtsuperscriptsubscript𝑚𝑖𝑡1subscript𝑗𝒩𝑖superscriptsubscriptℎ𝑖𝑗𝑡\\displaystyle m_{i}^{t+1}=\\sum_{j\\in\\mathcal{N}(i)}h_{ij}^{t} (3) hit+1=ReLU​(Wn​concat​(Xi,mit+1))superscriptsubscriptℎ𝑖𝑡1ReLUsubscript𝑊𝑛concatsubscript𝑋𝑖superscriptsubscript𝑚𝑖𝑡1\\displaystyle h_{i}^{t+1}=\\texttt{ReLU}(W_{n}\\texttt{concat}(X_{i},m_{i}^{t+1})) (4) Similarly, Wnsubscript𝑊𝑛W_{n} denotes a learnable weight. Assuming DMPNN runs for T𝑇T timesteps, we use (Xo​u​t,Eo​u​t)=GNN​(A,X,E)subscript𝑋𝑜𝑢𝑡subscript𝐸𝑜𝑢𝑡GNN𝐴𝑋𝐸(X_{out},E_{out})=\\texttt{GNN}(A,X,E) to denote the output representation matrices containing hidden states of all nodes and edges, respectively (i.e., Xo​u​t,i=hiTsubscript𝑋𝑜𝑢𝑡𝑖superscriptsubscriptℎ𝑖𝑇X_{out,i}=h_{i}^{T} and Eo​u​t,i​j=hi​jTsubscript𝐸𝑜𝑢𝑡𝑖𝑗superscriptsubscriptℎ𝑖𝑗𝑇E_{out,ij}=h_{ij}^{T}). ", "title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters" }, { "id": "2209.02939_all_13", "text": " For graph-level prediction, the node representations after the final GNN layer are typically sum-pooled to obtain a single graph representation hG=∑ihisubscriptℎ𝐺subscript𝑖subscriptℎ𝑖h_{G}=\\sum_{i}h_{i}, which is then passed to a FFN prediction layer. Note that this approach only allows features to propagate locally and is hence unable to learn long-range dependencies and hierarchical structures within graphs. ", "title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters" }, { "id": "2209.02939_all_14", "text": " Our goal is to learn a pooling operator to coarsen the input graph after the GNN in each hierarchical layer. In each hierarchical layer, the GNN constructs node representations and then the pooling layer forms a coarsened graph, which is used as input to the next hierarchical layer. More formally, given the representations from the l𝑙l-th layer as (Xo​u​t(l),Eo​u​t(l))=GNN​(A(l),X(l),E(l))subscriptsuperscript𝑋𝑙𝑜𝑢𝑡subscriptsuperscript𝐸𝑙𝑜𝑢𝑡GNNsuperscript𝐴𝑙superscript𝑋𝑙superscript𝐸𝑙(X^{(l)}_{out},E^{(l)}_{out})=\\texttt{GNN}(A^{(l)},X^{(l)},E^{(l)}), the pooling layer yields an assignment matrix S(l)∈ℝnl×nl+1superscript𝑆𝑙superscriptℝsubscript𝑛𝑙subscript𝑛𝑙1S^{(l)}\\in\\mathbb{R}^{n_{l}\\times n_{l+1}} pooling nlsubscript𝑛𝑙n_{l} nodes into nl+1subscript𝑛𝑙1n_{l+1} clusters. 
Then, the graph G(l)=(A(l),X(l),E(l))superscript𝐺𝑙superscript𝐴𝑙superscript𝑋𝑙superscript𝐸𝑙G^{(l)}=(A^{(l)},X^{(l)},E^{(l)}) is coarsened into G(l+1)=(A(l+1),X(l+1),E(l+1))=(S(l)T​A(l)​S(l),S(l)T​Xo​u​t(l),S(l)T​Eo​u​t(l)​S(l))superscript𝐺𝑙1superscript𝐴𝑙1superscript𝑋𝑙1superscript𝐸𝑙1superscript𝑆superscript𝑙𝑇superscript𝐴𝑙superscript𝑆𝑙superscript𝑆superscript𝑙𝑇subscriptsuperscript𝑋𝑙𝑜𝑢𝑡superscript𝑆superscript𝑙𝑇subscriptsuperscript𝐸𝑙𝑜𝑢𝑡superscript𝑆𝑙G^{(l+1)}=(A^{(l+1)},X^{(l+1)},E^{(l+1)})=(S^{(l)^{T}}A^{(l)}S^{(l)},S^{(l)^{T}}X^{(l)}_{out},S^{(l)^{T}}E^{(l)}_{out}S^{(l)}). This hierarchical process can be utilized iteratively depending on the task at hand. ", "title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters" }, { "id": "2209.02939_all_15", "text": " When looking into the relation between pairs of nodes, the grouping task becomes rather simple. While most previous models focus on classifying each node to a predefined number of clusters, our idea simplifies the task into classifying whether each pair of nodes is in the same group. Thus, setting the number of clusters a priori becomes unnecessary. This classification will go through every pair of combinations of nodes to ensure permutation invariance. Mi​j(l)=Softmax​(Clf​(f​(Xi,Xj)))∀i,j∈# of Nodesformulae-sequencesubscriptsuperscript𝑀𝑙𝑖𝑗SoftmaxClf𝑓subscript𝑋𝑖subscript𝑋𝑗for-all𝑖𝑗# of NodesM^{(l)}_{ij}=\\textrm{Softmax}(\\textrm{Clf}(f(X_{i},X_{j})))\\qquad\\forall\\,\\,i,j\\in\\textrm{\\# of Nodes} (5) where M(l)∈ℝN×Nsuperscript𝑀𝑙superscriptℝ𝑁𝑁M^{(l)}\\in\\mathbb{R}^{N\\times N} and f𝑓f is a commutative function f:X⊕X→Ywhere​X,Y∈ℝN:𝑓formulae-sequence→direct-sum𝑋𝑋𝑌where𝑋𝑌superscriptℝ𝑁f:X\\oplus X\\rightarrow Y\\qquad\\textrm{where}\\,\\,X,Y\\in\\mathbb{R}^{N} (6) that maps two input vectors into one output vector. While there exist many available choices for f𝑓f, we use Euclidean distance between input vectors to simplify the classification task. Each matrix index corresponds to the node number and each element contains probability values for each pair of nodes whether they are in the same group. ", "title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters" }, { "id": "2209.02939_all_16", "text": " As an illustrative example, consider a set of disjoint clusters with no overlapping nodes. In such case, the grouping matrix not only contains 0,1010,1 as its elements, but also can be reformed into a block diagonal form. The number of blocks corresponds to the number of groups after pooling and nodes assigned to the same blocks corresponds to a same group. For instance, if there are three different groups and each group size are k1,k2,k3subscript𝑘1subscript𝑘2subscript𝑘3k_{1},k_{2},k_{3}, M(l)=(1k1×k10001k2×k20001k3×k3)superscript𝑀𝑙matrixsubscript1subscript𝑘1subscript𝑘1000subscript1subscript𝑘2subscript𝑘2000subscript1subscript𝑘3subscript𝑘3M^{(l)}=\\begin{bmatrix}\\framebox{$1_{k_{1}\\times k_{1}}$}&0&0\\\\ 0&\\framebox{$1_{k_{2}\\times k_{2}}$}&0\\\\ 0&0&\\framebox{$1_{k_{3}\\times k_{3}}$}\\\\ \\end{bmatrix} (7) One can easily see that the corresponding pooling operator is as follows S(l)=(1k1×100⋯001k2×10⋯0001k3×1⋯0)superscript𝑆𝑙matrixsubscript1subscript𝑘1100⋯00subscript1subscript𝑘210⋯000subscript1subscript𝑘31⋯0S^{(l)}=\\begin{bmatrix}\\framebox{$1_{k_{1}\\times 1}$}&0&0\\quad&\\cdots&\\quad 0\\\\ 0&\\framebox{$1_{k_{2}\\times 1}$}&0\\quad&\\cdots&\\quad 0\\\\ 0&0&\\framebox{$1_{k_{3}\\times 1}$}\\quad&\\cdots&\\quad 0\\\\ \\end{bmatrix} (8) In general, each element of the grouping matrix (in eq. 
26) is a continuous number within (0,1)01(0,1), which allows soft-clustering with overlapping nodes. For detailed computation, see appendix. ", "title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters" }, { "id": "2209.02939_all_17", "text": " However, the grouping matrix itself has a limited role in pooling operations. Therefore, extracting pooling operators from the grouping matrix is crucial. Our strategy to form a pooling operator is rather simple. It can be acquired by decomposing a grouping matrix into square-root form. There are numerous known methods which can be utilized, yet we will introduce two representative methods in the following subsection. ", "title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters" }, { "id": "2209.02939_all_18", "text": " While the grouping matrix cannot be used for pooling as is, it encodes how similarly each pair of nodes are pooled as it equals the product of the pooling operator with its transpose. The (i,j)𝑖𝑗(i,j)-th entry of the grouping matrix equals ⟨Si(l),Sj(l)⟩=1superscriptsubscript𝑆𝑖𝑙superscriptsubscript𝑆𝑗𝑙1\\langle S_{i}^{(l)},S_{j}^{(l)}\\rangle=1 if the nodes are exactly pooled to the same clusters, ⟨Si(l),Sj(l)⟩=0superscriptsubscript𝑆𝑖𝑙superscriptsubscript𝑆𝑗𝑙0\\langle S_{i}^{(l)},S_{j}^{(l)}\\rangle=0 if they are pooled orthogonally to different clusters. Therefore, if we can decompose the grouping matrix into square-root form, it can be interpreted as a pooling operator for the model. S(l)​S(l)​T=M(l)superscript𝑆𝑙superscript𝑆𝑙𝑇superscript𝑀𝑙S^{(l)}S^{(l)T}=M^{(l)} (9) The pooling operator S∈ℝnl×nl+1𝑆superscriptℝsubscript𝑛𝑙subscript𝑛𝑙1S\\in\\mathbb{R}^{n_{l}\\times n_{l+1}} is a matrix where nl+1≤nlsubscript𝑛𝑙1subscript𝑛𝑙n_{l+1}\\leq n_{l}. Note that by multiplying pooling operator S𝑆S in reverse order, the degree matrix D∈ℝnl+1×nl+1𝐷superscriptℝsubscript𝑛𝑙1subscript𝑛𝑙1D\\in\\mathbb{R}^{n_{l+1}\\times n_{l+1}} of pooling space can be obtained. S(l)​T​S(l)=D(l)superscript𝑆𝑙𝑇superscript𝑆𝑙superscript𝐷𝑙S^{(l)T}S^{(l)}=D^{(l)} (10) From eq. 9, it is obvious that the pooling operator completely reconstructs grouping matrix by interacting pooling indices. Moreover, S𝑆S can be interpreted as a weighted matrix for each node to form appropriate sub-structures. ", "title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters" }, { "id": "2209.02939_all_19", "text": " Eigen decomposition is one of the basic decomposition schemes one can consider. It is widely used to decompose a given matrix into orthonormal basis O∈ℝnl×nl𝑂superscriptℝsubscript𝑛𝑙subscript𝑛𝑙O\\in\\mathbb{R}^{n_{l}\\times n_{l}} and eigen value Λ∈ℝnl×nlΛsuperscriptℝsubscript𝑛𝑙subscript𝑛𝑙\\Lambda\\in\\mathbb{R}^{n_{l}\\times n_{l}}. M(l)=O​Λ​OTsuperscript𝑀𝑙𝑂Λsuperscript𝑂𝑇M^{(l)}=O\\Lambda O^{T} (11) This particular decomposition scheme always works unless the determinant of a given matrix is equal to 0. From eq. 11, one can rearrange RHS of the equation to become a square form of pooling operator if we set nl+1=nlsubscript𝑛𝑙1subscript𝑛𝑙n_{l+1}=n_{l}. M(l)=O​Λ​Λ​OT≡S(l)​S(l)​Tsuperscript𝑀𝑙𝑂ΛΛsuperscript𝑂𝑇superscript𝑆𝑙superscript𝑆𝑙𝑇M^{(l)}=O\\sqrt{\\Lambda}\\sqrt{\\Lambda}O^{T}\\equiv S^{(l)}S^{(l)T} (12) The pooling operator S𝑆S is a square matrix with size of nl×nlsubscript𝑛𝑙subscript𝑛𝑙n_{l}\\times n_{l}, yet the eigen value ΛΛ\\Lambda suppresses useless ranks in the matrix by multiplying 00 to each column of orthonormal basis. 
Also, eigen decomposition works for any matrix with non-zero determinants, and so it performs perfectly fine in real world situations. Furthermore, any symmetric and real matrix are guaranteed to have real eigen values as well as vectors. Therefore, the square-root of the grouping matrix is ensured to be interpreted as a transformation operator forming sub-groups from nodes. These continuous real valued elements have the advantage that nodes can be soft-clustered to sub-groups. In conventional clustering, it is hard to cluster these structures properly. However, since soft clustering is naturally embedded in the algorithm, linker structures can be dealt with ease. ", "title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters" }, { "id": "2209.02939_all_20", "text": " After acquiring the pooling operator, the pooling process becomes obvious. Nodes are in fundamental representation while edge features and adjacency matrix are in adjoint representation. Which leads to the following transformation rules. Xi(l+1)=S(l)​Xi(l)superscriptsubscript𝑋𝑖𝑙1superscript𝑆𝑙superscriptsubscript𝑋𝑖𝑙\\displaystyle X_{i}^{(l+1)}=S^{(l)}X_{i}^{(l)} (13) Ei​j(l+1)=S(l)​Ei​j(l)​S(l)​Tsuperscriptsubscript𝐸𝑖𝑗𝑙1superscript𝑆𝑙superscriptsubscript𝐸𝑖𝑗𝑙superscript𝑆𝑙𝑇\\displaystyle E_{ij}^{(l+1)}=S^{(l)}E_{ij}^{(l)}S^{(l)T} (14) Ai​j(l+1)=S(l)​Ai​j(l)​S(l)​Tsuperscriptsubscript𝐴𝑖𝑗𝑙1superscript𝑆𝑙superscriptsubscript𝐴𝑖𝑗𝑙superscript𝑆𝑙𝑇\\displaystyle A_{ij}^{(l+1)}=S^{(l)}A_{ij}^{(l)}S^{(l)T} (15) If grouping is properly done, 00 (or close to 00) components will appear in the decomposed eigen value matrix. These zero eigenvalues arise naturally and play a role in disregarding group information; those are ineffective towards prediction. However, zero elements in the eigen values causes a major problem in the decomposition process since the matrix might carry a singular determinant. Eigen decomposition is based on an iterative approximation algorithm which includes unbounded terms if any two eigen values are small or close. One can see clearly about this matter in DBLP:journals/corr/IonescuVS15 . (∂l∂A)=U​(KT⊙(UT​∂l∂U)+(∂l∂Λ)diag)​(UT)𝑙𝐴𝑈direct-productsuperscript𝐾𝑇superscript𝑈𝑇𝑙𝑈subscript𝑙Λdiagsuperscript𝑈𝑇\\Big{(}\\frac{\\partial{l}}{\\partial{A}}\\Big{)}=U\\big{(}K^{T}\\odot(U^{T}\\frac{\\partial{l}}{\\partial{U}})+(\\frac{\\partial{l}}{\\partial{\\Lambda}})_{\\textrm{diag}})(U^{T}) (16) Here, ⊙direct-product\\odot denotes element-wise product. Off-diagonal components of K=1/(λi−λj)𝐾1subscript𝜆𝑖subscript𝜆𝑗K=1/(\\lambda_{i}-\\lambda_{j}) causes the problem, since the value blows up to the infinity if any two eigen values are close or very small. However, there are some solutions for this matter by approximating gradient in different ways DBLP:journals/corr/abs-1906-09023 ; 9400752 ; DBLP:journals/corr/abs-2105-02498 . Those methods are developed further to achieve higher speed in the calculation DBLP:journals/corr/abs-2201-08663 . They claim that the method is noticeably faster, over 888 times, than the standard SVD which has the time complexity 𝒪​(n3)𝒪superscript𝑛3\\mathcal{O}(n^{3}). Thus, we utilized this method in our work to stabilize and accelerate the learning process. However, since the algorithm achieves the higher speed by approximating gradients, the error compared to standard SVD grows bigger as the size of the matrix grows. Therefore, this method might not be valid with large sized graph data. 
", "title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters" }, { "id": "2209.02939_all_21", "text": " Another decomposition scheme we are introducing has a rather different approach. Since computing the square root of a given matrix is not an easy task, here we focus on the square of the pooling operator, which is nothing but the grouping matrix itself, and formulate a pooling-like effect by multiplying the grouping matrix. The key idea is to retain pooling depth to one and use a weighted aggregation vector in pooling space as an aggregation basis. The weighted aggregation vector is transformed Euclidean one vector by acting a pooling matrix obtained by decomposing the grouping matrix. 1i(l+1)=S(l)​1i(l)superscriptsubscript1𝑖𝑙1superscript𝑆𝑙superscriptsubscript1𝑖𝑙\\displaystyle 1_{i}^{(l+1)}=S^{(l)}1_{i}^{(l)} (17) The final form of the transformation can be expressed as follows. Xi(l+1)∼M(l)​Xi(l)similar-tosuperscriptsubscript𝑋𝑖𝑙1superscript𝑀𝑙superscriptsubscript𝑋𝑖𝑙\\displaystyle X_{i}^{(l+1)}\\sim M^{(l)}X_{i}^{(l)} (18) Ei​j(l+1)∼M(l)​Ei​j(l)​M(l)similar-tosuperscriptsubscript𝐸𝑖𝑗𝑙1superscript𝑀𝑙superscriptsubscript𝐸𝑖𝑗𝑙superscript𝑀𝑙\\displaystyle E_{ij}^{(l+1)}\\sim M^{(l)}E_{ij}^{(l)}M^{(l)} (19) This pooling scheme is simpler to use and more scalable (with 𝒪​(n2)𝒪superscript𝑛2\\mathcal{O}(n^{2}) cost) than GMPool since the method circumvents SVD computation. Yet there are two mathematical ambiguities. One is that it is only valid for single depth pooling cases. If one tries to perform multiple sequential pooling operations, the pooling operators are no more available to be reduced into the grouping matrix, since two different pooling operators are not Abelian. The other ambiguity is that most activation functions commonly used are not equivariant with pooling operators. However, since many of them are based on element-wise operations with monotonic functions, we can presume that the anomaly are not dominant in most cases. We find that this approach performs comparably to GMPool for small sized molecules where a single pooling depth suffice. ", "title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters" }, { "id": "2209.02939_all_22", "text": " We arrange a total of five datasets to test our algorithms: two are open datasets collected from MoleculeNet Ramsundar-et-al-2019 and Binding DB 10.2174/1386207013330670 ; 10.1093/bioinformatics/18.1.130 ; 10.1002/bip.10076 ; 10.1093/nar/gkl999 ; 10.1093/nar/gkv1072 , three are manually collected and arranged from different literatures including scientific articles and patents. • PLQY includes experimentally measured values of photoluminescence quantum yield (PLQY) for fluorescence molecules. • λm​a​xsubscript𝜆𝑚𝑎𝑥\\lambda_{max} Solvents contains measured λm​a​xsubscript𝜆𝑚𝑎𝑥\\lambda_{max}, wavelength that shows maximum intensity for emission of a fluorescence molecule, under the solvent condition. • λm​a​xsubscript𝜆𝑚𝑎𝑥\\lambda_{max} Films consists of λm​a​xsubscript𝜆𝑚𝑎𝑥\\lambda_{max} values measured after spin coating of fluorescence molecules on films doped with host materials. • pIC50 contains the negative log of the IC50 values for ATP receptor. IC50 implies minimum concentration of certain molecule needed for inhibiting half of activity of the target proteins. The IC50 values are optained from the BindingDB (https://www.bindingdb.org/bind/index.jsp). • Tox21 consists of results of 12 types of toxicity screening tests. We labeled a molecule ‘toxic’ if the molecule failed in any of screening type. 
Data were originated from Tox21 challenge (2014). Since there are molecules without graph structure information in the dataset, we selected 7,83178317,831 molecules that have the graph structure information. For proper evaluation of pooling approaches, each graph in the data must have at least two or more effective groups. However, Tox21 and pIC50 data contains molecules too small to contain multiple groups and thus we drop molecules with less than 20 nodes from the datasets. In addition, we drop molecules with more than 40 nodes from Tox21 and pIC50 datasets to accelerate the whole training process under dense matrix computations: the largest molecule in each respective dataset has 86 and 132 nodes, but the ratio of molecules with size over 40 in the dataset is only 3.4%percent3.43.4\\% and 3.6%percent3.63.6\\%. Especially for pIC50 dataset, the proportion of molecules with less than 20 nodes are 0.4%percent0.40.4\\%. Lastly, the Tox21 task has been simplified to a single classification task by setting a positive label if any of the 12 tasks are positive in the original dataset. Details can be found in Table 1 and appendix section. Every experiments are tested under five-fold settings with uniform sampling and 10% of dedicated test set to secure the results, and single RTX 3090 is used for the experiments. ", "title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters" }, { "id": "2209.02939_all_23", "text": " For empirical evaluation, we compare the performance of GMPool and NGMPool against that of five other pooling approaches. We run all experiments via a pipeline with a fixed DMPNN backbone, while exchanging the pooling layers only. Here we provide brief descriptions of each baselines used: Top-k gao2019 and SAGPool lee2019 retain nodes with the highest scoring based on the projections of node features and self-attention scores, respectively. DiffPool ying2018 uses an additional GNN to learn soft-assignment matrices that mix nodes into clusters. ASAPool ranjan2020asap clusters local subgraphs together through scoring and selection of clusters. MemPool mempool incorporates memory layers that jointly coarsen and transform input node representations. Note that we reimplemented the DMPNN backbone, Top-k pooling, and DiffPool. Implementations of other pooling baselines are borrowed from the pytorch-geometric library. ", "title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters" }, { "id": "2209.02939_all_24", "text": " For the backbone of the model, DMPNN, we use the same hidden size of 200 across all three independent layers: the initial edge features with dimension desubscript𝑑𝑒d_{e} and node features with dimension dnsubscript𝑑𝑛d_{n} are passed through layers of dimension de×200subscript𝑑𝑒200d_{e}\\times 200 and dn×200subscript𝑑𝑛200d_{n}\\times 200, respectively with ReLU activation. The initial node and edge embeddings are determined by features generated in RDKit. The message passing module passes node embeddings through a linear layer with dimension 200×200200200200\\times 200, followed by ReLU activation and 0.150.150.15 dropout layer. For graph representation we use a global average pooling scheme. GMPool and NGMPool construct the grouping matrix via a 200×12001200\\times 1 linear layer and sigmoid activation without any parameters related to cluster numbers or thresholds. We use a batch size of 808080 and Adam optimizer for all model training. 
", "title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters" }, { "id": "2209.02939_all_25", "text": " For baseline pooling methods that require the cluster size as a hyperparameter, we perform grid search across candidates following previous work, and present best results. However, we fix the final pooling size to 10 as the average size of most common 404040 functional groups in bioactive molecules is 4.254.254.25 ertl2020most , indicating that molecules under concern (statistics shown in Table 1) can have up to 101010 clusters. The specific hyperparameter setups used for pooling baselines can be found in appendix. ", "title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters" }, { "id": "2209.02939_all_26", "text": " The grouping matrix starts from randomized initial state and is optimized to gather effective functional groups in the molecules (Figures 2(b) and 2(c)). Furthermore, since our algorithm fully enjoys the soft clustering concept, the result shows continuous weights for each group. This characteristic ensures the model can gather information from distant nodes if necessary. However, sometimes the grouping matrix shows unfamiliar forms, since the effective functional groups should vary due to the downstream task itself. For instance, for some simple tasks such as PLQY prediction, the grouping is rather intuitive as shown in Figure 2(b), yet for complicated tasks like λmaxsubscript𝜆max\\lambda_{\\textrm{max}} prediction, the effective functional groups are also complicated as in Figure 2(c). ", "title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters" }, { "id": "2209.02939_all_27", "text": " We tested various combinations of models and dataset to check the validity of our algorithm. We selected GCN, DMPNN, Top-k, SAGPool, DiffPool, ASAPool and MemPool algorithm to set a benchmark score to compare with. As it is shown in the table 2, majority of the cases, our models outperform conventional methods. However, for some tasks (i.e. λmaxsubscript𝜆max\\lambda_{\\textrm{max}} datasets), our model is gaining only a small margin of the performance. This is caused by the underlying mechanism of the chemical effect. Since some tasks are strongly related to the effective groups of the molecule, yet others are not. In those cases, sub-structures are not intuitive and might appear in very complicated forms, as shown in Figure 2(c). If the grouping becomes complicated, the rank of the pooling matrix should be larger to cover all degrees of freedom for the data. However, conventional models, which shared predefined numbers as universal grouping numbers, force to collect groups and reduce it to the low-rank form, which might not have enough degree of freedom. This will cause information loss or blend which compromises the prediction result. Therefore, one can check that in λmaxsubscript𝜆max\\lambda_{\\textrm{max}} prediction test, conventional pooling algorithms show inferior result than simple massage passing scheme. Yet our model is not designed to reduce the physical rank of the matrix during the pooling process, and there is always enough degree of freedom to carry the information throughout learning. Hence, even for the λmaxsubscript𝜆max\\lambda_{\\textrm{max}} case, our model outperforms the others. Furthermore, for other tasks, it is clear that our model improves performance by 5∼10%​psimilar-to5percent10𝑝5\\sim 10\\%p. 
", "title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters" }, { "id": "2209.02939_all_28", "text": " One crucial hyperparameter to be explored in pooling models is the number of clusters. Even though our model does not require to fix the number of clusters in the first place, one can set the parameter by force. One can easily see in the Figure 3(a) that the number of clusters can be set to the number of nodes without compromising performance of the model. Further, Figure 3(b) shows that our model outperforms Top k algorithms with various cluster numbers and original DMPNN as well. This is one of the powerful features of our model, since the model automatically splits groups and determines the appropriate number of sub-structures for each individual graph. One can also force the number of clusters and share through all graphs in an equal manner; however, it is not effective for the following reasons. In real world data, one can not esteem the exact number of clusters for individual graphs. This might be problematic if one sets the number of clusters less than it requires, the models’ performance will be compromised due to the information loss. Another problem is caused by the mathematical structure of the decomposition scheme. Using SVD method will cause ambiguity since collecting only top k eigen values from the decomposed matrix might not reconstruct the original grouping matrix due to lack of information. It is even worse in the initial stage of the learning as the weight is almost in the random state and the top k eigen values are not precisely representing the appropriate clusters. Thus, as it is depicted in the above figure, it is best to leave the cluster number to be determined automatically by the model itself. ", "title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters" }, { "id": "2209.02939_all_29", "text": " We have introduced a novel pooling architecture with adaptive number of clusters based on a second order pooling operator, namely the grouping matrix. The grouping matrix is based on clustering similarities between every possible pairs of nodes, ensuring permutation invariance. We have shown that our model is valid for chemical property prediction and outperforms conventional methods in real-world datasets. ", "title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters" }, { "id": "2209.02939_all_30", "text": " While our model is useful and effective, there is still room for improvement. First of all, despite leveraging a method to decompose the grouping matrix with stable gradient computations, there exist corner cases with a small eigengap at which the model fails to converge. This event seldom happens (about 0.00018%percent0.000180.00018\\% in our experiments), but can be non-negligible when one needs to learn with a large number of data points. Hence, one future direction would be to impose proper constraints on the loss to avoid such gradient blowup in the grouping matrix. ", "title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters" }, { "id": "2209.02939_all_31", "text": " Another future direction would be to enhance scalability of our methods to improve applicability to large-scale graphs. Since the grouping matrix decomposition step via SVD is the main computational bottleneck of GMPool, incorporating faster decomposition modules such as randomized approximation halko2011finding ; DBLP:journals/corr/abs-1710-02812 methods can lead to faster inference. 
However, this is likely to incur loss in predictive performance, and as the focus of this work lies in allowing variation in the number of clusters in small molecular graphs where scalability is not an issue, we defer improving the scalability to future work. ", "title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters" }, { "id": "2209.02939_all_32", "text": " Lastly, generalizing the second order grouping matrix towards higher-order grouping tensors can allow further expressive power. We have introduced a pairwise structure; yet it is not obliged to be fixed into the pairwise form. If we consider higher-order form of node combinations, i.e. k-form where k<N𝑘𝑁k<N and N𝑁N is total node number, the grouping matrix can be generalized into the higher rank tensor. Based on the tensor-form, the transformation rule can be written as ", "title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters" }, { "id": "2209.02939_all_33", "text": " M~μ1​⋯​μk=Sμ1ν1​⋯​Sμkνk​Mν1​⋯​νksubscript~𝑀subscript𝜇1⋯subscript𝜇𝑘superscriptsubscript𝑆subscript𝜇1subscript𝜈1⋯superscriptsubscript𝑆subscript𝜇𝑘subscript𝜈𝑘subscript𝑀subscript𝜈1⋯subscript𝜈𝑘\\tilde{M}_{\\mu_{1}\\cdots\\mu_{k}}=S_{\\mu_{1}}^{\\phantom{\\mu_{1}}\\nu_{1}}\\cdots S_{\\mu_{k}}^{\\phantom{\\mu_{k}}\\nu_{k}}M_{\\nu_{1}\\cdots\\nu_{k}} (20) ", "title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters" }, { "id": "2209.02939_all_34", "text": " Note that to satisfy the above transformation rule, the following conditions are required. One is that selecting nodes combination should be as same as selecting nodes set and size of the set should fixed into the number of nodes in a group. The other is that the classification result of the node set should be retained the same for any subset in the node set. This concept may have a connection to hypergraph configurations. However if we raise the nodes numbers above 222, required computation power increases by a huge amount, since the combination number grows exponentially until the number of nodes hits N/2𝑁2N/2. Therefore, practically it is a difficult task to test the higher rank version of our algorithm, yet it could be useful for learning datasets with higher order connections. ", "title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters" } ]
Why does YOLO suffer from the shortcomings mentioned by the authors?
The problems from which YOLO model suffer are the localization errors and low recall rate [39]. The aim of this paper is to address these problems [7].
[ 39, 7 ]
[ { "id": "1612.08242_all_0", "text": " General purpose object detection should be fast, accurate, and able to recognize a wide variety of objects. Since the introduction of neural networks, detection frameworks have become increasingly fast and accurate. However, most detection methods are still constrained to a small set of objects. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_1", "text": " Current object detection datasets are limited compared to datasets for other tasks like classification and tagging. The most common detection datasets contain thousands to hundreds of thousands of images with dozens to hundreds of tags . Classification datasets have millions of images with tens or hundreds of thousands of categories . ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_2", "text": " We would like detection to scale to level of object classification. However, labelling images for detection is far more expensive than labelling for classification or tagging (tags are often user-supplied for free). Thus we are unlikely to see detection datasets on the same scale as classification datasets in the near future. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_3", "text": " We propose a new method to harness the large amount of classification data we already have and use it to expand the scope of current detection systems. Our method uses a hierarchical view of object classification that allows us to combine distinct datasets together. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_4", "text": " We also propose a joint training algorithm that allows us to train object detectors on both detection and classification data. Our method leverages labeled detection images to learn to precisely localize objects while it uses classification images to increase its vocabulary and robustness. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_5", "text": " Using this method we train YOLO9000, a real-time object detector that can detect over 9000 different object categories. First we improve upon the base YOLO detection system to produce YOLOv2, a state-of-the-art, real-time detector. Then we use our dataset combination method and joint training algorithm to train a model on more than 9000 classes from ImageNet as well as detection data from COCO. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_6", "text": " All of our code and pre-trained models are available online at http://pjreddie.com/yolo9000/. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_7", "text": " YOLO suffers from a variety of shortcomings relative to state-of-the-art detection systems. Error analysis of YOLO compared to Fast R-CNN shows that YOLO makes a significant number of localization errors. Furthermore, YOLO has relatively low recall compared to region proposal-based methods. Thus we focus mainly on improving recall and localization while maintaining classification accuracy. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_8", "text": " Computer vision generally trends towards larger, deeper networks . Better performance often hinges on training larger networks or ensembling multiple models together. However, with YOLOv2 we want a more accurate detector that is still fast. Instead of scaling up our network, we simplify the network and then make the representation easier to learn. 
We pool a variety of ideas from past work with our own novel concepts to improve YOLO’s performance. A summary of results can be found in Table 2. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_9", "text": " Batch Normalization. Batch normalization leads to significant improvements in convergence while eliminating the need for other forms of regularization . By adding batch normalization on all of the convolutional layers in YOLO we get more than 2% improvement in mAP. Batch normalization also helps regularize the model. With batch normalization we can remove dropout from the model without overfitting. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_10", "text": " High Resolution Classifier. All state-of-the-art detection methods use classifier pre-trained on ImageNet . Starting with AlexNet most classifiers operate on input images smaller than 256×256256256256\\times 256 . The original YOLO trains the classifier network at 224×224224224224\\times 224 and increases the resolution to 448448448 for detection. This means the network has to simultaneously switch to learning object detection and adjust to the new input resolution. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_11", "text": " For YOLOv2 we first fine tune the classification network at the full 448×448448448448\\times 448 resolution for 10 epochs on ImageNet. This gives the network time to adjust its filters to work better on higher resolution input. We then fine tune the resulting network on detection. This high resolution classification network gives us an increase of almost 4% mAP. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_12", "text": " Convolutional With Anchor Boxes. YOLO predicts the coordinates of bounding boxes directly using fully connected layers on top of the convolutional feature extractor. Instead of predicting coordinates directly Faster R-CNN predicts bounding boxes using hand-picked priors . Using only convolutional layers the region proposal network (RPN) in Faster R-CNN predicts offsets and confidences for anchor boxes. Since the prediction layer is convolutional, the RPN predicts these offsets at every location in a feature map. Predicting offsets instead of coordinates simplifies the problem and makes it easier for the network to learn. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_13", "text": " We remove the fully connected layers from YOLO and use anchor boxes to predict bounding boxes. First we eliminate one pooling layer to make the output of the network’s convolutional layers higher resolution. We also shrink the network to operate on 416416416 input images instead of 448×448448448448\\times 448. We do this because we want an odd number of locations in our feature map so there is a single center cell. Objects, especially large objects, tend to occupy the center of the image so it’s good to have a single location right at the center to predict these objects instead of four locations that are all nearby. YOLO’s convolutional layers downsample the image by a factor of 32 so by using an input image of 416416416 we get an output feature map of 13×13131313\\times 13. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_14", "text": " When we move to anchor boxes we also decouple the class prediction mechanism from the spatial location and instead predict class and objectness for every anchor box. 
Following YOLO, the objectness prediction still predicts the IOU of the ground truth and the proposed box and the class predictions predict the conditional probability of that class given that there is an object. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_15", "text": " Using anchor boxes we get a small decrease in accuracy. YOLO only predicts 98 boxes per image but with anchor boxes our model predicts more than a thousand. Without anchor boxes our intermediate model gets 69.569.569.5 mAP with a recall of 81%percent8181\\%. With anchor boxes our model gets 69.269.269.2 mAP with a recall of 88%percent8888\\%. Even though the mAP decreases, the increase in recall means that our model has more room to improve. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_16", "text": " Dimension Clusters. We encounter two issues with anchor boxes when using them with YOLO. The first is that the box dimensions are hand picked. The network can learn to adjust the boxes appropriately but if we pick better priors for the network to start with we can make it easier for the network to learn to predict good detections. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_17", "text": " Instead of choosing priors by hand, we run k-means clustering on the training set bounding boxes to automatically find good priors. If we use standard k-means with Euclidean distance larger boxes generate more error than smaller boxes. However, what we really want are priors that lead to good IOU scores, which is independent of the size of the box. Thus for our distance metric we use: ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_18", "text": " d​(box,centroid)=1−IOU​(box,centroid)𝑑boxcentroid1IOUboxcentroidd(\\text{box},\\text{centroid})=1-\\text{IOU}(\\text{box},\\text{centroid}) ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_19", "text": " We run k-means for various values of k𝑘k and plot the average IOU with closest centroid, see Figure 2. We choose k=5𝑘5k=5 as a good tradeoff between model complexity and high recall. The cluster centroids are significantly different than hand-picked anchor boxes. There are fewer short, wide boxes and more tall, thin boxes. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_20", "text": " We compare the average IOU to closest prior of our clustering strategy and the hand-picked anchor boxes in Table 1. At only 5 priors the centroids perform similarly to 9 anchor boxes with an average IOU of 61.0 compared to 60.9. If we use 9 centroids we see a much higher average IOU. This indicates that using k-means to generate our bounding box starts the model off with a better representation and makes the task easier to learn. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_21", "text": " Direct location prediction. When using anchor boxes with YOLO we encounter a second issue: model instability, especially during early iterations. Most of the instability comes from predicting the (x,y)𝑥𝑦(x,y) locations for the box. 
In region proposal networks the network predicts values txsubscript𝑡𝑥t_{x} and tysubscript𝑡𝑦t_{y} and the (x,y)𝑥𝑦(x,y) center coordinates are calculated as: ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_22", "text": " x𝑥\\displaystyle x =(tx∗wa)−xaabsentsubscript𝑡𝑥subscript𝑤𝑎subscript𝑥𝑎\\displaystyle=(t_{x}*w_{a})-x_{a} y𝑦\\displaystyle y =(ty∗ha)−yaabsentsubscript𝑡𝑦subscriptℎ𝑎subscript𝑦𝑎\\displaystyle=(t_{y}*h_{a})-y_{a} ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_23", "text": " For example, a prediction of tx=1subscript𝑡𝑥1t_{x}=1 would shift the box to the right by the width of the anchor box, a prediction of tx=−1subscript𝑡𝑥1t_{x}=-1 would shift it to the left by the same amount. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_24", "text": " This formulation is unconstrained so any anchor box can end up at any point in the image, regardless of what location predicted the box. With random initialization the model takes a long time to stabilize to predicting sensible offsets. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_25", "text": " Instead of predicting offsets we follow the approach of YOLO and predict location coordinates relative to the location of the grid cell. This bounds the ground truth to fall between 00 and 111. We use a logistic activation to constrain the network’s predictions to fall in this range. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_26", "text": " The network predicts 5 bounding boxes at each cell in the output feature map. The network predicts 5 coordinates for each bounding box, txsubscript𝑡𝑥t_{x}, tysubscript𝑡𝑦t_{y}, twsubscript𝑡𝑤t_{w}, thsubscript𝑡ℎt_{h}, and tosubscript𝑡𝑜t_{o}. If the cell is offset from the top left corner of the image by (cx,cy)subscript𝑐𝑥subscript𝑐𝑦(c_{x},c_{y}) and the bounding box prior has width and height pwsubscript𝑝𝑤p_{w}, phsubscript𝑝ℎp_{h}, then the predictions correspond to: ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_27", "text": " bxsubscript𝑏𝑥\\displaystyle b_{x} =σ​(tx)+cxabsent𝜎subscript𝑡𝑥subscript𝑐𝑥\\displaystyle=\\sigma(t_{x})+c_{x} bysubscript𝑏𝑦\\displaystyle b_{y} =σ​(ty)+cyabsent𝜎subscript𝑡𝑦subscript𝑐𝑦\\displaystyle=\\sigma(t_{y})+c_{y} bwsubscript𝑏𝑤\\displaystyle b_{w} =pw​etwabsentsubscript𝑝𝑤superscript𝑒subscript𝑡𝑤\\displaystyle=p_{w}e^{t_{w}} bhsubscript𝑏ℎ\\displaystyle b_{h} =ph​ethabsentsubscript𝑝ℎsuperscript𝑒subscript𝑡ℎ\\displaystyle=p_{h}e^{t_{h}} P​r​(object)∗I​O​U​(b,object)𝑃𝑟object𝐼𝑂𝑈𝑏object\\displaystyle Pr(\\text{object})*IOU(b,\\text{object}) =σ​(to)absent𝜎subscript𝑡𝑜\\displaystyle=\\sigma(t_{o}) ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_28", "text": " Since we constrain the location prediction the parametrization is easier to learn, making the network more stable. Using dimension clusters along with directly predicting the bounding box center location improves YOLO by almost 5% over the version with anchor boxes. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_29", "text": " Fine-Grained Features.This modified YOLO predicts detections on a 13×13131313\\times 13 feature map. While this is sufficient for large objects, it may benefit from finer grained features for localizing smaller objects. Faster R-CNN and SSD both run their proposal networks at various feature maps in the network to get a range of resolutions. 
We take a different approach, simply adding a passthrough layer that brings features from an earlier layer at 26×26262626\\times 26 resolution. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_30", "text": " The passthrough layer concatenates the higher resolution features with the low resolution features by stacking adjacent features into different channels instead of spatial locations, similar to the identity mappings in ResNet. This turns the 26×26×512262651226\\times 26\\times 512 feature map into a 13×13×20481313204813\\times 13\\times 2048 feature map, which can be concatenated with the original features. Our detector runs on top of this expanded feature map so that it has access to fine grained features. This gives a modest 1% performance increase. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_31", "text": " Multi-Scale Training. The original YOLO uses an input resolution of 448×448448448448\\times 448. With the addition of anchor boxes we changed the resolution to 416×416416416416\\times 416. However, since our model only uses convolutional and pooling layers it can be resized on the fly. We want YOLOv2 to be robust to running on images of different sizes so we train this into the model. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_32", "text": " Instead of fixing the input image size we change the network every few iterations. Every 10 batches our network randomly chooses a new image dimension size. Since our model downsamples by a factor of 32, we pull from the following multiples of 32: {320,352,…,608}320352…608\\{320,352,...,608\\}. Thus the smallest option is 320×320320320320\\times 320 and the largest is 608×608608608608\\times 608. We resize the network to that dimension and continue training. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_33", "text": " This regime forces the network to learn to predict well across a variety of input dimensions. This means the same network can predict detections at different resolutions. The network runs faster at smaller sizes so YOLOv2 offers an easy tradeoff between speed and accuracy. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_34", "text": " At low resolutions YOLOv2 operates as a cheap, fairly accurate detector. At 288×288288288288\\times 288 it runs at more than 90 FPS with mAP almost as good as Fast R-CNN. This makes it ideal for smaller GPUs, high framerate video, or multiple video streams. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_35", "text": " At high resolution YOLOv2 is a state-of-the-art detector with 78.6 mAP on VOC 2007 while still operating above real-time speeds. See Table 3 for a comparison of YOLOv2 with other frameworks on VOC 2007. Figure 4 ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_36", "text": " Further Experiments. We train YOLOv2 for detection on VOC 2012. Table 4 shows the comparative performance of YOLOv2 versus other state-of-the-art detection systems. YOLOv2 achieves 73.4 mAP while running far faster than competing methods. We also train on COCO and compare to other methods in Table 5. On the VOC metric (IOU = .5) YOLOv2 gets 44.0 mAP, comparable to SSD and Faster R-CNN. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_37", "text": " We want detection to be accurate but we also want it to be fast. 
Most applications for detection, like robotics or self-driving cars, rely on low latency predictions. In order to maximize performance we design YOLOv2 to be fast from the ground up. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_38", "text": " Most detection frameworks rely on VGG-16 as the base feature extractor . VGG-16 is a powerful, accurate classification network but it is needlessly complex. The convolutional layers of VGG-16 require 30.69 billion floating point operations for a single pass over a single image at 224×224224224224\\times 224 resolution. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_39", "text": " The YOLO framework uses a custom network based on the Googlenet architecture . This network is faster than VGG-16, only using 8.52 billion operations for a forward pass. However, it’s accuracy is slightly worse than VGG-16. For single-crop, top-5 accuracy at 224×224224224224\\times 224, YOLO’s custom model gets 88.0% ImageNet compared to 90.0% for VGG-16. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_40", "text": " Darknet-19. We propose a new classification model to be used as the base of YOLOv2. Our model builds off of prior work on network design as well as common knowledge in the field. Similar to the VGG models we use mostly 3×3333\\times 3 filters and double the number of channels after every pooling step . Following the work on Network in Network (NIN) we use global average pooling to make predictions as well as 1×1111\\times 1 filters to compress the feature representation between 3×3333\\times 3 convolutions . We use batch normalization to stabilize training, speed up convergence, and regularize the model . ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_41", "text": " Our final model, called Darknet-19, has 19 convolutional layers and 5 maxpooling layers. For a full description see Table 6. Darknet-19 only requires 5.58 billion operations to process an image yet achieves 72.9%percent72.972.9\\% top-1 accuracy and 91.2%percent91.291.2\\% top-5 accuracy on ImageNet. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_42", "text": " Training for classification. We train the network on the standard ImageNet 1000 class classification dataset for 160 epochs using stochastic gradient descent with a starting learning rate of 0.10.10.1, polynomial rate decay with a power of 444, weight decay of 0.00050.00050.0005 and momentum of 0.90.90.9 using the Darknet neural network framework . During training we use standard data augmentation tricks including random crops, rotations, and hue, saturation, and exposure shifts. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_43", "text": " As discussed above, after our initial training on images at 224×224224224224\\times 224 we fine tune our network at a larger size, 448448448. For this fine tuning we train with the above parameters but for only 10 epochs and starting at a learning rate of 10−3superscript10310^{-3}. At this higher resolution our network achieves a top-1 accuracy of 76.5%percent76.576.5\\% and a top-5 accuracy of 93.3%percent93.393.3\\%. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_44", "text": " Training for detection. 
We modify this network for detection by removing the last convolutional layer and instead adding on three 3×3333\\times 3 convolutional layers with 102410241024 filters each followed by a final 1×1111\\times 1 convolutional layer with the number of outputs we need for detection. For VOC we predict 5 boxes with 5 coordinates each and 20 classes per box so 125 filters. We also add a passthrough layer from the final 3×3×512335123\\times 3\\times 512 layer to the second to last convolutional layer so that our model can use fine grain features. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_45", "text": " We train the network for 160 epochs with a starting learning rate of 10−3superscript10310^{-3}, dividing it by 10 at 60 and 90 epochs. We use a weight decay of 0.00050.00050.0005 and momentum of 0.90.90.9. We use a similar data augmentation to YOLO and SSD with random crops, color shifting, etc. We use the same training strategy on COCO and VOC. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_46", "text": " We propose a mechanism for jointly training on classification and detection data. Our method uses images labelled for detection to learn detection-specific information like bounding box coordinate prediction and objectness as well as how to classify common objects. It uses images with only class labels to expand the number of categories it can detect. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_47", "text": " During training we mix images from both detection and classification datasets. When our network sees an image labelled for detection we can backpropagate based on the full YOLOv2 loss function. When it sees a classification image we only backpropagate loss from the classification-specific parts of the architecture. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_48", "text": " This approach presents a few challenges. Detection datasets have only common objects and general labels, like “dog” or “boat”. Classification datasets have a much wider and deeper range of labels. ImageNet has more than a hundred breeds of dog, including “Norfolk terrier”, “Yorkshire terrier”, and “Bedlington terrier”. If we want to train on both datasets we need a coherent way to merge these labels. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_49", "text": " Most approaches to classification use a softmax layer across all the possible categories to compute the final probability distribution. Using a softmax assumes the classes are mutually exclusive. This presents problems for combining datasets, for example you would not want to combine ImageNet and COCO using this model because the classes “Norfolk terrier” and “dog” are not mutually exclusive. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_50", "text": " We could instead use a multi-label model to combine the datasets which does not assume mutual exclusion. This approach ignores all the structure we do know about the data, for example that all of the COCO classes are mutually exclusive. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_51", "text": " Hierarchical classification. ImageNet labels are pulled from WordNet, a language database that structures concepts and how they relate . In WordNet, “Norfolk terrier” and “Yorkshire terrier” are both hyponyms of “terrier” which is a type of “hunting dog”, which is a type of “dog”, which is a “canine”, etc. 
Most approaches to classification assume a flat structure to the labels however for combining datasets, structure is exactly what we need. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_52", "text": " WordNet is structured as a directed graph, not a tree, because language is complex. For example a “dog” is both a type of “canine” and a type of “domestic animal” which are both synsets in WordNet. Instead of using the full graph structure, we simplify the problem by building a hierarchical tree from the concepts in ImageNet. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_53", "text": " To build this tree we examine the visual nouns in ImageNet and look at their paths through the WordNet graph to the root node, in this case “physical object”. Many synsets only have one path through the graph so first we add all of those paths to our tree. Then we iteratively examine the concepts we have left and add the paths that grow the tree by as little as possible. So if a concept has two paths to the root and one path would add three edges to our tree and the other would only add one edge, we choose the shorter path. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_54", "text": " The final result is WordTree, a hierarchical model of visual concepts. To perform classification with WordTree we predict conditional probabilities at every node for the probability of each hyponym of that synset given that synset. For example, at the “terrier” node we predict: ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_55", "text": " Pr(Norfolk terrier\\displaystyle Pr(\\text{Norfolk terrier} |terrier)\\displaystyle|\\text{terrier}) Pr(Yorkshire terrier\\displaystyle Pr(\\text{Yorkshire terrier} |terrier)\\displaystyle|\\text{terrier}) Pr(Bedlington terrier\\displaystyle Pr(\\text{Bedlington terrier} |terrier)\\displaystyle|\\text{terrier}) ……\\displaystyle... ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_56", "text": " If we want to compute the absolute probability for a particular node we simply follow the path through the tree to the root node and multiply to conditional probabilities. So if we want to know if a picture is of a Norfolk terrier we compute: ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_57", "text": " P​r​(Norfolk terrier)𝑃𝑟Norfolk terrier\\displaystyle Pr(\\text{Norfolk terrier}) =P​r​(Norfolk terrier|terrier)absent𝑃𝑟conditionalNorfolk terrierterrier\\displaystyle=Pr(\\text{Norfolk terrier}|\\text{terrier}) ∗Pr(terrier\\displaystyle*Pr(\\text{terrier} |hunting dog)\\displaystyle|\\text{hunting dog}) ∗…absent…\\displaystyle*\\ldots ∗\\displaystyle* ∗Pr(mammal\\displaystyle*Pr(\\text{mammal} |Pr(animal)\\displaystyle|Pr(\\text{animal}) ∗Pr(animal\\displaystyle*Pr(\\text{animal} |physical object)\\displaystyle|\\text{physical object}) ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_58", "text": " For classification purposes we assume that the the image contains an object: P​r​(physical object)=1𝑃𝑟physical object1Pr(\\text{physical object})=1. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_59", "text": " To validate this approach we train the Darknet-19 model on WordTree built using the 1000 class ImageNet. To build WordTree1k we add in all of the intermediate nodes which expands the label space from 1000 to 1369. 
During training we propagate ground truth labels up the tree so that if an image is labelled as a “Norfolk terrier” it also gets labelled as a “dog” and a “mammal”, etc. To compute the conditional probabilities our model predicts a vector of 1369 values and we compute the softmax over all sysnsets that are hyponyms of the same concept, see Figure 5. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_60", "text": " Using the same training parameters as before, our hierarchical Darknet-19 achieves 71.9%percent71.971.9\\% top-1 accuracy and 90.4%percent90.490.4\\% top-5 accuracy. Despite adding 369 additional concepts and having our network predict a tree structure our accuracy only drops marginally. Performing classification in this manner also has some benefits. Performance degrades gracefully on new or unknown object categories. For example, if the network sees a picture of a dog but is uncertain what type of dog it is, it will still predict “dog” with high confidence but have lower confidences spread out among the hyponyms. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_61", "text": " This formulation also works for detection. Now, instead of assuming every image has an object, we use YOLOv2’s objectness predictor to give us the value of P​r​(physical object)𝑃𝑟physical objectPr(\\text{physical object}). The detector predicts a bounding box and the tree of probabilities. We traverse the tree down, taking the highest confidence path at every split until we reach some threshold and we predict that object class. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_62", "text": " Dataset combination with WordTree. We can use WordTree to combine multiple datasets together in a sensible fashion. We simply map the categories in the datasets to synsets in the tree. Figure 6 shows an example of using WordTree to combine the labels from ImageNet and COCO. WordNet is extremely diverse so we can use this technique with most datasets. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_63", "text": " Joint classification and detection. Now that we can combine datasets using WordTree we can train our joint model on classification and detection. We want to train an extremely large scale detector so we create our combined dataset using the COCO detection dataset and the top 9000 classes from the full ImageNet release. We also need to evaluate our method so we add in any classes from the ImageNet detection challenge that were not already included. The corresponding WordTree for this dataset has 9418 classes. ImageNet is a much larger dataset so we balance the dataset by oversampling COCO so that ImageNet is only larger by a factor of 4:1. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_64", "text": " Using this dataset we train YOLO9000. We use the base YOLOv2 architecture but only 3 priors instead of 5 to limit the output size. When our network sees a detection image we backpropagate loss as normal. For classification loss, we only backpropagate loss at or above the corresponding level of the label. For example, if the label is “dog” we do assign any error to predictions further down in the tree, “German Shepherd” versus “Golden Retriever”, because we do not have that information. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_65", "text": " When it sees a classification image we only backpropagate classification loss. 
To do this we simply find the bounding box that predicts the highest probability for that class and we compute the loss on just its predicted tree. We also assume that the predicted box overlaps what would be the ground truth label by at least .3.3.3 IOU and we backpropagate objectness loss based on this assumption. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_66", "text": " Using this joint training, YOLO9000 learns to find objects in images using the detection data in COCO and it learns to classify a wide variety of these objects using data from ImageNet. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_67", "text": " We evaluate YOLO9000 on the ImageNet detection task. The detection task for ImageNet shares on 44 object categories with COCO which means that YOLO9000 has only seen classification data for the majority of the test images, not detection data. YOLO9000 gets 19.7 mAP overall with 16.0 mAP on the disjoint 156 object classes that it has never seen any labelled detection data for. This mAP is higher than results achieved by DPM but YOLO9000 is trained on different datasets with only partial supervision . It also is simultaneously detecting 9000 other object categories, all in real-time. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_68", "text": " When we analyze YOLO9000’s performance on ImageNet we see it learns new species of animals well but struggles with learning categories like clothing and equipment. New animals are easier to learn because the objectness predictions generalize well from the animals in COCO. Conversely, COCO does not have bounding box label for any type of clothing, only for person, so YOLO9000 struggles to model categories like “sunglasses” or “swimming trunks”. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_69", "text": " We introduce YOLOv2 and YOLO9000, real-time detection systems. YOLOv2 is state-of-the-art and faster than other detection systems across a variety of detection datasets. Furthermore, it can be run at a variety of image sizes to provide a smooth tradeoff between speed and accuracy. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_70", "text": " YOLO9000 is a real-time framework for detection more than 9000 object categories by jointly optimizing detection and classification. We use WordTree to combine data from various sources and our joint optimization technique to train simultaneously on ImageNet and COCO. YOLO9000 is a strong step towards closing the dataset size gap between detection and classification. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_71", "text": " Many of our techniques generalize outside of object detection. Our WordTree representation of ImageNet offers a richer, more detailed output space for image classification. Dataset combination using hierarchical classification would be useful in the classification and segmentation domains. Training techniques like multi-scale training could provide benefit across a variety of visual tasks. ", "title": "YOLO9000: Better, Faster, Stronger" }, { "id": "1612.08242_all_72", "text": " For future work we hope to use similar techniques for weakly supervised image segmentation. We also plan to improve our detection results using more powerful matching strategies for assigning weak labels to classification data during training. Computer vision is blessed with an enormous amount of labelled data. 
We will continue looking for ways to bring different sources and structures of data together to make stronger models of the visual world. ", "title": "YOLO9000: Better, Faster, Stronger" } ]
According to Figure 2-(a), ‘May’ is far from the other months in the visualized word embedding space. Why does this happen?
Because "May" has several different meanings in English, "May" is far from other months [32].
[ 32 ]
[ { "id": "1611.01603_all_0", "text": " The tasks of machine comprehension (MC) and question answering (QA) have gained significant popularity over the past few years within the natural language processing and computer vision communities. Systems trained end-to-end now achieve promising results on a variety of tasks in the text and image domains. One of the key factors to the advancement has been the use of neural attention mechanism, which enables the system to focus on a targeted area within a context paragraph (for MC) or within an image (for Visual QA), that is most relevant to answer the question (Weston et al., 2015; Antol et al., 2015; Xiong et al., 2016a). Attention mechanisms in previous works typically have one or more of the following characteristics. First, the computed attention weights are often used to extract the most relevant information from the context for answering the question by summarizing the context into a fixed-size vector. Second, in the text domain, they are often temporally dynamic, whereby the attention weights at the current time step are a function of the attended vector at the previous time step. Third, they are usually uni-directional, wherein the query attends on the context paragraph or the image. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_1", "text": " In this paper, we introduce the Bi-Directional Attention Flow  (BiDAF) network, a hierarchical multi-stage architecture for modeling the representations of the context paragraph at different levels of granularity (Figure 1). BiDAF includes character-level, word-level, and contextual embeddings, and uses bi-directional attention flow to obtain a query-aware context representation. Our attention mechanism offers following improvements to the previously popular attention paradigms. First, our attention layer is not used to summarize the context paragraph into a fixed-size vector. Instead, the attention is computed for every time step, and the attended vector at each time step, along with the representations from previous layers, is allowed to flow through to the subsequent modeling layer. This reduces the information loss caused by early summarization. Second, we use a memory-less attention mechanism. That is, while we iteratively compute attention through time as in Bahdanau et al. (2015), the attention at each time step is a function of only the query and the context paragraph at the current time step and does not directly depend on the attention at the previous time step. We hypothesize that this simplification leads to the division of labor between the attention layer and the modeling layer. It forces the attention layer to focus on learning the attention between the query and the context, and enables the modeling layer to focus on learning the interaction within the query-aware context representation (the output of the attention layer). It also allows the attention at each time step to be unaffected from incorrect attendances at previous time steps. Our experiments show that memory-less attention gives a clear advantage over dynamic attention. Third, we use attention mechanisms in both directions, query-to-context and context-to-query, which provide complimentary information to each other. 
", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_2", "text": " Our BiDAF model111Our code and interactive demo are available at: allenai.github.io/bi-att-flow/ outperforms all previous approaches on the highly-competitive Stanford Question Answering Dataset (SQuAD) test set leaderboard at the time of submission. With a modification to only the output layer, BiDAF achieves the state-of-the-art results on the CNN/DailyMail cloze test. We also provide an in-depth ablation study of our model on the SQuAD development set, visualize the intermediate feature spaces in our model, and analyse its performance as compared to a more traditional language model for machine comprehension (Rajpurkar et al., 2016). ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_3", "text": " Our machine comprehension model is a hierarchical multi-stage process and consists of six layers (Figure 1): 1. Character Embedding Layer maps each word to a vector space using character-level CNNs. 2. Word Embedding Layer maps each word to a vector space using a pre-trained word embedding model. 3. Contextual Embedding Layer utilizes contextual cues from surrounding words to refine the embedding of the words. These first three layers are applied to both the query and context. 4. Attention Flow Layer couples the query and context vectors and produces a set of query-aware feature vectors for each word in the context. 5. Modeling Layer employs a Recurrent Neural Network to scan the context. 6. Output Layer provides an answer to the query. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_4", "text": " Character embedding layer is responsible for mapping each word to a high-dimensional vector space. Let {𝒙1,…​𝒙T}subscript𝒙1…subscript𝒙𝑇\\{\\bm{x}_{1},\\dots\\bm{x}_{T}\\} and {𝒒1,…​𝒒J}subscript𝒒1…subscript𝒒𝐽\\{\\bm{q}_{1},\\dots\\bm{q}_{J}\\} represent the words in the input context paragraph and query, respectively. Following Kim (2014), we obtain the character-level embedding of each word using Convolutional Neural Networks (CNN). Characters are embedded into vectors, which can be considered as 1D inputs to the CNN, and whose size is the input channel size of the CNN. The outputs of the CNN are max-pooled over the entire width to obtain a fixed-size vector for each word. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_5", "text": " Word embedding layer also maps each word to a high-dimensional vector space. We use pre-trained word vectors, GloVe (Pennington et al., 2014), to obtain the fixed word embedding of each word. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_6", "text": " The concatenation of the character and word embedding vectors is passed to a two-layer Highway Network (Srivastava et al., 2015). The outputs of the Highway Network are two sequences of d𝑑d-dimensional vectors, or more conveniently, two matrices: 𝐗∈ℝd×T𝐗superscriptℝ𝑑𝑇{\\bf X}\\in\\mathbb{R}^{d\\times T} for the context and 𝐐∈ℝd×J𝐐superscriptℝ𝑑𝐽{\\bf Q}\\in\\mathbb{R}^{d\\times J} for the query. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_7", "text": " We use a Long Short-Term Memory Network (LSTM) (Hochreiter & Schmidhuber, 1997) on top of the embeddings provided by the previous layers to model the temporal interactions between words. 
We place an LSTM in both directions, and concatenate the outputs of the two LSTMs. Hence we obtain 𝐇∈ℝ2​d×T𝐇superscriptℝ2𝑑𝑇{\\bf H}\\in\\mathbb{R}^{2d\\times T} from the context word vectors 𝐗𝐗{\\bf X}, and 𝐔∈ℝ2​d×J𝐔superscriptℝ2𝑑𝐽{\\bf U}\\in\\mathbb{R}^{2d\\times J} from query word vectors 𝐐𝐐{\\bf Q}. Note that each column vector of 𝐇𝐇{\\bf H} and 𝐔𝐔{\\bf U} is 2​d2𝑑2d-dimensional because of the concatenation of the outputs of the forward and backward LSTMs, each with d𝑑d-dimensional output. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_8", "text": " It is worth noting that the first three layers of the model are computing features from the query and context at different levels of granularity, akin to the multi-stage feature computation of convolutional neural networks in the computer vision field. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_9", "text": " Attention flow layer is responsible for linking and fusing information from the context and the query words. Unlike previously popular attention mechanisms (Weston et al., 2015; Hill et al., 2016; Sordoni et al., 2016; Shen et al., 2016), the attention flow layer is not used to summarize the query and context into single feature vectors. Instead, the attention vector at each time step, along with the embeddings from previous layers, are allowed to flow through to the subsequent modeling layer. This reduces the information loss caused by early summarization. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_10", "text": " The inputs to the layer are contextual vector representations of the context 𝐇𝐇{\\bf H} and the query 𝐔𝐔{\\bf U}. The outputs of the layer are the query-aware vector representations of the context words, 𝐆𝐆{\\bf G}, along with the contextual embeddings from the previous layer. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_11", "text": " In this layer, we compute attentions in two directions: from context to query as well as from query to context. Both of these attentions, which will be discussed below, are derived from a shared similarity matrix, 𝐒∈ℝT×J𝐒superscriptℝ𝑇𝐽{\\bf S}\\in\\mathbb{R}^{T\\times J}, between the contextual embeddings of the context (𝐇𝐇{\\bf H}) and the query (𝐔𝐔{\\bf U}), where 𝐒t​jsubscript𝐒𝑡𝑗{\\bf S}_{tj} indicates the similarity between t𝑡t-th context word and j𝑗j-th query word. The similarity matrix is computed by 𝐒t​j=α​(𝐇:t,𝐔:j)∈ℝsubscript𝐒𝑡𝑗𝛼subscript𝐇:absent𝑡subscript𝐔:absent𝑗ℝ{\\bf S}_{tj}=\\alpha({\\bf H}_{:t},{\\bf U}_{:j})\\in\\mathbb{R} (1) where α𝛼\\alpha is a trainable scalar function that encodes the similarity between its two input vectors, 𝐇:tsubscript𝐇:absent𝑡{\\bf H}_{:t} is t𝑡t-th column vector of 𝐇𝐇{\\bf H}, and 𝐔:jsubscript𝐔:absent𝑗{\\bf U}_{:j} is j𝑗j-th column vector of 𝐔𝐔{\\bf U}, We choose α​(𝐡,𝐮)=𝐰(𝐒)⊤​(𝐡;𝐮;𝐡∘𝐮)𝛼𝐡𝐮subscriptsuperscript𝐰top𝐒𝐡𝐮𝐡𝐮\\alpha({\\bf h},{\\bf u})={\\bf w}^{\\top}_{({\\bf S})}({\\bf h};{\\bf u};{\\bf h}\\circ{\\bf u}), where 𝐰(𝐒)∈ℝ6​dsubscript𝐰𝐒superscriptℝ6𝑑{\\bf w}_{({\\bf S})}\\in\\mathbb{R}^{6d} is a trainable weight vector, ∘\\circ is elementwise multiplication, (;)(;) is vector concatenation across row, and implicit multiplication is matrix multiplication. Now we use 𝐒𝐒{\\bf S} to obtain the attentions and the attended vectors in both directions. 
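As a reading aid, here is a rough numpy sketch of the trainable similarity alpha(h, u) = w^T [h; u; h o u] that fills the T x J matrix S described above; the shapes and variable names are assumptions, not the released implementation.

```python
import numpy as np

def similarity_matrix(H, U, w):
    """H: (2d, T) context, U: (2d, J) query, w: (6d,) trainable weight vector."""
    T, J = H.shape[1], U.shape[1]
    S = np.empty((T, J))
    for t in range(T):
        for j in range(J):
            h, u = H[:, t], U[:, j]
            S[t, j] = w @ np.concatenate([h, u, h * u])  # alpha(h, u)
    return S

d = 4
H = np.random.randn(2 * d, 5)   # T = 5 context words
U = np.random.randn(2 * d, 3)   # J = 3 query words
w = np.random.randn(6 * d)
print(similarity_matrix(H, U, w).shape)  # (5, 3)
```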
", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_12", "text": " Context-to-query Attention. Context-to-query (C2Q) attention signifies which query words are most relevant to each context word. Let 𝐚t∈ℝJsubscript𝐚𝑡superscriptℝ𝐽{\\bf a}_{t}\\in\\mathbb{R}^{J} represent the attention weights on the query words by t𝑡t-th context word, ∑𝐚t​j=1subscript𝐚𝑡𝑗1\\sum{\\bf a}_{tj}=1 for all t𝑡t. The attention weight is computed by 𝐚t=softmax​(𝐒t:)∈ℝJsubscript𝐚𝑡softmaxsubscript𝐒:𝑡absentsuperscriptℝ𝐽{\\bf a}_{t}=\\mathrm{softmax}({\\bf S}_{t:})\\in\\mathbb{R}^{J}, and subsequently each attended query vector is 𝐔~:t=∑j𝐚t​j​𝐔:jsubscript~𝐔:absent𝑡subscript𝑗subscript𝐚𝑡𝑗subscript𝐔:absent𝑗\\tilde{{\\bf U}}_{:t}=\\sum_{j}{\\bf a}_{tj}{\\bf U}_{:j}. Hence 𝐔~~𝐔\\tilde{{\\bf U}} is a 2​d2𝑑2d-by-T𝑇T matrix containing the attended query vectors for the entire context. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_13", "text": " Query-to-context Attention. Query-to-context (Q2C) attention signifies which context words have the closest similarity to one of the query words and are hence critical for answering the query. We obtain the attention weights on the context words by 𝐛=softmax​(maxc​o​l⁡(𝐒))∈ℝT𝐛softmaxsubscript𝑐𝑜𝑙𝐒superscriptℝ𝑇{\\bf b}=\\mathrm{softmax}(\\max_{col}({\\bf S}))\\in\\mathbb{R}^{T}, where the maximum function (maxc​o​lsubscript𝑐𝑜𝑙\\max_{col}) is performed across the column. Then the attended context vector is 𝐡~=∑t𝐛t​𝐇:t∈ℝ2​d~𝐡subscript𝑡subscript𝐛𝑡subscript𝐇:absent𝑡superscriptℝ2𝑑\\tilde{\\bf h}=\\sum_{t}{\\bf b}_{t}{\\bf H}_{:t}\\in\\mathbb{R}^{2d}. This vector indicates the weighted sum of the most important words in the context with respect to the query. 𝐡~~𝐡\\tilde{\\bf h} is tiled T𝑇T times across the column, thus giving 𝐇~∈ℝ2​d×T~𝐇superscriptℝ2𝑑𝑇\\tilde{\\bf H}\\in\\mathbb{R}^{2d\\times T}. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_14", "text": " Finally, the contextual embeddings and the attention vectors are combined together to yield 𝐆𝐆{\\bf G}, where each column vector can be considered as the query-aware representation of each context word. We define 𝐆𝐆{\\bf G} by 𝐆:t=𝜷​(𝐇:t,𝐔~:t,𝐇~:t)∈ℝd𝐆subscript𝐆:absent𝑡𝜷subscript𝐇:absent𝑡subscript~𝐔:absent𝑡subscript~𝐇:absent𝑡superscriptℝsubscript𝑑𝐆{\\bf G}_{:t}={\\bm{\\beta}}({\\bf H}_{:t},\\tilde{\\bf U}_{:t},\\tilde{\\bf H}_{:t})\\in\\mathbb{R}^{d_{\\bf G}} (2) where 𝐆:tsubscript𝐆:absent𝑡{\\bf G}_{:t} is the t𝑡t-th column vector (corresponding to t𝑡t-th context word), 𝜷𝜷{\\bm{\\beta}} is a trainable vector function that fuses its (three) input vectors, and d𝐆subscript𝑑𝐆d_{\\bf G} is the output dimension of the 𝜷𝜷{\\bm{\\beta}} function. While the 𝜷𝜷{\\bm{\\beta}} function can be an arbitrary trainable neural network, such as multi-layer perceptron, a simple concatenation as following still shows good performance in our experiments: 𝜷​(𝐡,𝐮~,𝐡~)=(𝐡;𝐮~;𝐡∘𝐮~;𝐡∘𝐡~)∈ℝ8​d×T𝜷𝐡~𝐮~𝐡𝐡~𝐮𝐡~𝐮𝐡~𝐡superscriptℝ8𝑑𝑇{\\bm{\\beta}}({\\bf h},\\tilde{\\bf u},\\tilde{\\bf h})=({\\bf h};\\tilde{\\bf u};{\\bf h}\\circ\\tilde{\\bf u};{\\bf h}\\circ\\tilde{\\bf h})\\in\\mathbb{R}^{8d\\times T} (i.e., d𝐆=8​dsubscript𝑑𝐆8𝑑d_{\\bf G}=8d). ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_15", "text": " The input to the modeling layer is 𝐆𝐆{\\bf G}, which encodes the query-aware representations of context words. 
The output of the modeling layer captures the interaction among the context words conditioned on the query. This is different from the contextual embedding layer, which captures the interaction among context words independent of the query. We use two layers of bi-directional LSTM, with the output size of d𝑑d for each direction. Hence we obtain a matrix 𝐌∈ℝ2​d×T𝐌superscriptℝ2𝑑𝑇{\\bf M}\\in\\mathbb{R}^{2d\\times T}, which is passed onto the output layer to predict the answer. Each column vector of 𝐌𝐌{\\bf M} is expected to contain contextual information about the word with respect to the entire context paragraph and the query. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_16", "text": " The output layer is application-specific. The modular nature of BiDAF allows us to easily swap out the output layer based on the task, with the rest of the architecture remaining exactly the same. Here, we describe the output layer for the QA task. In section 5, we use a slight modification of this output layer for cloze-style comprehension. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_17", "text": " The QA task requires the model to find a sub-phrase of the paragraph to answer the query. The phrase is derived by predicting the start and the end indices of the phrase in the paragraph. We obtain the probability distribution of the start index over the entire paragraph by 𝐩1=softmax​(𝐰(𝐩1)⊤​(𝐆;𝐌)),superscript𝐩1softmaxsuperscriptsubscript𝐰superscript𝐩1top𝐆𝐌{\\bf p}^{1}=\\mathrm{softmax}({\\bf w}_{({\\bf p}^{1})}^{\\top}({\\bf G};{\\bf M})), (3) where 𝐰(𝐩1)∈ℝ10​dsubscript𝐰superscript𝐩1superscriptℝ10𝑑{\\bf w}_{({\\bf p}^{1})}\\in\\mathbb{R}^{10d} is a trainable weight vector. For the end index of the answer phrase, we pass 𝐌𝐌{\\bf M} to another bidirectional LSTM layer and obtain 𝐌2∈ℝ2​d×Tsuperscript𝐌2superscriptℝ2𝑑𝑇{\\bf M}^{2}\\in\\mathbb{R}^{2d\\times T}. Then we use 𝐌2superscript𝐌2{\\bf M}^{2} to obtain the probability distribution of the end index in a similar manner: 𝐩2=softmax​(𝐰(𝐩2)⊤​(𝐆;𝐌2))superscript𝐩2softmaxsuperscriptsubscript𝐰superscript𝐩2top𝐆superscript𝐌2{\\bf p}^{2}=\\mathrm{softmax}({\\bf w}_{({\\bf p}^{2})}^{\\top}({\\bf G};{\\bf M}^{2})) (4) ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_18", "text": " Training. We define the training loss (to be minimized) as the sum of the negative log probabilities of the true start and end indices by the predicted distributions, averaged over all examples: L​(θ)=−1N​∑iNlog⁡(𝐩yi11)+log⁡(𝐩yi22)𝐿𝜃1𝑁subscriptsuperscript𝑁𝑖subscriptsuperscript𝐩1subscriptsuperscript𝑦1𝑖subscriptsuperscript𝐩2subscriptsuperscript𝑦2𝑖L(\\theta)=-\\frac{1}{N}\\sum^{N}_{i}\\log({\\bf p}^{1}_{y^{1}_{i}})+\\log({\\bf p}^{2}_{y^{2}_{i}}) (5) where θ𝜃\\theta is the set of all trainable weights in the model (the weights and biases of CNN filters and LSTM cells, 𝐰(𝐒)subscript𝐰𝐒{\\bf w}_{({\\bf S})}, 𝐰(𝐩1)subscript𝐰superscript𝐩1{\\bf w}_{({\\bf p}^{1})} and 𝐰(𝐩2)subscript𝐰superscript𝐩2{\\bf w}_{({\\bf p}^{2})}), N𝑁N is the number of examples in the dataset, yi1subscriptsuperscript𝑦1𝑖y^{1}_{i} and yi2subscriptsuperscript𝑦2𝑖y^{2}_{i} are the true start and end indices of the i𝑖i-th example, respectively, and 𝐩ksubscript𝐩𝑘{\\bf p}_{k} indicates the k𝑘k-th value of the vector 𝐩𝐩{\\bf p}. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_19", "text": " Test. 
The answer span (k,l)𝑘𝑙(k,l) where k≤l𝑘𝑙k\\leq l with the maximum value of 𝐩k1​𝐩l2subscriptsuperscript𝐩1𝑘subscriptsuperscript𝐩2𝑙{\\bf p}^{1}_{k}{\\bf p}^{2}_{l} is chosen, which can be computed in linear time with dynamic programming. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_20", "text": " A significant contributor to the advancement of MC models has been the availability of large datasets. Early datasets such as MCTest (Richardson et al., 2013) were too small to train end-to-end neural models. Massive cloze test datasets (CNN/DailyMail by Hermann et al. (2015) and Childrens Book Test by Hill et al. (2016)), enabled the application of deep neural architectures to this task. More recently, Rajpurkar et al. (2016) released the Stanford Question Answering (SQuAD) dataset with over 100,000 questions. We evaluate the performance of our comprehension system on both SQuAD and CNN/DailyMail datasets. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_21", "text": " Previous works in end-to-end machine comprehension use attention mechanisms in three distinct ways. The first group (largely inspired by Bahdanau et al. (2015)) uses a dynamic attention mechanism, in which the attention weights are updated dynamically given the query and the context as well as the previous attention. Hermann et al. (2015) argue that the dynamic attention model performs better than using a single fixed query vector to attend on context words on CNN & DailyMail datasets. Chen et al. (2016) show that simply using bilinear term for computing the attention weights in the same model drastically improves the accuracy. Wang & Jiang (2016) reverse the direction of the attention (attending on query words as the context RNN progresses) for SQuAD. In contrast to these models, BiDAF uses a memory-less attention mechanism. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_22", "text": " The second group computes the attention weights once, which are then fed into an output layer for final prediction (e.g., Kadlec et al. (2016)). Attention-over-attention model (Cui et al., 2016) uses a 2D similarity matrix between the query and context words (similar to Equation 1) to compute the weighted average of query-to-context attention. In contrast to these models, BiDAF does not summarize the two modalities in the attention layer and instead lets the attention vectors flow into the modeling (RNN) layer. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_23", "text": " The third group (considered as variants of Memory Network (Weston et al., 2015)) repeats computing an attention vector between the query and the context through multiple layers, typically referred to as multi-hop (Sordoni et al., 2016; Dhingra et al., 2016). Shen et al. (2016) combine Memory Networks with Reinforcement Learning in order to dynamically control the number of hops. One can also extend our BiDAF model to incorporate multiple hops. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_24", "text": " The task of question answering has also gained a lot of interest in the computer vision community. Early works on visual question answering (VQA) involved encoding the question using an RNN, encoding the image using a CNN and combining them to answer the question (Antol et al., 2015; Malinowski et al., 2015). 
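The linear-time span selection mentioned earlier in this passage only needs the running maximum of p1 over k <= l while scanning the end index l once. A small sketch with hypothetical variable names:

```python
import numpy as np

def best_span(p1, p2):
    """Return (start, end, score) maximising p1[k] * p2[l] subject to k <= l."""
    best_k, best = 0, (0, 0, -1.0)
    for l in range(len(p2)):
        if p1[l] > p1[best_k]:
            best_k = l                       # best start index among k <= l
        score = p1[best_k] * p2[l]
        if score > best[2]:
            best = (best_k, l, score)
    return best

p1 = np.array([0.1, 0.6, 0.1, 0.2])
p2 = np.array([0.1, 0.2, 0.5, 0.2])
print(best_span(p1, p2))                     # (1, 2, 0.3)
```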
Attention mechanisms have also been successfully employed for the VQA task and can be broadly clustered based on the granularity of their attention and the approach to construct the attention matrix. At the coarse level of granularity, the question attends to different patches in the image (Zhu et al., 2016; Xiong et al., 2016a). At a finer level, each question word attends to each image patch and the highest attention value for each spatial location (Xu & Saenko, 2016) is adopted. A hybrid approach is to combine questions representations at multiple levels of granularity (unigrams, bigrams, trigrams) (Yang et al., 2015). Several approaches to constructing the attention matrix have been used including element-wise product, element-wise sum, concatenation and Multimodal Compact Bilinear Pooling (Fukui et al., 2016). ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_25", "text": " Lu et al. (2016) have recently shown that in addition to attending from the question to image patches, attending from the image back to the question words provides an improvement on the VQA task. This finding in the visual domain is consistent with our finding in the language domain, where our bi-directional attention between the query and context provides improved results. Their model, however, uses the attention weights directly in the output layer and does not take advantage of the attention flow to the modeling layer. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_26", "text": " In this section, we evaluate our model on the task of question answering using the recently released SQuAD (Rajpurkar et al., 2016), which has gained a huge attention over a few months. In the next section, we evaluate our model on the task of cloze-style reading comprehension. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_27", "text": " SQuAD is a machine comprehension dataset on a large set of Wikipedia articles, with more than 100,000 questions. The answer to each question is always a span in the context. The model is given a credit if its answer matches one of the human written answers. Two metrics are used to evaluate models: Exact Match (EM) and a softer metric, F1 score, which measures the weighted average of the precision and recall rate at character level. The dataset consists of 90k/10k train/dev question-context tuples with a large hidden test set. It is one of the largest available MC datasets with human-written questions and serves as a great test bed for our model. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_28", "text": " The model architecture used for this task is depicted in Figure 1. Each paragraph and question are tokenized by a regular-expression-based word tokenizer (PTB Tokenizer) and fed into the model. We use 100 1D filters for CNN char embedding, each with a width of 5. The hidden state size (d𝑑d) of the model is 100. The model has about 2.6 million parameters. We use the AdaDelta (Zeiler, 2012) optimizer, with a minibatch size of 60 and an initial learning rate of 0.50.50.5, for 12 epochs. A dropout (Srivastava et al., 2014) rate of 0.20.20.2 is used for the CNN, all LSTM layers, and the linear transformation before the softmax for the answers. During training, the moving averages of all weights of the model are maintained with the exponential decay rate of 0.9990.9990.999. 
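The exponential moving average of the weights (decay 0.999) mentioned just above can be sketched as follows; the names are illustrative only.

```python
# Hedged sketch: shadow <- decay * shadow + (1 - decay) * weight for each parameter.
def update_ema(shadow, weights, decay=0.999):
    return {k: decay * shadow[k] + (1.0 - decay) * w for k, w in weights.items()}

weights = {"w": 1.0}
shadow = dict(weights)          # typically initialised from the current weights
for _ in range(1000):
    weights["w"] = 2.0          # pretend training moved the weight
    shadow = update_ema(shadow, weights)
print(round(shadow["w"], 3))    # drifts slowly from 1.0 towards 2.0 (about 1.632 here)
```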
At test time, the moving averages instead of the raw weights are used. The training process takes roughly 20 hours on a single Titan X GPU. We also train an ensemble model consisting of 12 training runs with the identical architecture and hyper-parameters. At test time, we choose the answer with the highest sum of confidence scores amongst the 12 runs for each question. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_29", "text": " The results of our model and competing approaches on the hidden test are summarized in Table 2(a). BiDAF (ensemble) achieves an EM score of 73.3 and an F1 score of 81.1, outperforming all previous approaches. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_30", "text": " Table 2(b) shows the performance of our model and its ablations on the SQuAD dev set. Both char-level and word-level embeddings contribute towards the model’s performance. We conjecture that word-level embedding is better at representing the semantics of each word as a whole, while char-level embedding can better handle out-of-vocab (OOV) or rare words. To evaluate bi-directional attention, we remove C2Q and Q2C attentions. For ablating C2Q attention, we replace the attended question vector 𝐔~~𝐔\\tilde{\\bf U} with the average of the output vectors of the question’s contextual embedding layer (LSTM). C2Q attention proves to be critical with a drop of more than 10 points on both metrics. For ablating Q2C attention, the output of the attention layer, 𝐆𝐆{\\bf G}, does not include terms that have the attended Q2C vectors, 𝐇~~𝐇\\tilde{\\bf H}. To evaluate the attention flow, we study a dynamic attention model, where the attention is dynamically computed within the modeling layer’s LSTM, following previous work (Bahdanau et al., 2015; Wang & Jiang, 2016). This is in contrast with our approach, where the attention is pre-computed before flowing to the modeling layer. Despite being a simpler attention mechanism, our proposed static attention outperforms the dynamically computed attention by more than 3 points. We conjecture that separating out the attention layer results in a richer set of features computed in the first 4 layers which are then incorporated by the modeling layer. We also show the performance of BiDAF with several different definitions of α𝛼\\alpha and 𝜷𝜷{\\bm{\\beta}} functions (Equation 1 and 2) in Appendix B. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_31", "text": " We now provide a qualitative analysis of our model on the SQuAD dev set. First, we visualize the feature spaces after the word and contextual embedding layers. These two layers are responsible for aligning the embeddings between the query and context words which are the inputs to the subsequent attention layer. To visualize the embeddings, we choose a few frequent query words in the dev data and look at the context words that have the highest cosine similarity to the query words (Table 2). At the word embedding layer, query words such as When, Where and Who are not well aligned to possible answers in the context, but this dramatically changes in the contextual embedding layer which has access to context from surrounding words and is just 1 layer below the attention layer. When begins to match years, Where matches locations, and Who matches names. 
", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_32", "text": " We also visualize these two feature spaces using t-SNE in Figure 2. t-SNE is performed on a large fraction of dev data but we only plot data points corresponding to the months of the year. An interesting pattern emerges in the Word space, where May is separated from the rest of the months because May has multiple meanings in the English language. The contextual embedding layer uses contextual cues from surrounding words and is able to separate the usages of the word May. Finally we visualize the attention matrices for some question-context tuples in the dev data in Figure 3. In the first example, Where matches locations and in the second example, many matches quantities and numerical symbols. Also, entities in the question typically attend to the same entities in the context, thus providing a feature for the model to localize possible answers. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_33", "text": " We analyse the performance of our our model with a traditional language-feature-based baseline (Rajpurkar et al., 2016). Figure 2b shows a Venn diagram of the dev set questions correctly answered by the models. Our model is able to answer more than 86% of the questions correctly answered by the baseline. The 14% that are incorrectly answered does not have a clear pattern. This suggests that neural architectures are able to exploit much of the information captured by the language features. We also break this comparison down by the first words in the questions (Figure 2c). Our model outperforms the traditional baseline comfortably in every category. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_34", "text": " We randomly select 50 incorrect questions (based on EM) and categorize them into 6 classes. 50% of errors are due to the imprecise boundaries of the answers, 28% involve syntactic complications and ambiguities, 14% are paraphrase problems, 4% require external knowledge, 2% need multiple sentences to answer, and 2% are due to mistakes during tokenization. See Appendix A for the examples of the error modes. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_35", "text": " We also evaluate our model on the task of cloze-style reading comprehension using the CNN and Daily Mail datasets (Hermann et al., 2015). ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_36", "text": " In a cloze test, the reader is asked to fill in words that have been removed from a passage, for measuring one’s ability to comprehend text. Hermann et al. (2015) have recently compiled a massive Cloze-style comprehension dataset, consisting of 300k/4k/3k and 879k/65k/53k (train/dev/test) examples from CNN and DailyMail news articles, respectively. Each example has a news article and an incomplete sentence extracted from the human-written summary of the article. To distinguish this task from language modeling and force one to refer to the article to predict the correct missing word, the missing word is always a named entity, anonymized with a random ID. Also, the IDs must be shuffled constantly during test, which is also critical for full anonymization. 
", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_37", "text": " The model architecture used for this task is very similar to that for SQuAD (Section 4) with only a few small changes to adapt it to the cloze test. Since each answer in the CNN/DailyMail datasets is always a single word (entity), we only need to predict the start index (𝐩1superscript𝐩1{\\bf p}^{1}); the prediction for the end index (𝐩2superscript𝐩2{\\bf p}^{2}) is omitted from the loss function. Also, we mask out all non-entity words in the final classification layer so that they are forced to be excluded from possible answers. Another important difference from SQuAD is that the answer entity might appear more than once in the context paragraph. To address this, we follow a similar strategy from Kadlec et al. (2016). During training, after we obtain 𝐩1superscript𝐩1{\\bf p}^{1}, we sum all probability values of the entity instances in the context that correspond to the correct answer. Then the loss function is computed from the summed probability. We use a minibatch size of 48 and train for 8 epochs, with early stop when the accuracy on validation data starts to drop. Inspired by the window-based method (Hill et al., 2016), we split each article into short sentences where each sentence is a 19-word window around each entity (hence the same word might appear in multiple sentences). The RNNs in BiDAF are not feed-forwarded or back-propagated across sentences, which speed up the training process by parallelization. The entire training process takes roughly 60 hours on eight Titan X GPUs. The other hyper-parameters are identical to the model described in Section 4. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_38", "text": " The results of our single-run models and competing approaches on the CNN/DailyMail datasets are summarized in Table 3. ∗ indicates ensemble methods. BiDAF outperforms previous single-run models on both datasets for both val and test data. On the DailyMail test, our single-run model even outperforms the best ensemble method. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_39", "text": " In this paper, we introduce BiDAF, a multi-stage hierarchical process that represents the context at different levels of granularity and uses a bi-directional attention flow mechanism to achieve a query-aware context representation without early summarization. The experimental evaluations show that our model achieves the state-of-the-art results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze test. The ablation analyses demonstrate the importance of each component in our model. The visualizations and discussions show that our model is learning a suitable representation for MC and is capable of answering complex questions by attending to correct locations in the given paragraph. Future work involves extending our approach to incorporate multiple hops of the attention layer. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_40", "text": " This research was supported by the NSF (IIS 1616112), NSF (III 1703166), Allen Institute for AI (66-9175), Allen Distinguished Investigator Award, Google Research Faculty Award, and Samsung GRO Award. We thank the anonymous reviewers for their helpful comments. ", "title": "Bidirectional Attention Flow for Machine Comprehension" } ]
What is the difference between Siamese Networks and our work?
A Siamese network learns a pair-wise distance that can be used to solve one-shot problems by k-nearest neighbors classification [8]. Such a network can be interpreted as a single-layer message-passing iteration of our model with a fixed, non-trainable edge feature [27].
[ 27, 8 ]
[ { "id": "1711.04043_all_0", "text": " Supervised end-to-end learning has been extremely successful in computer vision, speech, or machine translation tasks, thanks to improvements in optimization technology, larger datasets and streamlined designs of deep convolutional or recurrent architectures. Despite these successes, this learning setup does not cover many aspects where learning is nonetheless possible and desirable. ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_1", "text": " One such instance is the ability to learn from few examples, in the so-called few-shot learning tasks. Rather than relying on regularization to compensate for the lack of data, researchers have explored ways to leverage a distribution of similar tasks, inspired by human learning Lake et al. (2015). This defines a new supervised learning setup (also called ‘meta-learning’) in which the input-output pairs are no longer given by iid samples of images and their associated labels, but by iid samples of collections of images and their associated label similarity. ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_2", "text": " A recent and highly-successful research program has exploited this meta-learning paradigm on the few-shot image classification task Lake et al. (2015); Koch et al. (2015); Vinyals et al. (2016); Mishra et al. (2017); Snell et al. (2017). In essence, these works learn a contextual, task-specific similarity measure, that first embeds input images using a CNN, and then learns how to combine the embedded images in the collection to propagate the label information towards the target image. ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_3", "text": " In particular, Vinyals et al. (2016) cast the few-shot learning problem as a supervised classification task mapping a support set of images into the desired label, and developed an end-to-end architecture accepting those support sets as input via attention mechanisms. In this work, we build upon this line of work, and argue that this task is naturally expressed as a supervised interpolation problem on a graph, where nodes are associated with the images in the collection, and edges are given by a trainable similarity kernels. Leveraging recent progress on representation learning for graph-structured data Bronstein et al. (2017); Gilmer et al. (2017), we thus propose a simple graph-based few-shot learning model that implements a task-driven message passing algorithm. The resulting architecture is trained end-to-end, captures the invariances of the task, such as permutations within the input collections, and offers a good tradeoff between simplicity, generality, performance and sample complexity. ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_4", "text": " Besides few-shot learning, a related task is the ability to learn from a mixture of labeled and unlabeled examples — semi-supervised learning, as well as active learning, in which the learner has the option to request those missing labels that will be most helpful for the prediction task. Our graph-based architecture is naturally extended to these setups with minimal changes in the training design. We validate experimentally the model on few-shot image classification, matching state-of-the-art performance with considerably fewer parameters, and demonstrate applications to semi-supervised and active learning setups. 
", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_5", "text": " Our contributions are summarized as follows: • We cast few-shot learning as a supervised message passing task which is trained end-to-end using graph neural networks. • We match state-of-the-art performance on Omniglot and Mini-Imagenet tasks with fewer parameters. • We extend the model in the semi-supervised and active learning regimes. ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_6", "text": " The rest of the paper is structured as follows. Section 2 describes related work, Sections 3, 4 and 5 present the problem setup, our graph neural network model and the training, and Section 6 reports numerical experiments. ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_7", "text": " One-shot learning was first introduced by Fei-Fei et al. (2006), they assumed that currently learned classes can help to make predictions on new ones when just one or few labels are available. More recently, Lake et al. (2015) presented a Hierarchical Bayesian model that reached human level error on few-shot learning alphabet recongition tasks. ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_8", "text": " Since then, great progress has been done in one-shot learning. Koch et al. (2015) presented a deep-learning model based on computing the pair-wise distance between samples using Siamese Networks, then, this learned distance can be used to solve one-shot problems by k-nearest neighbors classification. Vinyals et al. (2016) Presented an end-to-end trainable k-nearest neighbors using the cosine distance, they also introduced a contextual mechanism using an attention LSTM model that takes into account all the samples of the subset 𝒯𝒯\\mathcal{T} when computing the pair-wise distance between samples. Snell et al. (2017) extended the work from Vinyals et al. (2016), by using euclidean distance instead of cosine which provided significant improvements, they also build a prototype representation of each class for the few-shot learning scenario. Mehrotra & Dukkipati (2017) trained a deep residual network together with a generative model to approximate the pair-wise distance between samples. ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_9", "text": " A new line of meta-learners for one-shot learning is rising lately: Ravi & Larochelle (2016) introduced a meta-learning method where an LSTM updates the weights of a classifier for a given episode. Munkhdalai & Yu (2017) also presented a meta-learning architecture that learns meta-level knowledge across tasks, and it changes its inductive bias via fast parametrization. Finn et al. (2017) is using a model agnostic meta-learner based on gradient descent, the goal is to train a classification model such that given a new task, a small amount of gradient steps with few data will be enough to generalize. Lately, Mishra et al. (2017) used Temporal Convolutions which are deep recurrent networks based on dilated convolutions, this method also exploits contextual information from the subset 𝒯𝒯\\mathcal{T} providing very good results. ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_10", "text": " Another related area of research concerns deep learning architectures on graph-structured data. The GNN was first proposed in Gori et al. (2005); Scarselli et al. 
(2009), as a trainable recurrent message-passing whose fixed points could be adjusted discriminatively. Subsequent works Li et al. (2015); Sukhbaatar et al. (2016) have relaxed the model by untying the recurrent layer weights and proposed several nonlinear updates through gating mechanisms. Graph neural networks are in fact natural generalizations of convolutional networks to non-Euclidean graphs. Bruna et al. (2013); Henaff et al. (2015) proposed to learn smooth spectral multipliers of the graph Laplacian, albeit with high computational cost, and Defferrard et al. (2016); Kipf & Welling (2016) resolved the computational bottleneck by learning polynomials of the graph Laplacian, thus avoiding the computation of eigenvectors and completing the connection with GNNs. In particular, Kipf & Welling (2016) was the first to propose the use of GNNs on semi-supervised classification problems. We refer the reader to Bronstein et al. (2017) for an exhaustive literature review on the topic. GNNs and the analogous Neural Message Passing Models are finding application in many different domains. Battaglia et al. (2016); Chang et al. (2016) develop graph interaction networks that learn pairwise particle interactions and apply them to discrete particle physical dynamics. Duvenaud et al. (2015); Kearnes et al. (2016) study molecular fingerprints using variants of the GNN architecture, and Gilmer et al. (2017) further develop the model by combining it with set representations Vinyals et al. (2015), showing state-of-the-art results on molecular prediction. ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_11", "text": " We describe first the general setup and notations, and then particularize it to the case of few-shot learning, semi-supervised learning and active learning. ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_12", "text": " We consider input-output pairs (𝒯i,Yi)isubscriptsubscript𝒯𝑖subscript𝑌𝑖𝑖(\\mathcal{T}_{i},Y_{i})_{i} drawn iid from a distribution P𝑃P of partially-labeled image collections 𝒯𝒯\\displaystyle\\mathcal{T} =\\displaystyle= {{(x1,l1),…​(xs,ls)},{x~1,…,x~r},{x¯1,…,x¯t};li∈{1,K},xi,x~j,x¯j∼𝒫l​(ℝN)},formulae-sequencesubscript𝑥1subscript𝑙1…subscript𝑥𝑠subscript𝑙𝑠subscript~𝑥1…subscript~𝑥𝑟subscript¯𝑥1…subscript¯𝑥𝑡subscript𝑙𝑖1𝐾similar-tosubscript𝑥𝑖subscript~𝑥𝑗subscript¯𝑥𝑗subscript𝒫𝑙superscriptℝ𝑁\\displaystyle\\left\\{\\{(x_{1},l_{1}),\\dots(x_{s},l_{s})\\},\\{\\tilde{x}_{1},\\dots,\\tilde{x}_{r}\\},\\{\\bar{x}_{1},\\dots,\\bar{x}_{t}\\}~{};~{}l_{i}\\in\\{1,K\\},x_{i},\\tilde{x}_{j},\\bar{x}_{j}\\sim\\mathcal{P}_{l}(\\mathbb{R}^{N})\\right\\}~{}, and ​Yand 𝑌\\displaystyle\\text{and }Y =\\displaystyle= (y1,…,yt)∈{1,K}t,subscript𝑦1…subscript𝑦𝑡superscript1𝐾𝑡\\displaystyle(y_{1},\\dots,y_{t})\\in\\{1,K\\}^{t}~{}, (1) for arbitrary values of s,r,t𝑠𝑟𝑡s,r,t and K𝐾K. Where s𝑠s is the number of labeled samples, r𝑟r is the number of unlabeled samples (r>0𝑟0r>0 for the semi-supervised and active learning scenarios) and t𝑡t is the number of samples to classify. K𝐾K is the number of classes. We will focus in the case t=1𝑡1t=1 where we just classify one sample per task 𝒯𝒯\\mathcal{T}. 𝒫l​(ℝN)subscript𝒫𝑙superscriptℝ𝑁\\mathcal{P}_{l}(\\mathbb{R}^{N}) denotes a class-specific image distribution over ℝNsuperscriptℝ𝑁\\mathbb{R}^{N}. 
In our context, the targets Yisubscript𝑌𝑖Y_{i} are associated with image categories of designated images x¯1,…,x¯t∈𝒯isubscript¯𝑥1…subscript¯𝑥𝑡subscript𝒯𝑖\\bar{x}_{1},\\dots,\\bar{x}_{t}\\in\\mathcal{T}_{i} with no observed label. Given a training set {(𝒯i,Yi)i}i≤Lsubscriptsubscriptsubscript𝒯𝑖subscript𝑌𝑖𝑖𝑖𝐿\\{(\\mathcal{T}_{i},Y_{i})_{i}\\}_{i\\leq L}, we consider the standard supervised learning objective minΘ⁡1L​∑i≤Lℓ​(Φ​(𝒯i;Θ),Yi)+ℛ​(Θ),subscriptΘ1𝐿subscript𝑖𝐿ℓΦsubscript𝒯𝑖Θsubscript𝑌𝑖ℛΘ\\min_{\\Theta}\\frac{1}{L}\\sum_{i\\leq L}\\ell(\\Phi(\\mathcal{T}_{i};\\Theta),Y_{i})+\\mathcal{R}(\\Theta)~{}, using the model Φ​(𝒯;Θ)=p​(Y|𝒯)Φ𝒯Θ𝑝conditional𝑌𝒯\\Phi(\\mathcal{T};\\Theta)=p(Y~{}|~{}\\mathcal{T}) specified in Section 4 and ℛℛ\\mathcal{R} is a standard regularization objective. ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_13", "text": " When r=0𝑟0r=0, t=1𝑡1t=1 and s=q​K𝑠𝑞𝐾s=qK, there is a single image in the collection with unknown label. If moreover each label appears exactly q𝑞q times, this setting is referred as the q𝑞q-shot, K𝐾K-way learning. ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_14", "text": " When r>0𝑟0r>0 and t=1𝑡1t=1, the input collection contains auxiliary images x~1,…,x~rsubscript~𝑥1…subscript~𝑥𝑟\\tilde{x}_{1},\\dots,\\tilde{x}_{r} that the model can use to improve the prediction accuracy, by leveraging the fact that these samples are drawn from common distributions as those determining the output. ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_15", "text": " In the active learning setting, the learner has the ability to request labels from the sub-collection {x~1,…,x~r}subscript~𝑥1…subscript~𝑥𝑟\\{\\tilde{x}_{1},\\dots,\\tilde{x}_{r}\\}. We are interested in studying to what extent this active learning can improve the performance with respect to the previous semi-supervised setup, and match the performance of the one-shot learning setting with s0subscript𝑠0s_{0} known labels when s+r=s0𝑠𝑟subscript𝑠0s+r=s_{0}, s≪s0much-less-than𝑠subscript𝑠0s\\ll s_{0}. ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_16", "text": " This section presents our approach, based on a simple end-to-end graph neural network architecture. We first explain how the input context is mapped into a graphical representation, then detail the architecture, and next show how this model generalizes a number of previously published few-shot learning architectures. ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_17", "text": " The input 𝒯𝒯\\mathcal{T} contains a collection of images, both labeled and unlabeled. The goal of few-shot learning is to propagate label information from labeled samples towards the unlabeled query image. This propagation of information can be formalized as a posterior inference over a graphical model determined by the input images and labels. ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_18", "text": " Following several recent works that cast posterior inference using message passing with neural networks defined over graphs Scarselli et al. (2009); Duvenaud et al. (2015); Gilmer et al. (2017), we associate 𝒯𝒯\\mathcal{T} with a fully-connected graph G𝒯=(V,E)subscript𝐺𝒯𝑉𝐸G_{\\mathcal{T}}=(V,E) where nodes va∈Vsubscript𝑣𝑎𝑉v_{a}\\in V correspond to the images present in 𝒯𝒯\\mathcal{T} (both labeled and unlabeled). 
In this context, the setup does not specify a fixed similarity ea,a′subscript𝑒𝑎superscript𝑎′e_{a,a^{\\prime}} between images xasubscript𝑥𝑎x_{a} and xa′subscript𝑥superscript𝑎′x_{a^{\\prime}}, suggesting an approach where this similarity measure is learnt in a discriminative fashion with a parametric model similarly as in Gilmer et al. (2017), such as a siamese neural architecture. This framework is closely related to the set representation from Vinyals et al. (2016), but extends the inference mechanism using the graph neural network formalism that we detail next. ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_19", "text": " Graph Neural Networks, introduced in Gori et al. (2005); Scarselli et al. (2009) and further simplified in Li et al. (2015); Duvenaud et al. (2015); Sukhbaatar et al. (2016) are neural networks based on local operators of a graph G=(V,E)𝐺𝑉𝐸G=(V,E), offering a powerful balance between expressivity and sample complexity; see Bronstein et al. (2017) for a recent survey on models and applications of deep learning on graphs. ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_20", "text": " In its simplest incarnation, given an input signal F∈ℝV×d𝐹superscriptℝ𝑉𝑑F\\in\\mathbb{R}^{V\\times d} on the vertices of a weighted graph G𝐺G, we consider a family 𝒜𝒜\\mathcal{A} of graph intrinsic linear operators that act locally on this signal. The simplest is the adjacency operator A:F↦A​(F):𝐴maps-to𝐹𝐴𝐹A:F\\mapsto A(F) where (A​F)i:=∑j∼iwi,j​Fj,assignsubscript𝐴𝐹𝑖subscriptsimilar-to𝑗𝑖subscript𝑤𝑖𝑗subscript𝐹𝑗(AF)_{i}:=\\sum_{j\\sim i}w_{i,j}F_{j}~{}, with i∼jsimilar-to𝑖𝑗i\\sim j iff (i,j)∈E𝑖𝑗𝐸(i,j)\\in E and wi,jsubscript𝑤𝑖𝑗w_{i,j} its associated weight. A GNN layer Gc​(⋅)Gc⋅\\text{Gc}(\\cdot) receives as input a signal 𝐱(k)∈ℝV×dksuperscript𝐱𝑘superscriptℝ𝑉subscript𝑑𝑘{\\bf x}^{(k)}\\in\\mathbb{R}^{V\\times d_{k}} and produces 𝐱(k+1)∈ℝV×dk+1superscript𝐱𝑘1superscriptℝ𝑉subscript𝑑𝑘1{\\bf x}^{(k+1)}\\in\\mathbb{R}^{V\\times d_{k+1}} as 𝐱l(k+1)=Gc​(𝐱(k))=ρ​(∑B∈𝒜B​𝐱(k)​θB,l(k)),l=d1​…​dk+1,formulae-sequencesubscriptsuperscript𝐱𝑘1𝑙Gcsuperscript𝐱𝑘𝜌subscript𝐵𝒜𝐵superscript𝐱𝑘superscriptsubscript𝜃𝐵𝑙𝑘𝑙subscript𝑑1…subscript𝑑𝑘1{\\bf x}^{(k+1)}_{l}=\\text{Gc}({\\bf x}^{(k)})=\\rho\\left(\\sum_{B\\in\\mathcal{A}}B{\\bf x}^{(k)}\\theta_{B,l}^{(k)}\\right)~{},\\,l=d_{1}\\dots d_{k+1}~{}, (2) where Θ={θ1(k),…,θ|𝒜|(k)}kΘsubscriptsuperscriptsubscript𝜃1𝑘…superscriptsubscript𝜃𝒜𝑘𝑘\\Theta=\\{\\theta_{1}^{(k)},\\dots,\\theta_{|\\mathcal{A}|}^{(k)}\\}_{k}, θB(k)∈ℝdk×dk+1superscriptsubscript𝜃𝐵𝑘superscriptℝsubscript𝑑𝑘subscript𝑑𝑘1{\\theta}_{B}^{(k)}\\in\\mathbb{R}^{d_{k}\\times d_{k+1}}, are trainable parameters and ρ​(⋅)𝜌⋅\\rho(\\cdot) is a point-wise non-linearity, chosen in this work to be a ‘leaky’ ReLU Xu et al. (2015). ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_21", "text": " Authors have explored several modeling variants from this basic formulation, by replacing the point-wise nonlinearity with gating operations Duvenaud et al. (2015), or by generalizing the generator family to Laplacian polynomials Defferrard et al. (2016); Kipf & Welling (2016); Bruna et al. (2013), or including 2Jsuperscript2𝐽2^{J}-th powers of A𝐴A to 𝒜𝒜\\mathcal{A}, AJ=min⁡(1,A2J)subscript𝐴𝐽1superscript𝐴superscript2𝐽A_{J}=\\min(1,A^{2^{J}}) to encode 2Jsuperscript2𝐽2^{J}-hop neighborhoods of each node Bruna & Li (2017). Cascaded operations in the form (2) are able to approximate a wide range of graph inference tasks. 
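A compact numpy sketch of the GNN layer in equation (2), x^{(k+1)} = rho(sum_B B x^{(k)} theta_B), with the generator family reduced to {identity, adjacency}; shapes and names are assumptions, not the authors' code.

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def gnn_layer(x, A, thetas):
    """One layer x -> rho(sum_B B @ x @ theta_B) with B in {I, A}.
    x: (V, d_k) node features, A: (V, V) adjacency, thetas: two (d_k, d_k1) matrices."""
    ops = [np.eye(A.shape[0]), A]
    out = sum(B @ x @ theta for B, theta in zip(ops, thetas))
    return leaky_relu(out)

V, dk, dk1 = 6, 8, 16
x = np.random.randn(V, dk)
A = np.random.rand(V, V)                     # toy dense weighted adjacency
thetas = [np.random.randn(dk, dk1) for _ in range(2)]
print(gnn_layer(x, A, thetas).shape)         # (6, 16)
```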
In particular, inspired by message-passing algorithms, Kearnes et al. (2016); Gilmer et al. (2017) generalized the GNN to also learn edge features A~(k)superscript~𝐴𝑘\\tilde{A}^{(k)} from the current node hidden representation: A~i,j(k)=φθ~​(𝐱i(k),𝐱j(k)),superscriptsubscript~𝐴𝑖𝑗𝑘subscript𝜑~𝜃superscriptsubscript𝐱𝑖𝑘superscriptsubscript𝐱𝑗𝑘\\tilde{A}_{i,j}^{(k)}=\\varphi_{\\tilde{\\theta}}({\\bf x}_{i}^{(k)},{\\bf x}_{j}^{(k)})~{}, (3) where φ𝜑\\varphi is a symmetric function parametrized with e.g. a neural network. In this work, we consider a Multilayer Perceptron stacked after the absolute difference between two vector nodes. See eq. 4: φθ~​(𝐱i(k),𝐱j(k))=MLPθ~​(a​b​s​(𝐱i(k)−𝐱j(k)))subscript𝜑~𝜃superscriptsubscript𝐱𝑖𝑘superscriptsubscript𝐱𝑗𝑘subscriptMLP~𝜃𝑎𝑏𝑠superscriptsubscript𝐱𝑖𝑘superscriptsubscript𝐱𝑗𝑘\\varphi_{\\tilde{\\theta}}({\\bf x}_{i}^{(k)},{\\bf x}_{j}^{(k)})=\\text{MLP}_{\\tilde{\\theta}}(abs({\\bf x}_{i}^{(k)}-{\\bf x}_{j}^{(k)})) (4) ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_22", "text": " Then φ𝜑\\varphi is a metric, which is learned by doing a non-linear combination of the absolute difference between the individual features of two nodes. Using this architecture the distance property Symmetry φθ~​(a,b)=φθ~​(b,a)subscript𝜑~𝜃𝑎𝑏subscript𝜑~𝜃𝑏𝑎\\varphi_{\\tilde{\\theta}}(a,b)=\\varphi_{\\tilde{\\theta}}(b,a) is fulfilled by construction and the distance property Identity φθ~​(a,a)=0subscript𝜑~𝜃𝑎𝑎0\\varphi_{\\tilde{\\theta}}(a,a)=0 is easily learned. ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_23", "text": " The trainable adjacency is then normalized to a stochastic kernel by using a softmax along each row. The resulting update rules for node features are obtained by adding the edge feature kernel A~(k)superscript~𝐴𝑘\\tilde{A}^{(k)} into the generator family 𝒜={A~(k),𝟏}𝒜superscript~𝐴𝑘1\\mathcal{A}=\\{\\tilde{A}^{(k)},{\\bf 1}\\} and applying (2). Adjacency learning is particularly important in applications where the input set is believed to have some geometric structure, but the metric is not known a priori, such as is our case. ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_24", "text": " In general graphs, the network depth is chosen to be of the order of the graph diameter, so that all nodes obtain information from the entire graph. In our context, however, since the graph is densely connected, the depth is interpreted simply as giving the model more expressive power. ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_25", "text": " The input collection 𝒯𝒯\\mathcal{T} is mapped into node features as follows. For images xi∈𝒯subscript𝑥𝑖𝒯x_{i}\\in\\mathcal{T} with known label lisubscript𝑙𝑖l_{i}, the one-hot encoding of the label is concatenated with the embedding features of the image at the input of the GNN. 𝐱i(0)=(ϕ​(xi),h​(li)),superscriptsubscript𝐱𝑖0italic-ϕsubscript𝑥𝑖ℎsubscript𝑙𝑖{\\bf x}_{i}^{(0)}=(\\phi(x_{i}),h(l_{i}))~{}, (5) where ϕitalic-ϕ\\phi is a Convolutional neural network and h​(l)∈ℝ+Kℎ𝑙subscriptsuperscriptℝ𝐾h(l)\\in\\mathbb{R}^{K}_{+} is a one-hot encoding of the label. Architectural details for ϕitalic-ϕ\\phi are detailed in Section 6.1.1 and 6.1.2. 
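The learned edge features phi(x_i, x_j) = MLP(|x_i - x_j|) and the row-wise softmax normalisation of the resulting adjacency, described above, can be sketched as follows (toy one-hidden-layer MLP, assumed sizes).

```python
import numpy as np

def softmax_rows(M):
    z = M - M.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def learned_adjacency(x, W1, W2):
    """x: (V, d) node features; W1: (d, h), W2: (h, 1) toy MLP weights."""
    diff = np.abs(x[:, None, :] - x[None, :, :])   # (V, V, d) pairwise |x_i - x_j|
    hidden = np.maximum(diff @ W1, 0.0)            # (V, V, h)
    scores = (hidden @ W2).squeeze(-1)             # (V, V), symmetric before softmax
    return softmax_rows(scores)                    # row-stochastic trainable adjacency

V, d, h = 5, 8, 16
x = np.random.randn(V, d)
A_tilde = learned_adjacency(x, np.random.randn(d, h), np.random.randn(h, 1))
print(A_tilde.shape, A_tilde.sum(axis=1))          # (5, 5), each row sums to 1
```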
For images x~j,x¯j′subscript~𝑥𝑗subscript¯𝑥superscript𝑗′\\tilde{x}_{j},\\bar{x}_{j^{\\prime}} with unknown label lisubscript𝑙𝑖l_{i}, we modify the previous construction to account for full uncertainty about the label variable by replacing h​(l)ℎ𝑙h(l) with the uniform distribution over the K𝐾K-simplex: Vj=(ϕ​(x~j),K−1​𝟏K)subscript𝑉𝑗italic-ϕsubscript~𝑥𝑗superscript𝐾1subscript1𝐾V_{j}=(\\phi(\\tilde{x}_{j}),K^{-1}{\\bf 1}_{K}), and analogously for x¯¯𝑥\\bar{x}. ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_26", "text": " The graph neural network formulation of few-shot learning generalizes a number of recent models proposed in the literature. ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_27", "text": " Siamese Networks Koch et al. (2015) can be interpreted as a single layer message-passing iteration of our model, and using the same initial node embedding (5) 𝐱i(0)=(ϕ​(xi),hi)superscriptsubscript𝐱𝑖0italic-ϕsubscript𝑥𝑖subscriptℎ𝑖{\\bf x}_{i}^{(0)}=(\\phi(x_{i}),h_{i}) , using a non-trainable edge feature φ​(𝐱i,𝐱j)=‖ϕ​(xi)−ϕ​(xj)‖,A~(0)=softmax​(−φ),formulae-sequence𝜑subscript𝐱𝑖subscript𝐱𝑗normitalic-ϕsubscript𝑥𝑖italic-ϕsubscript𝑥𝑗superscript~𝐴0softmax𝜑\\varphi({\\bf x}_{i},{\\bf x}_{j})=\\|\\phi(x_{i})-\\phi(x_{j})\\|~{},~{}\\tilde{A}^{(0)}=\\text{softmax}(-\\varphi)~{}, and resulting label estimation Y^∗=∑jA~∗,j(0)​⟨𝐱j(0),u⟩,subscript^𝑌subscript𝑗superscriptsubscript~𝐴𝑗0superscriptsubscript𝐱𝑗0𝑢\\hat{Y}_{*}=\\sum_{j}\\tilde{A}_{*,j}^{(0)}\\langle{\\bf x}_{j}^{(0)},u\\rangle~{}, with u𝑢u selecting the label field from 𝐱𝐱{\\bf x}. In this model, the learning is reduced to learning image embeddings ϕ​(xi)italic-ϕsubscript𝑥𝑖\\phi(x_{i}) whose euclidean metric is consistent with the label similarities. ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_28", "text": " Prototypical networks Snell et al. (2017) evolve Siamese networks by aggregating information within each cluster determined by nodes with the same label. This operation can also be accomplished with a gnn as follows. we consider A~i,j(0)={q−1if ​li=lj0otherwise.subscriptsuperscript~𝐴0𝑖𝑗casessuperscript𝑞1if subscript𝑙𝑖subscript𝑙𝑗0otherwise.\\tilde{A}^{(0)}_{i,j}=\\left\\{\\begin{array}(){cc}q^{-1}&\\text{if }l_{i}=l_{j}\\\\ 0&\\text{otherwise.}\\end{array}\\right. where q𝑞q is the number of examples per class, and 𝐱i(1)=∑jA~i,j(0)​𝐱j(0),superscriptsubscript𝐱𝑖1subscript𝑗subscriptsuperscript~𝐴0𝑖𝑗superscriptsubscript𝐱𝑗0{\\bf x}_{i}^{(1)}=\\sum_{j}\\tilde{A}^{(0)}_{i,j}{\\bf x}_{j}^{(0)}~{}, where 𝐱(0)superscript𝐱0{\\bf x}^{(0)} is defined as in the Siamese Networks. We finally apply the previous kernel A~(1)=softmax​(φ)superscript~𝐴1softmax𝜑\\tilde{A}^{(1)}=\\text{softmax}(\\varphi) applied to 𝐱(1)superscript𝐱1{\\bf x}^{(1)} to yield class prototypes: Y^∗=∑jA~∗,j(1)​⟨𝐱j(1),u⟩.subscript^𝑌subscript𝑗superscriptsubscript~𝐴𝑗1superscriptsubscript𝐱𝑗1𝑢\\hat{Y}_{*}=\\sum_{j}\\tilde{A}_{*,j}^{(1)}\\langle{\\bf x}_{j}^{(1)},u\\rangle~{}. ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_29", "text": " Matching networks Vinyals et al. (2016) use a set representation for the ensemble of images in 𝒯𝒯\\mathcal{T}, similarly as our proposed graph neural network model, but with two important differences. 
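The input construction of eq. (5) and its unlabelled variant admit a direct sketch: labelled images get their embedding concatenated with a one-hot label, unlabelled ones with the uniform distribution over the K classes (the two differences with matching networks are spelled out right after this sketch). `initial_node` is an illustrative name, and `phi_x` stands in for the output of the CNN phi.

```python
# Initial node features: (embedding, one-hot label) for labelled images,
# (embedding, uniform distribution over K classes) for unlabelled ones.
import numpy as np

def initial_node(embedding, num_classes, label=None):
    if label is None:                                    # unknown label: full uncertainty
        label_field = np.full(num_classes, 1.0 / num_classes)
    else:                                                # known label: one-hot encoding
        label_field = np.eye(num_classes)[label]
    return np.concatenate([embedding, label_field])

phi_x = np.random.default_rng(2).normal(size=64)         # a 64-d embedding, as for Omniglot
print(initial_node(phi_x, num_classes=5, label=3).shape)  # (69,) = 64 + K
print(initial_node(phi_x, num_classes=5).shape)           # (69,), uniform label field
```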
First, the attention mechanism considered in this set representation is akin to the edge feature learning, with the difference that the mechanism attends always to the same node embeddings, as opposed to our stacked adjacency learning, which is closer to Vaswani et al. (2017). In other words, instead of the attention kernel in (3), matching networks consider attention mechanisms of the form A~∗,j(k)=φ​(𝐱∗(k),𝐱j(T))superscriptsubscript~𝐴𝑗𝑘𝜑superscriptsubscript𝐱𝑘superscriptsubscript𝐱𝑗𝑇\\tilde{A}_{*,j}^{(k)}=\\varphi({\\bf x}_{*}^{(k)},{\\bf x}_{j}^{(T)}), where 𝐱j(T)superscriptsubscript𝐱𝑗𝑇{\\bf x}_{j}^{(T)} is the encoding function for the elements of the support set, obtained with bidirectional LSTMs. In that case, the support set encoding is thus computed independently of the target image. Second, the label and image fields are treated separately throughout the model, with a final step that aggregates linearly the labels using a trained kernel. This may prevent the model to leverage complex dependencies between labels and images at intermediate stages. ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_30", "text": " We describe next how to train the parameters of the GNN in the different setups we consider: few-shot learning, semi-supervised learning and active learning. ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_31", "text": " In this setup, the model is asked only to predict the label Y𝑌Y corresponding to the image to classify x¯∈𝒯¯𝑥𝒯\\bar{x}\\in\\mathcal{T}, associated with node ∗* in the graph. The final layer of the GNN is thus a softmax mapping the node features to the K𝐾K-simplex. We then consider the Cross-entropy loss evaluated at node ∗*: ℓ​(Φ​(𝒯;Θ),Y)=−∑kyk​log⁡P​(Y∗=yk|𝒯).ℓΦ𝒯Θ𝑌subscript𝑘subscript𝑦𝑘𝑃subscript𝑌conditionalsubscript𝑦𝑘𝒯\\ell(\\Phi(\\mathcal{T};\\Theta),Y)=-\\sum_{k}y_{k}\\log P(Y_{*}=y_{k}~{}|~{}\\mathcal{T})~{}. ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_32", "text": " The semi-supervised setting is trained identically — the only difference is that the initial label fields of the node will be filled with the uniform distribution on nodes corresponding to x~jsubscript~𝑥𝑗\\tilde{x}_{j}. ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_33", "text": " In the Active Learning setup, the model has the intrinsic ability to query for one of the labels from {x~1,…,x~r}subscript~𝑥1…subscript~𝑥𝑟\\{\\tilde{x}_{1},\\dots,\\tilde{x}_{r}\\}. The network will learn to ask for the most informative label in order to classify the sample x¯∈𝒯¯𝑥𝒯\\bar{x}\\in\\mathcal{T}. The querying is done after the first layer of the GNN by using a Softmax attention over the unlabeled nodes of the graph. For this we apply a function g​(𝐱i(1))∈ℝ1𝑔subscriptsuperscript𝐱1𝑖superscriptℝ1g({\\bf x}^{(1)}_{i})\\in\\mathbb{R}^{1} that maps each unlabeled vector node to a scalar value. Function g𝑔g is parametrized by a two layers neural network. 
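The few-shot training objective above is just a cross-entropy evaluated at the single node * of the image to classify; the semi-supervised setting uses the same loss and only changes the label fields. A minimal, numerically stable sketch (names are illustrative):

```python
# Cross-entropy at the target node *: the GNN's final softmax gives P(Y_* | T) and only
# the node of the unlabelled query image contributes to the loss.
import numpy as np

def loss_at_target_node(logits_star, y_onehot):
    """logits_star: (K,) pre-softmax outputs at node *; y_onehot: (K,) true label."""
    shifted = logits_star - logits_star.max()                 # stable log-softmax
    log_probs = shifted - np.log(np.exp(shifted).sum())
    return -float(np.dot(y_onehot, log_probs))

print(loss_at_target_node(np.array([2.0, 0.5, -1.0]), np.array([1.0, 0.0, 0.0])))
```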
A Softmax is applied over the {1,…,r}1…𝑟{\\{1,\\dots,r\\}} scalar values obtained after applying g𝑔g: Attention=Softmax​(g​(𝐱{1,…,r}(1)))AttentionSoftmax𝑔subscriptsuperscript𝐱11…𝑟\\text{Attention}=\\text{Softmax}(g({\\bf x}^{(1)}_{\\{1,\\dots,r\\}})) ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_34", "text": " In order to query only one sample, we set all elements from the A​t​t​e​n​t​i​o​n∈ℝr𝐴𝑡𝑡𝑒𝑛𝑡𝑖𝑜𝑛superscriptℝ𝑟Attention\\in\\mathbb{R}^{r} vector to 0 except for one. At test time we keep the maximum value, at train time we randomly sample one value based on its multinomial probability. Then we multiply this sampled attention by the label vectors: w⋅h​(li∗)=⟨Attention′,h​(l{1,…,r})⟩⋅𝑤ℎsubscript𝑙superscript𝑖superscriptAttention′ℎsubscript𝑙1…𝑟w\\cdot h(l_{i^{*}})=\\langle\\text{Attention}^{\\prime},h(l_{\\{1,\\dots,r\\}})\\rangle ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_35", "text": " The label of the queried vector h​(li∗)ℎsubscript𝑙superscript𝑖h(l_{i^{*}}) is obtained, scaled by the weight w∈(0,1)𝑤01w\\in(0,1). This value is then summed to the current representation 𝐱i∗(1)subscriptsuperscript𝐱1superscript𝑖{\\bf x}^{(1)}_{i^{*}}, since we are using dense connections in our GNN model we can sum this w⋅h​(li∗)⋅𝑤ℎsubscript𝑙superscript𝑖w\\cdot h(l_{i^{*}}) value directly to where the uniform label distribution was concatenated 𝐱i∗(1)=(Gc(𝐱i∗(0)),𝐱i∗(0))=(Gc(𝐱i∗(0)),(ϕ(xi∗),h(l)i∗)){\\bf x}^{(1)}_{i^{*}}=(\\text{Gc}({\\bf x}^{(0)}_{i^{*}}),{\\bf x}^{(0)}_{i^{*}})=(\\text{Gc}({\\bf x}^{(0)}_{i^{*}}),(\\phi(x_{i^{*}}),h(l{{}_{i^{*}}}))) ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_36", "text": " After the label has been summed to the current node, the information is forward propagated. This attention part is trained end-to-end with the rest of the network by backpropagating the loss from the output of the GNN. ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_37", "text": " For the few-shot, semi-supervised and active learning experiments we used the Omniglot dataset presented by Lake et al. (2015) and Mini-Imagenet dataset introduced by Vinyals et al. (2016) which is a small version of ILSVRC-12 Krizhevsky et al. (2012). All experiments are based on the q𝑞q-shot, K𝐾K-way setting. For all experiments we used the same values q𝑞q-shot and K𝐾K-way for both training and testing. Code available at: https://github.com/vgsatorras/few-shot-gnn ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_38", "text": " Omniglot is a dataset of 1623 characters from 50 different alphabets, each character/class has been drawn by 20 different people. Following Vinyals et al. (2016) implementation we split the dataset into 1200 classes for training and the remaining 423 for testing. We augmented the dataset by multiples of 90 degrees as proposed by Santoro et al. (2016). ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_39", "text": " Inspired by the embedding architecture from Vinyals et al. (2016), following Mishra et al. (2017), a CNN was used as an embedding ϕitalic-ϕ\\phi function consisting of four stacked blocks of {{\\{3×\\times3-convolutional layer with 64 filters, batch-normalization, 2×\\times2 max-pooling, leaky-relu} the output is passed through a fully connected layer resulting in a 64-dimensional embedding. 
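Putting the active-learning query described above together: the scalar scores g(x^{(1)}) of the r unlabelled nodes pass through a softmax, one node is selected (a multinomial sample at train time, the argmax at test time), and its one-hot label is scaled by the attention weight w before being summed back into that node's representation. The following sketch is illustrative, not the authors' implementation, and its toy inputs are made up.

```python
# Active-learning query: softmax attention over the r scalar scores, one hard query,
# and the queried one-hot label scaled by its attention weight w.
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def query_label(scores, labels_onehot, train=True, rng=np.random.default_rng(0)):
    """scores: (r,) outputs of g; labels_onehot: (r, K) ground-truth labels of those nodes."""
    attention = softmax(scores)
    if train:
        idx = rng.choice(len(scores), p=attention)     # sample proportionally to attention
    else:
        idx = int(np.argmax(attention))                # keep the maximum at test time
    w = attention[idx]
    return idx, w * labels_onehot[idx]                 # scaled label w * h(l_{i*}) for node idx

scores = np.array([0.2, 1.5, -0.3])                    # g applied to 3 unlabelled nodes
labels = np.eye(5)[[0, 2, 4]]                          # their (hidden) labels, K = 5 classes
print(query_label(scores, labels, train=False))
```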
For the GNN we used 3 blocks each of them composed by 1) a module that computes the adjacency matrix and 2) a graph convolutional layer. A more detailed description of each block can be found at Figure 3. ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_40", "text": " Mini-Imagenet is a more challenging dataset for one-shot learning proposed by Vinyals et al. (2016) derived from the original ILSVRC-12 dataset Krizhevsky et al. (2012). It consists of 84×\\times84 RGB images from 100 different classes with 600 samples per class. It was created with the purpose of increasing the complexity for one-shot tasks while keeping the simplicity of a light size dataset, that makes it suitable for fast prototyping. We used the splits proposed by Ravi & Larochelle (2016) of 64 classes for training, 16 for validation and 20 for testing. Using 64 classes for training, and the 16 validation classes only for early stopping and parameter tuning. ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_41", "text": " The embedding architecture used for Mini-Imagenet is formed by 4 convolutional layers followed by a fully-connected layer resulting in a 128 dimensional embedding. This light architecture is useful for fast prototyping: 1×{\\times\\{3×\\times3-conv. layer (64 filters), batch normalization, max pool(2,2)22(2,2), leaky relu}}\\}, 1×{\\times\\{3×\\times3-conv. layer (96 filters), batch normalization, max pool(2,2)22(2,2), leaky relu}}\\}, 1×{\\times\\{3×\\times3-conv. layer (128 filters), batch normalization, max pool(2,2)22(2,2), leaky relu, dropout(0.5)}(0.5)\\}, 1×{\\times\\{3×\\times3-conv. layer (256 filters), batch normalization, max pool(2,2)22(2,2), leaky relu, dropout(0.5)}(0.5)\\}, 1×{\\times\\{ fc-layer (128 filters), batch normalization}}\\}. The two dropout layers are useful to avoid overfitting the GNN in Mini-Imagenet dataset. The GNN architecture is similar than for Omniglot, it is formed by 3 blocks, each block is described at Figure 3. ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_42", "text": " Few-shot learning experiments for Omniglot and Mini-Imagenet are presented at Table 1 and Table 2 respectively. ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_43", "text": " We evaluate our model by performing different q-shot, K-way experiments on both datasets. For every few-shot task 𝒯𝒯\\mathcal{T}, we sample K random classes from the dataset, and from each class we sample q random samples. An extra sample to classify is chosen from one of that K classes. ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_44", "text": " Omniglot: The GNN method is providing competitive results while still remaining simpler than other methods. State of the art results are reached in the 5-Way and 20-way 1-shot experiments. In the 20-Way 1-shot setting the GNN is providing slightly better results than Munkhdalai & Yu (2017) while still being a more simple approach. The TCML approach from Mishra et al. (2017) is in the same confidence interval for 3 out of 4 experiments, but it is slightly better for the 20-Way 5-shot, although the number of parameters is reduced from ∼similar-to\\sim5M (TCML) to ∼similar-to\\sim300K (3 layers GNN). 
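For reference, an illustrative PyTorch sketch of the Mini-Imagenet embedding phi described above: four {3x3 conv, batch norm, 2x2 max-pool, leaky ReLU} blocks with 64/96/128/256 filters, dropout(0.5) on the last two blocks, then a fully connected 128-d output with batch norm. Padding, the leaky-ReLU slope, and all layer names are assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

def block(c_in, c_out, dropout=0.0):
    layers = [nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
              nn.BatchNorm2d(c_out), nn.MaxPool2d(2), nn.LeakyReLU()]
    if dropout > 0:
        layers.append(nn.Dropout(dropout))
    return nn.Sequential(*layers)

embedding = nn.Sequential(
    block(3, 64), block(64, 96),
    block(96, 128, dropout=0.5), block(128, 256, dropout=0.5),
    nn.Flatten(),
    nn.Linear(256 * 5 * 5, 128), nn.BatchNorm1d(128),   # 84x84 input -> 5x5 feature map
)

x = torch.randn(4, 3, 84, 84)                            # a batch of Mini-Imagenet images
print(embedding(x).shape)                                # torch.Size([4, 128])
```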
", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_45", "text": " At Mini-Imagenet table we are also presenting a baseline ”Our metric learning + KNN” where no information has been aggregated among nodes, it is a K-nearest neighbors applied on top of the pair-wise learnable metric φθ​(xi(0),xj(0))subscript𝜑𝜃subscriptsuperscriptx0𝑖subscriptsuperscriptx0𝑗\\varphi_{\\theta}(\\textbf{x}^{(0)}_{i},\\textbf{x}^{(0)}_{j}) and trained end-to-end, this learnable metric is competitive by itself compared to other state of the art methods. Even so, a significant improvement (from 64.02% to 66.41%) can be seen for the 5-shot 5-Way Mini-Imagenet setting when aggregating information among nodes by using the full GNN architecture. A variety of embedding functions ϕitalic-ϕ\\phi are used among the different papers for Mini-Imagenet experiments, in our case we are using a simple network of 4 conv. layers followed by a fully connected layer (Section 6.1.2) which served us to compare between Our GNN and Our metric learning + KNN and it is useful for fast prototyping. More complex embeddings have proven to produce better results, at Mishra et al. (2017) a deep residual network is used as embedding network ϕitalic-ϕ\\phi increasing the accuracy considerably. Regarding the TCML architecture in Mini-Imagenet, the number of parameters is reduced from ∼similar-to\\sim11M (TCML) to ∼similar-to\\sim400K (3 layers GNN). ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_46", "text": " Semi-supervised experiments are performed on the 5-way 5-shot setting. Different results are presented when 20% and 40% of the samples are labeled. The labeled samples are balanced among classes in all experiments, in other words, all the classes have the same amount of labeled and unlabeled samples. ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_47", "text": " Two strategies can be seen at Tables 3 and 4. ”GNN - Trained only with labeled” is equivalent to the supervised few-shot setting, for example, in the 5-Way 5-shot 20%-labeled setting, this method is equivalent to the 5-way 1-shot learning setting since it is ignoring the unlabeled samples. ”GNN - Semi supervised” is the actual semi-supervised method, for example, in the 5-Way 5-shot 20%-labeled setting, the GNN receives as input 1 labeled sample per class and 4 unlabeled samples per class. ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_48", "text": " Omniglot results are presented at Table 3, for this scenario we observe that the accuracy improvement is similar when adding images than when adding labels. The GNN is able to extract information from the input distribution of unlabeled samples such that only using 20% of the labels in a 5-shot semi-supervised environment we get same results as in the 40% supervised setting. ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_49", "text": " In Mini-Imagenet experiments, Table 4, we also notice an improvement when using semi-supervised data although it is not as significant as in Omniglot. The distribution of Mini-Imagenet images is more complex than for Omniglot. In spite of it, the GNN manages to improve by ∼similar-to\\sim2% in the 20% and 40% settings. 
", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_50", "text": " We performed Active Learning experiments on the 5-Way 5-shot set-up when 20% of the samples are labeled. In this scenario our network will query for the label of one sample from the unlabeled ones. The results are compared with the Random baseline where the network chooses a random sample to be labeled instead of one that maximally reduces the loss of the classification task 𝒯𝒯\\mathcal{T}. ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_51", "text": " Results are shown at Table 5. The results of the GNN-Random criterion are close to the Semi-supervised results for 20%-labeled samples from Tables 3 and 4. It means that selecting one random label practically does not improve the accuracy at all. When using the GNN-AL learned criterion, we notice an improvement of ∼3.4%similar-toabsent3.4%\\sim\\text{3.4\\%} for Mini-Imagenet, it means that the GNN manages to correctly choose a more informative sample than a random one. In Omniglot the improvement is smaller since the accuracy is almost saturated and the improving margin is less. ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_52", "text": " This paper explored graph neural representations for few-shot, semi-supervised and active learning. From the meta-learning perspective, these tasks become supervised learning problems where the input is given by a collection or set of elements, whose relational structure can be leveraged with neural message passing models. In particular, stacked node and edge features generalize the contextual similarity learning underpinning previous few-shot learning models. ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_53", "text": " The graph formulation is helpful to unify several training setups (few-shot, active, semi-supervised) under the same framework, a necessary step towards the goal of having a single learner which is able to operate simultaneously in different regimes (stream of labels with few examples per class, or stream of examples with few labels). This general goal requires scaling up graph models to millions of nodes, motivating graph hierarchical and coarsening approaches Defferrard et al. (2016). ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_54", "text": " Another future direction is to generalize the scope of Active Learning, to include e.g. the ability to ask questions Rothe et al. (2017), or in reinforcement learning setups, where few-shot learning is critical to adapt to non-stationary environments. ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" }, { "id": "1711.04043_all_55", "text": " This work was partly supported by Samsung Electronics (Improving Deep Learning using Latent Structure). ", "title": "FEW-SHOT LEARNING WITH GRAPH NEURAL NET- WORKS" } ]
Historically, which architectures have been used for supervised sequence learning tasks?
Recurrent neural networks using the Long Short Term Memory (LSTM) architecture have been used for supervised sequence learning tasks [0].
[ 0 ]
[ { "id": "1502.04681_all_0", "text": " Understanding temporal sequences is important for solving many problems in the AI-set. Recently, recurrent neural networks using the Long Short Term Memory (LSTM) architecture (Hochreiter & Schmidhuber, 1997) have been used successfully to perform various supervised sequence learning tasks, such as speech recognition (Graves & Jaitly, 2014), machine translation (Sutskever et al., 2014; Cho et al., 2014), and caption generation for images (Vinyals et al., 2014). They have also been applied on videos for recognizing actions and generating natural language descriptions (Donahue et al., 2014). A general sequence to sequence learning framework was described by Sutskever et al. (2014) in which a recurrent network is used to encode a sequence into a fixed length representation, and then another recurrent network is used to decode a sequence out of that representation. In this work, we apply and extend this framework to learn representations of sequences of images. We choose to work in the unsupervised setting where we only have access to a dataset of unlabelled videos. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_1", "text": " Videos are an abundant and rich source of visual information and can be seen as a window into the physics of the world we live in, showing us examples of what constitutes objects, how objects move against backgrounds, what happens when cameras move and how things get occluded. Being able to learn a representation that disentangles these factors would help in making intelligent machines that can understand and act in their environment. Additionally, learning good video representations is essential for a number of useful tasks, such as recognizing actions and gestures. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_2", "text": " Supervised learning has been extremely successful in learning good visual representations that not only produce good results at the task they are trained for, but also transfer well to other tasks and datasets. Therefore, it is natural to extend the same approach to learning video representations. This has led to research in 3D convolutional nets (Ji et al., 2013; Tran et al., 2014), different temporal fusion strategies (Karpathy et al., 2014) and exploring different ways of presenting visual information to convolutional nets (Simonyan & Zisserman, 2014a). However, videos are much higher dimensional entities compared to single images. Therefore, it becomes increasingly difficult to do credit assignment and learn long range structure, unless we collect much more labelled data or do a lot of feature engineering (for example computing the right kinds of flow features) to keep the dimensionality low. The costly work of collecting more labelled data and the tedious work of doing more clever engineering can go a long way in solving particular problems, but this is ultimately unsatisfying as a machine learning solution. This highlights the need for using unsupervised learning to find and represent structure in videos. Moreover, videos have a lot of structure in them (spatial and temporal regularities) which makes them particularly well suited as a domain for building unsupervised learning models. 
", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_3", "text": " When designing any unsupervised learning model, it is crucial to have the right inductive biases and choose the right objective function so that the learning signal points the model towards learning useful features. In this paper, we use the LSTM Encoder-Decoder framework to learn video representations. The key inductive bias here is that the same operation must be applied at each time step to propagate information to the next step. This enforces the fact that the physics of the world remains the same, irrespective of input. The same physics acting on any state, at any time, must produce the next state. Our model works as follows. The Encoder LSTM runs through a sequence of frames to come up with a representation. This representation is then decoded through another LSTM to produce a target sequence. We consider different choices of the target sequence. One choice is to predict the same sequence as the input. The motivation is similar to that of autoencoders – we wish to capture all that is needed to reproduce the input but at the same time go through the inductive biases imposed by the model. Another option is to predict the future frames. Here the motivation is to learn a representation that extracts all that is needed to extrapolate the motion and appearance beyond what has been observed. These two natural choices can also be combined. In this case, there are two decoder LSTMs – one that decodes the representation into the input sequence and another that decodes the same representation to predict the future. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_4", "text": " The inputs to the model can, in principle, be any representation of individual video frames. However, for the purposes of this work, we limit our attention to two kinds of inputs. The first is image patches. For this we use natural image patches as well as a dataset of moving MNIST digits. The second is high-level “percepts” extracted by applying a convolutional net trained on ImageNet. These percepts are the states of last (and/or second-to-last) layers of rectified linear hidden states from a convolutional neural net model. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_5", "text": " In order to evaluate the learned representations we qualitatively analyze the reconstructions and predictions made by the model. For a more quantitative evaluation, we use these LSTMs as initializations for the supervised task of action recognition. If the unsupervised learning model comes up with useful representations then the classifier should be able to perform better, especially when there are only a few labelled examples. We find that this is indeed the case. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_6", "text": " The first approaches to learning representations of videos in an unsupervised way were based on ICA (van Hateren & Ruderman, 1998; Hurri & Hyvärinen, 2003). Le et al. (2011) approached this problem using multiple layers of Independent Subspace Analysis modules. Generative models for understanding transformations between pairs of consecutive images are also well studied (Memisevic, 2013; Memisevic & Hinton, 2010; Susskind et al., 2011). This work was extended recently by Michalski et al. (2014) to model longer sequences. 
", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_7", "text": " Recently, Ranzato et al. (2014) proposed a generative model for videos. The model uses a recurrent neural network to predict the next frame or interpolate between frames. In this work, the authors highlight the importance of choosing the right loss function. It is argued that squared loss in input space is not the right objective because it does not respond well to small distortions in input space. The proposed solution is to quantize image patches into a large dictionary and train the model to predict the identity of the target patch. This does solve some of the problems of squared loss but it introduces an arbitrary dictionary size into the picture and altogether removes the idea of patches being similar or dissimilar to one other. Designing an appropriate loss function that respects our notion of visual similarity is a very hard problem (in a sense, almost as hard as the modeling problem we want to solve in the first place). Therefore, in this paper, we use the simple squared loss objective function as a starting point and focus on designing an encoder-decoder RNN architecture that can be used with any loss function. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_8", "text": " In this section, we describe several variants of our LSTM Encoder-Decoder model. The basic unit of our network is the LSTM cell block. Our implementation of LSTMs follows closely the one discussed by Graves (2013). ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_9", "text": " In this section we briefly describe the LSTM unit which is the basic building block of our model. The unit is shown in Fig. 1 (reproduced from Graves (2013)). ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_10", "text": " Each LSTM unit has a cell which has a state ctsubscript𝑐𝑡c_{t} at time t𝑡t. This cell can be thought of as a memory unit. Access to this memory unit for reading or modifying it is controlled through sigmoidal gates – input gate itsubscript𝑖𝑡i_{t}, forget gate ftsubscript𝑓𝑡f_{t} and output gate otsubscript𝑜𝑡o_{t}. The LSTM unit operates as follows. At each time step it receives inputs from two external sources at each of the four terminals (the three gates and the input). The first source is the current frame 𝐱tsubscript𝐱𝑡{{\\bf x}_{t}}. The second source is the previous hidden states of all LSTM units in the same layer 𝐡t−1subscript𝐡𝑡1{\\bf h}_{t-1}. Additionally, each gate has an internal source, the cell state ct−1subscript𝑐𝑡1c_{t-1} of its cell block. The links between a cell and its own gates are called peephole connections. The inputs coming from different sources get added up, along with a bias. The gates are activated by passing their total input through the logistic function. The total input at the input terminal is passed through the tanh non-linearity. The resulting activation is multiplied by the activation of the input gate. This is then added to the cell state after multiplying the cell state by the forget gate’s activation ftsubscript𝑓𝑡f_{t}. The final output from the LSTM unit htsubscriptℎ𝑡h_{t} is computed by multiplying the output gate’s activation otsubscript𝑜𝑡o_{t} with the updated cell state passed through a tanh non-linearity. 
These updates are summarized for a layer of LSTM units as follows 𝐢tsubscript𝐢𝑡\\displaystyle{\\bf i}_{t} =\\displaystyle= σ​(Wx​i​𝐱t+Wh​i​𝐡t−1+Wc​i​𝐜t−1+𝐛i),𝜎subscript𝑊𝑥𝑖subscript𝐱𝑡subscript𝑊ℎ𝑖subscript𝐡𝑡1subscript𝑊𝑐𝑖subscript𝐜𝑡1subscript𝐛𝑖\\displaystyle\\sigma\\left(W_{xi}{\\bf x}_{t}+W_{hi}{\\bf h}_{t-1}+W_{ci}{\\bf c}_{t-1}+{\\bf b}_{i}\\right), 𝐟tsubscript𝐟𝑡\\displaystyle{\\bf f}_{t} =\\displaystyle= σ​(Wx​f​𝐱t+Wh​f​𝐡t−1+Wc​f​𝐜t−1+𝐛f),𝜎subscript𝑊𝑥𝑓subscript𝐱𝑡subscript𝑊ℎ𝑓subscript𝐡𝑡1subscript𝑊𝑐𝑓subscript𝐜𝑡1subscript𝐛𝑓\\displaystyle\\sigma\\left(W_{xf}{\\bf x}_{t}+W_{hf}{\\bf h}_{t-1}+W_{cf}{\\bf c}_{t-1}+{\\bf b}_{f}\\right), 𝐜tsubscript𝐜𝑡\\displaystyle{\\bf c}_{t} =\\displaystyle= 𝐟t​𝐜t−1+𝐢t​tanh⁡(Wx​c​𝐱t+Wh​c​𝐡t−1+𝐛c),subscript𝐟𝑡subscript𝐜𝑡1subscript𝐢𝑡subscript𝑊𝑥𝑐subscript𝐱𝑡subscript𝑊ℎ𝑐subscript𝐡𝑡1subscript𝐛𝑐\\displaystyle{\\bf f}_{t}{\\bf c}_{t-1}+{\\bf i}_{t}\\tanh\\left(W_{xc}{\\bf x}_{t}+W_{hc}{\\bf h}_{t-1}+{\\bf b}_{c}\\right), 𝐨tsubscript𝐨𝑡\\displaystyle{\\bf o}_{t} =\\displaystyle= σ​(Wx​o​𝐱t+Wh​o​𝐡t−1+Wc​o​𝐜t+𝐛o),𝜎subscript𝑊𝑥𝑜subscript𝐱𝑡subscript𝑊ℎ𝑜subscript𝐡𝑡1subscript𝑊𝑐𝑜subscript𝐜𝑡subscript𝐛𝑜\\displaystyle\\sigma\\left(W_{xo}{\\bf x}_{t}+W_{ho}{\\bf h}_{t-1}+W_{co}{\\bf c}_{t}+{\\bf b}_{o}\\right), 𝐡tsubscript𝐡𝑡\\displaystyle{\\bf h}_{t} =\\displaystyle= 𝐨t​tanh⁡(𝐜t).subscript𝐨𝑡subscript𝐜𝑡\\displaystyle{\\bf o}_{t}\\tanh({\\bf c}_{t}). ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_11", "text": " Note that all Wc⁣∙subscript𝑊𝑐∙W_{c\\bullet} matrices are diagonal, whereas the rest are dense. The key advantage of using an LSTM unit over a traditional neuron in an RNN is that the cell state in an LSTM unit sums activities over time. Since derivatives distribute over sums, the error derivatives don’t vanish quickly as they get sent back into time. This makes it easy to do credit assignment over long sequences and discover long-range features. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_12", "text": " In this section, we describe a model that uses Recurrent Neural Nets (RNNs) made of LSTM units to do unsupervised learning. The model consists of two RNNs – the encoder LSTM and the decoder LSTM as shown in Fig. 2. The input to the model is a sequence of vectors (image patches or features). The encoder LSTM reads in this sequence. After the last input has been read, the decoder LSTM takes over and outputs a prediction for the target sequence. The target sequence is same as the input sequence, but in reverse order. Reversing the target sequence makes the optimization easier because the model can get off the ground by looking at low range correlations. This is also inspired by how lists are represented in LISP. The encoder can be seen as creating a list by applying the cons function on the previously constructed list and the new input. The decoder essentially unrolls this list, with the hidden to output weights extracting the element at the top of the list (car function) and the hidden to hidden weights extracting the rest of the list (cdr function). Therefore, the first element out is the last element in. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_13", "text": " The decoder can be of two kinds – conditional or unconditioned. A conditional decoder receives the last generated output frame as input, i.e., the dotted input in Fig. 2 is present. An unconditioned decoder does not receive that input. 
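To make the update equations above concrete, here is a compact numpy sketch of one peephole LSTM step; the W_c peephole terms act elementwise, reflecting the note that those matrices are diagonal. The parameter packing, shapes and initialization are illustrative (uniform with scale 1/sqrt(fan-in) and zero biases/peepholes, matching the training details given later), not the paper's implementation.

```python
# One step of the peephole LSTM: gates i, f, o see the input, the previous hidden state
# and (elementwise, i.e. diagonal) the cell state; the cell sums over time; h = o * tanh(c).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, p):
    i = sigmoid(x @ p["Wxi"] + h_prev @ p["Whi"] + p["wci"] * c_prev + p["bi"])
    f = sigmoid(x @ p["Wxf"] + h_prev @ p["Whf"] + p["wcf"] * c_prev + p["bf"])
    c = f * c_prev + i * np.tanh(x @ p["Wxc"] + h_prev @ p["Whc"] + p["bc"])
    o = sigmoid(x @ p["Wxo"] + h_prev @ p["Who"] + p["wco"] * c + p["bo"])
    h = o * np.tanh(c)
    return h, c

# toy usage: input dim 8, hidden dim 16
rng = np.random.default_rng(0)
d_in, d_h = 8, 16
p = {n: rng.uniform(-1, 1, size=(d_in, d_h)) / np.sqrt(d_in) for n in ("Wxi", "Wxf", "Wxc", "Wxo")}
p.update({n: rng.uniform(-1, 1, size=(d_h, d_h)) / np.sqrt(d_h) for n in ("Whi", "Whf", "Whc", "Who")})
p.update({n: np.zeros(d_h) for n in ("wci", "wcf", "wco", "bi", "bf", "bc", "bo")})
h, c = lstm_step(rng.normal(size=d_in), np.zeros(d_h), np.zeros(d_h), p)
print(h.shape, c.shape)    # (16,) (16,)
```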
This is discussed in more detail in Sec. 2.4. Fig. 2 shows a single layer LSTM Autoencoder. The architecture can be extend to multiple layers by stacking LSTMs on top of each other. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_14", "text": " Why should this learn good features? The state of the encoder LSTM after the last input has been read is the representation of the input video. The decoder LSTM is being asked to reconstruct back the input sequence from this representation. In order to do so, the representation must retain information about the appearance of the objects and the background as well as the motion contained in the video. However, an important question for any autoencoder-style model is what prevents it from learning an identity mapping and effectively copying the input to the output. In that case all the information about the input would still be present but the representation will be no better than the input. There are two factors that control this behaviour. First, the fact that there are only a fixed number of hidden units makes it unlikely that the model can learn trivial mappings for arbitrary length input sequences. Second, the same LSTM operation is used to decode the representation recursively. This means that the same dynamics must be applied on the representation at any stage of decoding. This further prevents the model from learning an identity mapping. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_15", "text": " Another natural unsupervised learning task for sequences is predicting the future. This is the approach used in language models for modeling sequences of words. The design of the Future Predictor Model is same as that of the Autoencoder Model, except that the decoder LSTM in this case predicts frames of the video that come after the input sequence (Fig. 3). Ranzato et al. (2014) use a similar model but predict only the next frame at each time step. This model, on the other hand, predicts a long sequence into the future. Here again we can consider two variants of the decoder – conditional and unconditioned. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_16", "text": " Why should this learn good features? In order to predict the next few frames correctly, the model needs information about which objects and background are present and how they are moving so that the motion can be extrapolated. The hidden state coming out from the encoder will try to capture this information. Therefore, this state can be seen as a representation of the input sequence. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_17", "text": " For each of these two models, we can consider two possibilities - one in which the decoder LSTM is conditioned on the last generated frame and the other in which it is not. In the experimental section, we explore these choices quantitatively. Here we briefly discuss arguments for and against a conditional decoder. A strong argument in favour of using a conditional decoder is that it allows the decoder to model multiple modes in the target sequence distribution. Without that, we would end up averaging the multiple modes in the low-level input space. However, this is an issue only if we expect multiple modes in the target sequence distribution. 
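A hedged PyTorch sketch of the encoder-decoder pattern just described: the encoder LSTM's final state is the representation of the input video, and an unconditioned decoder unrolls from that state (here fed zeros) to emit the target sequence, which is the reversed input for the Autoencoder and the next frames for the Future Predictor; the Composite Model attaches two such decoders to the same encoder state. Class and argument names are illustrative, and the hidden size is kept small for the toy run.

```python
import torch
import torch.nn as nn

class Seq2SeqLSTM(nn.Module):
    def __init__(self, frame_dim, hidden=256):
        super().__init__()
        self.encoder = nn.LSTM(frame_dim, hidden, batch_first=True)
        self.decoder = nn.LSTM(frame_dim, hidden, batch_first=True)
        self.readout = nn.Linear(hidden, frame_dim)

    def forward(self, frames, target_len):
        _, state = self.encoder(frames)                        # representation: final (h, c)
        b, _, d = frames.shape
        zeros = torch.zeros(b, target_len, d)                  # unconditioned decoder input
        out, _ = self.decoder(zeros, state)
        return self.readout(out)

model = Seq2SeqLSTM(frame_dim=64 * 64)
frames = torch.rand(2, 10, 64 * 64)                            # 10 input frames of 64x64 pixels
recon = model(frames, target_len=10)                           # autoencoder-style decoding
loss = ((recon - torch.flip(frames, dims=[1])) ** 2).mean()    # target = input in reverse order
print(recon.shape, float(loss) >= 0)
```

A conditional decoder would instead feed its own previous output frame back in at each step rather than zeros.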
For the LSTM Autoencoder, there is only one correct target and hence a unimodal target distribution. But for the LSTM Future Predictor there is a possibility of multiple targets given an input because even if we assume a deterministic universe, everything needed to predict the future will not necessarily be observed in the input. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_18", "text": " There is also an argument against using a conditional decoder from the optimization point-of-view. There are strong short-range correlations in video data, for example, most of the content of a frame is same as the previous one. If the decoder was given access to the last few frames while generating a particular frame at training time, it would find it easy to pick up on these correlations. There would only be a very small gradient that tries to fix up the extremely subtle errors that require long term knowledge about the input sequence. In an unconditioned decoder, this input is removed and the model is forced to look for information deep inside the encoder. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_19", "text": " The two tasks – reconstructing the input and predicting the future can be combined to create a composite model as shown in Fig. 4. Here the encoder LSTM is asked to come up with a state from which we can both predict the next few frames as well as reconstruct the input. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_20", "text": " This composite model tries to overcome the shortcomings that each model suffers on its own. A high-capacity autoencoder would suffer from the tendency to learn trivial representations that just memorize the inputs. However, this memorization is not useful at all for predicting the future. Therefore, the composite model cannot just memorize information. On the other hand, the future predictor suffers form the tendency to store information only about the last few frames since those are most important for predicting the future, i.e., in order to predict vtsubscript𝑣𝑡v_{t}, the frames {vt−1,…,vt−k}subscript𝑣𝑡1…subscript𝑣𝑡𝑘\\{v_{t-1},\\ldots,v_{t-k}\\} are much more important than v0subscript𝑣0v_{0}, for some small value of k𝑘k. Therefore the representation at the end of the encoder will have forgotten about a large part of the input. But if we ask the model to also predict all of the input sequence, then it cannot just pay attention to the last few frames. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_21", "text": " We design experiments to accomplish the following objectives: • Get a qualitative understanding of what the LSTM learns to do. • Measure the benefit of initializing networks for supervised learning tasks with the weights found by unsupervised learning, especially with very few training examples. • Compare the different proposed models - Autoencoder, Future Predictor and Composite models and their conditional variants. • Compare with state-of-the-art action recognition benchmarks. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_22", "text": " We use the UCF-101 and HMDB-51 datasets for supervised tasks. The UCF-101 dataset (Soomro et al., 2012) contains 13,320 videos with an average length of 6.2 seconds belonging to 101 different action categories. 
The dataset has 3 standard train/test splits with the training set containing around 9,500 videos in each split (the rest are test). The HMDB-51 dataset (Kuehne et al., 2011) contains 5100 videos belonging to 51 different action categories. Mean length of the videos is 3.2 seconds. This also has 3 train/test splits with 3570 videos in the training set and rest in test. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_23", "text": " To train the unsupervised models, we used a subset of the Sports-1M dataset (Karpathy et al., 2014), that contains 1 million YouTube clips. Even though this dataset is labelled for actions, we did not do any supervised experiments on it because of logistical constraints with working with such a huge dataset. We instead collected 300 hours of video by randomly sampling 10 second clips from the dataset. It is possible to collect better samples if instead of choosing randomly, we extracted videos where a lot of motion is happening and where there are no shot boundaries. However, we did not do so in the spirit of unsupervised learning, and because we did not want to introduce any unnatural bias in the samples. We also used the supervised datasets (UCF-101 and HMDB-51) for unsupervised training. However, we found that using them did not give any significant advantage over just using the YouTube videos. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_24", "text": " We extracted percepts using the convolutional neural net model of Simonyan & Zisserman (2014b). The videos have a resolution of 240 ×\\times 320 and were sampled at almost 30 frames per second. We took the central 224 ×\\times 224 patch from each frame and ran it through the convnet. This gave us the RGB percepts. Additionally, for UCF-101, we computed flow percepts by extracting flows using the Brox method and training the temporal stream convolutional network as described by Simonyan & Zisserman (2014a). We found that the fc6 features worked better than fc7 for single frame classification using both RGB and flow percepts. Therefore, we used the 4096-dimensional fc6 layer as the input representation of our data. Besides these percepts, we also trained the proposed models on 32 ×\\times 32 patches of pixels. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_25", "text": " All models were trained using backprop on a single NVIDIA Titan GPU. A two layer 2048 unit Composite model that predicts 13 frames and reconstructs 16 frames took 18-20 hours to converge on 300 hours of percepts. We initialized weights by sampling from a uniform distribution whose scale was set to 1/sqrt(fan-in). Biases at all the gates were initialized to zero. Peep-hole connections were initialized to zero. The supervised classifiers trained on 16 frames took 5-15 minutes to converge. The code can be found at https://github.com/emansim/unsupervised-videos. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_26", "text": " The aim of this set of experiments to visualize the properties of the proposed models. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_27", "text": " Experiments on MNIST We first trained our models on a dataset of moving MNIST digits. In this dataset, each video was 20 frames long and consisted of two digits moving inside a 64 ×\\times 64 patch. 
The digits were chosen randomly from the training set and placed initially at random locations inside the patch. Each digit was assigned a velocity whose direction was chosen uniformly randomly on a unit circle and whose magnitude was also chosen uniformly at random over a fixed range. The digits bounced-off the edges of the 64 ×\\times 64 frame and overlapped if they were at the same location. The reason for working with this dataset is that it is infinite in size and can be generated quickly on the fly. This makes it possible to explore the model without expensive disk accesses or overfitting issues. It also has interesting behaviours due to occlusions and the dynamics of bouncing off the walls. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_28", "text": " We first trained a single layer Composite Model. Each LSTM had 2048 units. The encoder took 10 frames as input. The decoder tried to reconstruct these 10 frames and the future predictor attempted to predict the next 10 frames. We used logistic output units with a cross entropy loss function. Fig. 5 shows two examples of running this model. The true sequences are shown in the first two rows. The next two rows show the reconstruction and future prediction from the one layer Composite Model. It is interesting to note that the model figures out how to separate superimposed digits and can model them even as they pass through each other. This shows some evidence of disentangling the two independent factors of variation in this sequence. The model can also correctly predict the motion after bouncing off the walls. In order to see if adding depth helps, we trained a two layer Composite Model, with each layer having 2048 units. We can see that adding depth helps the model make better predictions. Next, we changed the future predictor by making it conditional. We can see that this model makes sharper predictions. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_29", "text": " Experiments on Natural Image Patches Next, we tried to see if our models can also work with natural image patches. For this, we trained the models on sequences of 32 ×\\times 32 natural image patches extracted from the UCF-101 dataset. In this case, we used linear output units and the squared error loss function. The input was 16 frames and the model was asked to reconstruct the 16 frames and predict the future 13 frames. Fig. 6 shows the results obtained from a two layer Composite model with 2048 units. We found that the reconstructions and the predictions are both very blurry. We then trained a bigger model with 4096 units. The outputs from this model are also shown in Fig. 6. We can see that the reconstructions get much sharper. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_30", "text": " Generalization over time scales In the next experiment, we test if the model can work at time scales that are different than what it was trained on. We take a one hidden layer unconditioned Composite Model trained on moving MNIST digits. The model has 2048 LSTM units and looks at a 64 ×\\times 64 input. It was trained on input sequences of 10 frames to reconstruct those 10 frames as well as predict 10 frames into the future. In order to test if the future predictor is able to generalize beyond 10 frames, we let the model run for 100 steps into the future. Fig. 
7(a) shows the pattern of activity in the LSTM units of the future predictor pathway for a randomly chosen test input. It shows the activity at each of the three sigmoidal gates (input, forget, output), the input (after the tanh non-linearity, before being multiplied by the input gate), the cell state and the final output (after being multiplied by the output gate). Even though the units are ordered randomly along the vertical axis, we can see that the dynamics has a periodic quality to it. The model is able to generate persistent motion for long periods of time. In terms of reconstruction, the model only outputs blobs after the first 15 frames, but the motion is relatively well preserved. More results, including long range future predictions over hundreds of time steps can see been at http://www.cs.toronto.edu/~nitish/unsupervised_video. To show that setting up a periodic behaviour is not trivial, Fig. 7(b) shows the activity from a randomly initialized future predictor. Here, the LSTM state quickly converges and the outputs blur completely. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_31", "text": " Out-of-domain Inputs Next, we test this model’s ability to deal with out-of-domain inputs. For this, we test the model on sequences of one and three moving digits. The model was trained on sequences of two moving digits, so it has never seen inputs with just one digit or three digits. Fig. 8 shows the reconstruction and future prediction results. For one moving digit, we can see that the model can do a good job but it really tries to hallucinate a second digit overlapping with the first one. The second digit shows up towards the end of the future reconstruction. For three digits, the model merges digits into blobs. However, it does well at getting the overall motion right. This highlights a key drawback of modeling entire frames of input in a single pass. In order to model videos with variable number of objects, we perhaps need models that not only have an attention mechanism in place, but can also learn to execute themselves a variable number of times and do variable amounts of computation. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_32", "text": " Visualizing Features Next, we visualize the features learned by this model. Fig. 9 shows the weights that connect each input frame to the encoder LSTM. There are four sets of weights. One set of weights connects the frame to the input units. There are three other sets, one corresponding to each of the three gates (input, forget and output). Each weight has a size of 64 ×\\times 64. A lot of features look like thin strips. Others look like higher frequency strips. It is conceivable that the high frequency features help in encoding the direction and velocity of motion. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_33", "text": " Fig. 10 shows the output features from the two LSTM decoders of a Composite Model. These correspond to the weights connecting the LSTM output units to the output layer. They appear to be somewhat qualitatively different from the input features shown in Fig. 9. There are many more output features that are local blobs, whereas those are rare in the input features. In the output features, the ones that do look like strips are much shorter than those in the input features. One way to interpret this is the following. 
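The qualitative experiments above, including the out-of-domain one- and three-digit tests, all rely on bouncing-digit clips built as in the moving MNIST setup: random initial positions, a velocity with uniformly random direction and a magnitude drawn from a fixed range, and reflection at the 64x64 frame boundary. A self-contained generator sketch, with random patches standing in for real MNIST digits:

```python
# Bouncing-digit clip generator in the spirit of the moving MNIST data: random positions,
# random velocity direction on the unit circle, magnitude from a fixed range, reflection
# off the 64x64 frame edges, and overlapping digits combined with an elementwise max.
import numpy as np

def bouncing_clip(num_digits=2, num_frames=20, size=64, digit=28, rng=np.random.default_rng(0)):
    sprites = rng.uniform(size=(num_digits, digit, digit))       # stand-ins for MNIST digits
    pos = rng.uniform(0, size - digit, size=(num_digits, 2))
    angle = rng.uniform(0, 2 * np.pi, size=num_digits)
    speed = rng.uniform(2.0, 5.0, size=num_digits)               # fixed range of magnitudes
    vel = speed[:, None] * np.stack([np.cos(angle), np.sin(angle)], axis=1)
    clip = np.zeros((num_frames, size, size))
    for t in range(num_frames):
        for d in range(num_digits):
            for axis in range(2):                                # bounce off the frame edges
                if not 0.0 <= pos[d, axis] + vel[d, axis] <= size - digit:
                    vel[d, axis] *= -1.0
            pos[d] += vel[d]
            r, c = int(pos[d, 0]), int(pos[d, 1])
            patch = clip[t, r:r + digit, c:c + digit]
            clip[t, r:r + digit, c:c + digit] = np.maximum(patch, sprites[d])
    return clip

print(bouncing_clip(num_digits=3).shape)                         # (20, 64, 64): out-of-domain clip
```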
The model needs to know about motion (which direction and how fast things are moving) from the input. This requires precise information about location (thin strips) and velocity (high frequency strips). But when it is generating the output, the model wants to hedge its bets so that it does not suffer a huge loss for predicting things sharply at the wrong place. This could explain why the output features have somewhat bigger blobs. The relative shortness of the strips in the output features can be explained by the fact that in the inputs, it does not hurt to have a longer feature than what is needed to detect a location because information is coarse-coded through multiple features. But in the output, the model may not want to put down a feature that is bigger than any digit because other units will have to conspire to correct for it. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_34", "text": " The aim of this set of experiments is to see if the features learned by unsupervised learning can help improve performance on supervised tasks. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_35", "text": " We trained a two layer Composite Model with 2048 hidden units with no conditioning on either decoders. The model was trained on percepts extracted from 300 hours of YouTube data. The model was trained to autoencode 16 frames and predict the next 13 frames. We initialize an LSTM classifier with the weights learned by the encoder LSTM from this model. The classifier is shown in Fig. 11. The output from each LSTM in the second layer goes into a softmax classifier that makes a prediction about the action being performed at each time step. Since only one action is being performed in each video in the datasets we consider, the target is the same at each time step. At test time, the predictions made at each time step are averaged. To get a prediction for the entire video, we average the predictions from all 16 frame blocks in the video with a stride of 8 frames. Using a smaller stride did not improve results. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_36", "text": " The baseline for comparing these models is an identical LSTM classifier but with randomly initialized weights. All classifiers used dropout regularization, where we dropped activations as they were communicated across layers but not through time within the same LSTM as proposed in Zaremba et al. (2014). We emphasize that this is a very strong baseline and does significantly better than just using single frames. Using dropout was crucial in order to train good baseline models especially with very few training examples. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_37", "text": " Fig. 12 compares three models - single frame classifier (logistic regression), baseline LSTM classifier and the LSTM classifier initialized with weights from the Composite Model as the number of labelled videos per class is varied. Note that having one labelled video means having many labelled 16 frame blocks. We can see that for the case of very few training examples, unsupervised learning gives a substantial improvement. For example, for UCF-101, the performance improves from 29.6% to 34.3% when training on only one labelled video. As the size of the labelled dataset grows, the improvement becomes smaller. 
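A small sketch of the evaluation protocol described above: per-time-step softmax predictions are averaged within each 16-frame block, and the block-level predictions are averaged over all blocks of the video taken with a stride of 8 frames. The function name and the toy inputs are illustrative.

```python
# Averaging protocol: mean the per-frame class probabilities inside each 16-frame block,
# then mean over all blocks taken with a stride of 8 frames to score the whole video.
import numpy as np

def video_prediction(per_frame_probs, block_len=16, stride=8):
    """per_frame_probs: (T, K) per-time-step softmax outputs for one video."""
    T = per_frame_probs.shape[0]
    starts = range(0, T - block_len + 1, stride)
    block_preds = [per_frame_probs[s:s + block_len].mean(axis=0) for s in starts]
    return np.mean(block_preds, axis=0)                 # (K,) video-level class probabilities

probs = np.random.default_rng(0).dirichlet(np.ones(101), size=64)   # 64 frames, 101 classes
print(int(video_prediction(probs).argmax()))
```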
Even for the full UCF-101 dataset we still get a considerable improvement from 74.5% to 75.8%. On HMDB-51, the improvement is from 42.8% to 44.0% for the full dataset (70 videos per class) and 14.4% to 19.1% for one video per class. Although, the improvement in classification by using unsupervised learning was not as big as we expected, we still managed to yield an additional improvement over a strong baseline. We discuss some avenues for improvements later. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_38", "text": " We further ran similar experiments on the optical flow percepts extracted from the UCF-101 dataset. A temporal stream convolutional net, similar to the one proposed by Simonyan & Zisserman (2014b), was trained on single frame optical flows as well as on stacks of 10 optical flows. This gave an accuracy of 72.2% and 77.5% respectively. Here again, our models took 16 frames as input, reconstructed them and predicted 13 frames into the future. LSTMs with 128 hidden units improved the accuracy by 2.1% to 74.3% for the single frame case. Bigger LSTMs did not improve results. By pretraining the LSTM, we were able to further improve the classification to 74.9% (±0.1plus-or-minus0.1\\pm 0.1). For stacks of 10 frames we improved very slightly to 77.7%. These results are summarized in Table 1. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_39", "text": " The aim of this set of experiments is to compare the different variants of the model proposed in this paper. Since it is always possible to get lower reconstruction error by copying the inputs, we cannot use input reconstruction error as a measure of how good a model is doing. However, we can use the error in predicting the future as a reasonable measure of how good the model is doing. Besides, we can use the performance on supervised tasks as a proxy for how good the unsupervised model is doing. In this section, we present results from these two analyses. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_40", "text": " Future prediction results are summarized in Table 2. For MNIST we compute the cross entropy of the predictions with respect to the ground truth, both of which are 64 ×\\times 64 patches. For natural image patches, we compute the squared loss. We see that the Composite Model always does a better job of predicting the future compared to the Future Predictor. This indicates that having the autoencoder along with the future predictor to force the model to remember more about the inputs actually helps predict the future better. Next, we can compare each model with its conditional variant. Here, we find that the conditional models perform better, as was also noted in Fig. 5. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_41", "text": " Next, we compare the models using performance on a supervised task. Table 3 shows the performance on action recognition achieved by finetuning different unsupervised learning models. Besides running the experiments on the full UCF-101 and HMDB-51 datasets, we also ran the experiments on small subsets of these to better highlight the case where we have very few training examples. We find that all unsupervised models improve over the baseline LSTM which is itself well-regularized by using dropout. The Autoencoder model seems to perform consistently better than the Future Predictor. 
The Composite model which combines the two does better than either one alone. Conditioning on the generated inputs does not seem to give a clear advantage over not doing so. The Composite Model with a conditional future predictor works the best, although its performance is almost same as that of the Composite Model. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_42", "text": " Finally, we compare our models to the state-of-the-art action recognition results. The performance is summarized in Table 4. The table is divided into three sets. The first set compares models that use only RGB data (single or multiple frames). The second set compares models that use explicitly computed flow features only. Models in the third set use both. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_43", "text": " On RGB data, our model performs at par with the best deep models. It performs 3% better than the LRCN model that also used LSTMs on top of convnet features111However, the improvement is only partially from unsupervised learning, since we used a better convnet model.. Our model performs better than C3D features that use a 3D convolutional net. However, when the C3D features are concatenated with fc6 percepts, they do slightly better than our model. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_44", "text": " The improvement for flow features over using a randomly initialized LSTM network is quite small. We believe this is atleast partly due to the fact that the flow percepts already capture a lot of the motion information that the LSTM would otherwise discover. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_45", "text": " When we combine predictions from the RGB and flow models, we obtain 84.3 accuracy on UCF-101. We believe further improvements can be made by running the model over different patch locations and mirroring the patches. Also, our model can be applied deeper inside the convnet instead of just at the top-level. That can potentially lead to further improvements. In this paper, we focus on showing that unsupervised training helps consistently across both datasets and across different sized training sets. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_46", "text": " We proposed models based on LSTMs that can learn good video representations. We compared them and analyzed their properties through visualizations. Moreover, we managed to get an improvement on supervised tasks. The best performing model was the Composite Model that combined an autoencoder and a future predictor. Conditioning on generated outputs did not have a significant impact on the performance for supervised tasks, however it made the future predictions look slightly better. The model was able to persistently generate motion well beyond the time scales it was trained for. However, it lost the precise object features rapidly after the training time scale. The features at the input and output layers were found to have some interesting properties. ", "title": "Unsupervised Learning of Video Representations using LSTMs" }, { "id": "1502.04681_all_47", "text": " To further get improvements for supervised tasks, we believe that the model can be extended by applying it convolutionally across patches of the video and stacking multiple layers of such models. 
Applying this model in the lower layers of a convolutional net could help extract motion information that would otherwise be lost across max-pooling layers. In our future work, we plan to build models based on these autoencoders from the bottom up instead of applying them only to percepts. ", "title": "Unsupervised Learning of Video Representations using LSTMs" } ]
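The evaluation protocol quoted in the passages above (per-timestep predictions averaged within each 16-frame block, then averaged over blocks taken with a stride of 8 frames) can be made concrete with a short sketch. The snippet below is illustrative only, not the authors' code; it assumes a hypothetical `classifier` callable that maps a block of frames to per-timestep class probabilities and a video with at least 16 frames.

```python
import numpy as np

def video_level_prediction(classifier, frames, block_len=16, stride=8):
    """Average per-timestep class probabilities inside each block,
    then average the block-level predictions over all blocks."""
    block_probs = []
    for start in range(0, len(frames) - block_len + 1, stride):
        block = frames[start:start + block_len]
        probs = classifier(block)            # assumed shape: (block_len, num_classes)
        block_probs.append(probs.mean(axis=0))
    return np.mean(block_probs, axis=0)      # video-level class distribution

# the predicted action for the whole video is the argmax of the returned distribution
```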
Why does using larger patches reduce localization accuracy?
Larger patches reduce localization accuracy because they require more max-pooling layers, and each pooling step lowers the spatial resolution at which the network can localize [2].
[ 2 ]
[ { "id": "1505.04597_all_0", "text": " In the last two years, deep convolutional networks have outperformed the state of the art in many visual recognition tasks, e.g. (7, 3). While convolutional networks have already existed for a long time , their success was limited due to the size of the available training sets and the size of the considered networks. The breakthrough by Krizhevsky et al.  was due to supervised training of a large network with 8 layers and millions of parameters on the ImageNet dataset with 1 million training images. Since then, even larger and deeper networks have been trained . ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_1", "text": " The typical use of convolutional networks is on classification tasks, where the output to an image is a single class label. However, in many visual tasks, especially in biomedical image processing, the desired output should include localization, i.e., a class label is supposed to be assigned to each pixel. Moreover, thousands of training images are usually beyond reach in biomedical tasks. Hence, Ciresan et al.  trained a network in a sliding-window setup to predict the class label of each pixel by providing a local region (patch) around that pixel as input. First, this network can localize. Secondly, the training data in terms of patches is much larger than the number of training images. The resulting network won the EM segmentation challenge at ISBI 2012 by a large margin. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_2", "text": " Obviously, the strategy in Ciresan et al.  has two drawbacks. First, it is quite slow because the network must be run separately for each patch, and there is a lot of redundancy due to overlapping patches. Secondly, there is a trade-off between localization accuracy and the use of context. Larger patches require more max-pooling layers that reduce the localization accuracy, while small patches allow the network to see only little context. More recent approaches (11, 4) proposed a classifier output that takes into account the features from multiple layers. Good localization and the use of context are possible at the same time. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_3", "text": " In this paper, we build upon a more elegant architecture, the so-called “fully convolutional network” . We modify and extend this architecture such that it works with very few training images and yields more precise segmentations; see Figure 1. The main idea in is to supplement a usual contracting network by successive layers, where pooling operators are replaced by upsampling operators. Hence, these layers increase the resolution of the output. In order to localize, high resolution features from the contracting path are combined with the upsampled output. A successive convolution layer can then learn to assemble a more precise output based on this information. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_4", "text": " One important modification in our architecture is that in the upsampling part we have also a large number of feature channels, which allow the network to propagate context information to higher resolution layers. As a consequence, the expansive path is more or less symmetric to the contracting path, and yields a u-shaped architecture. 
The network does not have any fully connected layers and only uses the valid part of each convolution, i.e., the segmentation map only contains the pixels, for which the full context is available in the input image. This strategy allows the seamless segmentation of arbitrarily large images by an overlap-tile strategy (see Figure 2). To predict the pixels in the border region of the image, the missing context is extrapolated by mirroring the input image. This tiling strategy is important to apply the network to large images, since otherwise the resolution would be limited by the GPU memory. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_5", "text": " As for our tasks there is very little training data available, we use excessive data augmentation by applying elastic deformations to the available training images. This allows the network to learn invariance to such deformations, without the need to see these transformations in the annotated image corpus. This is particularly important in biomedical segmentation, since deformation used to be the most common variation in tissue and realistic deformations can be simulated efficiently. The value of data augmentation for learning invariance has been shown in Dosovitskiy et al.  in the scope of unsupervised feature learning. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_6", "text": " Another challenge in many cell segmentation tasks is the separation of touching objects of the same class; see Figure 3. To this end, we propose the use of a weighted loss, where the separating background labels between touching cells obtain a large weight in the loss function. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_7", "text": " The resulting network is applicable to various biomedical segmentation problems. In this paper, we show results on the segmentation of neuronal structures in EM stacks (an ongoing competition started at ISBI 2012), where we outperformed the network of Ciresan et al. . Furthermore, we show results for cell segmentation in light microscopy images from the ISBI cell tracking challenge 2015. Here we won with a large margin on the two most challenging 2D transmitted light datasets. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_8", "text": " The network architecture is illustrated in Figure 1. It consists of a contracting path (left side) and an expansive path (right side). The contracting path follows the typical architecture of a convolutional network. It consists of the repeated application of two 3x3 convolutions (unpadded convolutions), each followed by a rectified linear unit (ReLU) and a 2x2 max pooling operation with stride 2 for downsampling. At each downsampling step we double the number of feature channels. Every step in the expansive path consists of an upsampling of the feature map followed by a 2x2 convolution (“up-convolution”) that halves the number of feature channels, a concatenation with the correspondingly cropped feature map from the contracting path, and two 3x3 convolutions, each followed by a ReLU. The cropping is necessary due to the loss of border pixels in every convolution. At the final layer a 1x1 convolution is used to map each 64-component feature vector to the desired number of classes. In total the network has 23 convolutional layers. 
", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_9", "text": " To allow a seamless tiling of the output segmentation map (see Figure 2), it is important to select the input tile size such that all 2x2 max-pooling operations are applied to a layer with an even x- and y-size. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_10", "text": " The input images and their corresponding segmentation maps are used to train the network with the stochastic gradient descent implementation of Caffe . Due to the unpadded convolutions, the output image is smaller than the input by a constant border width. To minimize the overhead and make maximum use of the GPU memory, we favor large input tiles over a large batch size and hence reduce the batch to a single image. Accordingly we use a high momentum (0.99) such that a large number of the previously seen training samples determine the update in the current optimization step. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_11", "text": " The energy function is computed by a pixel-wise soft-max over the final feature map combined with the cross entropy loss function. The soft-max is defined as pk​(𝐱)=exp⁡(ak​(𝐱))/(∑k′=1Kexp⁡(ak′​(𝐱)))subscript𝑝𝑘𝐱subscript𝑎𝑘𝐱superscriptsubscriptsuperscript𝑘′1𝐾subscript𝑎superscript𝑘′𝐱{p}_{k}(\\boldsymbol{\\mathbf{x}})=\\exp({a_{k}(\\boldsymbol{\\mathbf{x}})})/\\left(\\sum_{k^{\\prime}=1}^{K}\\exp(a_{k^{\\prime}}(\\boldsymbol{\\mathbf{x}}))\\right) where ak​(𝐱)subscript𝑎𝑘𝐱a_{k}(\\boldsymbol{\\mathbf{x}}) denotes the activation in feature channel k𝑘k at the pixel position 𝐱∈Ω𝐱Ω\\boldsymbol{\\mathbf{x}}\\in\\Omega with Ω⊂ℤ2Ωsuperscriptℤ2\\Omega\\subset\\mathbb{Z}^{2}. K𝐾K is the number of classes and pk​(𝐱)subscript𝑝𝑘𝐱{p}_{k}(\\boldsymbol{\\mathbf{x}}) is the approximated maximum-function. I.e. pk​(𝐱)≈1subscript𝑝𝑘𝐱1{p}_{k}(\\boldsymbol{\\mathbf{x}})\\approx 1 for the k𝑘k that has the maximum activation ak​(𝐱)subscript𝑎𝑘𝐱a_{k}(\\boldsymbol{\\mathbf{x}}) and pk​(𝐱)≈0subscript𝑝𝑘𝐱0{p}_{k}(\\boldsymbol{\\mathbf{x}})\\approx 0 for all other k𝑘k. The cross entropy then penalizes at each position the deviation of pℓ​(𝐱)​(𝐱)subscript𝑝ℓ𝐱𝐱{p}_{\\ell(\\boldsymbol{\\mathbf{x}})}(\\boldsymbol{\\mathbf{x}}) from 1 using E=∑𝐱∈Ωw​(𝐱)​log⁡(pℓ​(𝐱)​(𝐱))𝐸subscript𝐱Ω𝑤𝐱subscript𝑝ℓ𝐱𝐱E=\\sum_{\\boldsymbol{\\mathbf{x}}\\in\\Omega}w(\\boldsymbol{\\mathbf{x}})\\log({p}_{\\ell(\\boldsymbol{\\mathbf{x}})}(\\boldsymbol{\\mathbf{x}})) (1) where ℓ:Ω→{1,…,K}:ℓ→Ω1…𝐾\\ell:\\Omega\\rightarrow\\{1,\\dots,K\\} is the true label of each pixel and w:Ω→ℝ:𝑤→Ωℝw:\\Omega\\rightarrow\\mathds{R} is a weight map that we introduced to give some pixels more importance in the training. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_12", "text": " We pre-compute the weight map for each ground truth segmentation to compensate the different frequency of pixels from a certain class in the training data set, and to force the network to learn the small separation borders that we introduce between touching cells (See Figure 3c and d). ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_13", "text": " The separation border is computed using morphological operations. 
The weight map is then computed as w(\\mathbf{x}) = w_{c}(\\mathbf{x}) + w_{0} \\cdot \\exp\\left( -\\frac{(d_{1}(\\mathbf{x}) + d_{2}(\\mathbf{x}))^{2}}{2\\sigma^{2}} \\right) (2), where w_{c}: \\Omega \\rightarrow \\mathbb{R} is the weight map to balance the class frequencies, d_{1}: \\Omega \\rightarrow \\mathbb{R} denotes the distance to the border of the nearest cell and d_{2}: \\Omega \\rightarrow \\mathbb{R} the distance to the border of the second nearest cell. In our experiments we set w_{0} = 10 and \\sigma \\approx 5 pixels. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_14", "text": " In deep networks with many convolutional layers and different paths through the network, a good initialization of the weights is extremely important. Otherwise, parts of the network might give excessive activations, while other parts never contribute. Ideally the initial weights should be adapted such that each feature map in the network has approximately unit variance. For a network with our architecture (alternating convolution and ReLU layers) this can be achieved by drawing the initial weights from a Gaussian distribution with a standard deviation of \\sqrt{2/N}, where N denotes the number of incoming nodes of one neuron. E.g. for a 3x3 convolution and 64 feature channels in the previous layer N = 9 \\cdot 64 = 576. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_15", "text": " Data augmentation is essential to teach the network the desired invariance and robustness properties, when only few training samples are available. In case of microscopical images we primarily need shift and rotation invariance as well as robustness to deformations and gray value variations. Especially random elastic deformations of the training samples seem to be the key concept to train a segmentation network with very few annotated images. We generate smooth deformations using random displacement vectors on a coarse 3 by 3 grid. The displacements are sampled from a Gaussian distribution with 10 pixels standard deviation. Per-pixel displacements are then computed using bicubic interpolation. Drop-out layers at the end of the contracting path perform further implicit data augmentation. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_16", "text": " We demonstrate the application of the u-net to three different segmentation tasks. The first task is the segmentation of neuronal structures in electron microscopic recordings. An example of the data set and our obtained segmentation is displayed in Figure 2. We provide the full result as Supplementary Material. The data set is provided by the EM segmentation challenge that was started at ISBI 2012 and is still open for new contributions. The training data is a set of 30 images (512x512 pixels) from serial section transmission electron microscopy of the Drosophila first instar larva ventral nerve cord (VNC). Each image comes with a corresponding fully annotated ground truth segmentation map for cells (white) and membranes (black). The test set is publicly available, but its segmentation maps are kept secret.
An evaluation can be obtained by sending the predicted membrane probability map to the organizers. The evaluation is done by thresholding the map at 10 different levels and computation of the “warping error”, the “Rand error” and the “pixel error” . ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_17", "text": " The u-net (averaged over 7 rotated versions of the input data) achieves without any further pre- or postprocessing a warping error of 0.0003529 (the new best score, see Table 1) and a rand-error of 0.0382. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_18", "text": " This is significantly better than the sliding-window convolutional network result by Ciresan et al. , whose best submission had a warping error of 0.000420 and a rand error of 0.0504. In terms of rand error the only better performing algorithms on this data set use highly data set specific post-processing methods111The authors of this algorithm have submitted 78 different solutions to achieve this result. applied to the probability map of Ciresan et al. . ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_19", "text": " We also applied the u-net to a cell segmentation task in light microscopic images. This segmenation task is part of the ISBI cell tracking challenge 2014 and 2015 (10, 13). The first data set “PhC-U373”222Data set provided by Dr. Sanjay Kumar. Department of Bioengineering University of California at Berkeley. Berkeley CA (USA) contains Glioblastoma-astrocytoma U373 cells on a polyacrylimide substrate recorded by phase contrast microscopy (see Figure 4a,b and Supp. Material). It contains 35 partially annotated training images. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_20", "text": " Here we achieve an average IOU (“intersection over union”) of 92%, which is significantly better than the second best algorithm with 83% (see Table 2). ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_21", "text": " The second data set “DIC-HeLa”333Data set provided by Dr. Gert van Cappellen Erasmus Medical Center. Rotterdam. The Netherlands are HeLa cells on a flat glass recorded by differential interference contrast (DIC) microscopy (see Figure 3, Figure 4c,d and Supp. Material). It contains 20 partially annotated training images. Here we achieve an average IOU of 77.5% which is significantly better than the second best algorithm with 46%. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" }, { "id": "1505.04597_all_22", "text": " The u-net architecture achieves very good performance on very different biomedical segmentation applications. Thanks to data augmentation with elastic deformations, it only needs very few annotated images and has a very reasonable training time of only 10 hours on a NVidia Titan GPU (6 GB). We provide the full Caffe-based implementation and the trained networks444U-net implementation, trained networks and supplementary material available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. We are sure that the u-net architecture can be applied easily to many more tasks. ", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation" } ]
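The trade-off asked about in this row can be made concrete by tracing how the spatial size shrinks along a contracting path of the kind described in the passages: each stage applies two unpadded 3x3 convolutions (each trims 2 pixels) followed by a 2x2 max-pooling that halves the resolution, so every additional pooling stage coarsens the grid on which the network can localize by another factor of two while enlarging the context it sees. The helper below is a rough illustrative sketch, not code from the paper; the 572-pixel tile is the commonly cited U-Net input size, and the even-size assertion corresponds to the tile-selection rule quoted above.

```python
def contracting_path_size(tile_size, num_pool_stages):
    """Spatial size after repeated [two 3x3 valid convs -> 2x2 max-pool] stages."""
    size = tile_size
    for _ in range(num_pool_stages):
        size -= 4            # two unpadded 3x3 convolutions trim 2 pixels each
        assert size % 2 == 0, "choose the tile size so pooling always sees an even size"
        size //= 2           # 2x2 max-pooling with stride 2 halves the resolution
    return size

print(contracting_path_size(572, 4))  # -> 32: after 4 pooling stages only a 32x32 grid remains
```

More pooling (needed for larger patches and more context) therefore leaves fewer, coarser positions from which precise per-pixel labels must be recovered, which is why the high-resolution features from the contracting path are combined with the upsampled output to restore localization.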
Why is bi-level optimization for meta-learning difficult?
In this framework, bi-level optimization means that the task-weighting parameters are optimized in an outer problem whose objective is the primary-task performance of the model parameters obtained by solving an inner training problem over the primary and auxiliary tasks [16]. The optimization is difficult because the inner solution is itself the result of a nested training procedure that the outer gradient must be taken through, and because the primary and auxiliary tasks may have conflicting objectives, making it challenging to find a single set of parameters that works well for both [7].
[ 16, 7 ]
[ { "id": "2007.08294_all_0", "text": " Graph neural networks  have been proven effective to learn representations for various tasks such as node classification , link prediction , and graph classification . The powerful representation yields state-of-the-art performance in a variety of applications including social network analysis , citation network analysis , visual understanding (6, 7), recommender systems , physics  , and drug discovery . Despite the wide operating range of graph neural networks, employing auxiliary (pre-text) tasks has been less explored for further improving graph representation learning. ", "title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs" }, { "id": "2007.08294_all_1", "text": " Pre-training with an auxiliary task is a common technique for deep neural networks. Indeed, it is the de facto standard step in natural language processing and computer vision to learn a powerful backbone networks such as BERT  and ResNet  leveraging large datasets such as BooksCorpus , English Wikipedia, and ImageNet . The models trained on the auxiliary task are often beneficial for the primary (target) task of interest. Despite the success of pre-training, few approaches have been generalized to graph-structured data due to their fundamental challenges. First, graph structure (e.g., the number of nodes/edges, and diameter) and its meaning can significantly differ between domains. So the model trained on an auxiliary task can harm generalization on the primary task, i.e., negative transfer . Also, many graph neural networks are transductive approaches. This often makes transfer learning between datasets inherently infeasible. So, pre-training on the target dataset has been proposed using auxiliary tasks: graph kernel  , graph reconstruction , and attribute masking  . These assume that the auxiliary tasks for pre-training are carefully selected with substantial domain knowledge and expertise in graph characteristics to assist the primary task. Since most graph neural networks operate on homogeneous graphs, which have a single type of nodes and edges, the previous pre-training/auxiliary tasks are not specifically designed for heterogeneous graphs, which have multiple types of nodes and edges. Heterogeneous graphs commonly occur in real-world applications, for instance, a music dataset has multiple types of nodes (e.g., user, song, artist) and multiple types of relations (e.g., user-artist, song-film, song-instrument). ", "title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs" }, { "id": "2007.08294_all_2", "text": " In this paper, we proposed a framework to train a graph neural networks with automatically selected auxiliary self-supervised tasks which assist the target task without additional data and labels. Our approach first generates meta-paths from heterogeneous graphs without manual labeling and train a model with meta-path prediction to assist the primary task such as link prediction and node classification. This can be formulated as a meta-learning problem. Furthermore, our method can be adopted to existing GNNs in a plug-in manner, enhancing the model performance. ", "title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs" }, { "id": "2007.08294_all_3", "text": " Our contribution is threefold: (i) We propose a self-supervised learning method on a heterogeneous graph via meta-path prediction without additional data. 
(ii) Our framework automatically selects meta-paths (auxiliary tasks) to assist the primary task via meta-learning. (iii) We develop Hint Network that helps the learner network to benefit from challenging auxiliary tasks. To the best of our knowledge, this is the first auxiliary task with meta-paths specifically designed for leveraging heterogeneous graph structure. Our experiment shows that meta-path prediction improves the representational power and the gain can be further improved to explicitly optimize the auxiliary tasks for the primary task via meta-learning and the Hint Network, built on various state-of-the-art GNNs. ", "title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs" }, { "id": "2007.08294_all_4", "text": " Graph Neural Networks have provided promising results for various tasks (2, 5, 6, 7, 8, 9, 10). Bruna et al.  proposed a neural network that performs convolution on the graph domain using the Fourier basis from spectral graph theory. In contrast, non-spectral (spatial) approaches have been developed (2, 20, 21, 22, 23, 24, 25). Inspired by self-supervised learning (26, 27, 28, 29) and pre-training (11, 30) in computer vision and natural language processing, pre-training for GNNs has been recently proposed (16, 18). Recent works show promising results that self-supervised learning can be effective for GNNs (16, 17, 18, 31). Hu et al.  have introduced several strategies for pre-training GNNs such as attribute masking and context prediction. Separated from the pre-training and fine-tuning strategy, has studied multi-task learning and analyzed why the pretext tasks are useful for GNNs. However, one problem with both pre-training and multi-task learning strategies is that all the auxiliary tasks are not beneficial for the downstream applications. So, we studied auxiliary learning for GNNs that explicitly focuses on the primary task. ", "title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs" }, { "id": "2007.08294_all_5", "text": " Auxiliary Learning is a learning strategy to employ auxiliary tasks to assist the primary task. It is similar to multi-task learning, but auxiliary learning cares only the performance of the primary task. A number of auxiliary learning methods are proposed in a wide range of tasks (32, 33, 34). AC-GAN  proposed an auxiliary classifier for generative models. Recently, Meta-Auxiliary Learning  proposes an elegant solution to generate new auxiliary tasks by collapsing existing classes. However, it cannot be applicable to some tasks such as link prediction which has only one positive class. Our approach generates meta-paths on heterogeneous graphs to make new labels and trains models to predict meta-paths as auxiliary tasks. ", "title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs" }, { "id": "2007.08294_all_6", "text": " Meta-learning aims at learning to learn models efficiently and effectively, and generalizes the learning strategy to new tasks. Meta-learning includes black-box methods to approximate gradients without any information about models (37, 38), optimization-based methods to learn an optimal initialization for adapting new tasks (39, 40, 41), learning loss functions (40, 42) and metric-learning or non-parametric methods for few-shot learning (43, 44, 45). In contrast to classical learning algorithms that generalize across samples, meta-learning generalizes across tasks. 
In this paper, we use meta-learning to learn a concept across tasks and transfer the knowledge from auxiliary tasks to the primary task. ", "title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs" }, { "id": "2007.08294_all_7", "text": " The goal of our framework is to learn with multiple auxiliary tasks to improve the performance of the primary task. In this work, we demonstrate our framework with meta-path predictions as auxiliary tasks. But our framework could be extended to include other auxiliary tasks. The meta-paths capture diverse and meaningful relations between nodes on heterogeneous graphs. However, learning with auxiliary tasks has multiple challenges: identifying useful auxiliary tasks, balancing the auxiliary tasks with the primary task, and converting challenging auxiliary tasks into solvable (and relevant) tasks. To address the challenges, we propose SELf-supervised Auxiliary LeaRning (SELAR). Our framework consists of two main components: 1) learning weight functions to softly select auxiliary tasks and balance them with the primary task via meta-learning, and 2) learning Hint Networks to convert challenging auxiliary tasks into more relevant and solvable tasks to the primary task learner. ", "title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs" }, { "id": "2007.08294_all_8", "text": " Most existing graph neural networks have been studied focusing on homogeneous graphs that have a single type of nodes and edges. However, in real-world applications, heterogeneous graphs, which have multiple types of nodes and edges, commonly occur. Learning models on the heterogeneous graphs requires different considerations to effectively represent their node and edge heterogeneity. ", "title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs" }, { "id": "2007.08294_all_9", "text": " Heterogeneous graph. Let G = (V, E) be a graph with a set of nodes V and edges E. A heterogeneous graph is a graph equipped with a node type mapping function f_{v}: V \\rightarrow \\mathcal{T}^{v} and an edge type mapping function f_{e}: E \\rightarrow \\mathcal{T}^{e}, where \\mathcal{T}^{v} is a set of node types and \\mathcal{T}^{e} is a set of edge types. Each node v_{i} \\in V (and edge e_{ij} \\in E resp.) has one node type, i.e., f_{v}(v_{i}) \\in \\mathcal{T}^{v} (and one edge type f_{e}(e_{ij}) \\in \\mathcal{T}^{e} resp.). In this paper, we consider heterogeneous graphs with |\\mathcal{T}^{e}| > 1 or |\\mathcal{T}^{v}| > 1. When |\\mathcal{T}^{e}| = 1 and |\\mathcal{T}^{v}| = 1, it becomes a homogeneous graph. ", "title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs" }, { "id": "2007.08294_all_10", "text": " Meta-Path (46, 49) is a path on a heterogeneous graph G, i.e., a sequence of nodes connected with heterogeneous edges, v_{1} \\xrightarrow{t_{1}} v_{2} \\xrightarrow{t_{2}} \\ldots \\xrightarrow{t_{l}} v_{l+1}, where t_{l} \\in \\mathcal{T}^{e} denotes the l-th edge type of the meta-path.
The meta-path can be viewed as a composite relation R = t_{1} \\circ t_{2} \\circ \\ldots \\circ t_{l} between node v_{1} and v_{l+1}, where R_{1} \\circ R_{2} denotes the composition of relations R_{1} and R_{2}. The definition of meta-path generalizes multi-hop connections and is shown to be useful to analyze heterogeneous graphs. For instance, in the Book-Crossing dataset, ‘user-item-written.series-item-user’ indicates a meta-path that connects users who like the same book series. ", "title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs" }, { "id": "2007.08294_all_11", "text": " We introduce meta-path prediction as a self-supervised auxiliary task to improve the representational power of graph neural networks. To our knowledge, the meta-path prediction has not been studied in the context of self-supervised learning for graph neural networks in the literature. ", "title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs" }, { "id": "2007.08294_all_12", "text": " Meta-path prediction is similar to link prediction but meta-paths allow heterogeneous composite relations. The meta-path prediction can be achieved in the same manner as link prediction. If two nodes u and v are connected by a meta-path p with the heterogeneous edges (t_{1}, t_{2}, \\ldots, t_{\\ell}), then y_{u,v}^{p} = 1, otherwise y_{u,v}^{p} = 0. The labels can be generated from a heterogeneous graph without any manual labeling. They can be obtained by A_{p} = A_{t_{l}} \\ldots A_{t_{2}} A_{t_{1}}, where A_{t} is the adjacency matrix of edge type t. The binarized value at (u, v) in A_{p} indicates whether u and v are connected with the meta-path p. In this paper, we use meta-path prediction as a self-supervised auxiliary task. ", "title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs" }, { "id": "2007.08294_all_13", "text": " Let \\mathbf{X} \\in \\mathbb{R}^{|V| \\times d} and \\mathbf{Z} \\in \\mathbb{R}^{|V| \\times d'} be input features and their hidden representations learnt by a GNN f, i.e., \\mathbf{Z} = f(\\mathbf{X}; \\mathbf{w}, \\mathbf{A}), where \\mathbf{w} is the parameter of f, and \\mathbf{A} \\in \\mathbb{R}^{|V| \\times |V|} is the adjacency matrix. Then link prediction and meta-path prediction are obtained by a simple operation as \\hat{y}_{u,v}^{t} = \\sigma(\\Phi_{t}(z_{u})^{\\top} \\Phi_{t}(z_{v})), (1) where \\Phi_{t} is the task-specific network for task t \\in \\mathcal{T} and z_{u} and z_{v} are the node embeddings of nodes u and v, e.g., \\Phi_{0} (and \\Phi_{1} resp.) for link prediction (and the first type of meta-path prediction resp.). ", "title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs" }, { "id": "2007.08294_all_14", "text": " The architecture is shown in Fig. 1.
To optimize the model, as in link prediction, cross entropy is used. The graph neural network f is shared by the link prediction and meta-path predictions. As in any auxiliary learning method, the meta-paths (auxiliary tasks) should be carefully chosen and properly weighted so that the meta-path prediction does not compete with link prediction, especially when the capacity of GNNs is limited. To address these issues, we propose our framework that automatically selects meta-paths and balances them with the link prediction via meta-learning. ", "title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs" }, { "id": "2007.08294_all_15", "text": " Our framework SELAR is learning to learn a primary task with multiple auxiliary tasks to assist the primary task. This can be formally written as \\min_{\\mathbf{w}, \\Theta} \\mathbb{E}_{(x,y) \\sim D^{pr}} \\left[ \\mathcal{L}^{pr}(\\mathbf{w}^{\\ast}(\\Theta)) \\right] \\text{ s.t. } \\mathbf{w}^{\\ast}(\\Theta) = \\arg\\min_{\\mathbf{w}} \\mathbb{E}_{(x,y) \\sim D^{pr+au}} \\left[ \\mathcal{L}^{pr+au}(\\mathbf{w}; \\Theta) \\right], (2) where \\mathcal{L}^{pr}(\\cdot) is the primary task loss function to evaluate the trained model f(x; \\mathbf{w}^{\\ast}(\\Theta)) on meta-data (a validation set for meta-learning) D^{pr}, and \\mathcal{L}^{pr+au} is the loss function to train a model on training data D^{pr+au} with the primary and auxiliary tasks. To avoid cluttered notation, f, x, and y are omitted. Each task \\mathcal{T}_{t} has N_{t} samples, and \\mathcal{T}_{0} and \\{\\mathcal{T}_{t}\\}_{t=1}^{T} denote the primary and auxiliary tasks respectively. The proposed formulation in Eq. (2) learns how to assist the primary task by optimizing \\Theta via meta-learning. The nested optimization problem given \\Theta is a regular training with properly adjusted loss functions to balance the primary and auxiliary tasks. The formulation can be more specifically written as \\min_{\\mathbf{w}, \\Theta} \\sum_{i=1}^{M_{0}} \\frac{1}{M_{0}} \\ell^{0}(y_{i}^{(0,meta)}, f(x_{i}^{(0,meta)}; \\mathbf{w}^{\\ast}(\\Theta))) (3) s.t. \\mathbf{w}^{\\ast}(\\Theta) = \\arg\\min_{\\mathbf{w}} \\sum_{t=0}^{T} \\sum_{i=1}^{N_{t}} \\frac{1}{N_{t}} \\mathcal{V}(\\xi^{(t,train)}_{i}; \\Theta) \\, \\ell^{t}(y_{i}^{(t,train)}, f^{t}(x_{i}^{(t,train)}; \\mathbf{w})), (4) where \\ell^{t} and f^{t} denote the loss function and the model for task t.
We overload \\ell^{t} with its function value, i.e., \\ell^{t} = \\ell^{t}(y_{i}^{(t,train)}, f^{t}(x_{i}^{(t,train)}; \\mathbf{w})). \\xi^{(t,train)}_{i} is the embedding vector of the i-th sample for task t. It is the concatenation of a one-hot representation of the task type, the label of the sample (positive/negative), and its loss value, i.e., \\xi^{(t,train)}_{i} = (\\ell^{t}; e_{t}; y_{i}^{(t,train)}) \\in \\mathbb{R}^{T+2}. To derive our learning algorithm, we first shorten the objective functions in Eq. (3) and Eq. (4) as \\mathcal{L}^{pr}(\\mathbf{w}^{\\ast}(\\Theta)) and \\mathcal{L}^{pr+au}(\\mathbf{w}; \\Theta). This is equivalent to Eq. (2) without expectation. Then, our formulation is given as \\min_{\\mathbf{w}, \\Theta} \\mathcal{L}^{pr}(\\mathbf{w}^{\\ast}(\\Theta)) \\text{ s.t. } \\mathbf{w}^{\\ast}(\\Theta) = \\arg\\min_{\\mathbf{w}} \\mathcal{L}^{pr+au}(\\mathbf{w}; \\Theta). (5) To circumvent the difficulty of the bi-level optimization, as in previous works (39, 40) in meta-learning, we approximate it with the updated parameters \\hat{\\mathbf{w}} using the gradient descent update \\mathbf{w}^{\\ast}(\\Theta) \\approx \\hat{\\mathbf{w}}^{k}(\\Theta^{k}) = \\mathbf{w}^{k} - \\alpha \\nabla_{\\mathbf{w}} \\mathcal{L}^{pr+au}(\\mathbf{w}^{k}; \\Theta^{k}), (6) where \\alpha is the learning rate for \\mathbf{w}. We do not numerically evaluate \\hat{\\mathbf{w}}^{k}(\\Theta); instead, we plug the computational graph of \\hat{\\mathbf{w}}^{k} into \\mathcal{L}^{pr}(\\mathbf{w}^{\\ast}(\\Theta)) to optimize \\Theta. Let \\nabla_{\\Theta} \\mathcal{L}^{pr}(\\mathbf{w}^{\\ast}(\\Theta^{k})) be the gradient evaluated at \\Theta^{k}. Then updating the parameters \\Theta is given as \\Theta^{k+1} = \\Theta^{k} - \\beta \\nabla_{\\Theta} \\mathcal{L}^{pr}(\\hat{\\mathbf{w}}^{k}(\\Theta^{k})), (7) where \\beta is the learning rate for \\Theta. This update allows softly selecting useful auxiliary tasks (meta-paths) and balancing them with the primary task to improve the performance of the primary task. Without balancing the tasks with the weighting function \\mathcal{V}(\\cdot; \\Theta), auxiliary tasks can dominate training and degrade the performance of the primary task.
", "title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs" }, { "id": "2007.08294_all_16", "text": " The model parameters 𝐰ksuperscript𝐰𝑘\\mathbf{w}^{k} for tasks can be updated with optimized Θk+1superscriptΘ𝑘1\\Theta^{k+1} in (7) as 𝐰k+1=𝐰k−α​∇𝐰ℒp​r+a​u​(𝐰k;Θk+1).superscript𝐰𝑘1superscript𝐰𝑘𝛼subscript∇𝐰superscriptℒ𝑝𝑟𝑎𝑢superscript𝐰𝑘superscriptΘ𝑘1\\displaystyle\\mathbf{w}^{k+1}=\\mathbf{w}^{k}-\\alpha\\nabla_{\\mathbf{w}}\\mathcal{L}^{pr+au}(\\mathbf{w}^{k};\\Theta^{k+1}). (8) Remarks. The proposed formulation can suffer from the meta-overfitting (50, 51) meaning that the parameters ΘΘ\\Theta to learn weights for softly selecting meta-paths and balancing the tasks with the primary task can overfit to the small meta-dataset. In our experiment, we found that the overfitting can be alleviated by meta-validation sets . To learn ΘΘ\\Theta that is generalizable across meta-training sets, we optimize ΘΘ\\Theta across k𝑘k different meta-datasets like k𝑘k-fold cross validation using the following equation: Θk+1=Θk−β𝔼(∇Θℒp​r(𝐰^k(Θk))),Dp​r​(m​e​t​a)∼CV\\displaystyle\\Theta^{k+1}\\;=\\;\\underset{D^{pr(meta)}\\sim CV\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;}{\\Theta^{k}\\;-\\;\\;\\beta\\;\\;\\mathbb{E}\\left(\\;\\nabla_{\\Theta}\\mathcal{L}^{pr}(\\hat{\\mathbf{w}}^{k}(\\Theta^{k}))\\;\\right),} (9) where Dp​r​(m​e​t​a)∼C​Vsimilar-tosuperscript𝐷𝑝𝑟𝑚𝑒𝑡𝑎𝐶𝑉D^{pr(meta)}\\sim CV is a meta-dataset from cross validation. We used 3-fold cross validation and the gradients of ΘΘ\\Theta w.r.t different meta-datasets are averaged to update ΘksuperscriptΘ𝑘\\Theta^{k}, see Algorithm 1. The cross validation is crucial to alleviate meta-overfitting and more discussion is Section 4.3. ", "title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs" }, { "id": "2007.08294_all_17", "text": " Meta-path prediction is generally more challenging than link prediction and node classification since it requires the understanding of long-range relations across heterogeneous nodes. The meta-path prediction gets more difficult when mini-batch training is inevitable due to the size of datasets or models. Within a mini-batch, important nodes and edges for meta-paths are not available. Also, a small learner network, e.g., two-layer GNNs, with a limited receptive field, inherently cannot capture long-range relations. The challenges can hinder representation learning and damage the generalization of the primary task. We proposed a Hint Network (HintNet) which makes the challenge tasks more solvable by correcting the answer with more information at the learner’s need. Specifically, in our experiments, the HintNet corrects the answer of the learner with its own answer from the augmented graph with hub nodes, see Fig.  2. ", "title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs" }, { "id": "2007.08294_all_18", "text": " The amount of help (correction) by HintNet is optimized maximizing the learner’s gain. Let 𝒱H​(⋅)subscript𝒱𝐻⋅\\mathcal{V}_{H}(\\cdot) and ΘHsubscriptΘ𝐻\\Theta_{H} be a weight function to determine the amount of hint and its parameters which are optimized by meta-learning. 
Then, our formulation with HintNet is given as \\min_{\\mathbf{w}, \\Theta} \\sum_{i=1}^{M_{0}} \\frac{1}{M_{0}} \\ell^{0}(y_{i}^{(0,meta)}, f(x_{i}^{(0,meta)}; \\mathbf{w}^{\\ast}(\\Theta, \\Theta_{H}))) (10) s.t. \\mathbf{w}^{\\ast}(\\Theta) = \\arg\\min_{\\mathbf{w}} \\sum_{t=0}^{T} \\sum_{i=1}^{N_{t}} \\frac{1}{N_{t}} \\mathcal{V}(\\xi^{(t,train)}_{i}, \\ell^{t}; \\Theta) \\, \\ell^{t}(y_{i}^{(t,train)}, \\hat{y}_{i}^{(t,train)}(\\Theta_{H})), (11) where \\hat{y}_{i}^{(t,train)}(\\Theta_{H}) denotes the convex combination of the learner's answer and HintNet's answer, i.e., \\mathcal{V}_{H}(\\xi^{(t,train)}_{i}; \\Theta_{H}) f^{t}(x_{i}^{(t,train)}; \\mathbf{w}) + (1 - \\mathcal{V}_{H}(\\xi^{(t,train)}_{i}; \\Theta_{H})) f_{H}^{t}(x_{i}^{(t,train)}; \\mathbf{w}). The sample embedding is \\xi^{(t,train)}_{i} = (\\ell^{t}; \\ell^{t}_{H}; e_{t}; y_{i}^{(t,train)}) \\in \\mathbb{R}^{T+3}. ", "title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs" }, { "id": "2007.08294_all_19", "text": " We evaluate our proposed methods on four public benchmark datasets on heterogeneous graphs. Our experiments answer the following research questions: Q1. Is meta-path prediction effective for representation learning on heterogeneous graphs? Q2. Can the meta-path prediction be further improved by the proposed methods (e.g., SELAR, HintNet)? Q3. Why are the proposed methods effective, and is there any relation with hard negative mining? ", "title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs" }, { "id": "2007.08294_all_20", "text": " Datasets. We use two public benchmark datasets from different domains for link prediction: the music dataset Last-FM and the book dataset Book-Crossing, released by KGNN-LS and RippleNet. We use two datasets for node classification: the citation network dataset ACM and the movie dataset IMDB, used by HAN for node classification tasks. ACM has three types of nodes (Paper (P), Author (A), Subject (S)), four types of edges (PA, AP, PS, SP) and labels (categories of papers). IMDB contains three types of nodes (Movie (M), Actor (A), Director (D)), four types of edges (MA, AM, MD, DM) and labels (genres of movies). ACM and IMDB have node features, which are bag-of-words of keywords and plots. Dataset details are in the supplement. ", "title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs" }, { "id": "2007.08294_all_21", "text": " Baselines.
We evaluate our methods with five graph neural networks : GCN , GAT , GIN , SGConv  and GTN . Our methods can be applied to both homogeneous graphs and heterogeneous graphs. We compare four learning strategies: Vanilla, standard training of base models only with the primary task samples; w/o meta-path, learning a primary task with sample weighting function 𝒱​(ξ;Θ)𝒱𝜉Θ\\mathcal{V}(\\xi;\\Theta); w/ meta-path, training with the primary task and auxiliary tasks (meta-path prediction) with a standard loss function; SELAR proposed in Section 3.2, learning the primary task with optimized auxiliary tasks by meta-learning; SELAR+Hint introduced in Section 3.3. In all the experiments, we report the mean performance of three independent runs. Implementation details are in the supplement. Our experiments were mainly performed based on NAVER Smart Machine Learning platform (NSML) (54, 55). ", "title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs" }, { "id": "2007.08294_all_22", "text": " We used five types of meta-paths of length 2 to 4 for auxiliary tasks. Table 1 shows that our methods consistently improve link prediction performance for all the GNNs, compared to the Vanilla and the method using Meta-Weight-Net  only without meta-paths (denoted as w/o meta-path). Overall, a standard training with meta-paths shows 1.1% improvement on average on both Last-FM and Book-Crossing whereas meta-learning that learns sample weights degrades on average on Last-FM and improves only 0.6% on average on Book-Crossing, e.g., GCN, SGC and GTN on Last-FM and GCN and SGC on Book-Crossing, show degradation 0.2% compared to the standard training (Vanilla). As we expected, SELAR and SELAR with HintNet provide more optimized auxiliary learning resulting in 1.9% and 2.0% absolute improvement on Last-FM and 2.6% and 2.7% on the Book-Crossing dataset. Further, in particular, GIN on Book-crossing, SELAR and SELAR+Hint provide ∼similar-to\\sim5.5% and ∼similar-to\\sim5.3% absolute improvement compared to the vanilla algorithm. ", "title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs" }, { "id": "2007.08294_all_23", "text": " Similar to link prediction above, our SELAR consistently enhances node classification performance of all the GNN models and the improvements are more significant on IMDB which is larger than the ACM dataset. We believe that ACM dataset is already saturated and the room for improvement is limited. However, our methods still show small yet consistent improvement over all the architecture on ACM. We conjecture that the efficacy of our proposed methods differs depending on graph structures. However, it is worth noting that introducing meta-path prediction as auxiliary tasks remarkably improves the performance of primary tasks such as link and node prediction with consistency compared to the existing methods. “w/o meta-path”, the meta-learning to learn sample weight function on a primary task shows marginal degradation in five out of eight settings. Remarkably, SELAR improved the F1-score of GAT on the IMDB by (4.46%) compared to the vanilla learning scheme. ", "title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs" }, { "id": "2007.08294_all_24", "text": " The effectiveness of meta-path prediction and the proposed learning strategies are answered above. To address the last research question Q3. 
why the proposed method is effective, we provide an analysis of the weighting function \\mathcal{V}(\\xi; \\Theta) learned by our framework. Also, we show evidence that meta-overfitting occurs and can be addressed by cross-validation as in Algorithm 1. ", "title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs" }, { "id": "2007.08294_all_25", "text": " Weighting function. Our proposed methods can automatically balance multiple auxiliary tasks to improve the primary task. To understand the ability of our method, we analyze the weighting function and the loss function adjusted by it, i.e., \\mathcal{V}(\\xi; \\Theta) and \\mathcal{V}(\\xi; \\Theta) \\ell^{t}(y, \\hat{y}). The positive and negative samples are solid and dashed lines respectively. We present the weighting function learnt by SELAR+HintNet for GAT, which is the best-performing construction on Last-FM. The weighting function is from the epoch with the best validation performance. Fig. 3 shows that the learnt weighting function attends to hard examples more than easy ones with a small loss in the range from 0 to 1. ", "title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs" }, { "id": "2007.08294_all_26", "text": " Also, the primary-task positive samples are relatively less down-weighted than auxiliary tasks even when the samples are easy (i.e., the loss ranges from 0 to 1). Our adjusted loss \\mathcal{V}(\\xi; \\Theta) \\ell^{t}(y, \\hat{y}) is closely related to the focal loss, -(1 - p_{t})^{\\gamma} \\log(p_{t}). When \\ell^{t} is the cross-entropy, it becomes \\mathcal{V}(\\xi; \\Theta) \\log(p_{t}), where p is the model's prediction for the correct class and p_{t} is defined as p if y = 1, otherwise 1 - p. The weighting function differentially evolves over iterations. At the early stage of training, it often focuses on easy examples first and then changes its focus over time. Also, the adjusted loss values produced by the weighting function learnt by our method differ across tasks. To analyze the contribution of each task, we calculate the average of the task-specific weighted loss on the Last-FM and Book-Crossing datasets. Especially on Book-Crossing, our method pays more attention to ‘user-item’ (primary task) and ‘user-item-literary.series.item-user’ (auxiliary task), which is a meta-path that connects users who like a book series. This implies that two users who like a book series likely have a similar preference. More results and discussion are available in the supplement. ", "title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs" }, { "id": "2007.08294_all_27", "text": " Meta cross-validation, i.e., cross-validation for meta-learning, helps to keep the weighting function from over-fitting on meta-data. Table 3 gives evidence that our algorithms, like other meta-learning methods, can overfit to meta-data. As in Algorithm 1, our proposed methods, both SELAR and SELAR with HintNet, with cross-validation (denoted as ‘3-fold’) alleviate the meta-overfitting problem and provide a significant performance gain, whereas without meta cross-validation (denoted as ‘1-fold’) the proposed method can underperform the vanilla training strategy.
", "title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs" }, { "id": "2007.08294_all_28", "text": " We proposed meta-path prediction as self-supervised auxiliary tasks on heterogeneous graphs. Our experiments show that the representation learning on heterogeneous graphs can benefit from meta-path prediction which encourages to capture rich semantic information. The auxiliary tasks can be further improved by our proposed method SELAR, which automatically balances auxiliary tasks to assist the primary task via a form of meta-learning. The learnt weighting function identifies more beneficial meta-paths for the primary tasks. Within a task, the weighting function can adjust the cross entropy like the focal loss, which focuses on hard examples by decreasing weights for easy samples. Moreover, when it comes to challenging and remotely relevant auxiliary tasks, our HintNet helps the learner by correcting the learner’s answer dynamically and further improves the gain from auxiliary tasks. Our framework based on meta-learning provides learning strategies to balance primary task and auxiliary tasks, and easy/hard (and positive/negative) samples. Interesting future directions include applying our framework to other domains and various auxiliary tasks. Our code is publicly available at https://github.com/mlvlab/SELAR. ", "title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs" }, { "id": "2007.08294_all_29", "text": " Acknowledgements. This work was partly supported by NAVER Corp. and Institute for Information & communications Technology Planning & Evaluation (IITP) grants funded by the Korea government (MSIT): the Regional Strategic Industry Convergence Security Core Talent Training Business (No.2019-0-01343) and the ICT Creative Consilience Program (IITP-2020-0-01819). ", "title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs" } ]
How many episodes are needed for training the model with the omniglot dataset?
The paper computed classification accuracy for its models averaged over 1000 randomly generated episodes from the test set [18].
[ 18 ]
[ { "id": "1703.05175_all_0", "text": " Few-shot classification (20, 16, 13) is a task in which a classifier must be adapted to accommodate new classes not seen in training, given only a few examples of each of these classes. A naive approach, such as re-training the model on the new data, would severely overfit. While the problem is quite difficult, it has been demonstrated that humans have the ability to perform even one-shot classification, where only a single example of each new class is given, with a high degree of accuracy . ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_1", "text": " Two recent approaches have made significant progress in few-shot learning. Vinyals et al. proposed matching networks, which uses an attention mechanism over a learned embedding of the labeled set of examples (the support set) to predict classes for the unlabeled points (the query set). Matching networks can be interpreted as a weighted nearest-neighbor classifier applied within an embedding space. Notably, this model utilizes sampled mini-batches called episodes during training, where each episode is designed to mimic the few-shot task by subsampling classes as well as data points. The use of episodes makes the training problem more faithful to the test environment and thereby improves generalization. Ravi and Larochelle take the episodic training idea further and propose a meta-learning approach to few-shot learning. Their approach involves training an LSTM  to produce the updates to a classifier, given an episode, such that it will generalize well to a test-set. Here, rather than training a single model over multiple episodes, the LSTM meta-learner learns to train a custom model for each episode. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_2", "text": " We attack the problem of few-shot learning by addressing the key issue of overfitting. Since data is severely limited, we work under the assumption that a classifier should have a very simple inductive bias. Our approach, prototypical networks, is based on the idea that there exists an embedding in which points cluster around a single prototype representation for each class. In order to do this, we learn a non-linear mapping of the input into an embedding space using a neural network and take a class’s prototype to be the mean of its support set in the embedding space. Classification is then performed for an embedded query point by simply finding the nearest class prototype. We follow the same approach to tackle zero-shot learning; here each class comes with meta-data giving a high-level description of the class rather than a small number of labeled examples. We therefore learn an embedding of the meta-data into a shared space to serve as the prototype for each class. Classification is performed, as in the few-shot scenario, by finding the nearest class prototype for an embedded query point. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_3", "text": " In this paper, we formulate prototypical networks for both the few-shot and zero-shot settings. We draw connections to matching networks in the one-shot setting, and analyze the underlying distance function used in the model. In particular, we relate prototypical networks to clustering in order to justify the use of class means as prototypes when distances are computed with a Bregman divergence, such as squared Euclidean distance. 
We find empirically that the choice of distance is vital, as Euclidean distance greatly outperforms the more commonly used cosine similarity. On several benchmark tasks, we achieve state-of-the-art performance. Prototypical networks are simpler and more efficient than recent meta-learning algorithms, making them an appealing approach to few-shot and zero-shot learning. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_4", "text": " In few-shot classification we are given a small support set of $N$ labeled examples $S=\\{(\\mathbf{x}_{1},y_{1}),\\ldots,(\\mathbf{x}_{N},y_{N})\\}$ where each $\\mathbf{x}_{i}\\in\\mathbb{R}^{D}$ is the $D$-dimensional feature vector of an example and $y_{i}\\in\\{1,\\ldots,K\\}$ is the corresponding label. $S_{k}$ denotes the set of examples labeled with class $k$. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_5", "text": " Prototypical networks compute an $M$-dimensional representation $\\mathbf{c}_{k}\\in\\mathbb{R}^{M}$, or prototype, of each class through an embedding function $f_{\\bm{\\phi}}:\\mathbb{R}^{D}\\rightarrow\\mathbb{R}^{M}$ with learnable parameters $\\bm{\\phi}$. Each prototype is the mean vector of the embedded support points belonging to its class: $\\mathbf{c}_{k}=\\frac{1}{|S_{k}|}\\sum_{(\\mathbf{x}_{i},y_{i})\\in S_{k}}f_{\\bm{\\phi}}(\\mathbf{x}_{i})$ (1). Given a distance function $d:\\mathbb{R}^{M}\\times\\mathbb{R}^{M}\\rightarrow(0,+\\infty)$, prototypical networks produce a distribution over classes for a query point $\\mathbf{x}$ based on a softmax over distances to the prototypes in the embedding space: $p_{\\bm{\\phi}}(y=k\\,|\\,\\mathbf{x})=\\frac{\\exp(-d(f_{\\bm{\\phi}}(\\mathbf{x}),\\mathbf{c}_{k}))}{\\sum_{k^{\\prime}}\\exp(-d(f_{\\bm{\\phi}}(\\mathbf{x}),\\mathbf{c}_{k^{\\prime}}))}$ (2). Learning proceeds by minimizing the negative log-probability $J(\\bm{\\phi})=-\\log p_{\\bm{\\phi}}(y=k\\,|\\,\\mathbf{x})$ of the true class $k$ via SGD. Training episodes are formed by randomly selecting a subset of classes from the training set, then choosing a subset of examples within each class to act as the support set and a subset of the remainder to serve as query points. Pseudocode to compute the loss $J(\\bm{\\phi})$ for a training episode is provided in Algorithm 1. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_6", "text": " For a particular class of distance functions, known as regular Bregman divergences, the prototypical networks algorithm is equivalent to performing mixture density estimation on the support set with an exponential family density. 
A regular Bregman divergence dφsubscript𝑑𝜑d_{\\varphi} is defined as: dφ​(𝐳,𝐳′)=φ​(𝐳)−φ​(𝐳′)−(𝐳−𝐳′)T​∇φ​(𝐳′),subscript𝑑𝜑𝐳superscript𝐳′𝜑𝐳𝜑superscript𝐳′superscript𝐳superscript𝐳′𝑇∇𝜑superscript𝐳′d_{\\varphi}(\\mathbf{z},\\mathbf{z}^{\\prime})=\\varphi(\\mathbf{z})-\\varphi(\\mathbf{z}^{\\prime})-(\\mathbf{z}-\\mathbf{z}^{\\prime})^{T}\\nabla\\varphi(\\mathbf{z}^{\\prime}), (3) where φ𝜑\\varphi is a differentiable, strictly convex function of the Legendre type. Examples of Bregman divergences include squared Euclidean distance ‖𝐳−𝐳′‖2superscriptnorm𝐳superscript𝐳′2\\|\\mathbf{z}-\\mathbf{z}^{\\prime}\\|^{2} and Mahalanobis distance. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_7", "text": " Prototype computation can be viewed in terms of hard clustering on the support set, with one cluster per class and each support point assigned to its corresponding class cluster. It has been shown for Bregman divergences that the cluster representative achieving minimal distance to its assigned points is the cluster mean. Thus the prototype computation in Equation (1) yields optimal cluster representatives given the support set labels when a Bregman divergence is used. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_8", "text": " Moreover, any regular exponential family distribution pψ​(𝐳|𝜽)subscript𝑝𝜓conditional𝐳𝜽p_{\\psi}(\\mathbf{z}|\\bm{\\theta}) with parameters 𝜽𝜽\\bm{\\theta} and cumulant function ψ𝜓\\psi can be written in terms of a uniquely determined regular Bregman divergence : pψ​(𝐳|𝜽)=exp⁡{𝐳T​𝜽−ψ​(𝜽)−gψ​(𝐳)}=exp⁡{−dφ​(𝐳,𝝁​(𝜽))−gφ​(𝐳)}subscript𝑝𝜓conditional𝐳𝜽superscript𝐳𝑇𝜽𝜓𝜽subscript𝑔𝜓𝐳subscript𝑑𝜑𝐳𝝁𝜽subscript𝑔𝜑𝐳p_{\\psi}(\\mathbf{z}|\\bm{\\theta})=\\exp\\{\\mathbf{z}^{T}\\bm{\\theta}-\\psi(\\bm{\\theta})-g_{\\psi}(\\mathbf{z})\\}=\\exp\\{-d_{\\varphi}(\\mathbf{z},\\bm{\\mu}(\\bm{\\theta}))-g_{\\varphi}(\\mathbf{z})\\} (4) Consider now a regular exponential family mixture model with parameters 𝚪={𝜽k,πk}k=1K𝚪superscriptsubscriptsubscript𝜽𝑘subscript𝜋𝑘𝑘1𝐾\\bm{\\Gamma}=\\{\\bm{\\theta}_{k},\\pi_{k}\\}_{k=1}^{K}: p​(𝐳|𝚪)=∑k=1Kπk​pψ​(𝐳|𝜽k)=∑k=1Kπk​exp⁡(−dφ​(𝐳,𝝁​(𝜽k))−gφ​(𝐳))𝑝conditional𝐳𝚪superscriptsubscript𝑘1𝐾subscript𝜋𝑘subscript𝑝𝜓conditional𝐳subscript𝜽𝑘superscriptsubscript𝑘1𝐾subscript𝜋𝑘subscript𝑑𝜑𝐳𝝁subscript𝜽𝑘subscript𝑔𝜑𝐳p(\\mathbf{z}|\\bm{\\Gamma})=\\sum_{k=1}^{K}\\pi_{k}p_{\\psi}(\\mathbf{z}|\\bm{\\theta}_{k})=\\sum_{k=1}^{K}\\pi_{k}\\exp(-d_{\\varphi}(\\mathbf{z},\\bm{\\mu}(\\bm{\\theta}_{k}))-g_{\\varphi}(\\mathbf{z})) (5) Given 𝚪𝚪\\bm{\\Gamma}, inference of the cluster assignment y𝑦y for an unlabeled point 𝐳𝐳\\mathbf{z} becomes: p​(y=k|𝐳)=πk​exp⁡(−dφ​(𝐳,𝝁​(𝜽k)))∑k′πk′​exp⁡(−dφ​(𝐳,𝝁​(𝜽k)))𝑝𝑦conditional𝑘𝐳subscript𝜋𝑘subscript𝑑𝜑𝐳𝝁subscript𝜽𝑘subscriptsuperscript𝑘′subscript𝜋superscript𝑘′subscript𝑑𝜑𝐳𝝁subscript𝜽𝑘p(y=k|\\mathbf{z})=\\frac{\\pi_{k}\\exp(-d_{\\varphi}(\\mathbf{z},\\bm{\\mu}(\\bm{\\theta}_{k})))}{\\sum_{k^{\\prime}}\\pi_{k^{\\prime}}\\exp(-d_{\\varphi}(\\mathbf{z},\\bm{\\mu}(\\bm{\\theta}_{k})))} (6) For an equally-weighted mixture model with one cluster per class, cluster assignment inference (6) is equivalent to query class prediction (2) with fϕ​(𝐱)=𝐳subscript𝑓italic-ϕ𝐱𝐳f_{\\phi}(\\mathbf{x})=\\mathbf{z} and 𝐜k=𝝁​(𝜽k)subscript𝐜𝑘𝝁subscript𝜽𝑘\\mathbf{c}_{k}=\\bm{\\mu}(\\bm{\\theta}_{k}). In this case, prototypical networks are effectively performing mixture density estimation with an exponential family distribution determined by dφsubscript𝑑𝜑d_{\\varphi}. 
The choice of distance therefore specifies modeling assumptions about the class-conditional data distribution in the embedding space. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_9", "text": " A simple analysis is useful in gaining insight into the nature of the learned classifier. When we use Euclidean distance d​(𝐳,𝐳′)=‖𝐳−𝐳′‖2𝑑𝐳superscript𝐳′superscriptnorm𝐳superscript𝐳′2d(\\mathbf{z},\\mathbf{z^{\\prime}})=\\|\\mathbf{z}-\\mathbf{z}^{\\prime}\\|^{2}, then the model in Equation (2) is equivalent to a linear model with a particular parameterization . To see this, expand the term in the exponent: −‖fϕ​(𝐱)−𝐜k‖2superscriptnormsubscript𝑓bold-italic-ϕ𝐱subscript𝐜𝑘2\\displaystyle-\\|f_{\\bm{\\phi}}(\\mathbf{x})-\\mathbf{c}_{k}\\|^{2} =−fϕ​(𝐱)⊤​fϕ​(𝐱)+2​𝐜k⊤​fϕ​(𝐱)−𝐜k⊤​𝐜kabsentsubscript𝑓bold-italic-ϕsuperscript𝐱topsubscript𝑓bold-italic-ϕ𝐱2superscriptsubscript𝐜𝑘topsubscript𝑓bold-italic-ϕ𝐱superscriptsubscript𝐜𝑘topsubscript𝐜𝑘\\displaystyle=-f_{\\bm{\\phi}}(\\mathbf{x})^{\\top}f_{\\bm{\\phi}}(\\mathbf{x})+2\\mathbf{c}_{k}^{\\top}f_{\\bm{\\phi}}(\\mathbf{x})-\\mathbf{c}_{k}^{\\top}\\mathbf{c}_{k} (7) The first term in Equation (7) is constant with respect to the class k𝑘k, so it does not affect the softmax probabilities. We can write the remaining terms as a linear model as follows: 2​𝐜k⊤​fϕ​(𝐱)−𝐜k⊤​𝐜k=𝐰k⊤​fϕ​(𝐱)+bk​, where ​𝐰k=2​𝐜k​ and ​bk=−𝐜k⊤​𝐜k2superscriptsubscript𝐜𝑘topsubscript𝑓bold-italic-ϕ𝐱superscriptsubscript𝐜𝑘topsubscript𝐜𝑘superscriptsubscript𝐰𝑘topsubscript𝑓bold-italic-ϕ𝐱subscript𝑏𝑘, where subscript𝐰𝑘2subscript𝐜𝑘 and subscript𝑏𝑘superscriptsubscript𝐜𝑘topsubscript𝐜𝑘2\\mathbf{c}_{k}^{\\top}f_{\\bm{\\phi}}(\\mathbf{x})-\\mathbf{c}_{k}^{\\top}\\mathbf{c}_{k}=\\mathbf{w}_{k}^{\\top}f_{\\bm{\\phi}}(\\mathbf{x})+b_{k}\\mbox{, where }\\mathbf{w}_{k}=2\\mathbf{c}_{k}\\mbox{ and }b_{k}=-\\mathbf{c}_{k}^{\\top}\\mathbf{c}_{k} (8) We focus primarily on squared Euclidean distance (corresponding to spherical Gaussian densities) in this work. Our results indicate that Euclidean distance is an effective choice despite the equivalence to a linear model. We hypothesize this is because all of the required non-linearity can be learned within the embedding function. Indeed, this is the approach that modern neural network classification systems currently use, e.g., (14, 28). ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_10", "text": " Prototypical networks differ from matching networks in the few-shot case with equivalence in the one-shot scenario. Matching networks produce a weighted nearest neighbor classifier given the support set, while prototypical networks produce a linear classifier when squared Euclidean distance is used. In the case of one-shot learning, 𝐜k=𝐱ksubscript𝐜𝑘subscript𝐱𝑘\\mathbf{c}_{k}=\\mathbf{x}_{k} since there is only one support point per class, and matching networks and prototypical networks become equivalent. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_11", "text": " A natural question is whether it makes sense to use multiple prototypes per class instead of just one. If the number of prototypes per class is fixed and greater than 111, then this would require a partitioning scheme to further cluster the support points within a class. This has been proposed in Mensink et al. and Rippel et al. ; however both methods require a separate partitioning phase that is decoupled from the weight updates, while our approach is simple to learn with ordinary gradient descent methods. 
", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_12", "text": " Vinyals et al. propose a number of extensions, including decoupling the embedding functions of the support and query points, and using a second-level, fully-conditional embedding (FCE) that takes into account specific points in each episode. These could likewise be incorporated into prototypical networks, however they increase the number of learnable parameters, and FCE imposes an arbitrary ordering on the support set using a bi-directional LSTM. Instead, we show that it is possible to achieve the same level of performance using simple design choices, which we outline next. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_13", "text": " Vinyals et al. and Ravi and Larochelle apply matching networks using cosine distance. However for both prototypical and matching networks any distance is permissible, and we found that using squared Euclidean distance can greatly improve results for both. We conjecture this is primarily due to cosine distance not being a Bregman divergence, and thus the equivalence to mixture density estimation discussed in Section 2.3 does not hold. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_14", "text": " A straightforward way to construct episodes, used in Vinyals et al. and Ravi and Larochelle , is to choose Ncsubscript𝑁𝑐N_{c} classes and NSsubscript𝑁𝑆N_{S} support points per class in order to match the expected situation at test-time. That is, if we expect at test-time to perform 555-way classification and 111-shot learning, then training episodes could be comprised of Nc=5subscript𝑁𝑐5N_{c}=5, NS=1subscript𝑁𝑆1N_{S}=1. We have found, however, that it can be extremely beneficial to train with a higher Ncsubscript𝑁𝑐N_{c}, or “way”, than will be used at test-time. In our experiments, we tune the training Ncsubscript𝑁𝑐N_{c} on a held-out validation set. Another consideration is whether to match NSsubscript𝑁𝑆N_{S}, or “shot”, at train and test-time. For prototypical networks, we found that it is usually best to train and test with the same “shot” number. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_15", "text": " Zero-shot learning differs from few-shot learning in that instead of being given a support set of training points, we are given a class meta-data vector 𝐯ksubscript𝐯𝑘\\mathbf{v}_{k} for each class. These could be determined in advance, or they could be learned from e.g., raw text . Modifying prototypical networks to deal with the zero-shot case is straightforward: we simply define 𝐜k=gϑ​(𝐯k)subscript𝐜𝑘subscript𝑔bold-italic-ϑsubscript𝐯𝑘\\mathbf{c}_{k}=g_{\\bm{\\vartheta}}(\\mathbf{v}_{k}) to be a separate embedding of the meta-data vector. An illustration of the zero-shot procedure for prototypical networks as it relates to the few-shot procedure is shown in Figure 1. Since the meta-data vector and query point come from different input domains, we found it was helpful empirically to fix the prototype embedding g𝑔g to have unit length, however we do not constrain the query embedding f𝑓f. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_16", "text": " For few-shot learning, we performed experiments on Omniglot and the miniImageNet version of ILSVRC-2012 with the splits proposed by Ravi and Larochelle . We perform zero-shot experiments on the 2011 version of the Caltech UCSD bird dataset (CUB-200 2011) . 
", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_17", "text": " Omniglot is a dataset of 1623 handwritten characters collected from 50 alphabets. There are 20 examples associated with each character, where each example is drawn by a different human subject. We follow the procedure of Vinyals et al. by resizing the grayscale images to 28 ×\\times 28 and augmenting the character classes with rotations in multiples of 90 degrees. We use 1200 characters plus rotations for training (4,800 classes in total) and the remaining classes, including rotations, for test. Our embedding architecture mirrors that used by Vinyals et al. and is composed of four convolutional blocks. Each block comprises a 64-filter 3 ×\\times 3 convolution, batch normalization layer , a ReLU nonlinearity and a 2 ×\\times 2 max-pooling layer. When applied to the 28 ×\\times 28 Omniglot images this architecture results in a 64-dimensional output space. We use the same encoder for embedding both support and query points. All of our models were trained via SGD with Adam . We used an initial learning rate of 10−3superscript10310^{-3} and cut the learning rate in half every 2000 episodes. No regularization was used other than batch normalization. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_18", "text": " We trained prototypical networks using Euclidean distance in the 1-shot and 5-shot scenarios with training episodes containing 60 classes and 5 query points per class. We found that it is advantageous to match the training-shot with the test-shot, and to use more classes (higher “way”) per training episode rather than fewer. We compare against various baselines, including the neural statistician and both the fine-tuned and non-fine-tuned versions of matching networks . We computed classification accuracy for our models averaged over 1000 randomly generated episodes from the test set. The results are shown in Table 1 and to our knowledge they represent the state-of-the-art on this dataset. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_19", "text": " The miniImageNet dataset, originally proposed by Vinyals et al. , is derived from the larger ILSVRC-12 dataset . The splits used by Vinyals et al. consist of 60,000 color images of size 84 ×\\times 84 divided into 100 classes with 600 examples each. For our experiments, we use the splits introduced by Ravi and Larochelle in order to directly compare with state-of-the-art algorithms for few-shot learning. Their splits use a different set of 100 classes, divided into 64 training, 16 validation, and 20 test classes. We follow their procedure by training on the 64 training classes and using the 16 validation classes for monitoring generalization performance only. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_20", "text": " We use the same four-block embedding architecture as in our Omniglot experiments, though here it results in a 1600-dimensional output space due to the increased size of the images. We also use the same learning rate schedule as in our Omniglot experiments and train until validation loss stops improving. We train using 30-way episodes for 1-shot classification and 20-way episodes for 5-shot classification. We match train shot to test shot and each class contains 15 query points per episode. 
We compare to the baselines as reported by Ravi and Larochelle , which include a simple nearest neighbor approach on top of features learned by a classification network on the 64 training classes. The other baselines are two non-fine-tuned variants of matching networks (both ordinary and FCE) and the Meta-Learner LSTM. As can be seen in Table 2, prototypical networks achieves state-of-the-art here by a wide margin. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_21", "text": " We conducted further analysis, to determine the effect of distance metric and the number of training classes per episode on the performance of prototypical networks and matching networks. To make the methods comparable, we use our own implementation of matching networks that utilizes the same embedding architecture as our prototypical networks. In Figure 2 we compare cosine vs. Euclidean distance and 5-way vs. 20-way training episodes in the 1-shot and 5-shot scenarios, with 15 query points per class per episode. We note that 20-way achieves higher accuracy than 5-way and conjecture that the increased difficulty of 20-way classification helps the network to generalize better, because it forces the model to make more fine-grained decisions in the embedding space. Also, using Euclidean distance improves performance substantially over cosine distance. This effect is even more pronounced for prototypical networks, in which computing the class prototype as the mean of embedded support points is more naturally suited to Euclidean distances since cosine distance is not a Bregman divergence. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_22", "text": " In order to assess the suitability of our approach for zero-shot learning, we also run experiments on the Caltech-UCSD Birds (CUB) 200-2011 dataset . The CUB dataset contains 11,788 images of 200 bird species. We closely follow the procedure of Reed et al. in preparing the data. We use their splits to divide the classes into 100 training, 50 validation, and 50 test. For images we use 1,024-dimensional features extracted by applying GoogLeNet to middle, upper left, upper right, lower left, and lower right crops of the original and horizontally-flipped image222Features downloaded from https://github.com/reedscot/cvpr2016.. At test time we use only the middle crop of the original image. For class meta-data we use the 312-dimensional continuous attribute vectors provided with the CUB dataset. These attributes encode various characteristics of the bird species such as their color, shape, and feather patterns. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_23", "text": " We learned a simple linear mapping on top of both the 1024-dimensional image features and the 312-dimensional attribute vectors to produce a 1,024-dimensional output space. For this dataset we found it helpful to normalize the class prototypes (embedded attribute vectors) to be of unit length, since the attribute vectors come from a different domain than the images. Training episodes were constructed with 50 classes and 10 query images per class. The embeddings were optimized via SGD with Adam at a fixed learning rate of 10−4superscript10410^{-4} and weight decay of 10−5superscript10510^{-5}. Early stopping on validation loss was used to determine the optimal number of epochs for retraining on the training plus validation set. 
", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_24", "text": " Table 3 shows that we achieve state-of-the-art results by a large margin when compared to methods utilizing attributes as class meta-data. We compare our method to other embedding approaches, such as ALE , SJE , and DS-SJE/DA-SJE . We also compare to a recent clustering approach which trains an SVM on a learned feature space obtained by fine-tuning AlexNet . These zero-shot classification results demonstrate that our approach is general enough to be applied even when the data points (images) are from a different domain relative to the classes (attributes). ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_25", "text": " The literature on metric learning is vast (15, 5); we summarize here the work most relevant to our proposed method. Neighborhood Components Analysis (NCA) learns a Mahalanobis distance to maximize K-nearest-neighbor’s (KNN) leave-one-out accuracy in the transformed space. Salakhutdinov and Hinton extend NCA by using a neural network to perform the transformation. Large margin nearest neighbor (LMNN) classification also attempts to optimize KNN accuracy but does so using a hinge loss that encourages the local neighborhood of a point to contain other points with the same label. The DNet-KNN is another margin-based method that improves upon LMNN by utilizing a neural network to perform the embedding instead of a simple linear transformation. Of these, our method is most similar to the non-linear extension of NCA because we use a neural network to perform the embedding and we optimize a softmax based on Euclidean distances in the transformed space, as opposed to a margin loss. A key distinction between our approach and non-linear NCA is that we form a softmax directly over classes, rather than individual points, computed from distances to each class’s prototype representation. This allows each class to have a concise representation independent of the number of data points and obviates the need to store the entire support set to make predictions. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_26", "text": " Our approach is also similar to the nearest class mean approach , where each class is represented by the mean of its examples. This approach was developed to rapidly incorporate new classes into a classifier without retraining, however it relies on a linear embedding and was designed to handle the case where the novel classes come with a large number of examples. In contrast, our approach utilizes neural networks to non-linearly embed points and we couple this with episodic training in order to handle the few-shot scenario. Mensink et al. attempt to extend their approach to also perform non-linear classification, but they do so by allowing classes to have multiple prototypes. They find these prototypes in a pre-processing step by using k𝑘k-means on the input space and then perform a multi-modal variant of their linear embedding. Prototypical networks, on the other hand, learn a non-linear embedding in an end-to-end manner with no such pre-processing, producing a non-linear classifier that still only requires one prototype per class. In addition, our approach naturally generalizes to other distance functions, particularly Bregman divergences. 
", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_27", "text": " Another relevant few-shot learning method is the meta-learning approach proposed in Ravi and Larochelle . The key insight here is that LSTM dynamics and gradient descent can be written in effectively the same way. An LSTM can then be trained to itself train a model from a given episode, with the performance goal of generalizing well on the query points. Matching networks and prototypical networks can also be seen as forms of meta-learning, in the sense that they produce simple classifiers dynamically from new training episodes; however the core embeddings they rely on are fixed after training. The FCE extension to matching nets involves a secondary embedding that depends on the support set. However, in the few-shot scenario the amount of data is so small that a simple inductive bias seems to work well, without the need to learn a custom embedding for each episode. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_28", "text": " Prototypical networks are also related to the neural statistician from the generative modeling literature, which extends the variational autoencoder (12, 24) to learn generative models of datasets rather than individual points. One component of the neural statistician is the “statistic network” which summarizes a set of data points into a statistic vector. It does this by encoding each point within a dataset, taking a sample mean, and applying a post-processing network to obtain an approximate posterior over the statistic vector. Edwards and Storkey test their model for one-shot classification on the Omniglot dataset by considering each character to be a separate dataset and making predictions based on the class whose approximate posterior over the statistic vector has minimal KL-divergence from the posterior inferred by the test point. Like the neural statistician, we also produce a summary statistic for each class. However, ours is a discriminative model, as befits our discriminative task of few-shot classification. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_29", "text": " With respect to zero-shot learning, the use of embedded meta-data in prototypical networks resembles the method of in that both predict the weights of a linear classifier. The DS-SJE and DA-SJE approach of also learns deep multimodal embedding functions for images and class meta-data. Unlike ours, they learn using an empirical risk loss. Neither nor uses episodic training, which allows us to help speed up training and regularize the model. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_30", "text": " We have proposed a simple method called prototypical networks for few-shot learning based on the idea that we can represent each class by the mean of its examples in a representation space learned by a neural network. We train these networks to specifically perform well in the few-shot setting by using episodic training. The approach is far simpler and more efficient than recent meta-learning approaches, and produces state-of-the-art results even without sophisticated extensions developed for matching networks (although these can be applied to prototypical nets as well). We show how performance can be greatly improved by carefully considering the chosen distance metric, and by modifying the episodic learning procedure. 
We further demonstrate how to generalize prototypical networks to the zero-shot setting, and achieve state-of-the-art results on the CUB-200 dataset. A natural direction for future work is to utilize Bregman divergences other than squared Euclidean distance, corresponding to class-conditional distributions beyond spherical Gaussians. We conducted preliminary explorations of this, including learning a variance per dimension for each class. This did not lead to any empirical gains, suggesting that the embedding network has enough flexibility on its own without requiring additional fitted parameters per class. Overall, the simplicity and effectiveness of prototypical networks makes it a promising approach for few-shot learning. ", "title": "Prototypical Networks for Few-shot Learning" }, { "id": "1703.05175_all_31", "text": " We would like to thank Marc Law, Sachin Ravi, Hugo Larochelle, Renjie Liao, and Oriol Vinyals for helpful discussions. This work was supported by the Samsung GRP project and the Canadian Institute for Advanced Research. ", "title": "Prototypical Networks for Few-shot Learning" } ]
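To make Equations (1) and (2) from the record above concrete, the following is a minimal sketch of a prototypical-network episode loss, assuming a PyTorch setup; the tiny fully connected embedding network and the episode shapes are placeholders rather than the paper's four-block convolutional encoder and Omniglot pipeline:

```python
# Minimal sketch of a prototypical-network episode loss: prototypes are the mean
# embedded support points per class (Eq. 1) and queries are classified by a softmax
# over negative squared Euclidean distances to the prototypes (Eq. 2).
import torch
import torch.nn as nn
import torch.nn.functional as F

def prototypical_loss(embed, support_x, support_y, query_x, query_y, n_classes):
    z_support = embed(support_x)                      # (N_support, M)
    z_query = embed(query_x)                          # (N_query, M)

    # Class prototypes: mean of the embedded support points for each class.
    prototypes = torch.stack([
        z_support[support_y == k].mean(dim=0) for k in range(n_classes)
    ])                                                # (n_classes, M)

    # Squared Euclidean distances from each query to each prototype.
    dists = torch.cdist(z_query, prototypes, p=2) ** 2  # (N_query, n_classes)

    # Softmax over negative distances; NLL gives -log p(y = k | x) for the true class.
    log_p = F.log_softmax(-dists, dim=1)
    return F.nll_loss(log_p, query_y)

# Toy 5-way, 1-shot episode with 15 query points per class on flat 64-d inputs.
embed = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 64))
support_x = torch.randn(5 * 1, 64)
support_y = torch.arange(5).repeat_interleave(1)
query_x = torch.randn(5 * 15, 64)
query_y = torch.arange(5).repeat_interleave(15)
loss = prototypical_loss(embed, support_x, support_y, query_x, query_y, n_classes=5)
loss.backward()
```

In training, one such loss would be computed per randomly sampled episode and minimized with SGD, as described in the passages above.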
The model architecture of StarGAN is based on two other GAN models. What are they?
CycleGAN and PatchGAN [20].
[ 20 ]
[ { "id": "1711.09020_all_0", "text": " The task of image-to-image translation is to change a particular aspect of a given image to another, e.g., changing the facial expression of a person from smiling to frowning (see Fig. StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation). This task has experienced significant improvements following the introduction of generative adversarial networks (GANs), with results ranging from changing hair color , reconstructing photos from edge maps , and changing the seasons of scenery images . ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_1", "text": " Given training data from two different domains, these models learn to translate images from one domain to the other. We denote the terms attribute as a meaningful feature inherent in an image such as hair color, gender or age, and attribute value as a particular value of an attribute, e.g., black/blond/brown for hair color or male/female for gender. We further denote domain as a set of images sharing the same attribute value. For example, images of women can represent one domain while those of men represent another. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_2", "text": " Several image datasets come with a number of labeled attributes. For instance, the CelebA dataset contains 40 labels related to facial attributes such as hair color, gender, and age, and the RaFD dataset has 8 labels for facial expressions such as ‘happy’, ‘angry’ and ‘sad’. These settings enable us to perform more interesting tasks, namely multi-domain image-to-image translation, where we change images according to attributes from multiple domains. The first five columns in Fig. StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation show how a CelebA image can be translated according to any of the four domains, ‘blond hair’, ‘gender’, ‘aged’, and ‘pale skin’. We can further extend to training multiple domains from different datasets, such as jointly training CelebA and RaFD images to change a CelebA image’s facial expression using features learned by training on RaFD, as in the rightmost columns of Fig. StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_3", "text": " However, existing models are both inefficient and ineffective in such multi-domain image translation tasks. Their inefficiency results from the fact that in order to learn all mappings among k𝑘k domains, k​(k−1)𝑘𝑘1k(k\\mathbb{-}1) generators have to be trained. Fig. 2 (a) illustrates how twelve distinct generator networks have to be trained to translate images among four different domains. Meanwhile, they are ineffective that even though there exist global features that can be learned from images of all domains such as face shapes, each generator cannot fully utilize the entire training data and only can learn from two domains out of k𝑘k. Failure to fully utilize training data is likely to limit the quality of generated images. Furthermore, they are incapable of jointly training domains from different datasets because each dataset is partially labeled, which we further discuss in Section 3.2. 
", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_4", "text": " As a solution to such problems we propose StarGAN, a novel and scalable approach capable of learning mappings among multiple domains. As demonstrated in Fig. 2 (b), our model takes in training data of multiple domains, and learns the mappings between all available domains using only a single generator. The idea is simple. Instead of learning a fixed translation (e.g., black-to-blond hair), our generator takes in as inputs both image and domain information, and learns to flexibly translate the image into the corresponding domain. We use a label (e.g., binary or one-hot vector) to represent domain information. During training, we randomly generate a target domain label and train the model to flexibly translate an input image into the target domain. By doing so, we can control the domain label and translate the image into any desired domain at testing phase. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_5", "text": " We also introduce a simple but effective approach that enables joint training between domains of different datasets by adding a mask vector to the domain label. Our proposed method ensures that the model can ignore unknown labels and focus on the label provided by a particular dataset. In this manner, our model can perform well on tasks such as synthesizing facial expressions of CelebA images using features learned from RaFD, as shown in the rightmost columns of Fig. StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation. As far as our knowledge goes, our work is the first to successfully perform multi-domain image translation across different datasets. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_6", "text": " Overall, our contributions are as follows: ∙∙\\bullet We propose StarGAN, a novel generative adversarial network that learns the mappings among multiple domains using only a single generator and a discriminator, training effectively from images of all domains. ∙∙\\bullet We demonstrate how we can successfully learn multi-domain image translation between multiple datasets by utilizing a mask vector method that enables StarGAN to control all available domain labels. ∙∙\\bullet We provide both qualitative and quantitative results on facial attribute transfer and facial expression synthesis tasks using StarGAN, showing its superiority over baseline models. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_7", "text": " Generative Adversarial Networks. Generative adversarial networks (GANs) have shown remarkable results in various computer vision tasks such as image generation (6, 24, 32, 8), image translation (7, 9, 33), super-resolution imaging , and face image synthesis (10, 16, 26, 31). A typical GAN model consists of two modules: a discriminator and a generator. The discriminator learns to distinguish between real and fake samples, while the generator learns to generate fake samples that are indistinguishable from real samples. Our approach also leverages the adversarial loss to make the generated images as realistic as possible. 
", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_8", "text": " Conditional GANs. GAN-based conditional image generation has also been actively studied. Prior studies have provided both the discriminator and generator with class information in order to generate samples conditioned on the class  (20, 21, 22). Other recent approaches focused on generating particular images highly relevant to a given text description  (25, 30). The idea of conditional image generation has also been successfully applied to domain transfer (9, 28), super-resolution imaging, and photo editing (2, 27). In this paper, we propose a scalable GAN framework that can flexibly steer the image translation to various target domains, by providing conditional domain information. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_9", "text": " Image-to-Image Translation. Recent work have achieved impressive results in image-to-image translation (7, 9, 17, 33). For instance, pix2pix learns this task in a supervised manner using cGANs. It combines an adversarial loss with a L1 loss, thus requires paired data samples. To alleviate the problem of obtaining data pairs, unpaired image-to-image translation frameworks (9, 17, 33) have been proposed. UNIT combines variational autoencoders (VAEs) with CoGAN , a GAN framework where two generators share weights to learn the joint distribution of images in cross domains. CycleGAN and DiscoGAN preserve key attributes between the input and the translated image by utilizing a cycle consistency loss. However, all these frameworks are only capable of learning the relations between two different domains at a time. Their approaches have limited scalability in handling multiple domains since different models should be trained for each pair of domains. Unlike the aforementioned approaches, our framework can learn the relations among multiple domains using only a single model. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_10", "text": " We first describe our proposed StarGAN, a framework to address multi-domain image-to-image translation within a single dataset. Then, we discuss how StarGAN incorporates multiple datasets containing different label sets to flexibly perform image translations using any of these labels. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_11", "text": " Our goal is to train a single generator G𝐺G that learns mappings among multiple domains. To achieve this, we train G𝐺G to translate an input image x𝑥x into an output image y𝑦y conditioned on the target domain label c𝑐c, G​(x,c)→y→𝐺𝑥𝑐𝑦G(x,c)\\rightarrow y. We randomly generate the target domain label c𝑐c so that G𝐺G learns to flexibly translate the input image. We also introduce an auxiliary classifier that allows a single discriminator to control multiple domains. That is, our discriminator produces probability distributions over both sources and domain labels, D:x→{Ds​r​c​(x),Dc​l​s​(x)}:𝐷→𝑥subscript𝐷𝑠𝑟𝑐𝑥subscript𝐷𝑐𝑙𝑠𝑥D:x\\rightarrow\\{{D}_{src}(x),{D}_{cls}(x)\\}. Fig. 3 illustrates the training process of our proposed approach. 
", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_12", "text": " Adversarial Loss. To make the generated images indistinguishable from real images, we adopt an adversarial loss ℒa​d​v=𝔼x​(log⁡Ds​r​c​(x))+𝔼x,c​(log⁡(1−Ds​r​c​(G​(x,c)))),subscriptℒ𝑎𝑑𝑣subscript𝔼𝑥delimited-()subscript𝐷𝑠𝑟𝑐𝑥subscript𝔼𝑥𝑐delimited-()1subscript𝐷𝑠𝑟𝑐𝐺𝑥𝑐\\begin{split}\\mathcal{L}_{adv}=&\\thinspace{\\mathbb{E}}_{x}\\left(\\log{{D}_{src}(x)}\\right)\\>\\>+\\\\ &\\thinspace{\\mathbb{E}}_{x,c}(\\log{(1-{D}_{src}(G(x,c)))}),\\end{split} (1) where G𝐺G generates an image G​(x,c)𝐺𝑥𝑐G(x,c) conditioned on both the input image x𝑥x and the target domain label c𝑐c, while D𝐷D tries to distinguish between real and fake images. In this paper, we refer to the term Ds​r​c​(x)subscript𝐷𝑠𝑟𝑐𝑥{D}_{src}(x) as a probability distribution over sources given by D𝐷D. The generator G𝐺G tries to minimize this objective, while the discriminator D𝐷D tries to maximize it. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_13", "text": " Domain Classification Loss. For a given input image x𝑥x and a target domain label c𝑐c, our goal is to translate x𝑥x into an output image y𝑦y, which is properly classified to the target domain c𝑐c. To achieve this condition, we add an auxiliary classifier on top of D𝐷D and impose the domain classification loss when optimizing both D𝐷D and G𝐺G. That is, we decompose the objective into two terms: a domain classification loss of real images used to optimize D𝐷D, and a domain classification loss of fake images used to optimize G𝐺G. In detail, the former is defined as ℒc​l​sr=𝔼x,c′​(−log⁡Dc​l​s​(c′|x)),superscriptsubscriptℒ𝑐𝑙𝑠𝑟subscript𝔼𝑥superscript𝑐′delimited-()subscript𝐷𝑐𝑙𝑠conditionalsuperscript𝑐′𝑥\\mathcal{L}_{cls}^{r}={\\mathbb{E}}_{x,c^{\\prime}}(-\\log{{D}_{cls}(c^{\\prime}|x)}), (2) where the term Dc​l​s​(c′|x)subscript𝐷𝑐𝑙𝑠conditionalsuperscript𝑐′𝑥{D}_{cls}(c^{\\prime}|x) represents a probability distribution over domain labels computed by D𝐷D. By minimizing this objective, D𝐷D learns to classify a real image x𝑥x to its corresponding original domain c′superscript𝑐′c^{\\prime}. We assume that the input image and domain label pair (x,c′)𝑥superscript𝑐′(x,c^{\\prime}) is given by the training data. On the other hand, the loss function for the domain classification of fake images is defined as ℒc​l​sf=𝔼x,c​(−log⁡Dc​l​s​(c|G​(x,c))).superscriptsubscriptℒ𝑐𝑙𝑠𝑓subscript𝔼𝑥𝑐delimited-()subscript𝐷𝑐𝑙𝑠conditional𝑐𝐺𝑥𝑐\\mathcal{L}_{cls}^{f}={\\mathbb{E}}_{x,c}(-\\log{{D}_{cls}(c|G(x,c))}). (3) In other words, G𝐺G tries to minimize this objective to generate images that can be classified as the target domain c𝑐c. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_14", "text": " Reconstruction Loss. By minimizing the adversarial and classification losses, G𝐺G is trained to generate images that are realistic and classified to its correct target domain. However, minimizing the losses (Eqs. (1) and (3)) does not guarantee that translated images preserve the content of its input images while changing only the domain-related part of the inputs. 
To alleviate this problem, we apply a cycle consistency loss (9, 33) to the generator, defined as $\\mathcal{L}_{rec}=\\mathbb{E}_{x,c,c^{\\prime}}[\\|x-G(G(x,c),c^{\\prime})\\|_{1}]$ (4), where $G$ takes in the translated image $G(x,c)$ and the original domain label $c^{\\prime}$ as input and tries to reconstruct the original image $x$. We adopt the L1 norm as our reconstruction loss. Note that we use a single generator twice, first to translate an original image into an image in the target domain and then to reconstruct the original image from the translated image. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_15", "text": " Full Objective. Finally, the objective functions to optimize $G$ and $D$ are written, respectively, as $\\mathcal{L}_{D}=-\\mathcal{L}_{adv}+\\lambda_{cls}\\mathcal{L}_{cls}^{r}$ (5) and $\\mathcal{L}_{G}=\\mathcal{L}_{adv}+\\lambda_{cls}\\mathcal{L}_{cls}^{f}+\\lambda_{rec}\\mathcal{L}_{rec}$ (6), where $\\lambda_{cls}$ and $\\lambda_{rec}$ are hyper-parameters that control the relative importance of domain classification and reconstruction losses, respectively, compared to the adversarial loss. We use $\\lambda_{cls}=1$ and $\\lambda_{rec}=10$ in all of our experiments. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_16", "text": " An important advantage of StarGAN is that it simultaneously incorporates multiple datasets containing different types of labels, so that StarGAN can control all the labels at the test phase. An issue when learning from multiple datasets, however, is that the label information is only partially known to each dataset. In the case of CelebA and RaFD, while the former contains labels for attributes such as hair color and gender, it does not have any labels for facial expressions such as ‘happy’ and ‘angry’, and vice versa for the latter. This is problematic because the complete information on the label vector $c^{\\prime}$ is required when reconstructing the input image $x$ from the translated image $G(x,c)$ (See Eq. (4)). ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_17", "text": " Mask Vector. To alleviate this problem, we introduce a mask vector $m$ that allows StarGAN to ignore unspecified labels and focus on the explicitly known label provided by a particular dataset. In StarGAN, we use an $n$-dimensional one-hot vector to represent $m$, with $n$ being the number of datasets. In addition, we define a unified version of the label as a vector $\\tilde{c}=(c_{1},\\ldots,c_{n},m)$ (7), where $(\\cdot)$ refers to concatenation, and $c_{i}$ represents a vector for the labels of the $i$-th dataset. 
The vector of the known label cisubscript𝑐𝑖{c}_{i} can be represented as either a binary vector for binary attributes or a one-hot vector for categorical attributes. For the remaining n−1𝑛1n\\mathbb{-}1 unknown labels we simply assign zero values. In our experiments, we utilize the CelebA and RaFD datasets, where n𝑛n is two. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_18", "text": " Training Strategy. When training StarGAN with multiple datasets, we use the domain label c~~𝑐\\tilde{c} defined in Eq. (7) as input to the generator. By doing so, the generator learns to ignore the unspecified labels, which are zero vectors, and focus on the explicitly given label. The structure of the generator is exactly the same as in training with a single dataset, except for the dimension of the input label c~~𝑐\\tilde{c}. On the other hand, we extend the auxiliary classifier of the discriminator to generate probability distributions over labels for all datasets. Then, we train the model in a multi-task learning setting, where the discriminator tries to minimize only the classification error associated to the known label. For example, when training with images in CelebA, the discriminator minimizes only classification errors for labels related to CelebA attributes, and not facial expressions related to RaFD. Under these settings, by alternating between CelebA and RaFD the discriminator learns all of the discriminative features for both datasets, and the generator learns to control all the labels in both datasets. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_19", "text": " Improved GAN Training. To stabilize the training process and generate higher quality images, we replace Eq. (1) with Wasserstein GAN objective with gradient penalty (1, 4) defined as ℒa​d​v=𝔼x​(Ds​r​c​(x))−𝔼x,c​(Ds​r​c​(G​(x,c)))−λg​p​𝔼x^​((‖▽x^​Ds​r​c​(x^)‖2−1)2),subscriptℒ𝑎𝑑𝑣subscript𝔼𝑥delimited-()subscript𝐷𝑠𝑟𝑐𝑥subscript𝔼𝑥𝑐delimited-()subscript𝐷𝑠𝑟𝑐𝐺𝑥𝑐subscript𝜆𝑔𝑝subscript𝔼^𝑥delimited-()superscriptsubscriptnormsubscript▽^𝑥subscript𝐷𝑠𝑟𝑐^𝑥212\\begin{split}\\mathcal{L}_{adv}=\\thinspace&{\\mathbb{E}}_{x}({D}_{src}(x))-{\\mathbb{E}}_{x,c}({D}_{src}(G(x,c)))\\thinspace\\thinspace\\\\ &-{\\lambda}_{gp}\\thinspace{\\mathbb{E}}_{\\hat{x}}({{(||{\\triangledown}_{\\hat{x}}{D}_{src}(\\hat{x})||}_{2}-1)}^{2})\\thinspace,\\end{split} (8) where x^^𝑥\\hat{x} is sampled uniformly along a straight line between a pair of a real and a generated images. We use λg​p=10subscript𝜆𝑔𝑝10{\\lambda}_{gp}=10 for all experiments. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_20", "text": " Network Architecture. Adapted from CycleGAN , StarGAN has the generator network composed of two convolutional layers with the stride size of two for downsampling, six residual blocks , and two transposed convolutional layers with the stride size of two for upsampling. We use instance normalization for the generator but no normalization for the discriminator. We leverage PatchGANs (7, 15, 33) for the discriminator network, which classifies whether local image patches are real or fake. See the appendix (Section 7.2) for more details about the network architecture. 
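A minimal sketch of the unified label construction in Eq. (7) follows, assuming a PyTorch setup; the attribute and expression dimensions and the helper name are illustrative assumptions, not values taken from the released StarGAN code:

```python
# Minimal sketch of the unified label vector c~ = (c_1, ..., c_n, m) used when
# training on multiple datasets: unknown labels are zero vectors and the one-hot
# mask m marks which dataset the current batch comes from.
# The dimensions below (5 CelebA-style attributes, 8 RaFD-style expressions) and
# the helper name are illustrative, not taken from the released implementation.
import torch

N_CELEBA_ATTRS = 5   # e.g. black/blond/brown hair, gender, age
N_RAFD_EXPRS = 8     # e.g. eight facial expressions
N_DATASETS = 2

def unified_label(label: torch.Tensor, dataset_idx: int) -> torch.Tensor:
    """Concatenate the known label, zeros for the unknown dataset, and the mask."""
    batch = label.size(0)
    celeba = label if dataset_idx == 0 else torch.zeros(batch, N_CELEBA_ATTRS)
    rafd = label if dataset_idx == 1 else torch.zeros(batch, N_RAFD_EXPRS)
    mask = torch.zeros(batch, N_DATASETS)
    mask[:, dataset_idx] = 1.0
    return torch.cat([celeba, rafd, mask], dim=1)

# A CelebA-style batch: binary attribute vectors, mask = (1, 0).
celeba_labels = torch.randint(0, 2, (4, N_CELEBA_ATTRS)).float()
c_tilde_celeba = unified_label(celeba_labels, dataset_idx=0)   # shape (4, 5 + 8 + 2)

# A RaFD-style batch: one-hot expression vectors, mask = (0, 1).
rafd_labels = torch.eye(N_RAFD_EXPRS)[torch.randint(0, N_RAFD_EXPRS, (4,))]
c_tilde_rafd = unified_label(rafd_labels, dataset_idx=1)
```

The generator then receives this unified label as its domain input, and the discriminator's auxiliary classifier is penalized only on the label block that is known for the sampled dataset, as the training-strategy passage above describes.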
", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_21", "text": " In this section, we first compare StarGAN against recent methods on facial attribute transfer by conducting user studies. Next, we perform a classification experiment on facial expression synthesis. Lastly, we demonstrate empirical results that StarGAN can learn image-to-image translation from multiple datasets. All our experiments were conducted by using the model output from unseen images during the training phase. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_22", "text": " As our baseline models, we adopt DIAT and CycleGAN , both of which performs image-to-image translation between two different domains. For comparison, we trained these models multiple times for every pair of two different domains. We also adopt IcGAN as a baseline which can perform attribute transfer using a cGAN . ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_23", "text": " DIAT uses an adversarial loss to learn the mapping from x∈X𝑥𝑋x\\in X to y∈Y𝑦𝑌y\\in Y, where x𝑥x and y𝑦y are face images in two different domains X𝑋X and Y𝑌Y, respectively. This method has a regularization term on the mapping as ‖x−F​(G​(x))‖1subscriptnorm𝑥𝐹𝐺𝑥1{||x-F(G(x))||}_{1} to preserve identity features of the source image, where F𝐹F is a feature extractor pretrained on a face recognition task. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_24", "text": " CycleGAN also uses an adversarial loss to learn the mapping between two different domains X𝑋X and Y𝑌Y. This method regularizes the mapping via cycle consistency losses, ‖x−(GY​X​(GX​Y​(x)))‖1subscriptnorm𝑥subscript𝐺𝑌𝑋subscript𝐺𝑋𝑌𝑥1{||x-({G}_{YX}({G}_{XY}(x)))||}_{1} and ‖y−(GX​Y​(GY​X​(y)))‖1subscriptnorm𝑦subscript𝐺𝑋𝑌subscript𝐺𝑌𝑋𝑦1{||y-({G}_{XY}({G}_{YX}(y)))||}_{1}. This method requires two generators and discriminators for each pair of two different domains. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_25", "text": " IcGAN combines an encoder with a cGAN model. cGAN learns the mapping G:{z,c}→x:𝐺→𝑧𝑐𝑥G:\\{z,c\\}\\rightarrow x that generates an image x𝑥x conditioned on both the latent vector z𝑧z and the conditional vector c𝑐c. In addition, IcGAN introduces an encoder to learn the inverse mappings of cGAN, Ez:x→z:subscript𝐸𝑧→𝑥𝑧{E}_{z}:x\\rightarrow z and Ec:x→c:subscript𝐸𝑐→𝑥𝑐{E}_{c}:x\\rightarrow c. This allows IcGAN to synthesis images by only changing the conditional vector and preserving the latent vector. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_26", "text": " CelebA. The CelebFaces Attributes (CelebA) dataset contains 202,599 face images of celebrities, each annotated with 40 binary attributes. We crop the initial 178×218178218178\\times 218 size images to 178×178178178178\\times 178, then resize them as 128×128128128128\\times 128. We randomly select 2,000 images as test set and use all remaining images for training data. We construct seven domains using the following attributes: hair color (black, blond, brown), gender (male/female), and age (young/old). 
", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_27", "text": " RaFD. The Radboud Faces Database (RaFD) consists of 4,824 images collected from 67 participants. Each participant makes eight facial expressions in three different gaze directions, which are captured from three different angles. We crop the images to 256×256256256256\\times 256, where the faces are centered, and then resize them to 128×128128128128\\times 128. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_28", "text": " All models are trained using Adam with β1=0.5subscript𝛽10.5{\\beta}_{1}=0.5 and β2=0.999subscript𝛽20.999{\\beta}_{2}=0.999. For data augmentation we flip the images horizontally with a probability of 0.5. We perform one generator update after five discriminator updates as in . The batch size is set to 16 for all experiments. For experiments on CelebA, we train all models with a learning rate of 0.0001 for the first 10 epochs and linearly decay the learning rate to 0 over the next 10 epochs. To compensate for the lack of data, when training with RaFD we train all models for 100 epochs with a learning rate of 0.0001 and apply the same decaying strategy over the next 100 epochs. Training takes about one day on a single NVIDIA Tesla M40 GPU. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_29", "text": " We first compare our proposed method to the baseline models on a single and multi-attribute transfer tasks. We train the cross-domain models such as DIAT and CycleGAN multiple times considering all possible attribute value pairs. In the case of DIAT and CycleGAN, we perform multi-step translations to synthesize multiple attributes (e.g. transferring a gender attribute after changing a hair color). ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_30", "text": " Qualitative evaluation. Fig. 4 shows the facial attribute transfer results on CelebA. We observed that our method provides a higher visual quality of translation results on test data compared to the cross-domain models. One possible reason is the regularization effect of StarGAN through a multi-task learning framework. In other words, rather than training a model to perform a fixed translation (e.g., brown-to-blond hair), which is prone to overfitting, we train our model to flexibly translate images according to the labels of the target domain. This allows our model to learn reliable features universally applicable to multiple domains of images with different facial attribute values. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_31", "text": " Furthermore, compared to IcGAN, our model demonstrates an advantage in preserving the facial identity feature of an input. We conjecture that this is because our method maintains the spatial information by using activation maps from the convolutional layer as latent representation, rather than just a low-dimensional latent vector as in IcGAN. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_32", "text": " Quantitative evaluation protocol. 
For quantitative evaluations, we performed two user studies in a survey format using Amazon Mechanical Turk (AMT) to assess single and multiple attribute transfer tasks. Given an input image, the Turkers were instructed to choose the best generated image based on perceptual realism, quality of transfer in attribute(s), and preservation of a figure’s original identity. The options were four randomly shuffled images generated from four different methods. The generated images in one study have a single attribute transfer in either hair color (black, blond, brown), gender, or age. In another study, the generated images involve a combination of attribute transfers. Each Turker was asked 30 to 40 questions with a few simple yet logical questions for validating human effort. The number of validated Turkers in each user study is 146 and 100 in single and multiple transfer tasks, respectively. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_33", "text": " Quantitative results. Tables 1 and 2 show the results of our AMT experiment on single- and multi-attribute transfer tasks, respectively. StarGAN obtained the majority of votes for best transferring attributes in all cases. In the case of gender changes in Table 1, the voting difference between our model and other models was marginal, e.g., 39.1% for StarGAN vs. 31.4% for DIAT. However, in multi-attribute changes, e.g., the ‘G+A’ case in Table 2, the performance difference becomes significant, e.g., 49.8% for StarGAN vs. 20.3% for IcGAN), clearly showing the advantages of StarGAN in more complicated, multi-attribute transfer tasks. This is because unlike the other methods, StarGAN can handle image translation involving multiple attribute changes by randomly generating a target domain label in the training phase. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_34", "text": " We next train our model on the RaFD dataset to learn the task of synthesizing facial expressions. To compare StarGAN and baseline models, we fix the input domain as the ‘neutral’ expression, but the target domain varies among the seven remaining expressions. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_35", "text": " Qualitative evaluation. As seen in Fig. 5, StarGAN clearly generates the most natural-looking expressions while properly maintaining the personal identity and facial features of the input. While DIAT and CycleGAN mostly preserve the identity of the input, many of their results are shown blurry and do not maintain the degree of sharpness as seen in the input. IcGAN even fails to preserve the personal identity in the image by generating male images. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_36", "text": " We believe that the superiority of StarGAN in the image quality is due to its implicit data augmentation effect from a multi-task learning setting. RaFD images contain a relatively small size of samples, e.g., 500 images per domain. When trained on two domains, DIAT and CycleGAN can only use 1,000 training images at a time, but StarGAN can use 4,000 images in total from all the available domains for its training. This allows StarGAN to properly learn how to maintain the quality and sharpness of the generated output. 
", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_37", "text": " Quantitative evaluation. For a quantitative evaluation, we compute the classification error of a facial expression on synthesized images. We trained a facial expression classifier on the RaFD dataset (90%/10% splitting for training and test sets) using a ResNet-18 architecture , resulting in a near-perfect accuracy of 99.55%. We then trained each of image translation models using the same training set and performed image translation on the same, unseen test set. Finally, we classified the expression of these translated images using the above-mentioned classifier. As can be seen in Table 3, our model achieves the lowest classification error, indicating that our model produces the most realistic facial expressions among all the methods compared. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_38", "text": " Another important advantage of our model is the scalability in terms of the number of parameters required. The last column in Table 3 shows that the number of parameters required to learn all translations by StarGAN is seven times smaller than that of DIAT and fourteen times smaller than that of CycleGAN. This is because StarGAN requires only a single generator and discriminator pair, regardless of the number of domains, while in the case of cross-domain models such as CycleGAN, a completely different model should be trained for each source-target domain pair. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_39", "text": " Finally, we empirically demonstrate that our model can learn not only from multiple domains within a single dataset, but also from multiple datasets. We train our model jointly on the CelebA and RaFD datasets using the mask vector (see Section 3.2). To distinguish between the model trained only on RaFD and the model trained on both CelebA and RaFD, we denote the former as StarGAN-SNG (single) and the latter as StarGAN-JNT (joint). ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_40", "text": " Effects of joint training. Fig. 6 shows qualitative comparisons between StarGAN-SNG and StarGAN-JNT, where the task is to synthesize facial expressions of images in CelebA. StarGAN-JNT exhibits emotional expressions with high visual quality, while StarGAN-SNG generates reasonable but blurry images with gray backgrounds. This difference is due to the fact that StarGAN-JNT learns to translate CelebA images during training but not StarGAN-SNG. In other words, StarGAN-JNT can leverage both datasets to improve shared low-level tasks such facial keypoint detection and segmentation. By utilizing both CelebA and RaFD, StarGAN-JNT can improve these low-level tasks, which is beneficial to learning facial expression synthesis. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_41", "text": " Learned role of mask vector. In this experiment, we gave a one-hot vector c𝑐c by setting the dimension of a particular facial expression (available from the second dataset, RaFD) to one. In this case, since the label associated with the second data set is explicitly given, the proper mask vector would be (0,1)01(0,1). Fig. 
7 shows the case where this proper mask vector was given and the opposite case where a wrong mask vector of (1,0)10(1,0) was given. When the wrong mask vector was used, StarGAN-JNT fails to synthesize facial expressions, and it manipulates the age of the input image. This is because the model ignores the facial expression label as unknown and treats the facial attribute label as valid by the mask vector. Note that since one of the facial attributes is ‘young’, the model translates the image from young to old when it takes in a zero vector as input. From this behavior, we can confirm that StarGAN properly learned the intended role of a mask vector in image-to-image translations when involving all the labels from multiple datasets altogether. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_42", "text": " In this paper, we proposed StarGAN, a scalable image-to-image translation model among multiple domains using a single generator and a discriminator. Besides the advantages in scalability, StarGAN generated images of higher visual quality compared to existing methods (16, 23, 33), owing to the generalization capability behind the multi-task learning setting. In addition, the use of the proposed simple mask vector enables StarGAN to utilize multiple datasets with different sets of domain labels, thus handling all available labels from them. We hope our work to enable users to develop interesting image translation applications across multiple domains. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_43", "text": " Acknowledgements. This work was mainly done while the first author did a research internship at Clova AI Research, NAVER. We thank all the researchers at NAVER, especially Donghyun Kwak, for insightful discussions. This work was partially supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIP) (No. NRF2016R1C1B2015924). Jaegul Choo is the corresponding author. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_44", "text": " Fig. 8 shows an overview of StarGAN when learning from both the CelebA and RaFD datasets. As can be seen at the top of the figure, the label for CelebA contains binary attributes (Black, Blond, Brown, Male, and Young), while the label for RaFD provides information on categorical attributes (Angry, Fearful, Happy, Sad, and Disgusted). The mask vector is a two-dimensional one-hot vector which indicates whether the CelebA or RaFD label is valid. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_45", "text": " The network architectures of StarGAN are shown in Table 4 and 5. For the generator network, we use instance normalization in all layers except the last output layer. For the discriminator network, we use Leaky ReLU with a negative slope of 0.01. There are some notations; ndsubscript𝑛𝑑{n}_{d}: the number of domain, ncsubscript𝑛𝑐{n}_{c}: the dimension of domain labels (nd+2subscript𝑛𝑑2{n}_{d}+2 when training with both the CelebA and RaFD datasets, otherwise same as ndsubscript𝑛𝑑{n}_{d}), N: the number of output channels, K: kernel size, S: stride size, P: padding size, IN: instance normalization. 
", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" }, { "id": "1711.09020_all_46", "text": " Figs. 9, 10, 11, and 12 show additional images with 256×256256256256\\times 256 resolutions generated by StarGAN. All images were generated by a single generator trained on both the CelebA and RaFD datasets. We trained StarGAN on a single NVIDIA Pascal M40 GPU for seven days. ", "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation" } ]
Why did the authors describe their model as end-to-end?
The authors describe their model as end-to-end because it jointly predicts the mentions, the entity links, and the coreference relations between them [3].
[ 3 ]
[ { "id": "2108.13530_all_0", "text": " In this paper we explore a principled approach to solve entity linking (EL) jointly with coreference resolution (coref). Concretely, we formulate coref+EL as a single structured task over directed trees that conceives EL and coref as two complementary components: a coreferenced cluster can only be linked to a single entity or NIL (i.e., a non-linkable entity), and all mentions linking to the same entity are coreferent. This contrasts with previous attempts to join coref+EL (Hajishirzi et al., 2013; Dutta and Weikum, 2015; Angell et al., 2021) where coref and EL models are trained separately and additional logic is required to merge the predictions of both tasks. ", "title": "Joint Models for Entity Linking and Coreference Resolution" }, { "id": "2108.13530_all_1", "text": " Our first approach (Local in Fig. 1(a)) is motivated by current state-of-the-art coreference resolution models (Joshi et al., 2019; Wu et al., 2020) that predict a single antecedent for each span to resolve. We extend this architecture by also considering entity links as potential antecendents: in the example of Fig. 1, the mention “Alliance” can be either connected to its antecedent mention “NATO” or to any of its candidate links (Alliance or Alliance,_Ohio). While straightforward, this approach cannot solve cases where the first coreferenced mention does not include the correct entity in its candidate list (e.g., if the order of “NATO” and “Alliance” mentions in Fig. 1 would be reversed). We therefor propose a second approach, Global, which by construction overcomes this inherent limitation by using bidirectional connections between mentions. Because that implies cycles could be formed, we resort to solving a maximum spanning tree problem. Mentions that refer to the same entity form a cluster, represented as a subtree rooted by the single entity they link to. To encode the overall document’s clusters in a single spanning tree, we introduce a virtual root node (see Fig. 1(b)).222Coreference clusters without a linked entity, i.e., a NIL cluster, have a link of a mention directly to the root. ", "title": "Joint Models for Entity Linking and Coreference Resolution" }, { "id": "2108.13530_all_2", "text": " This paper contributes: (i) 2 architectures (Local and Global) for joint entity linking (EL) and corefence resolution, (ii) an extended AIDA dataset (Hoffart et al., 2011), adding new annotations of linked and NIL coreference clusters, (iii) experimental analysis on 2 datasets where our joint coref+EL models achieve up to +5% F1-score on both tasks compared to standalone models. We also show up to +50% in accuracy for hard cases of EL where entity mentions lack the correct entity in their candidate list. ", "title": "Joint Models for Entity Linking and Coreference Resolution" }, { "id": "2108.13530_all_3", "text": " Our model takes as input (i) the full document text, and (ii) an alias table with entity candidates for each of the possible spans. Our end-to-end approach allows to jointly predict the mentions, entity links and coreference relations between them. ", "title": "Joint Models for Entity Linking and Coreference Resolution" }, { "id": "2108.13530_all_4", "text": " We use SpanBERT (base) from Joshi et al. (2020) to obtain span representations gisubscriptg𝑖\\textbf{g}_{i} for a particular span sisubscript𝑠𝑖s_{i}. Similarly to Luan et al. 
(2019); Xu and Choi (2020), we apply an additional pruning step to keep only the top-N𝑁N spans based on the pruning score ΦpsubscriptΦp\\Phi_{\\mathrm{p}} from a feed-forward neural net (FFNN): Φp​(si)=FFNNP​(gi).subscriptΦpsubscript𝑠𝑖subscriptFFNN𝑃subscriptg𝑖\\Phi_{\\mathrm{p}}(s_{i})=\\mathrm{FFNN}_{P}(\\textbf{g}_{i}). (1) ", "title": "Joint Models for Entity Linking and Coreference Resolution" }, { "id": "2108.13530_all_5", "text": " For a candidate entity ejsubscript𝑒𝑗e_{j} of span sisubscript𝑠𝑖s_{i} we will obtain representation as ejsubscripte𝑗\\textbf{e}_{j} (which is further detailed in §3). ", "title": "Joint Models for Entity Linking and Coreference Resolution" }, { "id": "2108.13530_all_6", "text": " We propose two methods for joint coreference and EL. The first, Local, is motivated by end-to-end span-based coreference resolution models (Lee et al., 2017, 2018) that optimize the marginalized probability of the correct antecedents for each given span. We extend this local marginalization to include the span’s candidate entity links. Formally, the modeled probability of y𝑦y (text span or candidate entity) being the antecedent of span sisubscript𝑠𝑖s_{i} is: Pcl​(y|si)=exp⁡(Φcl​(si,y))∑y′∈𝒴​(si)exp⁡(Φcl​(si,y′)),subscript𝑃clconditional𝑦subscript𝑠𝑖subscriptΦclsubscript𝑠𝑖𝑦subscriptsuperscript𝑦′𝒴subscript𝑠𝑖subscriptΦclsubscript𝑠𝑖superscript𝑦′P_{\\mathrm{cl}}(y|s_{i})=\\dfrac{\\exp\\big{(}\\Phi_{\\mathrm{cl}}(s_{i},y)\\big{)}}{\\sum_{y^{\\prime}\\in\\mathcal{Y}(s_{i})}\\exp\\big{(}\\Phi_{\\mathrm{cl}}(s_{i},y^{\\prime})\\big{)}}, (2) where 𝒴​(si)𝒴subscript𝑠𝑖\\mathcal{Y}(s_{i}) is the set of antecedent spans unified with the candidate entities for sisubscript𝑠𝑖s_{i}. For antecedent spans {sj:j<i}conditional-setsubscript𝑠𝑗𝑗𝑖\\{s_{j}:j<i\\} the score ΦclsubscriptΦcl\\Phi_{\\mathrm{cl}} is defined as: \\medmath​Φcl​(si,sj)=Φp​(si)+Φp​(sj)+Φc​(si,sj),\\medmathsubscriptΦclsubscript𝑠𝑖subscript𝑠𝑗subscriptΦpsubscript𝑠𝑖subscriptΦpsubscript𝑠𝑗subscriptΦcsubscript𝑠𝑖subscript𝑠𝑗\\displaystyle\\medmath{\\Phi_{\\mathrm{cl}}(s_{i},s_{j})=\\Phi_{\\mathrm{p}}(s_{i})+\\Phi_{\\mathrm{p}}(s_{j})+\\Phi_{\\mathrm{c}}(s_{i},s_{j})}, (3) \\medmath​Φc​(si,sj)=FFNNC​((gi;gj;gi⊙gj;𝝋i,j)),\\medmathsubscriptΦcsubscript𝑠𝑖subscript𝑠𝑗subscriptFFNN𝐶subscriptg𝑖subscriptg𝑗direct-productsubscriptg𝑖subscriptg𝑗subscript𝝋𝑖𝑗\\displaystyle\\medmath{\\Phi_{\\mathrm{c}}(s_{i},s_{j})=\\mathrm{FFNN}_{C}((\\textbf{g}_{i};\\textbf{g}_{j};\\textbf{g}_{i}\\odot\\textbf{g}_{j};\\boldsymbol{\\varphi}_{i,j}))}, (4) where 𝝋i,jsubscript𝝋𝑖𝑗\\boldsymbol{\\varphi}_{i,j} is an embedding encoding the distance333Measured in number of spans, after pruning. between spans sisubscript𝑠𝑖s_{i} and sjsubscript𝑠𝑗s_{j}. Similarly, for a particular candidate entity ejsubscript𝑒𝑗e_{j}, the score ΦclsubscriptΦcl\\Phi_{\\mathrm{cl}} is: Φcl​(si,ej)=Φp​(si)+Φℓ​(si,ej),subscriptΦclsubscript𝑠𝑖subscript𝑒𝑗subscriptΦpsubscript𝑠𝑖subscriptΦℓsubscript𝑠𝑖subscript𝑒𝑗\\displaystyle\\Phi_{\\mathrm{cl}}(s_{i},e_{j})=\\Phi_{\\mathrm{p}}(s_{i})+\\Phi_{{\\ell}}(s_{i},e_{j}), (5) Φℓ​(si,ej)=FFNNL​((gi;ej)).subscriptΦℓsubscript𝑠𝑖subscript𝑒𝑗subscriptFFNN𝐿subscriptg𝑖subscripte𝑗\\displaystyle\\Phi_{\\ell}(s_{i},e_{j})=\\mathrm{FFNN}_{L}((\\textbf{g}_{i};\\textbf{e}_{j})). (6) An example graph of mentions and entities with edges for which aforementioned scores ΦclsubscriptΦcl\\Phi_{\\mathrm{cl}} would be calculated is sketched in Fig. 1(a). 
While simple, this approach fails to correctly solve EL when the correct entity is only present in the candidate lists of mention spans occurring later in the text (since earlier mentions have no access to it). ", "title": "Joint Models for Entity Linking and Coreference Resolution" }, { "id": "2108.13530_all_7", "text": " To solve EL in the general case, even when the first mention does not have the correct entity, we propose bidirectional connections between mentions, thus leading to a maximum spanning tree problem in our Global approach. Here we define a score for a (sub)tree t𝑡t, noted as Φtr​(t)subscriptΦtr𝑡\\Phi_{\\mathrm{tr}}(t): Φtr​(t)=∑(i,j)∈tΦcl​(ui,uj),subscriptΦtr𝑡subscript𝑖𝑗𝑡subscriptΦclsubscript𝑢𝑖subscript𝑢𝑗\\Phi_{\\mathrm{tr}}(t)=\\sum_{(i,j)\\in t}\\Phi_{\\mathrm{cl}}(u_{i},u_{j}), (7) where uisubscript𝑢𝑖u_{i} and ujsubscript𝑢𝑗u_{j} are two connected nodes (i.e., root, candidate entities or spans) in t𝑡t. For a ground truth cluster c∈C𝑐𝐶c\\in C (with C𝐶C being the set of all such clusters), with its set444For a single cluster annotation, indeed it is possible that multiple correct trees can be drawn. of correct subtree representations 𝒯csubscript𝒯𝑐\\mathcal{T}_{c}, we model the cluster’s likelihood with its subtree scores. We minimize the negative log-likelihood ℒℒ\\mathcal{L} of all clusters: ℒℒ\\displaystyle\\mathcal{L} =−log⁡∏c∈C∑t∈𝒯cexp⁡(Φtr​(t))∑t∈𝒯allexp⁡(Φtr​(t)).absentsubscriptproduct𝑐𝐶subscript𝑡subscript𝒯𝑐subscriptΦtr𝑡subscript𝑡subscript𝒯allsubscriptΦtr𝑡\\displaystyle=-\\log\\frac{\\prod_{c\\in C}\\sum_{t\\in\\mathcal{T}_{c}}\\exp\\big{(}\\Phi_{\\mathrm{tr}}(t)\\big{)}}{\\sum_{t\\in\\mathcal{T}_{\\textit{all}}}\\exp\\big{(}\\Phi_{\\mathrm{tr}}(t)\\big{)}}. (8) Naively enumerating all possible spanning trees (𝒯allsubscript𝒯all\\mathcal{T}_{\\textit{all}} or 𝒯csubscript𝒯𝑐\\mathcal{T}_{c}) implied by this equation is infeasible, since their number is exponentially large. We use the adapted Kirchhoff’s Matrix Tree Theorem (MTT; Koo et al. (2007); Tutte (1984)) to solve this: the sum of the weights of the spanning trees in a directed graph rooted in r is equal to the determinant of the Laplacian matrix of the graph with the row and column corresponding to r removed (i.e., the minor of the Laplacian with respect to r). This way, eq. (8) can be rewritten as ℒℒ\\displaystyle\\mathcal{L} =−log⁡∏c∈Cdet(𝐋^c​(𝚽cl))det(𝐋r​(𝚽cl)),absentsubscriptproduct𝑐𝐶subscript^𝐋𝑐subscript𝚽clsubscript𝐋𝑟subscript𝚽cl\\displaystyle=-\\log\\frac{\\prod_{c\\in C}{\\det\\Big{(}\\mathbf{\\hat{L}}_{c}\\big{(}\\mathbf{\\Phi_{\\mathrm{cl}}}\\big{)}\\Big{)}}}{\\det\\Big{(}\\mathbf{L}_{r}\\big{(}\\mathbf{\\Phi_{\\mathrm{cl}}}\\big{)}\\Big{)}}, (9) where 𝚽clsubscript𝚽cl\\mathbf{\\Phi_{\\mathrm{cl}}} is the weighted adjacency matrix of the graph, and 𝐋rsubscript𝐋𝑟\\mathbf{L}_{r} is the minor of the Laplacian with respect to the root node r𝑟r. An entry in the Laplacian matrix is calculated as \\medmath​Li,j={∑kexp⁡(Φcl​(uk,uj))if i=j−exp⁡(Φcl​(ui,uj))otherwise,\\medmathsubscript𝐿𝑖𝑗casessubscript𝑘subscriptΦclsubscript𝑢𝑘subscript𝑢𝑗if i=jsubscriptΦclsubscript𝑢𝑖subscript𝑢𝑗otherwise\\displaystyle\\medmath{L_{i,j}=\\begin{cases}\\sum\\limits_{k}\\exp(\\Phi_{\\mathrm{cl}}(u_{k},u_{j}))&\\text{if $i=j$}\\\\ -\\exp(\\Phi_{\\mathrm{cl}}(u_{i},u_{j}))&\\text{otherwise}\\end{cases}}, (10) Similarly, 𝐋^csubscript^𝐋𝑐\\mathbf{\\hat{L}}_{c} is a modified Laplacian matrix where the first row is replaced with the root r𝑟r selection scores Φcl​(r,uj)subscriptΦcl𝑟subscript𝑢𝑗\\Phi_{\\mathrm{cl}}(r,u_{j}). 
For clarity, Appendix A presents a toy example with detailed steps to calculate the loss in eq. (9). ", "title": "Joint Models for Entity Linking and Coreference Resolution" }, { "id": "2108.13530_all_8", "text": " To calculate the scores of each of the entries Φcl​(ui,uj)subscriptΦclsubscript𝑢𝑖subscript𝑢𝑗\\Phi_{\\textrm{cl}}(u_{i},u_{j}) to 𝚽clsubscript𝚽cl\\mathbf{\\Phi_{\\mathrm{cl}}} matrix in eqs. (7) and (9) for Global, we use the same approach as in Local for edges between two mention spans, or between a mention and entity. For the directed edges between the root r𝑟r and a candidate entity ejsubscript𝑒𝑗e_{j} we choose Φcl​(r,ej)=0subscriptΦcl𝑟subscript𝑒𝑗0\\Phi_{\\mathrm{cl}}(r,e_{j})=0. Since we represent NIL clusters by edges from the mention spans directly to the root, we also need scores for them: we use eq. (3) with Φp​(r)=0subscriptΦp𝑟0\\Phi_{\\mathrm{p}}(r)=0. We use Edmonds’ algorithm (Edmonds, 1967) for decoding the maximum spanning tree. ", "title": "Joint Models for Entity Linking and Coreference Resolution" }, { "id": "2108.13530_all_9", "text": " We considered two datasets to evaluate our proposed models: DWIE (Zaporojets et al., 2021) and AIDA (Hoffart et al., 2011). Since AIDA essentially does not contain coreference information, we had to extend it by (i) adding missing mention links in order to make annotations consistent on the coreference cluster level, and (ii) annotating NIL coreference clusters. We note this extended dataset as AIDA+. See Table 1 for the details. ", "title": "Joint Models for Entity Linking and Coreference Resolution" }, { "id": "2108.13530_all_10", "text": " As input to our models, for DWIE we generate spans of up to 5 tokens. For each mention span sisubscript𝑠𝑖s_{i}, we find candidates from a dictionary of entity surface forms used for hyperlinks in Wikipedia. We then keep the top-16 candidates based on the prior for that surface form, as per Yamada et al. (2016, §3). Each of those candidates ejsubscript𝑒𝑗e_{j} is represented using a Wikipedia2Vec embedding ejsubscripte𝑗\\textbf{e}_{j} (Yamada et al., 2016).555We use Wikipedia version 20200701. For AIDA+, we use the spans, entity candidates, and entity representations from Kolitsas et al. (2018).666https://github.com/dalab/end2end_neural_el ", "title": "Joint Models for Entity Linking and Coreference Resolution" }, { "id": "2108.13530_all_11", "text": " To assess the performance of our joint coref+EL models Local and Global, we also provide Standalone implementations for coref and EL tasks. The Standalone coref model is trained using only the coreference component of our joint architecture (eq. (2)–(4)), while the EL model is based only on the linking component (eq. (6)). ", "title": "Joint Models for Entity Linking and Coreference Resolution" }, { "id": "2108.13530_all_12", "text": " As performance metrics, for coreference resolution we calculate the average-F1 score of commonly used MUC (Vilain et al., 1995), B3 (Bagga and Baldwin, 1998) and CEAFee{}_{\\textrm{e}} (Luo, 2005) metrics as implemented by Pradhan et al. (2014). For EL, we use (i) mention-level F1 score (ELm), and (ii) cluster-level hard F1 score (ELh) that counts a true positive only if both the coreference cluster (in terms of all its mention spans) and the entity link are correctly predicted. These EL metrics are executed in a strong matching setting that requires predicted spans to exactly match the boundaries of gold mentions. 
Furthermore, for EL we only report the performance on non-NIL mentions, leaving the study of NIL links for future work. ", "title": "Joint Models for Entity Linking and Coreference Resolution" }, { "id": "2108.13530_all_13", "text": " Our experiments will answer the following research questions:  (Q1) How does performance of our joint coref+EL models compare to Standalone  models? (Q2) Does jointly solving coreference resolution and EL enable more coherent EL predictions? (Q3) How do our joint models perform on hard cases where some individual entity mentions do not have the correct candidate? ", "title": "Joint Models for Entity Linking and Coreference Resolution" }, { "id": "2108.13530_all_14", "text": " Table 2 shows the results of our compared models for EL and coreference resolution tasks. Answering (Q1), we observe a general improvement in performance of our coref+EL joint models (Local and Global) compared to Standalone  on the EL task. Furthermore, this difference is bigger when using our cluster-level hard metrics. This also answers (Q2) by indicating that the joint models tend to produce more coherent cluster-based predictions. To make this more explicit, Table 3 compares the accuracy for singleton clusters (i.e., clusters composed by a single entity mention), denoted as S𝑆S, to that of clusters composed by multiple mentions, denoted as M𝑀M. We observe that the difference in performance between our joint models and Standalone is bigger on M𝑀M clusters (with a consistent superiority of Global), indicating that our approach indeed produces more coherent predictions for mentions that refer to the same concept. ", "title": "Joint Models for Entity Linking and Coreference Resolution" }, { "id": "2108.13530_all_15", "text": " Further analysis reveals that this difference in performance is even higher for a more complex scenario where the clusters contain mentions with different surface forms (not shown in the table). ", "title": "Joint Models for Entity Linking and Coreference Resolution" }, { "id": "2108.13530_all_16", "text": " In order to tackle research question (Q3), we study the accuracy of our models on the important corner case that involves mentions without correct entity in their candidate lists. This is illustrated in Table 4, which focuses on such mentions in clusters where at least one mention contains the correct entity in its candidate list. As expected, the Standalone model cannot link such mentions, as it is limited to the local candidate list. In contrast, both our joint approaches can solve some of these cases by using the correct candidates from other mentions in the cluster, with a superior performance of our Global model compared to the Local one. ", "title": "Joint Models for Entity Linking and Coreference Resolution" }, { "id": "2108.13530_all_17", "text": " Entity Linking: Related work in entity linking (EL) tackles the document-level linking coherence by exploring relations between entities (Kolitsas et al., 2018; Yang et al., 2019; Le and Titov, 2019), or entities and mentions (Le and Titov, 2018). More recently, contextual BERT-driven (Devlin et al., 2019) language models have been used for the EL task (Broscheit, 2019; De Cao et al., 2020, 2021; Yamada et al., 2020) by jointly embedding mentions and entities. In contrast, we explore a cluster-based EL approach where the coherence is achieved on coreferent entity mentions level. 
", "title": "Joint Models for Entity Linking and Coreference Resolution" }, { "id": "2108.13530_all_18", "text": " Coreference Resolution: Span-based antecedent-ranking coreference resolution (Lee et al., 2017, 2018) has seen a recent boost by using SpanBERT representations (Xu and Choi, 2020; Joshi et al., 2020; Wu et al., 2020). We extend this approach in our Local joint coref+EL architecture. Furthermore, we rely on Kirchhoff’s Matrix Tree Theorem (Koo et al., 2007; Tutte, 1984) to efficiently train a more expressive spanning tree-based Global method. ", "title": "Joint Models for Entity Linking and Coreference Resolution" }, { "id": "2108.13530_all_19", "text": " Joint EL+Coref: Fahrni and Strube (2012) introduce a more expensive rule-based Integer Linear Programming component to jointly predict coref and EL. Durrett and Klein (2014) jointly train coreference and entity linking without enforcing single-entity per cluster consistency. More recently, Angell et al. (2021); Agarwal et al. (2021) use additional logic to achieve consistent cluster-level entity linking. In contrast, our proposed approach constrains the space of the predicted spanning trees on a structural level (see Fig. 1). ", "title": "Joint Models for Entity Linking and Coreference Resolution" }, { "id": "2108.13530_all_20", "text": " We propose two end-to-end models to solve entity linking and coreference resolution tasks in a joint setting. Our joint architectures achieve superior performance compared to the standalone counterparts. Further analysis reveals that this boost in performance is driven by more coherent predictions on the level of mention clusters (linking to the same entity) and extended candidate entity coverage. ", "title": "Joint Models for Entity Linking and Coreference Resolution" } ]
Can we say more about what an instance of the 'underlying relationship' might be?
[For translation between domains, the underlying relationship could be the relationship between those domains – for instance, that they are two different renderings of the same underlying scene] [5].
[ 5 ]
[ { "id": "1703.10593_all_0", "text": " What did Claude Monet see as he placed his easel by the bank of the Seine near Argenteuil on a lovely spring day in 1873 (Figure 1, top-left)? A color photograph, had it been invented, may have documented a crisp blue sky and a glassy river reflecting it. Monet conveyed his impression of this same scene through wispy brush strokes and a bright palette.††* indicates equal contribution ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_1", "text": " What if Monet had happened upon the little harbor in Cassis on a cool summer evening (Figure 1, bottom-left)? A brief stroll through a gallery of Monet paintings makes it possible to imagine how he would have rendered the scene: perhaps in pastel shades, with abrupt dabs of paint, and a somewhat flattened dynamic range. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_2", "text": " We can imagine all this despite never having seen a side by side example of a Monet painting next to a photo of the scene he painted. Instead, we have knowledge of the set of Monet paintings and of the set of landscape photographs. We can reason about the stylistic differences between these two sets, and thereby imagine what a scene might look like if we were to “translate” it from one set into the other. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_3", "text": " In this paper, we present a method that can learn to do the same: capturing special characteristics of one image collection and figuring out how these characteristics could be translated into the other image collection, all in the absence of any paired training examples. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_4", "text": " This problem can be more broadly described as image-to-image translation , converting an image from one representation of a given scene, x𝑥x, to another, y𝑦y, e.g., grayscale to color, image to semantic labels, edge-map to photograph. Years of research in computer vision, image processing, computational photography, and graphics have produced powerful translation systems in the supervised setting, where example image pairs {xi,yi}i=1Nsuperscriptsubscriptsubscript𝑥𝑖subscript𝑦𝑖𝑖1𝑁\\{x_{i},y_{i}\\}_{i=1}^{N} are available (Figure 2, left), e.g., (11, 19, 22, 23, 28, 33, 45, 56, 58, 62). However, obtaining paired training data can be difficult and expensive. For example, only a couple of datasets exist for tasks like semantic segmentation (e.g., ), and they are relatively small. Obtaining input-output pairs for graphics tasks like artistic stylization can be even more difficult since the desired output is highly complex, typically requiring artistic authoring. For many tasks, like object transfiguration (e.g., zebra↔↔\\leftrightarrowhorse,  Figure 1 top-middle), the desired output is not even well-defined. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_5", "text": " We therefore seek an algorithm that can learn to translate between domains without paired input-output examples (Figure 2, right). We assume there is some underlying relationship between the domains – for example, that they are two different renderings of the same underlying scene – and seek to learn that relationship. 
Although we lack supervision in the form of paired examples, we can exploit supervision at the level of sets: we are given one set of images in domain X𝑋X and a different set in domain Y𝑌Y. We may train a mapping G:X→Y:𝐺→𝑋𝑌G:X\\rightarrow Y such that the output y^=G​(x)^𝑦𝐺𝑥\\hat{y}=G(x), x∈X𝑥𝑋x\\in X, is indistinguishable from images y∈Y𝑦𝑌y\\in Y by an adversary trained to classify y^^𝑦\\hat{y} apart from y𝑦y. In theory, this objective can induce an output distribution over y^^𝑦\\hat{y} that matches the empirical distribution pd​a​t​a​(y)subscript𝑝𝑑𝑎𝑡𝑎𝑦p_{data}(y) (in general, this requires G𝐺G to be stochastic) . The optimal G𝐺G thereby translates the domain X𝑋X to a domain Y^^𝑌\\hat{Y} distributed identically to Y𝑌Y. However, such a translation does not guarantee that an individual input x𝑥x and output y𝑦y are paired up in a meaningful way – there are infinitely many mappings G𝐺G that will induce the same distribution over y^^𝑦\\hat{y}. Moreover, in practice, we have found it difficult to optimize the adversarial objective in isolation: standard procedures often lead to the well-known problem of mode collapse, where all input images map to the same output image and the optimization fails to make progress . ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_6", "text": " These issues call for adding more structure to our objective. Therefore, we exploit the property that translation should be “cycle consistent”, in the sense that if we translate, e.g., a sentence from English to French, and then translate it back from French to English, we should arrive back at the original sentence . Mathematically, if we have a translator G:X→Y:𝐺→𝑋𝑌G:X\\rightarrow Y and another translator F:Y→X:𝐹→𝑌𝑋F:Y\\rightarrow X, then G𝐺G and F𝐹F should be inverses of each other, and both mappings should be bijections. We apply this structural assumption by training both the mapping G𝐺G and F𝐹F simultaneously, and adding a cycle consistency loss  that encourages F​(G​(x))≈x𝐹𝐺𝑥𝑥F(G(x))\\approx x and G​(F​(y))≈y𝐺𝐹𝑦𝑦G(F(y))\\approx y. Combining this loss with adversarial losses on domains X𝑋X and Y𝑌Y yields our full objective for unpaired image-to-image translation. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_7", "text": " We apply our method to a wide range of applications, including collection style transfer, object transfiguration, season transfer and photo enhancement. We also compare against previous approaches that rely either on hand-defined factorizations of style and content, or on shared embedding functions, and show that our method outperforms these baselines. We provide both PyTorch and Torch implementations. Check out more results at our website. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_8", "text": " Generative Adversarial Networks (GANs) (16, 63) have achieved impressive results in image generation (6, 39), image editing , and representation learning (39, 43, 37).  Recent methods adopt the same idea for conditional image generation applications, such as text2image , image inpainting , and future prediction , as well as to other domains like videos  and 3D data . The key to GANs’ success is the idea of an adversarial loss that forces the generated images to be, in principle, indistinguishable from real photos. 
This loss is particularly powerful for image generation tasks, as this is exactly the objective that much of computer graphics aims to optimize. We adopt an adversarial loss to learn the mapping such that the translated images cannot be distinguished from images in the target domain. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_9", "text": " Image-to-Image Translation The idea of image-to-image translation goes back at least to Hertzmann et al.’s Image Analogies , who employ a non-parametric texture model  on a single input-output training image pair. More recent approaches use a dataset of input-output examples to learn a parametric translation function using CNNs (e.g., ). Our approach builds on the “pix2pix” framework of Isola et al. , which uses a conditional generative adversarial network  to learn a mapping from input to output images. Similar ideas have been applied to various tasks such as generating photographs from sketches  or from attribute and semantic layouts . However, unlike the above prior work, we learn the mapping without paired training examples. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_10", "text": " Unpaired Image-to-Image Translation Several other methods also tackle the unpaired setting, where the goal is to relate two data domains: X𝑋X and Y𝑌Y. Rosales et al.  propose a Bayesian framework that includes a prior based on a patch-based Markov random field computed from a source image and a likelihood term obtained from multiple style images. More recently, CoGAN  and cross-modal scene networks  use a weight-sharing strategy to learn a common representation across domains. Concurrent to our method, Liu et al.  extends the above framework with a combination of variational autoencoders  and generative adversarial networks . Another line of concurrent work (46, 49, 2) encourages the input and output to share specific “content” features even though they may differ in “style“. These methods also use adversarial networks, with additional terms to enforce the output to be close to the input in a predefined metric space, such as class label space , image pixel space , and image feature space . ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_11", "text": " Unlike the above approaches, our formulation does not rely on any task-specific, predefined similarity function between the input and output, nor do we assume that the input and output have to lie in the same low-dimensional embedding space. This makes our method a general-purpose solution for many vision and graphics tasks. We directly compare against several prior and contemporary approaches in Section 5.1. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_12", "text": " Cycle Consistency The idea of using transitivity as a way to regularize structured data has a long history. In visual tracking, enforcing simple forward-backward consistency has been a standard trick for decades (24, 48). In the language domain, verifying and improving translations via “back translation and reconciliation” is a technique used by human translators  (including, humorously, by Mark Twain ), as well as by machines . 
More recently, higher-order cycle consistency has been used in structure from motion , 3D shape matching , co-segmentation , dense semantic alignment (65, 64), and depth estimation . Of these, Zhou et al.  and Godard et al.  are most similar to our work, as they use a cycle consistency loss as a way of using transitivity to supervise CNN training. In this work, we are introducing a similar loss to push G𝐺G and F𝐹F to be consistent with each other. Concurrent with our work, in these same proceedings, Yi et al.  independently use a similar objective for unpaired image-to-image translation, inspired by dual learning in machine translation . ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_13", "text": " Neural Style Transfer (13, 23, 52, 12) is another way to perform image-to-image translation, which synthesizes a novel image by combining the content of one image with the style of another image (typically a painting) based on matching the Gram matrix statistics of pre-trained deep features. Our primary focus, on the other hand, is learning the mapping between two image collections, rather than between two specific images, by trying to capture correspondences between higher-level appearance structures. Therefore, our method can be applied to other tasks, such as painting→→\\rightarrow photo, object transfiguration, etc. where single sample transfer methods do not perform well. We compare these two methods in  Section 5.2. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_14", "text": " Our goal is to learn mapping functions between two domains X𝑋X and Y𝑌Y given training samples {xi}i=1Nsuperscriptsubscriptsubscript𝑥𝑖𝑖1𝑁\\{x_{i}\\}_{i=1}^{N} where xi∈Xsubscript𝑥𝑖𝑋x_{i}\\in X and {yj}j=1Msuperscriptsubscriptsubscript𝑦𝑗𝑗1𝑀\\{y_{j}\\}_{j=1}^{M} where yj∈Ysubscript𝑦𝑗𝑌y_{j}\\in Y111We often omit the subscript i𝑖i and j𝑗j for simplicity.. We denote the data distribution as x∼pd​a​t​a​(x)similar-to𝑥subscript𝑝𝑑𝑎𝑡𝑎𝑥x\\sim p_{data}(x) and y∼pd​a​t​a​(y)similar-to𝑦subscript𝑝𝑑𝑎𝑡𝑎𝑦y\\sim p_{data}(y). As illustrated in Figure 3 (a), our model includes two mappings G:X→Y:𝐺→𝑋𝑌G:X\\rightarrow Y and F:Y→X:𝐹→𝑌𝑋F:Y\\rightarrow X. In addition, we introduce two adversarial discriminators DXsubscript𝐷𝑋D_{X} and DYsubscript𝐷𝑌D_{Y}, where DXsubscript𝐷𝑋D_{X} aims to distinguish between images {x}𝑥\\{x\\} and translated images {F​(y)}𝐹𝑦\\{F(y)\\}; in the same way, DYsubscript𝐷𝑌D_{Y} aims to discriminate between {y}𝑦\\{y\\} and {G​(x)}𝐺𝑥\\{G(x)\\}. Our objective contains two types of terms: adversarial losses  for matching the distribution of generated images to the data distribution in the target domain; and cycle consistency losses to prevent the learned mappings G𝐺G and F𝐹F from contradicting each other. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_15", "text": " We apply adversarial losses  to both mapping functions. 
For the mapping function G:X→Y:𝐺→𝑋𝑌G:X\\rightarrow Y and its discriminator DYsubscript𝐷𝑌D_{Y}, we express the objective as: ℒGAN​(G,DY,X,Y)=subscriptℒGAN𝐺subscript𝐷𝑌𝑋𝑌absent\\displaystyle\\mathcal{L}_{\\text{GAN}}(G,D_{Y},X,Y)= 𝔼y∼pdata​(y)​(log⁡DY​(y))subscript𝔼similar-to𝑦subscript𝑝data𝑦delimited-()subscript𝐷𝑌𝑦\\displaystyle\\ \\mathbb{E}_{y\\sim p_{\\text{data}}(y)}(\\log D_{Y}(y)) +\\displaystyle+ 𝔼x∼pdata​(x)(log(1−DY(G(x))),\\displaystyle\\ \\mathbb{E}_{x\\sim p_{\\text{data}}(x)}(\\log(1-D_{Y}(G(x))), (1) where G𝐺G tries to generate images G​(x)𝐺𝑥G(x) that look similar to images from domain Y𝑌Y, while DYsubscript𝐷𝑌D_{Y} aims to distinguish between translated samples G​(x)𝐺𝑥G(x) and real samples y𝑦y. G𝐺G aims to minimize this objective against an adversary D𝐷D that tries to maximize it, i.e., minG⁡maxDY⁡ℒGAN​(G,DY,X,Y)subscript𝐺subscriptsubscript𝐷𝑌subscriptℒGAN𝐺subscript𝐷𝑌𝑋𝑌\\min_{G}\\max_{D_{Y}}\\mathcal{L}_{\\text{GAN}}(G,D_{Y},X,Y). We introduce a similar adversarial loss for the mapping function F:Y→X:𝐹→𝑌𝑋F:Y\\rightarrow X and its discriminator DXsubscript𝐷𝑋D_{X} as well: i.e., minF⁡maxDX⁡ℒGAN​(F,DX,Y,X)subscript𝐹subscriptsubscript𝐷𝑋subscriptℒGAN𝐹subscript𝐷𝑋𝑌𝑋\\min_{F}\\max_{D_{X}}\\mathcal{L}_{\\text{GAN}}(F,D_{X},Y,X). ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_16", "text": " Adversarial training can, in theory, learn mappings G𝐺G and F𝐹F that produce outputs identically distributed as target domains Y𝑌Y and X𝑋X respectively (strictly speaking, this requires G𝐺G and F𝐹F to be stochastic functions) . However, with large enough capacity, a network can map the same set of input images to any random permutation of images in the target domain, where any of the learned mappings can induce an output distribution that matches the target distribution. Thus, adversarial losses alone cannot guarantee that the learned function can map an individual input xisubscript𝑥𝑖x_{i} to a desired output yisubscript𝑦𝑖y_{i}. To further reduce the space of possible mapping functions, we argue that the learned mapping functions should be cycle-consistent: as shown in Figure 3 (b), for each image x𝑥x from domain X𝑋X, the image translation cycle should be able to bring x𝑥x back to the original image, i.e., x→G​(x)→F​(G​(x))≈x→𝑥𝐺𝑥→𝐹𝐺𝑥𝑥x\\rightarrow G(x)\\rightarrow F(G(x))\\approx x. We call this forward cycle consistency. Similarly, as illustrated in Figure 3 (c), for each image y𝑦y from domain Y𝑌Y, G𝐺G and F𝐹F should also satisfy backward cycle consistency: y→F​(y)→G​(F​(y))≈y→𝑦𝐹𝑦→𝐺𝐹𝑦𝑦y\\rightarrow F(y)\\rightarrow G(F(y))\\approx y. We incentivize this behavior using a cycle consistency loss: ℒcyc​(G,F)=subscriptℒcyc𝐺𝐹absent\\displaystyle\\mathcal{L}_{\\text{cyc}}(G,F)= 𝔼x∼pdata​(x)​(∥F​(G​(x))−x∥1)subscript𝔼similar-to𝑥subscript𝑝data𝑥delimited-()subscriptdelimited-∥∥𝐹𝐺𝑥𝑥1\\displaystyle\\ \\mathbb{E}_{x\\sim p_{\\text{data}}(x)}(\\lVert F(G(x))-x\\rVert_{1}) +\\displaystyle+ 𝔼y∼pdata​(y)​(∥G​(F​(y))−y∥1).subscript𝔼similar-to𝑦subscript𝑝data𝑦delimited-()subscriptdelimited-∥∥𝐺𝐹𝑦𝑦1\\displaystyle\\ \\mathbb{E}_{y\\sim p_{\\text{data}}(y)}(\\lVert G(F(y))-y\\rVert_{1}). (2) In preliminary experiments, we also tried replacing the L1 norm in this loss with an adversarial loss between F​(G​(x))𝐹𝐺𝑥F(G(x)) and x𝑥x, and between G​(F​(y))𝐺𝐹𝑦G(F(y)) and y𝑦y, but did not observe improved performance. 
", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_17", "text": " The behavior induced by the cycle consistency loss can be observed in Figure 4: the reconstructed images F​(G​(x))𝐹𝐺𝑥F(G(x)) end up matching closely to the input images x𝑥x. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_18", "text": " ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_19", "text": " Our full objective is: ℒ​(G,F,DX,DY)=ℒ𝐺𝐹subscript𝐷𝑋subscript𝐷𝑌absent\\displaystyle\\mathcal{L}(G,F,D_{X},D_{Y})= 、​ℒGAN​(G,DY,X,Y)、subscriptℒGAN𝐺subscript𝐷𝑌𝑋𝑌\\displaystyle、\\mathcal{L}_{\\text{GAN}}(G,D_{Y},X,Y) +\\displaystyle+ ℒGAN​(F,DX,Y,X)subscriptℒGAN𝐹subscript𝐷𝑋𝑌𝑋\\displaystyle\\ \\mathcal{L}_{\\text{GAN}}(F,D_{X},Y,X) +\\displaystyle+ λ​ℒcyc​(G,F),𝜆subscriptℒcyc𝐺𝐹\\displaystyle\\ \\lambda\\mathcal{L}_{\\text{cyc}}(G,F), (3) where λ𝜆\\lambda controls the relative importance of the two objectives. We aim to solve: G∗,F∗=arg⁡minG,F⁡maxDx,DY⁡ℒ​(G,F,DX,DY).superscript𝐺superscript𝐹subscript𝐺𝐹subscriptsubscript𝐷𝑥subscript𝐷𝑌ℒ𝐺𝐹subscript𝐷𝑋subscript𝐷𝑌G^{*},F^{*}=\\arg\\min_{G,F}\\max_{D_{x},D_{Y}}\\mathcal{L}(G,F,D_{X},D_{Y}). (4) ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_20", "text": " Notice that our model can be viewed as training two “autoencoders” : we learn one autoencoder F∘G:X→X:𝐹𝐺→𝑋𝑋F\\circ G:X\\rightarrow X jointly with another G∘F:Y→Y:𝐺𝐹→𝑌𝑌G\\circ F:Y\\rightarrow Y. However, these autoencoders each have special internal structures: they map an image to itself via an intermediate representation that is a translation of the image into another domain. Such a setup can also be seen as a special case of “adversarial autoencoders” , which use an adversarial loss to train the bottleneck layer of an autoencoder to match an arbitrary target distribution. In our case, the target distribution for the X→X→𝑋𝑋X\\rightarrow X autoencoder is that of the domain Y𝑌Y. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_21", "text": " In Section 5.1.4, we compare our method against ablations of the full objective, including the adversarial loss ℒGANsubscriptℒGAN\\mathcal{L}_{\\text{GAN}} alone and the cycle consistency loss ℒcycsubscriptℒcyc\\mathcal{L}_{\\text{cyc}} alone, and empirically show that both objectives play critical roles in arriving at high-quality results. We also evaluate our method with only cycle loss in one direction and show that a single cycle is not sufficient to regularize the training for this under-constrained problem. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_22", "text": " We adopt the architecture for our generative networks from Johnson et al.  who have shown impressive results for neural style transfer and super-resolution. This network contains three convolutions, several residual blocks , two fractionally-strided convolutions with stride 1212\\frac{1}{2}, and one convolution that maps features to RGB. We use 666 blocks for 128×128128128128\\times 128 images and 999 blocks for 256×256256256256\\times 256 and higher-resolution training images. Similar to Johnson et al. , we use instance normalization . 
For the discriminator networks we use 70×70707070\\times 70 PatchGANs (22, 30, 29), which aim to classify whether 70×70707070\\times 70 overlapping image patches are real or fake. Such a patch-level discriminator architecture has fewer parameters than a full-image discriminator and can work on arbitrarily-sized images in a fully convolutional fashion . ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_23", "text": " We apply two techniques from recent works to stabilize our model training procedure. First, for ℒGANsubscriptℒGAN\\mathcal{L}_{\\text{GAN}} (Equation 1), we replace the negative log likelihood objective by a least-squares loss . This loss is more stable during training and generates higher quality results. In particular, for a GAN loss ℒGAN​(G,D,X,Y)subscriptℒGAN𝐺𝐷𝑋𝑌\\mathcal{L}_{\\text{GAN}}(G,D,X,Y), we train the G𝐺G to minimize 𝔼x∼pdata​(x)​((D​(G​(x))−1)2)subscript𝔼similar-to𝑥subscript𝑝data𝑥delimited-()superscript𝐷𝐺𝑥12\\mathbb{E}_{x\\sim p_{\\text{data}}(x)}((D(G(x))-1)^{2}) and train the D𝐷D to minimize 𝔼y∼pdata​(y)​((D​(y)−1)2)+𝔼x∼pdata​(x)​(D​(G​(x))2)subscript𝔼similar-to𝑦subscript𝑝data𝑦delimited-()superscript𝐷𝑦12subscript𝔼similar-to𝑥subscript𝑝data𝑥delimited-()𝐷superscript𝐺𝑥2\\mathbb{E}_{y\\sim p_{\\text{data}}(y)}((D(y)-1)^{2})+\\mathbb{E}_{x\\sim p_{\\text{data}}(x)}(D(G(x))^{2}). ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_24", "text": " Second, to reduce model oscillation , we follow Shrivastava et al.’s strategy  and update the discriminators using a history of generated images rather than the ones produced by the latest generators. We keep an image buffer that stores the 505050 previously created images. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_25", "text": " For all the experiments, we set λ=10𝜆10\\lambda=10 in Equation 3. We use the Adam solver  with a batch size of 111. All networks were trained from scratch with a learning rate of 0.00020.00020.0002. We keep the same learning rate for the first 100100100 epochs and linearly decay the rate to zero over the next 100100100 epochs. Please see the appendix (Section 7) for more details about the datasets, architectures, and training procedures. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_26", "text": " We first compare our approach against recent methods for unpaired image-to-image translation on paired datasets where ground truth input-output pairs are available for evaluation. We then study the importance of both the adversarial loss and the cycle consistency loss and compare our full method against several variants. Finally, we demonstrate the generality of our algorithm on a wide range of applications where paired data does not exist. For brevity, we refer to our method as CycleGAN. The PyTorch and Torch code, models, and full results can be found at our website. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_27", "text": " Using the same evaluation datasets and metrics as “pix2pix” , we compare our method against several baselines both qualitatively and quantitatively. The tasks include semantic labels↔↔\\leftrightarrowphoto on the Cityscapes dataset , and map↔↔\\leftrightarrowaerial photo on data scraped from Google Maps. 
We also perform ablation study on the full loss function. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_28", "text": " AMT perceptual studies On the map↔↔\\leftrightarrowaerial photo task, we run “real vs fake” perceptual studies on Amazon Mechanical Turk (AMT) to assess the realism of our outputs. We follow the same perceptual study protocol from Isola et al. , except we only gather data from 252525 participants per algorithm we tested. Participants were shown a sequence of pairs of images, one a real photo or map and one fake (generated by our algorithm or a baseline), and asked to click on the image they thought was real. The first 101010 trials of each session were practice and feedback was given as to whether the participant’s response was correct or incorrect. The remaining 404040 trials were used to assess the rate at which each algorithm fooled participants. Each session only tested a single algorithm, and participants were only allowed to complete a single session. The numbers we report here are not directly comparable to those in  as our ground truth images were processed slightly differently 222We train all the models on 256×256256256256\\times 256 images while in pix2pix , the model was trained on 256×256256256256\\times 256 patches of 512×512512512512\\times 512 images, and run convolutionally on the 512×512512512512\\times 512 images at test time. We choose 256×256256256256\\times 256 in our experiments as many baselines cannot scale up to high-resolution images, and CoGAN cannot be tested fully convolutionally. and the participant pool we tested may be differently distributed from those tested in  (due to running the experiment at a different date and time). Therefore, our numbers should only be used to compare our current method against the baselines (which were run under identical conditions), rather than against . ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_29", "text": " FCN score Although perceptual studies may be the gold standard for assessing graphical realism, we also seek an automatic quantitative measure that does not require human experiments. For this, we adopt the “FCN score” from , and use it to evaluate the Cityscapes labels→→\\rightarrowphoto task. The FCN metric evaluates how interpretable the generated photos are according to an off-the-shelf semantic segmentation algorithm (the fully-convolutional network, FCN, from ). The FCN predicts a label map for a generated photo. This label map can then be compared against the input ground truth labels using standard semantic segmentation metrics described below. The intuition is that if we generate a photo from a label map of “car on the road”, then we have succeeded if the FCN applied to the generated photo detects “car on the road”. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_30", "text": " Semantic segmentation metrics To evaluate the performance of photo→→\\rightarrowlabels, we use the standard metrics from the Cityscapes benchmark , including per-pixel accuracy, per-class accuracy, and mean class Intersection-Over-Union (Class IOU) . 
", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_31", "text": " CoGAN  This method learns one GAN generator for domain X𝑋X and one for domain Y𝑌Y, with tied weights on the first few layers for shared latent representations. Translation from X𝑋X to Y𝑌Y can be achieved by finding a latent representation that generates image X𝑋X and then rendering this latent representation into style Y𝑌Y. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_32", "text": " SimGAN  Like our method, Shrivastava et al. uses an adversarial loss to train a translation from X𝑋X to Y𝑌Y. The regularization term ∥x−G​(x)∥1subscriptdelimited-∥∥𝑥𝐺𝑥1\\lVert x-G(x)\\rVert_{1} i s used to penalize making large changes at pixel level. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_33", "text": " Feature loss + GAN We also test a variant of SimGAN  where the L1 loss is computed over deep image features using a pretrained network (VGG-16 relu4_2 ), rather than over RGB pixel values. Computing distances in deep feature space, like this, is also sometimes referred to as using a “perceptual loss” (8, 23). ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_34", "text": " BiGAN/ALI (9, 7) Unconditional GANs  learn a generator G:Z→X:𝐺→𝑍𝑋G:Z\\rightarrow X, that maps a random noise z𝑧z to an image x𝑥x. The BiGAN  and ALI  propose to also learn the inverse mapping function F:X→Z:𝐹→𝑋𝑍F:X\\rightarrow Z. Though they were originally designed for mapping a latent vector z𝑧z to an image x𝑥x, we implemented the same objective for mapping a source image x𝑥x to a target image y𝑦y. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_35", "text": " pix2pix  We also compare against pix2pix , which is trained on paired data, to see how close we can get to this “upper bound” without using any paired data. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_36", "text": " For a fair comparison, we implement all the baselines using the same architecture and details as our method, except for CoGAN . CoGAN builds on generators that produce images from a shared latent representation, which is incompatible with our image-to-image network. We use the public implementation of CoGAN instead. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_37", "text": " As can be seen in Figure 5 and Figure 6, we were unable to achieve compelling results with any of the baselines. Our method, on the other hand, can produce translations that are often of similar quality to the fully supervised pix2pix. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_38", "text": " Table 1 reports performance regarding the AMT perceptual realism task. 
Here, we see that our method can fool participants on around a quarter of trials, in both the maps→→\\rightarrowaerial photos direction and the aerial photos→→\\rightarrowmaps direction at 256×256256256256\\times 256 resolution333We also train CycleGAN and pix2pix at 512×512512512512\\times 512 resolution, and observe the comparable performance: maps→→\\rightarrowaerial photos: CycleGAN: 37.5%±3.6%plus-or-minuspercent37.5percent3.637.5\\%\\pm 3.6\\% and pix2pix: 33.9%±3.1%plus-or-minuspercent33.9percent3.133.9\\%\\pm 3.1\\%; aerial photos→→\\rightarrowmaps: CycleGAN: 16.5%±4.1%plus-or-minuspercent16.5percent4.116.5\\%\\pm 4.1\\% and pix2pix: 8.5%±2.6%plus-or-minuspercent8.5percent2.68.5\\%\\pm 2.6\\%. All the baselines almost never fooled participants. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_39", "text": " Table 2 assesses the performance of the labels→→\\rightarrowphoto task on the Cityscapes and Table 3 evaluates the opposite mapping (photos→→\\rightarrowlabels). In both cases, our method again outperforms the baselines. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_40", "text": " In Table 4 and Table 5, we compare against ablations of our full loss. Removing the GAN loss substantially degrades results, as does removing the cycle-consistency loss. We therefore conclude that both terms are critical to our results. We also evaluate our method with the cycle loss in only one direction: GAN + forward cycle loss 𝔼x∼pdata​(x)​(∥F​(G​(x))−x∥1)subscript𝔼similar-to𝑥subscript𝑝data𝑥delimited-()subscriptdelimited-∥∥𝐹𝐺𝑥𝑥1\\mathbb{E}_{x\\sim p_{\\text{data}}(x)}(\\lVert F(G(x))-x\\rVert_{1}), or GAN + backward cycle loss 𝔼y∼pdata​(y)​(∥G​(F​(y))−y∥1)subscript𝔼similar-to𝑦subscript𝑝data𝑦delimited-()subscriptdelimited-∥∥𝐺𝐹𝑦𝑦1\\mathbb{E}_{y\\sim p_{\\text{data}}(y)}(\\lVert G(F(y))-y\\rVert_{1}) (Equation 2) and find that it often incurs training instability and causes mode collapse, especially for the direction of the mapping that was removed. Figure 7 shows several qualitative examples. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_41", "text": " In Figure 4, we show a few random samples of the reconstructed images F​(G​(x))𝐹𝐺𝑥F(G(x)). We observed that the reconstructed images were often close to the original inputs x𝑥x, at both training and testing time, even in cases where one domain represents significantly more diverse information, such as map↔↔\\leftrightarrowaerial photos. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_42", "text": " Figure 8 shows some example results on other paired datasets used in “pix2pix” , such as architectural labels↔↔\\leftrightarrowphotos from the CMP Facade Database , and edges↔↔\\leftrightarrowshoes from the UT Zappos50K dataset . The image quality of our results is close to those produced by the fully supervised pix2pix while our method learns the mapping without paired supervision. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_43", "text": " We demonstrate our method on several applications where paired training data does not exist. Please refer to the appendix (Section 7) for more details about the datasets. 
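The forward and backward cycle terms ablated above are plain L1 reconstruction penalties. A minimal PyTorch sketch, assuming G maps X to Y and F maps Y to X (identity modules stand in for the real generators here):

```python
import torch
import torch.nn as nn

def cycle_consistency_loss(G, F, real_x, real_y, lam=10.0):
    # Forward cycle: x -> G(x) -> F(G(x)) should reconstruct x (L1 penalty).
    forward = torch.mean(torch.abs(F(G(real_x)) - real_x))
    # Backward cycle: y -> F(y) -> G(F(y)) should reconstruct y.
    backward = torch.mean(torch.abs(G(F(real_y)) - real_y))
    return lam * (forward + backward)

# Identity modules stand in for the two generators in this toy check.
G, F = nn.Identity(), nn.Identity()
x, y = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)
print(cycle_consistency_loss(G, F, x, y).item())  # 0.0 for identity mappings
```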
We observe that translations on training data are often more appealing than those on test data, and full results of all applications on both training and test data can be viewed on our project website. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_44", "text": " Collection style transfer (Figure 10 and  Figure 11) We train the model on landscape photographs downloaded from Flickr and WikiArt. Unlike recent work on “neural style transfer” , our method learns to mimic the style of an entire collection of artworks, rather than transferring the style of a single selected piece of art. Therefore, we can learn to generate photos in the style of, e.g., Van Gogh, rather than just in the style of Starry Night. The size of the dataset for each artist/style was 526526526, 107310731073, 400400400, and 563563563 for Cezanne, Monet, Van Gogh, and Ukiyo-e. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_45", "text": " Object transfiguration (Figure 13) The model is trained to translate one object class from ImageNet  to another (each class contains around 100010001000 training images). Turmukhambetov et al.  propose a subspace model to translate one object into another object of the same category, while our method focuses on object transfiguration between two visually similar categories. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_46", "text": " Season transfer (Figure 13) The model is trained on 854854854 winter photos and 127312731273 summer photos of Yosemite downloaded from Flickr. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_47", "text": " Photo generation from paintings (Figure 12) For painting→→\\rightarrowphoto, we find that it is helpful to introduce an additional loss to encourage the mapping to preserve color composition between the input and output. In particular, we adopt the technique of Taigman et al.  and regularize the generator to be near an identity mapping when real samples of the target domain are provided as the input to the generator: i.e., ℒidentity​(G,F)=𝔼y∼pdata​(y)​(∥G​(y)−y∥1)+𝔼x∼pdata​(x)​(∥F​(x)−x∥1).subscriptℒidentity𝐺𝐹subscript𝔼similar-to𝑦subscript𝑝data𝑦delimited-()subscriptdelimited-∥∥𝐺𝑦𝑦1subscript𝔼similar-to𝑥subscript𝑝data𝑥delimited-()subscriptdelimited-∥∥𝐹𝑥𝑥1\\mathcal{L}_{\\text{identity}}(G,F)=\\mathbb{E}_{y\\sim p_{\\text{data}}(y)}(\\lVert G(y)-y\\rVert_{1})+\\mathbb{E}_{x\\sim p_{\\text{data}}(x)}(\\lVert F(x)-x\\rVert_{1}). ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_48", "text": " Without ℒidentitysubscriptℒidentity\\mathcal{L}_{\\text{identity}}, the generator G𝐺G and F𝐹F are free to change the tint of input images when there is no need to. For example, when learning the mapping between Monet’s paintings and Flickr photographs, the generator often maps paintings of daytime to photographs taken during sunset, because such a mapping may be equally valid under the adversarial loss and cycle consistency loss. The effect of this identity mapping loss are shown in Figure 9. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_49", "text": " In Figure 12, we show additional results translating Monet’s paintings to photographs. 
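The identity mapping loss introduced above for painting-to-photo translation is likewise an L1 term, applied when each generator is fed a real sample from its own output domain. A hedged sketch (the 0.5*lambda weighting follows the appendix setting; the helper name is ours):

```python
import torch
import torch.nn as nn

def identity_mapping_loss(G, F, real_x, real_y, lam=10.0, weight=0.5):
    # G maps X -> Y, so a real y fed to G should pass through unchanged: ||G(y) - y||_1.
    # Symmetrically for F on a real x. The term is weighted by 0.5 * lambda.
    loss = torch.mean(torch.abs(G(real_y) - real_y)) + torch.mean(torch.abs(F(real_x) - real_x))
    return weight * lam * loss

G, F = nn.Identity(), nn.Identity()
x, y = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)
print(identity_mapping_loss(G, F, x, y).item())  # 0.0 for identity mappings
```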
This figure and Figure 9 show results on paintings that were included in the training set, whereas for all other experiments in the paper, we only evaluate and show test set results. Because the training set does not include paired data, coming up with a plausible translation for a training set painting is a nontrivial task. Indeed, since Monet is no longer able to create new paintings, generalization to unseen, “test set”, paintings is not a pressing problem. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_50", "text": " Photo enhancement (Figure 14) We show that our method can be used to generate photos with shallower depth of field. We train the model on flower photos downloaded from Flickr. The source domain consists of flower photos taken by smartphones, which usually have deep DoF due to a small aperture. The target contains photos captured by DSLRs with a larger aperture. Our model successfully generates photos with shallower depth of field from the photos taken by smartphones. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_51", "text": " Comparison with Gatys et al.  In  Figure 15, we compare our results with neural style transfer  on photo stylization. For each row, we first use two representative artworks as the style images for  . Our method, on the other hand, can produce photos in the style of entire collection. To compare against neural style transfer of an entire collection, we compute the average Gram Matrix across the target domain and use this matrix to transfer the “average style” with Gatys et al . ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_52", "text": " Figure 16 demonstrates similar comparisons for other translation tasks. We observe that Gatys et al.  requires finding target style images that closely match the desired output, but still often fails to produce photorealistic results, while our method succeeds to generate natural-looking results, similar to the target domain. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_53", "text": " Although our method can achieve compelling results in many cases, the results are far from uniformly positive. Figure 17 shows several typical failure cases. On translation tasks that involve color and texture changes, as many of those reported above, the method often succeeds. We have also explored tasks that require geometric changes, with little success. For example, on the task of dog→→\\rightarrowcat transfiguration, the learned translation degenerates into making minimal changes to the input (Figure 17). This failure might be caused by our generator architectures which are tailored for good performance on the appearance changes. Handling more varied and extreme transformations, especially geometric changes, is an important problem for future work. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_54", "text": " Some failure cases are caused by the distribution characteristics of the training datasets. For example, our method has got confused in the horse →→\\rightarrow zebra example (Figure 17, right), because our model was trained on the wild horse and zebra synsets of ImageNet, which does not contain images of a person riding a horse or zebra. 
", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_55", "text": " We also observe a lingering gap between the results achievable with paired training data and those achieved by our unpaired method. In some cases, this gap may be very hard – or even impossible – to close: for example, our method sometimes permutes the labels for tree and building in the output of the photos→→\\rightarrowlabels task. Resolving this ambiguity may require some form of weak semantic supervision. Integrating weak or semi-supervised data may lead to substantially more powerful translators, still at a fraction of the annotation cost of the fully-supervised systems. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_56", "text": " Nonetheless, in many cases completely unpaired data is plentifully available and should be made use of. This paper pushes the boundaries of what is possible in this “unsupervised” setting. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_57", "text": " Acknowledgments: We thank Aaron Hertzmann, Shiry Ginosar, Deepak Pathak, Bryan Russell, Eli Shechtman, Richard Zhang, and Tinghui Zhou for many helpful comments. This work was supported in part by NSF SMA-1514512, NSF IIS-1633310, a Google Research Award, Intel Corp, and hardware donations from NVIDIA. JYZ is supported by the Facebook Graduate Fellowship and TP is supported by the Samsung Scholarship. The photographs used for style transfer were taken by AE, mostly in France. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_58", "text": " We train our networks from scratch, with a learning rate of 0.00020.00020.0002. In practice, we divide the objective by 222 while optimizing D𝐷D, which slows down the rate at which D𝐷D learns, relative to the rate of G𝐺G. We keep the same learning rate for the first 100100100 epochs and linearly decay the rate to zero over the next 100100100 epochs. Weights are initialized from a Gaussian distribution 𝒩​(0,0.02)𝒩00.02\\mathcal{N}(0,0.02). ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_59", "text": " Cityscapes label↔↔\\leftrightarrowPhoto 297529752975 training images from the Cityscapes training set with image size 128×128128128128\\times 128. We used the Cityscapes val set for testing. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_60", "text": " Maps↔↔\\leftrightarrowaerial photograph 109610961096 training images were scraped from Google Maps  with image size 256×256256256256\\times 256. Images were sampled from in and around New York City. Data was then split into train and test about the median latitude of the sampling region (with a buffer region added to ensure that no training pixel appeared in the test set). ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_61", "text": " Architectural facades labels↔↔\\leftrightarrowphoto 400400400 training images from the CMP Facade Database . ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_62", "text": " Edges→→\\rightarrowshoes around 50,0005000050,000 training images from UT Zappos50K dataset . 
The model was trained for 555 epochs. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_63", "text": " Horse↔↔\\leftrightarrowZebra and Apple↔↔\\leftrightarrowOrange We downloaded the images from ImageNet  using keywords wild horse, zebra, apple, and navel orange. The images were scaled to 256×256256256256\\times 256 pixels. The training set size of each class: 939939939 (horse), 117711771177 (zebra), 996996996 (apple), and 102010201020 (orange). ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_64", "text": " Summer↔↔\\leftrightarrowWinter Yosemite The images were downloaded using Flickr API with the tag yosemite and the datetaken field. Black-and-white photos were pruned. The images were scaled to 256×256256256256\\times 256 pixels. The training size of each class: 127312731273 (summer) and 854854854 ( winter). ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_65", "text": " Photo↔↔\\leftrightarrowArt for style transfer The art images were downloaded from Wikiart.org. Some artworks that were sketches or too obscene were pruned by hand. The photos were downloaded from Flickr using the combination of tags landscape and landscapephotography. Black-and-white photos were pruned. The images were scaled to 256×256256256256\\times 256 pixels. The training set size of each class was 107410741074 (Monet), 584584584 (Cezanne), 401401401 (Van Gogh), 143314331433 (Ukiyo-e), and 685368536853 (Photographs). The Monet dataset was particularly pruned to include only landscape paintings, and the Van Gogh dataset included only his later works that represent his most recognizable artistic style. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_66", "text": " Monet’s paintings→→\\rightarrowphotos To achieve high resolution while conserving memory, we used random square crops of the original images for training. To generate results, we passed images of width 512512512 pixels with correct aspect ratio to the generator network as input. The weight for the identity mapping loss was 0.5​λ0.5𝜆0.5\\lambda where λ𝜆\\lambda was the weight for cycle consistency loss. We set λ=10𝜆10\\lambda=10. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_67", "text": " Flower photo enhancement Flower images taken on smartphones were downloaded from Flickr by searching for the photos taken by Apple iPhone 5, 5s, or 6, with search text flower. DSLR images with shallow DoF were also downloaded from Flickr by search tag flower, dof. The images were scaled to 360360360 pixels by width. The identity mapping loss of weight 0.5​λ0.5𝜆0.5\\lambda was used. The training set size of the smartphone and DSLR dataset were 181318131813 and 332633263326, respectively. We set λ=10𝜆10\\lambda=10. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_68", "text": " We provide both PyTorch and Torch implementations. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_69", "text": " Generator architectures We adopt our architectures from Johnson et al. . 
We use 666 residual blocks for 128×128128128128\\times 128 training images, and 999 residual blocks for 256×256256256256\\times 256 or higher-resolution training images. Below, we follow the naming convention used in the Johnson et al.’s Github repository. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_70", "text": " Let c7s1-k denote a 7×7777\\times 7 Convolution-InstanceNorm-ReLU layer with k𝑘k filters and stride 111. dk denotes a 3×3333\\times 3 Convolution-InstanceNorm-ReLU layer with k𝑘k filters and stride 222. Reflection padding was used to reduce artifacts. Rk denotes a residual block that contains two 3×3333\\times 3 convolutional layers with the same number of filters on both layer. uk denotes a 3×3333\\times 3 fractional-strided-Convolution-InstanceNorm-ReLU layer with k𝑘k filters and stride 1212\\frac{1}{2}. ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_71", "text": " The network with 6 residual blocks consists of: c7s1-64,d128,d256,R256,R256,R256, R256,R256,R256,u128,u64,c7s1-3 ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_72", "text": " The network with 9 residual blocks consists of: c7s1-64,d128,d256,R256,R256,R256, R256,R256,R256,R256,R256,R256,u128 u64,c7s1-3 ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" }, { "id": "1703.10593_all_73", "text": " Discriminator architectures For discriminator networks, we use 70×70707070\\times 70 PatchGAN . Let Ck denote a 4×4444\\times 4 Convolution-InstanceNorm-LeakyReLU layer with k filters and stride 222. After the last layer, we apply a convolution to produce a 111-dimensional output. We do not use InstanceNorm for the first C64 layer. We use leaky ReLUs with a slope of 0.20.20.2. The discriminator architecture is: C64-C128-C256-C512 ", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks" } ]
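The C64-C128-C256-C512 discriminator described above translates almost line-for-line into a small PyTorch module. The sketch below follows the textual description (4x4 convolutions, InstanceNorm except on the first block, LeakyReLU with slope 0.2, and a final 1-channel convolution producing patch scores); the exact strides and padding needed to realize the 70x70 receptive field differ slightly in the released code, so treat this as a structural sketch rather than an exact reproduction.

```python
import torch
import torch.nn as nn

def C(in_ch, k, norm=True):
    # Ck block as described: 4x4 conv, stride 2, InstanceNorm, LeakyReLU(0.2).
    layers = [nn.Conv2d(in_ch, k, kernel_size=4, stride=2, padding=1)]
    if norm:
        layers.append(nn.InstanceNorm2d(k))
    layers.append(nn.LeakyReLU(0.2, inplace=True))
    return layers

class PatchDiscriminator(nn.Module):
    """C64-C128-C256-C512 followed by a 1-channel convolution (per-patch real/fake scores)."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.model = nn.Sequential(
            *C(in_ch, 64, norm=False),   # no InstanceNorm on the first C64 layer
            *C(64, 128),
            *C(128, 256),
            *C(256, 512),
            nn.Conv2d(512, 1, kernel_size=4, stride=1, padding=1),  # 1-dimensional output map
        )

    def forward(self, x):
        return self.model(x)

D = PatchDiscriminator()
print(D(torch.randn(1, 3, 256, 256)).shape)  # torch.Size([1, 1, 15, 15]) for this sketch
```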
How is the classification subnet different from the box regression subnet?
Although the box regression subnet shares the classification subnet's overall design, it terminates in 4A linear outputs per spatial location rather than KA binary classification outputs, and the two subnets use separate parameters [31].
[ 31 ]
[ { "id": "1708.02002_all_0", "text": " Current state-of-the-art object detectors are based on a two-stage, proposal-driven mechanism. As popularized in the R-CNN framework , the first stage generates a sparse set of candidate object locations and the second stage classifies each candidate location as one of the foreground classes or as background using a convolutional neural network. Through a sequence of advances (10, 28, 20, 14), this two-stage framework consistently achieves top accuracy on the challenging COCO benchmark . ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_1", "text": " Despite the success of two-stage detectors, a natural question to ask is: could a simple one-stage detector achieve similar accuracy? One stage detectors are applied over a regular, dense sampling of object locations, scales, and aspect ratios. Recent work on one-stage detectors, such as YOLO (26, 27) and SSD (22, 9), demonstrates promising results, yielding faster detectors with accuracy within 10-40% relative to state-of-the-art two-stage methods. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_2", "text": " This paper pushes the envelop further: we present a one-stage object detector that, for the first time, matches the state-of-the-art COCO AP of more complex two-stage detectors, such as the Feature Pyramid Network (FPN) or Mask R-CNN variants of Faster R-CNN . To achieve this result, we identify class imbalance during training as the main obstacle impeding one-stage detector from achieving state-of-the-art accuracy and propose a new loss function that eliminates this barrier. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_3", "text": " Class imbalance is addressed in R-CNN-like detectors by a two-stage cascade and sampling heuristics. The proposal stage (e.g., Selective Search , EdgeBoxes , DeepMask (24, 25), RPN ) rapidly narrows down the number of candidate object locations to a small number (e.g., 1-2k), filtering out most background samples. In the second classification stage, sampling heuristics, such as a fixed foreground-to-background ratio (1:3), or online hard example mining (OHEM) , are performed to maintain a manageable balance between foreground and background. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_4", "text": " In contrast, a one-stage detector must process a much larger set of candidate object locations regularly sampled across an image. In practice this often amounts to enumerating ∼similar-to\\scriptstyle\\sim100k locations that densely cover spatial positions, scales, and aspect ratios. While similar sampling heuristics may also be applied, they are inefficient as the training procedure is still dominated by easily classified background examples. This inefficiency is a classic problem in object detection that is typically addressed via techniques such as bootstrapping (33, 29) or hard example mining (37, 8, 31). ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_5", "text": " In this paper, we propose a new loss function that acts as a more effective alternative to previous approaches for dealing with class imbalance. The loss function is a dynamically scaled cross entropy loss, where the scaling factor decays to zero as confidence in the correct class increases, see Figure 1. 
Intuitively, this scaling factor can automatically down-weight the contribution of easy examples during training and rapidly focus the model on hard examples. Experiments show that our proposed Focal Loss enables us to train a high-accuracy, one-stage detector that significantly outperforms the alternatives of training with the sampling heuristics or hard example mining, the previous state-of-the-art techniques for training one-stage detectors. Finally, we note that the exact form of the focal loss is not crucial, and we show other instantiations can achieve similar results. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_6", "text": " To demonstrate the effectiveness of the proposed focal loss, we design a simple one-stage object detector called RetinaNet, named for its dense sampling of object locations in an input image. Its design features an efficient in-network feature pyramid and use of anchor boxes. It draws on a variety of recent ideas from (22, 6, 28, 20). RetinaNet is efficient and accurate; our best model, based on a ResNet-101-FPN backbone, achieves a COCO test-dev AP of 39.1 while running at 5 fps, surpassing the previously best published single-model results from both one and two-stage detectors, see Figure 2. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_7", "text": " The sliding-window paradigm, in which a classifier is applied on a dense image grid, has a long and rich history. One of the earliest successes is the classic work of LeCun et al. who applied convolutional neural networks to handwritten digit recognition (19, 36). Viola and Jones used boosted object detectors for face detection, leading to widespread adoption of such models. The introduction of HOG and integral channel features gave rise to effective methods for pedestrian detection. DPMs helped extend dense detectors to more general object categories and had top results on PASCAL for many years. While the sliding-window approach was the leading detection paradigm in classic computer vision, with the resurgence of deep learning , two-stage detectors, described next, quickly came to dominate object detection. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_8", "text": " The dominant paradigm in modern object detection is based on a two-stage approach. As pioneered in the Selective Search work , the first stage generates a sparse set of candidate proposals that should contain all objects while filtering out the majority of negative locations, and the second stage classifies the proposals into foreground classes / background. R-CNN upgraded the second-stage classifier to a convolutional network yielding large gains in accuracy and ushering in the modern era of object detection. R-CNN was improved over the years, both in terms of speed (15, 10) and by using learned object proposals (6, 24, 28). Region Proposal Networks (RPN) integrated proposal generation with the second-stage classifier into a single convolution network, forming the Faster R-CNN framework . Numerous extensions to this framework have been proposed, e.g. (20, 31, 32, 16, 14). ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_9", "text": " OverFeat was one of the first modern one-stage object detector based on deep networks. More recently SSD (22, 9) and YOLO (26, 27) have renewed interest in one-stage methods. These detectors have been tuned for speed but their accuracy trails that of two-stage methods. 
SSD has a 10-20% lower AP, while YOLO focuses on an even more extreme speed/accuracy trade-off. See Figure 2. Recent work showed that two-stage detectors can be made fast simply by reducing input image resolution and the number of proposals, but one-stage methods trailed in accuracy even with a larger compute budget . In contrast, the aim of this work is to understand if one-stage detectors can match or surpass the accuracy of two-stage detectors while running at similar or faster speeds. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_10", "text": " The design of our RetinaNet detector shares many similarities with previous dense detectors, in particular the concept of ‘anchors’ introduced by RPN and use of features pyramids as in SSD and FPN . We emphasize that our simple detector achieves top results not based on innovations in network design but due to our novel loss. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_11", "text": " Both classic one-stage object detection methods, like boosted detectors (37, 5) and DPMs , and more recent methods, like SSD , face a large class imbalance during training. These detectors evaluate 104superscript10410^{4}-105superscript10510^{5} candidate locations per image but only a few locations contain objects. This imbalance causes two problems: (1) training is inefficient as most locations are easy negatives that contribute no useful learning signal; (2) en masse, the easy negatives can overwhelm training and lead to degenerate models. A common solution is to perform some form of hard negative mining (33, 37, 8, 31, 22) that samples hard examples during training or more complex sampling/reweighing schemes . In contrast, we show that our proposed focal loss naturally handles the class imbalance faced by a one-stage detector and allows us to efficiently train on all examples without sampling and without easy negatives overwhelming the loss and computed gradients. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_12", "text": " There has been much interest in designing robust loss functions (e.g., Huber loss ) that reduce the contribution of outliers by down-weighting the loss of examples with large errors (hard examples). In contrast, rather than addressing outliers, our focal loss is designed to address class imbalance by down-weighting inliers (easy examples) such that their contribution to the total loss is small even if their number is large. In other words, the focal loss performs the opposite role of a robust loss: it focuses training on a sparse set of hard examples. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_13", "text": " The Focal Loss is designed to address the one-stage object detection scenario in which there is an extreme imbalance between foreground and background classes during training (e.g., 1:1000). We introduce the focal loss starting from the cross entropy (CE) loss for binary classification111Extending the focal loss to the multi-class case is straightforward and works well; for simplicity we focus on the binary loss in this work.: CE​(p,y)={−log⁡(p)if y=1−log⁡(1−p)otherwise.CE𝑝𝑦cases𝑝if y=11𝑝otherwise.\\textrm{CE}(p,y)=\\begin{cases}-\\log(p)&\\text{if $y=1$}\\\\ -\\log(1-p)&\\text{otherwise.}\\end{cases} (1) In the above y∈{±1}𝑦plus-or-minus1y\\in\\{\\pm 1\\} specifies the ground-truth class and p∈(0,1)𝑝01p\\in(0,1) is the model’s estimated probability for the class with label y=1𝑦1y=1. 
For notational convenience, we define ptsubscript𝑝tp_{\\textrm{t}}: pt={pif y=11−potherwise,subscript𝑝tcases𝑝if y=11𝑝otherwise,p_{\\textrm{t}}=\\begin{cases}p&\\text{if $y=1$}\\\\ 1-p&\\text{otherwise,}\\end{cases} (2) and rewrite CE​(p,y)=CE​(pt)=−log⁡(pt)CE𝑝𝑦CEsubscript𝑝tsubscript𝑝t\\textrm{CE}(p,y)=\\textrm{CE}(p_{\\textrm{t}})=-\\log(p_{\\textrm{t}}). ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_14", "text": " The CE loss can be seen as the blue (top) curve in Figure 1. One notable property of this loss, which can be easily seen in its plot, is that even examples that are easily classified (pt≫.5much-greater-thansubscript𝑝t.5p_{\\textrm{t}}\\gg.5) incur a loss with non-trivial magnitude. When summed over a large number of easy examples, these small loss values can overwhelm the rare class. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_15", "text": " A common method for addressing class imbalance is to introduce a weighting factor α∈(0,1)𝛼01\\alpha\\in(0,1) for class 111 and 1−α1𝛼1-\\alpha for class −11-1. In practice α𝛼\\alpha may be set by inverse class frequency or treated as a hyperparameter to set by cross validation. For notational convenience, we define αtsubscript𝛼t\\alpha_{\\textrm{t}} analogously to how we defined ptsubscript𝑝tp_{\\textrm{t}}. We write the α𝛼\\alpha-balanced CE loss as: CE​(pt)=−αt​log⁡(pt).CEsubscript𝑝tsubscript𝛼tsubscript𝑝t\\textrm{CE}(p_{\\textrm{t}})=-\\alpha_{\\textrm{t}}\\log(p_{\\textrm{t}}). (3) This loss is a simple extension to CE that we consider as an experimental baseline for our proposed focal loss. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_16", "text": " As our experiments will show, the large class imbalance encountered during training of dense detectors overwhelms the cross entropy loss. Easily classified negatives comprise the majority of the loss and dominate the gradient. While α𝛼\\alpha balances the importance of positive/negative examples, it does not differentiate between easy/hard examples. Instead, we propose to reshape the loss function to down-weight easy examples and thus focus training on hard negatives. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_17", "text": " More formally, we propose to add a modulating factor (1−pt)γsuperscript1subscript𝑝t𝛾(1-p_{\\textrm{t}})^{\\gamma} to the cross entropy loss, with tunable focusing parameter γ≥0𝛾0\\gamma\\geq 0. We define the focal loss as: FL​(pt)=−(1−pt)γ​log⁡(pt).FLsubscript𝑝tsuperscript1subscript𝑝t𝛾subscript𝑝t\\textrm{FL}(p_{\\textrm{t}})=-(1-p_{\\textrm{t}})^{\\gamma}\\log(p_{\\textrm{t}}). (4) ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_18", "text": " The focal loss is visualized for several values of γ∈(0,5)𝛾05\\gamma\\in(0,5) in Figure 1. We note two properties of the focal loss. (1) When an example is misclassified and ptsubscript𝑝tp_{\\textrm{t}} is small, the modulating factor is near 111 and the loss is unaffected. As pt→1→subscript𝑝t1p_{\\textrm{t}}\\rightarrow 1, the factor goes to 0 and the loss for well-classified examples is down-weighted. (2) The focusing parameter γ𝛾\\gamma smoothly adjusts the rate at which easy examples are down-weighted. When γ=0𝛾0\\gamma=0, FL is equivalent to CE, and as γ𝛾\\gamma is increased the effect of the modulating factor is likewise increased (we found γ=2𝛾2\\gamma=2 to work best in our experiments). 
", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_19", "text": " Intuitively, the modulating factor reduces the loss contribution from easy examples and extends the range in which an example receives low loss. For instance, with γ=2𝛾2\\gamma=2, an example classified with pt=0.9subscript𝑝t0.9p_{\\textrm{t}}=0.9 would have 100×100\\times lower loss compared with CE and with pt≈0.968subscript𝑝t0.968p_{\\textrm{t}}\\approx 0.968 it would have 1000×1000\\times lower loss. This in turn increases the importance of correcting misclassified examples (whose loss is scaled down by at most 4×4\\times for pt≤.5subscript𝑝t.5p_{\\textrm{t}}\\leq.5 and γ=2𝛾2\\gamma=2). ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_20", "text": " In practice we use an α𝛼\\alpha-balanced variant of the focal loss: FL​(pt)=−αt​(1−pt)γ​log⁡(pt).FLsubscript𝑝tsubscript𝛼tsuperscript1subscript𝑝t𝛾subscript𝑝t\\textrm{FL}(p_{\\textrm{t}})=-\\alpha_{\\textrm{t}}(1-p_{\\textrm{t}})^{\\gamma}\\log(p_{\\textrm{t}}). (5) We adopt this form in our experiments as it yields slightly improved accuracy over the non-α𝛼\\alpha-balanced form. Finally, we note that the implementation of the loss layer combines the sigmoid operation for computing p𝑝p with the loss computation, resulting in greater numerical stability. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_21", "text": " While in our main experimental results we use the focal loss definition above, its precise form is not crucial. In the appendix we consider other instantiations of the focal loss and demonstrate that these can be equally effective. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_22", "text": " Binary classification models are by default initialized to have equal probability of outputting either y=−1𝑦1y=-1 or 111. Under such an initialization, in the presence of class imbalance, the loss due to the frequent class can dominate total loss and cause instability in early training. To counter this, we introduce the concept of a ‘prior’ for the value of p𝑝p estimated by the model for the rare class (foreground) at the start of training. We denote the prior by π𝜋\\pi and set it so that the model’s estimated p𝑝p for examples of the rare class is low, e.g. 0.010.010.01. We note that this is a change in model initialization (see §4.1) and not of the loss function. We found this to improve training stability for both the cross entropy and focal loss in the case of heavy class imbalance. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_23", "text": " Two-stage detectors are often trained with the cross entropy loss without use of α𝛼\\alpha-balancing or our proposed loss. Instead, they address class imbalance through two mechanisms: (1) a two-stage cascade and (2) biased minibatch sampling. The first cascade stage is an object proposal mechanism (35, 24, 28) that reduces the nearly infinite set of possible object locations down to one or two thousand. Importantly, the selected proposals are not random, but are likely to correspond to true object locations, which removes the vast majority of easy negatives. When training the second stage, biased sampling is typically used to construct minibatches that contain, for instance, a 1:3 ratio of positive to negative examples. This ratio is like an implicit α𝛼\\alpha-balancing factor that is implemented via sampling. 
Our proposed focal loss is designed to address these mechanisms in a one-stage detection system directly via the loss function. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_24", "text": " RetinaNet is a single, unified network composed of a backbone network and two task-specific subnetworks. The backbone is responsible for computing a convolutional feature map over an entire input image and is an off-the-self convolutional network. The first subnet performs convolutional object classification on the backbone’s output; the second subnet performs convolutional bounding box regression. The two subnetworks feature a simple design that we propose specifically for one-stage, dense detection, see Figure 3. While there are many possible choices for the details of these components, most design parameters are not particularly sensitive to exact values as shown in the experiments. We describe each component of RetinaNet next. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_25", "text": " We adopt the Feature Pyramid Network (FPN) from as the backbone network for RetinaNet. In brief, FPN augments a standard convolutional network with a top-down pathway and lateral connections so the network efficiently constructs a rich, multi-scale feature pyramid from a single resolution input image, see Figure 3(a)-(b). Each level of the pyramid can be used for detecting objects at a different scale. FPN improves multi-scale predictions from fully convolutional networks (FCN) , as shown by its gains for RPN and DeepMask-style proposals , as well at two-stage detectors such as Fast R-CNN or Mask R-CNN . ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_26", "text": " Following , we build FPN on top of the ResNet architecture . We construct a pyramid with levels P3subscript𝑃3P_{3} through P7subscript𝑃7P_{7}, where l𝑙l indicates pyramid level (Plsubscript𝑃𝑙P_{l} has resolution 2lsuperscript2𝑙2^{l} lower than the input). As in all pyramid levels have C=256𝐶256C=256 channels. Details of the pyramid generally follow with a few modest differences.222RetinaNet uses feature pyramid levels P3subscript𝑃3P_{3} to P7subscript𝑃7P_{7}, where P3subscript𝑃3P_{3} to P5subscript𝑃5P_{5} are computed from the output of the corresponding ResNet residual stage (C3subscript𝐶3C_{3} through C5subscript𝐶5C_{5}) using top-down and lateral connections just as in , P6subscript𝑃6P_{6} is obtained via a 3×\\times3 stride-2 conv on C5subscript𝐶5C_{5}, and P7subscript𝑃7P_{7} is computed by applying ReLU followed by a 3×\\times3 stride-2 conv on P6subscript𝑃6P_{6}. This differs slightly from : (1) we don’t use the high-resolution pyramid level P2subscript𝑃2P_{2} for computational reasons, (2) P6subscript𝑃6P_{6} is computed by strided convolution instead of downsampling, and (3) we include P7subscript𝑃7P_{7} to improve large object detection. These minor modifications improve speed while maintaining accuracy. While many design choices are not crucial, we emphasize the use of the FPN backbone is; preliminary experiments using features from only the final ResNet layer yielded low AP. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_27", "text": " We use translation-invariant anchor boxes similar to those in the RPN variant in . The anchors have areas of 322superscript32232^{2} to 5122superscript5122512^{2} on pyramid levels P3subscript𝑃3P_{3} to P7subscript𝑃7P_{7}, respectively. 
As in , at each pyramid level we use anchors at three aspect ratios {1\\{1:2,22, 111:111, 222:1}1\\}. For denser scale coverage than in , at each level we add anchors of sizes {20superscript202^{0}, 21/3superscript2132^{1/3}, 22/3superscript2232^{2/3}} of the original set of 3 aspect ratio anchors. This improve AP in our setting. In total there are A=9𝐴9A=9 anchors per level and across levels they cover the scale range 32 - 813 pixels with respect to the network’s input image. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_28", "text": " Each anchor is assigned a length K𝐾K one-hot vector of classification targets, where K𝐾K is the number of object classes, and a 4-vector of box regression targets. We use the assignment rule from RPN but modified for multi-class detection and with adjusted thresholds. Specifically, anchors are assigned to ground-truth object boxes using an intersection-over-union (IoU) threshold of 0.5; and to background if their IoU is in (0, 0.4). As each anchor is assigned to at most one object box, we set the corresponding entry in its length K𝐾K label vector to 111 and all other entries to 00. If an anchor is unassigned, which may happen with overlap in (0.4, 0.5), it is ignored during training. Box regression targets are computed as the offset between each anchor and its assigned object box, or omitted if there is no assignment. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_29", "text": " The classification subnet predicts the probability of object presence at each spatial position for each of the A𝐴A anchors and K𝐾K object classes. This subnet is a small FCN attached to each FPN level; parameters of this subnet are shared across all pyramid levels. Its design is simple. Taking an input feature map with C𝐶C channels from a given pyramid level, the subnet applies four 3×\\times3 conv layers, each with C𝐶C filters and each followed by ReLU activations, followed by a 3×\\times3 conv layer with K​A𝐾𝐴KA filters. Finally sigmoid activations are attached to output the K​A𝐾𝐴KA binary predictions per spatial location, see Figure 3 (c). We use C=256𝐶256C=256 and A=9𝐴9A=9 in most experiments. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_30", "text": " In contrast to RPN , our object classification subnet is deeper, uses only 3×\\times3 convs, and does not share parameters with the box regression subnet (described next). We found these higher-level design decisions to be more important than specific values of hyperparameters. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_31", "text": " In parallel with the object classification subnet, we attach another small FCN to each pyramid level for the purpose of regressing the offset from each anchor box to a nearby ground-truth object, if one exists. The design of the box regression subnet is identical to the classification subnet except that it terminates in 4​A4𝐴4A linear outputs per spatial location, see Figure 3 (d). For each of the A𝐴A anchors per spatial location, these 444 outputs predict the relative offset between the anchor and the ground-truth box (we use the standard box parameterization from R-CNN ). We note that unlike most recent work, we use a class-agnostic bounding box regressor which uses fewer parameters and we found to be equally effective. The object classification subnet and the box regression subnet, though sharing a common structure, use separate parameters. 
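To make the anchor configuration above concrete, the NumPy sketch below enumerates the nine anchor widths and heights per pyramid level implied by the base sizes (32 to 512 on P3 to P7), the three sub-octave scales, and the three aspect ratios; grid placement and rounding are implementation details not reproduced here, and the ratio convention (h/w) is our assumption.

```python
import numpy as np

base_sizes = {3: 32, 4: 64, 5: 128, 6: 256, 7: 512}   # P3..P7 anchor base sizes (area = size^2)
scales = [2 ** 0, 2 ** (1 / 3), 2 ** (2 / 3)]          # sub-octave scale multipliers
ratios = [0.5, 1.0, 2.0]                               # aspect ratios, taken here as h / w

def anchor_shapes(base):
    shapes = []
    for s in scales:
        area = (base * s) ** 2
        for r in ratios:
            w = np.sqrt(area / r)   # keep the area fixed while varying the aspect ratio
            h = w * r
            shapes.append((w, h))
    return shapes                    # 9 (width, height) pairs per spatial location

for level, base in base_sizes.items():
    print(f"P{level}:", [(round(w), round(h)) for w, h in anchor_shapes(base)])
# The largest anchor is 512 * 2^(2/3), roughly 813 pixels, matching the 32-813 range in the text.
```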
", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_32", "text": " RetinaNet forms a single FCN comprised of a ResNet-FPN backbone, a classification subnet, and a box regression subnet, see Figure 3. As such, inference involves simply forwarding an image through the network. To improve speed, we only decode box predictions from at most 1k top-scoring predictions per FPN level, after thresholding detector confidence at 0.05. The top predictions from all levels are merged and non-maximum suppression with a threshold of 0.5 is applied to yield the final detections. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_33", "text": " We use the focal loss introduced in this work as the loss on the output of the classification subnet. As we will show in §5, we find that γ=2𝛾2\\gamma=2 works well in practice and the RetinaNet is relatively robust to γ∈(0.5,5)𝛾0.55\\gamma\\in(0.5,5). We emphasize that when training RetinaNet, the focal loss is applied to all ∼similar-to\\scriptstyle\\sim100k anchors in each sampled image. This stands in contrast to common practice of using heuristic sampling (RPN) or hard example mining (OHEM, SSD) to select a small set of anchors (e.g., 256) for each minibatch. The total focal loss of an image is computed as the sum of the focal loss over all ∼similar-to\\scriptstyle\\sim100k anchors, normalized by the number of anchors assigned to a ground-truth box. We perform the normalization by the number of assigned anchors, not total anchors, since the vast majority of anchors are easy negatives and receive negligible loss values under the focal loss. Finally we note that α𝛼\\alpha, the weight assigned to the rare class, also has a stable range, but it interacts with γ𝛾\\gamma making it necessary to select the two together (see Tables 1a and 1b). In general α𝛼\\alpha should be decreased slightly as γ𝛾\\gamma is increased (for γ=2𝛾2\\gamma=2, α=0.25𝛼0.25\\alpha=0.25 works best). ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_34", "text": " We experiment with ResNet-50-FPN and ResNet-101-FPN backbones . The base ResNet-50 and ResNet-101 models are pre-trained on ImageNet1k; we use the models released by . New layers added for FPN are initialized as in . All new conv layers except the final one in the RetinaNet subnets are initialized with bias b=0𝑏0b=0 and a Gaussian weight fill with σ=0.01𝜎0.01\\sigma=0.01. For the final conv layer of the classification subnet, we set the bias initialization to b=−log⁡((1−π)/π)𝑏1𝜋𝜋b=-\\log((1-\\pi)/\\pi), where π𝜋\\pi specifies that at the start of training every anchor should be labeled as foreground with confidence of ∼similar-to\\scriptstyle\\simπ𝜋\\pi. We use π=.01𝜋.01\\pi=.01 in all experiments, although results are robust to the exact value. As explained in §3.3, this initialization prevents the large number of background anchors from generating a large, destabilizing loss value in the first iteration of training. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_35", "text": " RetinaNet is trained with stochastic gradient descent (SGD). We use synchronized SGD over 8 GPUs with a total of 16 images per minibatch (2 images per GPU). Unless otherwise specified, all models are trained for 90k iterations with an initial learning rate of 0.01, which is then divided by 10 at 60k and again at 80k iterations. We use horizontal image flipping as the only form of data augmentation unless otherwise noted. 
Weight decay of 0.0001 and momentum of 0.9 are used. The training loss is the sum the focal loss and the standard smooth L1subscript𝐿1L_{1} loss used for box regression . Training time ranges between 10 and 35 hours for the models in Table 1e. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_36", "text": " We present experimental results on the bounding box detection track of the challenging COCO benchmark . For training, we follow common practice (1, 20) and use the COCO trainval35k split (union of 80k images from train and a random 35k subset of images from the 40k image val split). We report lesion and sensitivity studies by evaluating on the minival split (the remaining 5k images from val). For our main results, we report COCO AP on the test-dev split, which has no public labels and requires use of the evaluation server. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_37", "text": " We run numerous experiments to analyze the behavior of the loss function for dense detection along with various optimization strategies. For all experiments we use depth 50 or 101 ResNets with a Feature Pyramid Network (FPN)  constructed on top. For all ablation studies we use an image scale of 600 pixels for training and testing. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_38", "text": " Our first attempt to train RetinaNet uses standard cross entropy (CE) loss without any modifications to the initialization or learning strategy. This fails quickly, with the network diverging during training. However, simply initializing the last layer of our model such that the prior probability of detecting an object is π=.01𝜋.01\\pi=.01 (see §4.1) enables effective learning. Training RetinaNet with ResNet-50 and this initialization already yields a respectable AP of 30.2 on COCO. Results are insensitive to the exact value of π𝜋\\pi so we use π=.01𝜋.01\\pi=.01 for all experiments. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_39", "text": " Our next attempt to improve learning involved using the α𝛼\\alpha-balanced CE loss described in §3.1. Results for various α𝛼\\alpha are shown in Table 1a. Setting α=.75𝛼.75\\alpha=.75 gives a gain of 0.9 points AP. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_40", "text": " Results using our proposed focal loss are shown in Table 1b. The focal loss introduces one new hyperparameter, the focusing parameter γ𝛾\\gamma, that controls the strength of the modulating term. When γ=0𝛾0\\gamma=0, our loss is equivalent to the CE loss. As γ𝛾\\gamma increases, the shape of the loss changes so that “easy” examples with low loss get further discounted, see Figure 1. FL shows large gains over CE as γ𝛾\\gamma is increased. With γ=2𝛾2\\gamma=2, FL yields a 2.9 AP improvement over the α𝛼\\alpha-balanced CE loss. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_41", "text": " For the experiments in Table 1b, for a fair comparison we find the best α𝛼\\alpha for each γ𝛾\\gamma. We observe that lower α𝛼\\alpha’s are selected for higher γ𝛾\\gamma’s (as easy negatives are down-weighted, less emphasis needs to be placed on the positives). Overall, however, the benefit of changing γ𝛾\\gamma is much larger, and indeed the best α𝛼\\alpha’s ranged in just (.25,.75) (we tested α∈(.01,.999)𝛼.01.999\\alpha\\in(.01,.999)). 
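The prior-probability initialization that makes this training work (pi = 0.01) amounts to a one-line bias fill on the classification subnet's final convolution. A sketch, with a standalone conv layer standing in for that last layer:

```python
import math
import torch
import torch.nn as nn

prior = 0.01                      # pi: desired initial foreground confidence
K, A = 80, 9
cls_logits = nn.Conv2d(256, K * A, kernel_size=3, padding=1)  # final classification conv

# Gaussian weight fill (sigma = 0.01) and bias b = -log((1 - pi) / pi).
nn.init.normal_(cls_logits.weight, std=0.01)
nn.init.constant_(cls_logits.bias, -math.log((1.0 - prior) / prior))

# Sanity check: on a zero feature map every anchor/class starts near p = 0.01,
# so easy background anchors do not dominate the loss at the first iteration.
with torch.no_grad():
    p = torch.sigmoid(cls_logits(torch.zeros(1, 256, 10, 10)))
print(float(p.mean()))            # approximately 0.01
```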
We use γ=2.0𝛾2.0\\gamma=2.0 with α=.25𝛼.25\\alpha=.25 for all experiments but α=.5𝛼.5\\alpha=.5 works nearly as well (.4 AP lower). ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_42", "text": " To understand the focal loss better, we analyze the empirical distribution of the loss of a converged model. For this, we take take our default ResNet-101 600-pixel model trained with γ=2𝛾2\\gamma=2 (which has 36.0 AP). We apply this model to a large number of random images and sample the predicted probability for ∼similar-to\\scriptstyle\\sim107superscript10710^{7} negative windows and ∼similar-to\\scriptstyle\\sim105superscript10510^{5} positive windows. Next, separately for positives and negatives, we compute FL for these samples, and normalize the loss such that it sums to one. Given the normalized loss, we can sort the loss from lowest to highest and plot its cumulative distribution function (CDF) for both positive and negative samples and for different settings for γ𝛾\\gamma (even though model was trained with γ=2𝛾2\\gamma=2). ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_43", "text": " Cumulative distribution functions for positive and negative samples are shown in Figure 4. If we observe the positive samples, we see that the CDF looks fairly similar for different values of γ𝛾\\gamma. For example, approximately 20% of the hardest positive samples account for roughly half of the positive loss, as γ𝛾\\gamma increases more of the loss gets concentrated in the top 20% of examples, but the effect is minor. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_44", "text": " The effect of γ𝛾\\gamma on negative samples is dramatically different. For γ=0𝛾0\\gamma=0, the positive and negative CDFs are quite similar. However, as γ𝛾\\gamma increases, substantially more weight becomes concentrated on the hard negative examples. In fact, with γ=2𝛾2\\gamma=2 (our default setting), the vast majority of the loss comes from a small fraction of samples. As can be seen, FL can effectively discount the effect of easy negatives, focusing all attention on the hard negative examples. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_45", "text": " proposed to improve training of two-stage detectors by constructing minibatches using high-loss examples. Specifically, in OHEM each example is scored by its loss, non-maximum suppression (nms) is then applied, and a minibatch is constructed with the highest-loss examples. The nms threshold and batch size are tunable parameters. Like the focal loss, OHEM puts more emphasis on misclassified examples, but unlike FL, OHEM completely discards easy examples. We also implement a variant of OHEM used in SSD : after applying nms to all examples, the minibatch is constructed to enforce a 1:3 ratio between positives and negatives to help ensure each minibatch has enough positives. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_46", "text": " We test both OHEM variants in our setting of one-stage detection which has large class imbalance. Results for the original OHEM strategy and the ‘OHEM 1:3’ strategy for selected batch sizes and nms thresholds are shown in Table 1d. These results use ResNet-101, our baseline trained with FL achieves 36.0 AP for this setting. In contrast, the best setting for OHEM (no 1:3 ratio, batch size 128, nms of .5) achieves 32.8 AP. 
This is a gap of 3.2 AP, showing FL is more effective than OHEM for training dense detectors. We note that we tried other parameter setting and variants for OHEM but did not achieve better results. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_47", "text": " Finally, in early experiments, we attempted to train with the hinge loss on ptsubscript𝑝tp_{\\textrm{t}}, which sets loss to 0 above a certain value of ptsubscript𝑝tp_{\\textrm{t}}. However, this was unstable and we did not manage to obtain meaningful results. Results exploring alternate loss functions are in the appendix. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_48", "text": " One of the most important design factors in a one-stage detection system is how densely it covers the space of possible image boxes. Two-stage detectors can classify boxes at any position, scale, and aspect ratio using a region pooling operation . In contrast, as one-stage detectors use a fixed sampling grid, a popular approach for achieving high coverage of boxes in these approaches is to use multiple ‘anchors’ at each spatial position to cover boxes of various scales and aspect ratios. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_49", "text": " We sweep over the number of scale and aspect ratio anchors used at each spatial position and each pyramid level in FPN. We consider cases from a single square anchor at each location to 12 anchors per location spanning 4 sub-octave scales (2k/4superscript2𝑘42^{k/4}, for k≤3𝑘3k\\leq 3) and 3 aspect ratios (0.5, 1, 2). Results using ResNet-50 are shown in Table 1c. A surprisingly good AP (30.3) is achieved using just one square anchor. However, the AP can be improved by nearly 4 points (to 34.0) when using 3 scales and 3 aspect ratios per location. We used this setting for all other experiments in this work. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_50", "text": " Finally, we note that increasing beyond 6-9 anchors did not shown further gains. Thus while two-stage systems can classify arbitrary boxes in an image, the saturation of performance w.r.t. density implies the higher potential density of two-stage systems may not offer an advantage. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_51", "text": " Larger backbone networks yield higher accuracy, but also slower inference speeds. Likewise for input image scale (defined by the shorter image side). We show the impact of these two factors in Table 1e. In Figure 2 we plot the speed/accuracy trade-off curve for RetinaNet and compare it to recent methods using public numbers on COCO test-dev. The plot reveals that RetinaNet, enabled by our focal loss, forms an upper envelope over all existing methods, discounting the low-accuracy regime. RetinaNet with ResNet-101-FPN and a 600 pixel image scale (which we denote by RetinaNet-101-600 for simplicity) matches the accuracy of the recently published ResNet-101-FPN Faster R-CNN , while running in 122 ms per image compared to 172 ms (both measured on an Nvidia M40 GPU). Using larger scales allows RetinaNet to surpass the accuracy of all two-stage approaches, while still being faster. For faster runtimes, there is only one operating point (500 pixel input) at which using ResNet-50-FPN improves over ResNet-101-FPN. Addressing the high frame rate regime will likely require special network design, as in , and is beyond the scope of this work. 
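As a side note on the anchor-density sweep described a few paragraphs above, the following sketch (Python/NumPy) shows one way to enumerate a grid of scale × aspect-ratio anchors per feature-map location. The `base_size` and `stride` values, and the choice of three sub-octave scales, are illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np

def make_anchors(base_size, stride, feat_h, feat_w,
                 octave_scales=(2**0, 2**(1/4), 2**(2/4)),   # sub-octave scales, as in the sweep
                 aspect_ratios=(0.5, 1.0, 2.0)):
    """Enumerate scale x ratio anchors centered on every feature-map cell.

    Returns an array of shape (feat_h * feat_w * n_scales * n_ratios, 4)
    with boxes in (x1, y1, x2, y2) image coordinates.
    """
    anchors = []
    for s in octave_scales:
        for r in aspect_ratios:
            area = (base_size * s) ** 2
            w = np.sqrt(area / r)        # keep the area fixed while varying h/w = r
            h = w * r
            anchors.append([-w / 2, -h / 2, w / 2, h / 2])
    anchors = np.array(anchors)          # (A, 4), centered at the origin

    # Shift the A centered anchors to every cell center of the feature map.
    xs = (np.arange(feat_w) + 0.5) * stride
    ys = (np.arange(feat_h) + 0.5) * stride
    cx, cy = np.meshgrid(xs, ys)
    shifts = np.stack([cx, cy, cx, cy], axis=-1).reshape(-1, 1, 4)   # (H*W, 1, 4)
    return (shifts + anchors[None]).reshape(-1, 4)

# Example: one FPN level with stride 8 on a 75x75 feature map -> 9 anchors per cell.
boxes = make_anchors(base_size=32, stride=8, feat_h=75, feat_w=75)
print(boxes.shape)  # (75 * 75 * 9, 4)
```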
We note that after publication, faster and more accurate results can now be obtained by a variant of Faster R-CNN from . ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_52", "text": " We evaluate RetinaNet on the challenging COCO dataset and compare test-dev results to recent state-of-the-art methods including both one-stage and two-stage models. Results are presented in Table 2 for our RetinaNet-101-800 model trained using scale jitter and for 1.5×\\times longer than the models in Table 1e (giving a 1.3 AP gain). Compared to existing one-stage methods, our approach achieves a healthy 5.9 point AP gap (39.1 vs. 33.2) with the closest competitor, DSSD , while also being faster, see Figure 2. Compared to recent two-stage methods, RetinaNet achieves a 2.3 point gap above the top-performing Faster R-CNN model based on Inception-ResNet-v2-TDM . Plugging in ResNeXt-32x8d-101-FPN as the RetinaNet backbone further improves results another 1.7 AP, surpassing 40 AP on COCO. ", "title": "Focal Loss for Dense Object Detection" }, { "id": "1708.02002_all_53", "text": " In this work, we identify class imbalance as the primary obstacle preventing one-stage object detectors from surpassing top-performing, two-stage methods. To address this, we propose the focal loss which applies a modulating term to the cross entropy loss in order to focus learning on hard negative examples. Our approach is simple and highly effective. We demonstrate its efficacy by designing a fully convolutional one-stage detector and report extensive experimental analysis showing that it achieves state-of-the-art accuracy and speed. Source code is available at https://github.com/facebookresearch/Detectron . ", "title": "Focal Loss for Dense Object Detection" } ]
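To make the α-balanced focal loss discussed in these experiments concrete, here is a minimal PyTorch sketch under the usual logits-based binary formulation, with the defaults γ = 2 and α = 0.25 from the text. The normalization by the number of positive anchors and the toy tensor shapes are assumptions, and the classifier bias initialization that sets the prior π = 0.01 is omitted.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """alpha-balanced focal loss for binary (per-anchor, per-class) targets.

    logits, targets: tensors of the same shape; targets are 0/1.
    FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t)
    """
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")  # = -log(p_t)
    p_t = p * targets + (1 - p) * (1 - targets)              # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    loss = alpha_t * (1 - p_t) ** gamma * ce                 # gamma = 0 recovers plain CE
    # Normalize by the number of positive anchors (a common choice for dense detectors).
    num_pos = targets.sum().clamp(min=1)
    return loss.sum() / num_pos

# Toy usage: 8 anchors, 4 classes, mostly-negative targets.
logits = torch.randn(8, 4, requires_grad=True)
targets = torch.zeros(8, 4)
targets[0, 2] = 1.0
print(focal_loss(logits, targets).item())
```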
What should be the size of the latent space of a generative model to get the best results on the MNIST dataset?
For the likelihood lower bound, the authors trained generative models (decoders) and corresponding encoders (a.k.a. recognition models) with 500 hidden units in the case of MNIST; notably, superfluous latent variables did not lead to overfitting, owing to the regularizing nature of the variational bound [29]. For estimating the marginal likelihood itself, they used a very low-dimensional latent space (3 latent variables, with 100 hidden units), since the estimates became unreliable for higher-dimensional latent spaces [30].
[ 29, 30 ]
[ { "id": "1312.6114_all_0", "text": " How can we perform efficient approximate inference and learning with directed probabilistic models whose continuous latent variables and/or parameters have intractable posterior distributions? The variational Bayesian (VB) approach involves the optimization of an approximation to the intractable posterior. Unfortunately, the common mean-field approach requires analytical solutions of expectations w.r.t. the approximate posterior, which are also intractable in the general case. We show how a reparameterization of the variational lower bound yields a simple differentiable unbiased estimator of the lower bound; this SGVB (Stochastic Gradient Variational Bayes) estimator can be used for efficient approximate posterior inference in almost any model with continuous latent variables and/or parameters, and is straightforward to optimize using standard stochastic gradient ascent techniques. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_1", "text": " For the case of an i.i.d. dataset and continuous latent variables per datapoint, we propose the Auto-Encoding VB (AEVB) algorithm. In the AEVB algorithm we make inference and learning especially efficient by using the SGVB estimator to optimize a recognition model that allows us to perform very efficient approximate posterior inference using simple ancestral sampling, which in turn allows us to efficiently learn the model parameters, without the need of expensive iterative inference schemes (such as MCMC) per datapoint. The learned approximate posterior inference model can also be used for a host of tasks such as recognition, denoising, representation and visualization purposes. When a neural network is used for the recognition model, we arrive at the variational auto-encoder. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_2", "text": " The strategy in this section can be used to derive a lower bound estimator (a stochastic objective function) for a variety of directed graphical models with continuous latent variables. We will restrict ourselves here to the common case where we have an i.i.d. dataset with latent variables per datapoint, and where we like to perform maximum likelihood (ML) or maximum a posteriori (MAP) inference on the (global) parameters, and variational inference on the latent variables. It is, for example, straightforward to extend this scenario to the case where we also perform variational inference on the global parameters; that algorithm is put in the appendix, but experiments with that case are left to future work. Note that our method can be applied to online, non-stationary settings, e.g. streaming data, but here we assume a fixed dataset for simplicity. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_3", "text": " Let us consider some dataset 𝐗={𝐱(i)}i=1N𝐗superscriptsubscriptsuperscript𝐱𝑖𝑖1𝑁\\mathbf{X}=\\{\\mathbf{x}^{(i)}\\}_{i=1}^{N} consisting of N𝑁N i.i.d. samples of some continuous or discrete variable 𝐱𝐱\\mathbf{x}. We assume that the data are generated by some random process, involving an unobserved continuous random variable 𝐳𝐳\\mathbf{z}. 
The process consists of two steps: (1) a value 𝐳(i)superscript𝐳𝑖\\mathbf{z}^{(i)} is generated from some prior distribution p𝜽∗​(𝐳)subscript𝑝superscript𝜽𝐳p_{\\boldsymbol{\\theta}^{*}}(\\mathbf{z}); (2) a value 𝐱(i)superscript𝐱𝑖\\mathbf{x}^{(i)} is generated from some conditional distribution p𝜽∗​(𝐱|𝐳)subscript𝑝superscript𝜽conditional𝐱𝐳p_{\\boldsymbol{\\theta}^{*}}(\\mathbf{x}|\\mathbf{z}). We assume that the prior p𝜽∗​(𝐳)subscript𝑝superscript𝜽𝐳p_{\\boldsymbol{\\theta}^{*}}(\\mathbf{z}) and likelihood p𝜽∗​(𝐱|𝐳)subscript𝑝superscript𝜽conditional𝐱𝐳p_{\\boldsymbol{\\theta}^{*}}(\\mathbf{x}|\\mathbf{z}) come from parametric families of distributions p𝜽​(𝐳)subscript𝑝𝜽𝐳p_{\\boldsymbol{\\theta}}(\\mathbf{z}) and p𝜽​(𝐱|𝐳)subscript𝑝𝜽conditional𝐱𝐳p_{\\boldsymbol{\\theta}}(\\mathbf{x}|\\mathbf{z}), and that their PDFs are differentiable almost everywhere w.r.t. both 𝜽𝜽\\boldsymbol{\\theta} and 𝐳𝐳\\mathbf{z}. Unfortunately, a lot of this process is hidden from our view: the true parameters 𝜽∗superscript𝜽\\boldsymbol{\\theta}^{*} as well as the values of the latent variables 𝐳(i)superscript𝐳𝑖\\mathbf{z}^{(i)} are unknown to us. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_4", "text": " Very importantly, we do not make the common simplifying assumptions about the marginal or posterior probabilities. Conversely, we are here interested in a general algorithm that even works efficiently in the case of: 1. Intractability: the case where the integral of the marginal likelihood p𝜽​(𝐱)=∫p𝜽​(𝐳)​p𝜽​(𝐱|𝐳)​𝑑𝐳subscript𝑝𝜽𝐱subscript𝑝𝜽𝐳subscript𝑝𝜽conditional𝐱𝐳differential-d𝐳p_{\\boldsymbol{\\theta}}(\\mathbf{x})=\\int p_{\\boldsymbol{\\theta}}(\\mathbf{z})p_{\\boldsymbol{\\theta}}(\\mathbf{x}|\\mathbf{z})\\,d\\mathbf{z} is intractable (so we cannot evaluate or differentiate the marginal likelihood), where the true posterior density p𝜽​(𝐳|𝐱)=p𝜽​(𝐱|𝐳)​p𝜽​(𝐳)/p𝜽​(𝐱)subscript𝑝𝜽conditional𝐳𝐱subscript𝑝𝜽conditional𝐱𝐳subscript𝑝𝜽𝐳subscript𝑝𝜽𝐱p_{\\boldsymbol{\\theta}}(\\mathbf{z}|\\mathbf{x})=p_{\\boldsymbol{\\theta}}(\\mathbf{x}|\\mathbf{z})p_{\\boldsymbol{\\theta}}(\\mathbf{z})/p_{\\boldsymbol{\\theta}}(\\mathbf{x}) is intractable (so the EM algorithm cannot be used), and where the required integrals for any reasonable mean-field VB algorithm are also intractable. These intractabilities are quite common and appear in cases of moderately complicated likelihood functions p𝜽​(𝐱|𝐳)subscript𝑝𝜽conditional𝐱𝐳p_{\\boldsymbol{\\theta}}(\\mathbf{x}|\\mathbf{z}), e.g. a neural network with a nonlinear hidden layer. 2. A large dataset: we have so much data that batch optimization is too costly; we would like to make parameter updates using small minibatches or even single datapoints. Sampling-based solutions, e.g. Monte Carlo EM, would in general be too slow, since it involves a typically expensive sampling loop per datapoint. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_5", "text": " We are interested in, and propose a solution to, three related problems in the above scenario: 1. Efficient approximate ML or MAP estimation for the parameters 𝜽𝜽\\boldsymbol{\\theta}. The parameters can be of interest themselves, e.g. if we are analyzing some natural process. They also allow us to mimic the hidden random process and generate artificial data that resembles the real data. 2. Efficient approximate posterior inference of the latent variable 𝐳𝐳\\mathbf{z} given an observed value 𝐱𝐱\\mathbf{x} for a choice of parameters 𝜽𝜽\\boldsymbol{\\theta}. 
This is useful for coding or data representation tasks. 3. Efficient approximate marginal inference of the variable 𝐱𝐱\\mathbf{x}. This allows us to perform all kinds of inference tasks where a prior over 𝐱𝐱\\mathbf{x} is required. Common applications in computer vision include image denoising, inpainting and super-resolution. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_6", "text": " For the purpose of solving the above problems, let us introduce a recognition model qϕ​(𝐳|𝐱)subscript𝑞bold-italic-ϕconditional𝐳𝐱q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}): an approximation to the intractable true posterior p𝜽​(𝐳|𝐱)subscript𝑝𝜽conditional𝐳𝐱p_{\\boldsymbol{\\theta}}(\\mathbf{z}|\\mathbf{x}). Note that in contrast with the approximate posterior in mean-field variational inference, it is not necessarily factorial and its parameters ϕbold-italic-ϕ\\boldsymbol{\\phi} are not computed from some closed-form expectation. Instead, we’ll introduce a method for learning the recognition model parameters ϕbold-italic-ϕ\\boldsymbol{\\phi} jointly with the generative model parameters 𝜽𝜽\\boldsymbol{\\theta}. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_7", "text": " From a coding theory perspective, the unobserved variables 𝐳𝐳\\mathbf{z} have an interpretation as a latent representation or code. In this paper we will therefore also refer to the recognition model qϕ​(𝐳|𝐱)subscript𝑞bold-italic-ϕconditional𝐳𝐱q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}) as a probabilistic encoder, since given a datapoint 𝐱𝐱\\mathbf{x} it produces a distribution (e.g. a Gaussian) over the possible values of the code 𝐳𝐳\\mathbf{z} from which the datapoint 𝐱𝐱\\mathbf{x} could have been generated. In a similar vein we will refer to p𝜽​(𝐱|𝐳)subscript𝑝𝜽conditional𝐱𝐳p_{\\boldsymbol{\\theta}}(\\mathbf{x}|\\mathbf{z}) as a probabilistic decoder, since given a code 𝐳𝐳\\mathbf{z} it produces a distribution over the possible corresponding values of 𝐱𝐱\\mathbf{x}. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_8", "text": " The marginal likelihood is composed of a sum over the marginal likelihoods of individual datapoints log⁡p𝜽​(𝐱(1),⋯,𝐱(N))=∑i=1Nlog⁡p𝜽​(𝐱(i))subscript𝑝𝜽superscript𝐱1⋯superscript𝐱𝑁superscriptsubscript𝑖1𝑁subscript𝑝𝜽superscript𝐱𝑖\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x}^{(1)},\\cdots,\\mathbf{x}^{(N)})=\\sum_{i=1}^{N}\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x}^{(i)}), which can each be rewritten as: logp𝜽(𝐱(i))=DK​L(qϕ(𝐳|𝐱(i))||p𝜽(𝐳|𝐱(i)))+ℒ(𝜽,ϕ;𝐱(i))\\displaystyle\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x}^{(i)})=D_{KL}(q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}^{(i)})||p_{\\boldsymbol{\\theta}}(\\mathbf{z}|\\mathbf{x}^{(i)}))+\\mathcal{L}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)}) (1) The first RHS term is the KL divergence of the approximate from the true posterior. 
Since this KL-divergence is non-negative, the second RHS term ℒ​(𝜽,ϕ;𝐱(i))ℒ𝜽bold-italic-ϕsuperscript𝐱𝑖\\mathcal{L}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)}) is called the (variational) lower bound on the marginal likelihood of datapoint i𝑖i, and can be written as: log⁡p𝜽​(𝐱(i))≥ℒ​(𝜽,ϕ;𝐱(i))subscript𝑝𝜽superscript𝐱𝑖ℒ𝜽bold-italic-ϕsuperscript𝐱𝑖\\displaystyle\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x}^{(i)})\\geq\\mathcal{L}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)}) =𝔼qϕ​(𝐳|𝐱)​(−log⁡qϕ​(𝐳|𝐱)+log⁡p𝜽​(𝐱,𝐳))absentsubscript𝔼subscript𝑞bold-italic-ϕconditional𝐳𝐱delimited-()subscript𝑞bold-italic-ϕconditional𝐳𝐱subscript𝑝𝜽𝐱𝐳\\displaystyle=\\mathbb{E}_{q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x})}\\left(-\\log q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x})+\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x},\\mathbf{z})\\right) (2) which can also be written as: ℒ(𝜽,ϕ;𝐱(i))=−DK​L(qϕ(𝐳|𝐱(i))||p𝜽(𝐳))+𝔼qϕ​(𝐳|𝐱(i))(logp𝜽(𝐱(i)|𝐳))\\displaystyle\\mathcal{L}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)})=-D_{KL}(q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}^{(i)})||p_{\\boldsymbol{\\theta}}(\\mathbf{z}))+\\mathbb{E}_{q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}^{(i)})}\\left(\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x}^{(i)}|\\mathbf{z})\\right) (3) We want to differentiate and optimize the lower bound ℒ​(𝜽,ϕ;𝐱(i))ℒ𝜽bold-italic-ϕsuperscript𝐱𝑖\\mathcal{L}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)}) w.r.t. both the variational parameters ϕbold-italic-ϕ\\boldsymbol{\\phi} and generative parameters 𝜽𝜽\\boldsymbol{\\theta}. However, the gradient of the lower bound w.r.t. ϕbold-italic-ϕ\\boldsymbol{\\phi} is a bit problematic. The usual (naïve) Monte Carlo gradient estimator for this type of problem is: ∇ϕ𝔼qϕ​(𝐳)​(f​(𝐳))=𝔼qϕ​(𝐳)​(f​(𝐳)​∇qϕ​(𝐳)log⁡qϕ​(𝐳))≃1L​∑l=1Lf​(𝐳)​∇qϕ​(𝐳(l))log⁡qϕ​(𝐳(l))subscript∇bold-italic-ϕsubscript𝔼subscript𝑞bold-italic-ϕ𝐳delimited-()𝑓𝐳subscript𝔼subscript𝑞bold-italic-ϕ𝐳delimited-()𝑓𝐳subscript∇subscript𝑞bold-italic-ϕ𝐳subscript𝑞bold-italic-ϕ𝐳similar-to-or-equals1𝐿superscriptsubscript𝑙1𝐿𝑓𝐳subscript∇subscript𝑞bold-italic-ϕsuperscript𝐳𝑙subscript𝑞bold-italic-ϕsuperscript𝐳𝑙\\nabla_{\\boldsymbol{\\phi}}\\mathbb{E}_{q_{\\boldsymbol{\\phi}}(\\mathbf{z})}\\left(f(\\mathbf{z})\\right)=\\mathbb{E}_{q_{\\boldsymbol{\\phi}}(\\mathbf{z})}\\left(f(\\mathbf{z})\\nabla_{q_{\\boldsymbol{\\phi}}(\\mathbf{z})}\\log q_{\\boldsymbol{\\phi}}(\\mathbf{z})\\right)\\simeq\\frac{1}{L}\\sum_{l=1}^{L}f(\\mathbf{z})\\nabla_{q_{\\boldsymbol{\\phi}}(\\mathbf{z}^{(l)})}\\log q_{\\boldsymbol{\\phi}}(\\mathbf{z}^{(l)}) where 𝐳(l)∼qϕ​(𝐳|𝐱(i))similar-tosuperscript𝐳𝑙subscript𝑞bold-italic-ϕconditional𝐳superscript𝐱𝑖\\mathbf{z}^{(l)}\\sim q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}^{(i)}). This gradient estimator exhibits exhibits very high variance (see e.g.  (BJP12)) and is impractical for our purposes. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_9", "text": " In this section we introduce a practical estimator of the lower bound and its derivatives w.r.t. the parameters. We assume an approximate posterior in the form qϕ​(𝐳|𝐱)subscript𝑞bold-italic-ϕconditional𝐳𝐱q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}), but please note that the technique can be applied to the case qϕ​(𝐳)subscript𝑞bold-italic-ϕ𝐳q_{\\boldsymbol{\\phi}}(\\mathbf{z}), i.e. where we do not condition on 𝐱𝐱\\mathbf{x}, as well. The fully variational Bayesian method for inferring a posterior over the parameters is given in the appendix. 
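To make the variance issue with this naive score-function estimator concrete, the toy sketch below (with an assumed test function f(z) = z² and scalar variational parameters) computes the estimate in PyTorch; compare it with the reparameterized version further below.

```python
import torch

# Toy variational parameters phi = (mu, log_sigma) of q_phi(z) = N(mu, sigma^2).
mu = torch.tensor(0.5, requires_grad=True)
log_sigma = torch.tensor(-1.0, requires_grad=True)

def f(z):                        # arbitrary test function, f(z) = z**2 here
    return z ** 2

L = 10_000                       # number of Monte Carlo samples
q = torch.distributions.Normal(mu, log_sigma.exp())
z = q.sample((L,))               # .sample() blocks gradients through z itself

# Score-function estimator: grad_phi E_q[f(z)] = E_q[ f(z) * grad_phi log q_phi(z) ]
surrogate = (f(z).detach() * q.log_prob(z)).mean()
surrogate.backward()
# Unbiased but typically high-variance; true values are 2*mu = 1.0 and 2*sigma^2 ≈ 0.27.
print(mu.grad, log_sigma.grad)
```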
", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_10", "text": " Under certain mild conditions outlined in section 2.4 for a chosen approximate posterior qϕ​(𝐳|𝐱)subscript𝑞bold-italic-ϕconditional𝐳𝐱q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}) we can reparameterize the random variable 𝐳~∼qϕ​(𝐳|𝐱)similar-to~𝐳subscript𝑞bold-italic-ϕconditional𝐳𝐱\\widetilde{\\mathbf{z}}\\sim q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}) using a differentiable transformation gϕ​(ϵ,𝐱)subscript𝑔bold-italic-ϕbold-italic-ϵ𝐱g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon},\\mathbf{x}) of an (auxiliary) noise variable ϵbold-italic-ϵ\\boldsymbol{\\epsilon}: 𝐳~=gϕ​(ϵ,𝐱)​ with ​ϵ∼p​(ϵ)~𝐳subscript𝑔bold-italic-ϕbold-italic-ϵ𝐱 with bold-italic-ϵsimilar-to𝑝bold-italic-ϵ\\displaystyle\\widetilde{\\mathbf{z}}=g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon},\\mathbf{x})\\text{\\quad with \\quad}\\boldsymbol{\\epsilon}\\sim p(\\boldsymbol{\\epsilon}) (4) See section 2.4 for general strategies for chosing such an approriate distribution p​(ϵ)𝑝bold-italic-ϵp(\\boldsymbol{\\epsilon}) and function gϕ​(ϵ,𝐱)subscript𝑔bold-italic-ϕbold-italic-ϵ𝐱g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon},\\mathbf{x}). We can now form Monte Carlo estimates of expectations of some function f​(𝐳)𝑓𝐳f(\\mathbf{z}) w.r.t. qϕ​(𝐳|𝐱)subscript𝑞bold-italic-ϕconditional𝐳𝐱q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}) as follows: 𝔼qϕ​(𝐳|𝐱(i))​(f​(𝐳))=𝔼p​(ϵ)​(f​(gϕ​(ϵ,𝐱(i))))subscript𝔼subscript𝑞bold-italic-ϕconditional𝐳superscript𝐱𝑖delimited-()𝑓𝐳subscript𝔼𝑝bold-italic-ϵdelimited-()𝑓subscript𝑔bold-italic-ϕbold-italic-ϵsuperscript𝐱𝑖\\displaystyle\\mathbb{E}_{q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}^{(i)})}\\left(f(\\mathbf{z})\\right)=\\mathbb{E}_{p(\\boldsymbol{\\epsilon})}\\left(f(g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon},\\mathbf{x}^{(i)}))\\right) ≃1L​∑l=1Lf​(gϕ​(ϵ(l),𝐱(i)))​ where ​ϵ(l)∼p​(ϵ)similar-to-or-equalsabsent1𝐿superscriptsubscript𝑙1𝐿𝑓subscript𝑔bold-italic-ϕsuperscriptbold-italic-ϵ𝑙superscript𝐱𝑖 where superscriptbold-italic-ϵ𝑙similar-to𝑝bold-italic-ϵ\\displaystyle\\simeq\\frac{1}{L}\\sum_{l=1}^{L}{f(g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon}^{(l)},\\mathbf{x}^{(i)}))}\\text{\\quad where \\quad}\\boldsymbol{\\epsilon}^{(l)}\\sim p(\\boldsymbol{\\epsilon}) (5) We apply this technique to the variational lower bound (eq. 
(2)), yielding our generic Stochastic Gradient Variational Bayes (SGVB) estimator ℒ~A​(𝜽,ϕ;𝐱(i))≃ℒ​(𝜽,ϕ;𝐱(i))similar-to-or-equalssuperscript~ℒ𝐴𝜽bold-italic-ϕsuperscript𝐱𝑖ℒ𝜽bold-italic-ϕsuperscript𝐱𝑖\\widetilde{\\mathcal{L}}^{A}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)})\\simeq\\mathcal{L}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)}): ℒ~A​(𝜽,ϕ;𝐱(i))superscript~ℒ𝐴𝜽bold-italic-ϕsuperscript𝐱𝑖\\displaystyle\\widetilde{\\mathcal{L}}^{A}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)}) =1L​∑l=1Llog⁡p𝜽​(𝐱(i),𝐳(i,l))−log⁡qϕ​(𝐳(i,l)|𝐱(i))absent1𝐿superscriptsubscript𝑙1𝐿subscript𝑝𝜽superscript𝐱𝑖superscript𝐳𝑖𝑙subscript𝑞bold-italic-ϕconditionalsuperscript𝐳𝑖𝑙superscript𝐱𝑖\\displaystyle=\\frac{1}{L}\\sum_{l=1}^{L}\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x}^{(i)},\\mathbf{z}^{(i,l)})-\\log q_{\\boldsymbol{\\phi}}(\\mathbf{z}^{(i,l)}|\\mathbf{x}^{(i)}) where ​𝐳(i,l)where superscript𝐳𝑖𝑙\\displaystyle\\text{where \\quad}\\mathbf{z}^{(i,l)} =gϕ​(ϵ(i,l),𝐱(i))​ and ​ϵ(l)∼p​(ϵ)absentsubscript𝑔bold-italic-ϕsuperscriptbold-italic-ϵ𝑖𝑙superscript𝐱𝑖 and superscriptbold-italic-ϵ𝑙similar-to𝑝bold-italic-ϵ\\displaystyle=g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon}^{(i,l)},\\mathbf{x}^{(i)})\\text{\\quad and \\quad}\\boldsymbol{\\epsilon}^{(l)}\\sim p(\\boldsymbol{\\epsilon}) (6) Often, the KL-divergence DK​L(qϕ(𝐳|𝐱(i))||p𝜽(𝐳))D_{KL}(q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}^{(i)})||p_{\\boldsymbol{\\theta}}(\\mathbf{z})) of eq. (3) can be integrated analytically (see appendix B), such that only the expected reconstruction error 𝔼qϕ​(𝐳|𝐱(i))​(log⁡p𝜽​(𝐱(i)|𝐳))subscript𝔼subscript𝑞bold-italic-ϕconditional𝐳superscript𝐱𝑖delimited-()subscript𝑝𝜽conditionalsuperscript𝐱𝑖𝐳\\mathbb{E}_{q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}^{(i)})}\\left(\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x}^{(i)}|\\mathbf{z})\\right) requires estimation by sampling. The KL-divergence term can then be interpreted as regularizing ϕbold-italic-ϕ\\boldsymbol{\\phi}, encouraging the approximate posterior to be close to the prior p𝜽​(𝐳)subscript𝑝𝜽𝐳p_{\\boldsymbol{\\theta}}(\\mathbf{z}). This yields a second version of the SGVB estimator ℒ~B​(𝜽,ϕ;𝐱(i))≃ℒ​(𝜽,ϕ;𝐱(i))similar-to-or-equalssuperscript~ℒ𝐵𝜽bold-italic-ϕsuperscript𝐱𝑖ℒ𝜽bold-italic-ϕsuperscript𝐱𝑖\\widetilde{\\mathcal{L}}^{B}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)})\\simeq\\mathcal{L}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)}), corresponding to eq. 
(3), which typically has less variance than the generic estimator: ℒ~B​(𝜽,ϕ;𝐱(i))superscript~ℒ𝐵𝜽bold-italic-ϕsuperscript𝐱𝑖\\displaystyle\\widetilde{\\mathcal{L}}^{B}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)}) =−DK​L(qϕ(𝐳|𝐱(i))||p𝜽(𝐳))+1L∑l=1L(logp𝜽(𝐱(i)|𝐳(i,l)))\\displaystyle=-D_{KL}(q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}^{(i)})||p_{\\boldsymbol{\\theta}}(\\mathbf{z}))+\\frac{1}{L}\\sum_{l=1}^{L}(\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x}^{(i)}|\\mathbf{z}^{(i,l)})) where ​𝐳(i,l)where superscript𝐳𝑖𝑙\\displaystyle\\text{where \\quad}\\mathbf{z}^{(i,l)} =gϕ​(ϵ(i,l),𝐱(i))​ and ​ϵ(l)∼p​(ϵ)absentsubscript𝑔bold-italic-ϕsuperscriptbold-italic-ϵ𝑖𝑙superscript𝐱𝑖 and superscriptbold-italic-ϵ𝑙similar-to𝑝bold-italic-ϵ\\displaystyle=g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon}^{(i,l)},\\mathbf{x}^{(i)})\\text{\\quad and \\quad}\\boldsymbol{\\epsilon}^{(l)}\\sim p(\\boldsymbol{\\epsilon}) (7) Given multiple datapoints from a dataset 𝐗𝐗\\mathbf{X} with N𝑁N datapoints, we can construct an estimator of the marginal likelihood lower bound of the full dataset, based on minibatches: ℒ​(𝜽,ϕ;𝐗)≃ℒ~M​(𝜽,ϕ;𝐗M)=NM​∑i=1Mℒ~​(𝜽,ϕ;𝐱(i))similar-to-or-equalsℒ𝜽bold-italic-ϕ𝐗superscript~ℒ𝑀𝜽bold-italic-ϕsuperscript𝐗𝑀𝑁𝑀superscriptsubscript𝑖1𝑀~ℒ𝜽bold-italic-ϕsuperscript𝐱𝑖\\displaystyle\\mathcal{L}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{X})\\simeq\\widetilde{\\mathcal{L}}^{M}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{X}^{M})=\\frac{N}{M}\\sum_{i=1}^{M}\\widetilde{\\mathcal{L}}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)}) (8) where the minibatch 𝐗M={𝐱(i)}i=1Msuperscript𝐗𝑀superscriptsubscriptsuperscript𝐱𝑖𝑖1𝑀\\mathbf{X}^{M}=\\{\\mathbf{x}^{(i)}\\}_{i=1}^{M} is a randomly drawn sample of M𝑀M datapoints from the full dataset 𝐗𝐗\\mathbf{X} with N𝑁N datapoints. In our experiments we found that the number of samples L𝐿L per datapoint can be set to 111 as long as the minibatch size M𝑀M was large enough, e.g. M=100𝑀100M=100. Derivatives ∇𝜽,ϕℒ~​(𝜽;𝐗M)subscript∇𝜽bold-italic-ϕ~ℒ𝜽superscript𝐗𝑀\\nabla_{\\boldsymbol{\\theta},\\boldsymbol{\\phi}}\\widetilde{\\mathcal{L}}(\\boldsymbol{\\theta};\\mathbf{X}^{M}) can be taken, and the resulting gradients can be used in conjunction with stochastic optimization methods such as SGD or Adagrad (DHS10). See algorithm 1 for a basic approach to compute the stochastic gradients. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_11", "text": " A connection with auto-encoders becomes clear when looking at the objective function given at eq. (7). The first term is (the KL divergence of the approximate posterior from the prior) acts as a regularizer, while the second term is a an expected negative reconstruction error. The function gϕ(.)g_{\\boldsymbol{\\phi}}(.) is chosen such that it maps a datapoint 𝐱(i)superscript𝐱𝑖\\mathbf{x}^{(i)} and a random noise vector ϵ(l)superscriptbold-italic-ϵ𝑙\\boldsymbol{\\epsilon}^{(l)} to a sample from the approximate posterior for that datapoint: 𝐳(i,l)=gϕ​(ϵ(l),𝐱(i))superscript𝐳𝑖𝑙subscript𝑔bold-italic-ϕsuperscriptbold-italic-ϵ𝑙superscript𝐱𝑖\\mathbf{z}^{(i,l)}=g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon}^{(l)},\\mathbf{x}^{(i)}) where 𝐳(i,l)∼qϕ​(𝐳|𝐱(i))similar-tosuperscript𝐳𝑖𝑙subscript𝑞bold-italic-ϕconditional𝐳superscript𝐱𝑖\\mathbf{z}^{(i,l)}\\sim q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}^{(i)}). 
Subsequently, the sample 𝐳(i,l)superscript𝐳𝑖𝑙\\mathbf{z}^{(i,l)} is then input to function log⁡p𝜽​(𝐱(i)|𝐳(i,l))subscript𝑝𝜽conditionalsuperscript𝐱𝑖superscript𝐳𝑖𝑙\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x}^{(i)}|\\mathbf{z}^{(i,l)}), which equals the probability density (or mass) of datapoint 𝐱(i)superscript𝐱𝑖\\mathbf{x}^{(i)} under the generative model, given 𝐳(i,l)superscript𝐳𝑖𝑙\\mathbf{z}^{(i,l)}. This term is a negative reconstruction error in auto-encoder parlance. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_12", "text": " In order to solve our problem we invoked an alternative method for generating samples from qϕ​(𝐳|𝐱)subscript𝑞bold-italic-ϕconditional𝐳𝐱q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}). The essential parameterization trick is quite simple. Let 𝐳𝐳\\mathbf{z} be a continuous random variable, and 𝐳∼qϕ​(𝐳|𝐱)similar-to𝐳subscript𝑞bold-italic-ϕconditional𝐳𝐱\\mathbf{z}\\sim q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}) be some conditional distribution. It is then often possible to express the random variable 𝐳𝐳\\mathbf{z} as a deterministic variable 𝐳=gϕ​(ϵ,𝐱)𝐳subscript𝑔bold-italic-ϕbold-italic-ϵ𝐱\\mathbf{z}=g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon},\\mathbf{x}), where ϵbold-italic-ϵ\\boldsymbol{\\epsilon} is an auxiliary variable with independent marginal p​(ϵ)𝑝bold-italic-ϵp(\\boldsymbol{\\epsilon}), and gϕ(.)g_{\\boldsymbol{\\phi}}(.) is some vector-valued function parameterized by ϕbold-italic-ϕ\\boldsymbol{\\phi}. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_13", "text": " This reparameterization is useful for our case since it can be used to rewrite an expectation w.r.t qϕ​(𝐳|𝐱)subscript𝑞bold-italic-ϕconditional𝐳𝐱q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}) such that the Monte Carlo estimate of the expectation is differentiable w.r.t. ϕbold-italic-ϕ\\boldsymbol{\\phi}. A proof is as follows. Given the deterministic mapping 𝐳=gϕ​(ϵ,𝐱)𝐳subscript𝑔bold-italic-ϕbold-italic-ϵ𝐱\\mathbf{z}=g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon},\\mathbf{x}) we know that qϕ​(𝐳|𝐱)​∏id​zi=p​(ϵ)​∏id​ϵisubscript𝑞bold-italic-ϕconditional𝐳𝐱subscriptproduct𝑖𝑑subscript𝑧𝑖𝑝bold-italic-ϵsubscriptproduct𝑖𝑑subscriptitalic-ϵ𝑖q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x})\\prod_{i}dz_{i}=p(\\boldsymbol{\\epsilon})\\prod_{i}d\\epsilon_{i}. Therefore111Note that for infinitesimals we use the notational convention d​𝐳=∏id​zi𝑑𝐳subscriptproduct𝑖𝑑subscript𝑧𝑖d\\mathbf{z}=\\prod_{i}dz_{i}, ∫qϕ​(𝐳|𝐱)​f​(𝐳)​𝑑𝐳=∫p​(ϵ)​f​(𝐳)​𝑑ϵ=∫p​(ϵ)​f​(gϕ​(ϵ,𝐱))​𝑑ϵsubscript𝑞bold-italic-ϕconditional𝐳𝐱𝑓𝐳differential-d𝐳𝑝bold-italic-ϵ𝑓𝐳differential-dbold-italic-ϵ𝑝bold-italic-ϵ𝑓subscript𝑔bold-italic-ϕbold-italic-ϵ𝐱differential-dbold-italic-ϵ\\int q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x})f(\\mathbf{z})\\,d\\mathbf{z}=\\int p(\\boldsymbol{\\epsilon})f(\\mathbf{z})\\,d\\boldsymbol{\\epsilon}=\\int p(\\boldsymbol{\\epsilon})f(g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon},\\mathbf{x}))\\,d\\boldsymbol{\\epsilon}. 
It follows that a differentiable estimator can be constructed: ∫qϕ​(𝐳|𝐱)​f​(𝐳)​𝑑𝐳≃1L​∑l=1Lf​(gϕ​(𝐱,ϵ(l)))similar-to-or-equalssubscript𝑞bold-italic-ϕconditional𝐳𝐱𝑓𝐳differential-d𝐳1𝐿superscriptsubscript𝑙1𝐿𝑓subscript𝑔bold-italic-ϕ𝐱superscriptbold-italic-ϵ𝑙\\int q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x})f(\\mathbf{z})\\,d\\mathbf{z}\\simeq\\frac{1}{L}\\sum_{l=1}^{L}f(g_{\\boldsymbol{\\phi}}(\\mathbf{x},\\boldsymbol{\\epsilon}^{(l)})) where ϵ(l)∼p​(ϵ)similar-tosuperscriptbold-italic-ϵ𝑙𝑝bold-italic-ϵ\\boldsymbol{\\epsilon}^{(l)}\\sim p(\\boldsymbol{\\epsilon}). In section 2.3 we applied this trick to obtain a differentiable estimator of the variational lower bound. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_14", "text": " Take, for example, the univariate Gaussian case: let z∼p​(z|x)=𝒩​(μ,σ2)similar-to𝑧𝑝conditional𝑧𝑥𝒩𝜇superscript𝜎2z\\sim p(z|x)=\\mathcal{N}(\\mu,\\sigma^{2}). In this case, a valid reparameterization is z=μ+σ​ϵ𝑧𝜇𝜎italic-ϵz=\\mu+\\sigma\\epsilon, where ϵitalic-ϵ\\epsilon is an auxiliary noise variable ϵ∼𝒩​(0,1)similar-toitalic-ϵ𝒩01\\epsilon\\sim\\mathcal{N}(0,1). Therefore, 𝔼𝒩​(z;μ,σ2)​(f​(z))=𝔼𝒩​(ϵ;0,1)​(f​(μ+σ​ϵ))≃1L​∑l=1Lf​(μ+σ​ϵ(l))subscript𝔼𝒩𝑧𝜇superscript𝜎2delimited-()𝑓𝑧subscript𝔼𝒩italic-ϵ01delimited-()𝑓𝜇𝜎italic-ϵsimilar-to-or-equals1𝐿superscriptsubscript𝑙1𝐿𝑓𝜇𝜎superscriptitalic-ϵ𝑙\\mathbb{E}_{\\mathcal{N}(z;\\mu,\\sigma^{2})}\\left(f(z)\\right)=\\mathbb{E}_{\\mathcal{N}(\\epsilon;0,1)}\\left(f(\\mu+\\sigma\\epsilon)\\right)\\simeq\\frac{1}{L}\\sum_{l=1}^{L}f(\\mu+\\sigma\\epsilon^{(l)}) where ϵ(l)∼𝒩​(0,1)similar-tosuperscriptitalic-ϵ𝑙𝒩01\\epsilon^{(l)}\\sim\\mathcal{N}(0,1). ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_15", "text": " For which qϕ​(𝐳|𝐱)subscript𝑞bold-italic-ϕconditional𝐳𝐱q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}) can we choose such a differentiable transformation gϕ(.)g_{\\boldsymbol{\\phi}}(.) and auxiliary variable ϵ∼p​(ϵ)similar-tobold-italic-ϵ𝑝bold-italic-ϵ\\boldsymbol{\\epsilon}\\sim p(\\boldsymbol{\\epsilon})? Three basic approaches are: 1. Tractable inverse CDF. In this case, let ϵ∼𝒰​(𝟎,𝐈)similar-tobold-italic-ϵ𝒰0𝐈\\boldsymbol{\\epsilon}\\sim\\mathcal{U}(\\mathbf{0},\\mathbf{I}), and let gϕ​(ϵ,𝐱)subscript𝑔bold-italic-ϕbold-italic-ϵ𝐱g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon},\\mathbf{x}) be the inverse CDF of qϕ​(𝐳|𝐱)subscript𝑞bold-italic-ϕconditional𝐳𝐱q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}). Examples: Exponential, Cauchy, Logistic, Rayleigh, Pareto, Weibull, Reciprocal, Gompertz, Gumbel and Erlang distributions. 2. Analogous to the Gaussian example, for any ”location-scale” family of distributions we can choose the standard distribution (with location=0location0\\text{location}=0, scale=1scale1\\text{scale}=1) as the auxiliary variable ϵbold-italic-ϵ\\boldsymbol{\\epsilon}, and let g(.)=location+scale⋅ϵg(.)=\\text{location}+\\text{scale}\\cdot\\boldsymbol{\\epsilon}. Examples: Laplace, Elliptical, Student’s t, Logistic, Uniform, Triangular and Gaussian distributions. 3. Composition: It is often possible to express random variables as different transformations of auxiliary variables. Examples: Log-Normal (exponentiation of normally distributed variable), Gamma (a sum over exponentially distributed variables), Dirichlet (weighted sum of Gamma variates), Beta, Chi-Squared, and F distributions. 
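For contrast with the score-function sketch above, the same toy gradient computed via the reparameterization z = μ + σ·ε of this section looks as follows (again with the assumed test function f(z) = z²).

```python
import torch

mu = torch.tensor(0.5, requires_grad=True)
log_sigma = torch.tensor(-1.0, requires_grad=True)

L = 10_000
eps = torch.randn(L)                 # auxiliary noise, eps ~ N(0, 1)
z = mu + log_sigma.exp() * eps       # z = g_phi(eps) = mu + sigma * eps
loss = (z ** 2).mean()               # Monte Carlo estimate of E_q[f(z)] with f(z) = z^2
loss.backward()
# Same expectations as before (2*mu and 2*sigma^2), but with much lower variance.
print(mu.grad, log_sigma.grad)
```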
", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_16", "text": " When all three approaches fail, good approximations to the inverse CDF exist requiring computations with time complexity comparable to the PDF (see e.g.  (Dev86) for some methods). ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_17", "text": " In this section we’ll give an example where we use a neural network for the probabilistic encoder qϕ​(𝐳|𝐱)subscript𝑞bold-italic-ϕconditional𝐳𝐱q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}) (the approximation to the posterior of the generative model p𝜽​(𝐱,𝐳)subscript𝑝𝜽𝐱𝐳p_{\\boldsymbol{\\theta}}(\\mathbf{x},\\mathbf{z})) and where the parameters ϕbold-italic-ϕ\\boldsymbol{\\phi} and 𝜽𝜽\\boldsymbol{\\theta} are optimized jointly with the AEVB algorithm. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_18", "text": " Let the prior over the latent variables be the centered isotropic multivariate Gaussian p𝜽​(𝐳)=𝒩​(𝐳;𝟎,𝐈)subscript𝑝𝜽𝐳𝒩𝐳0𝐈p_{\\boldsymbol{\\theta}}(\\mathbf{z})=\\mathcal{N}(\\mathbf{z};\\mathbf{0},\\mathbf{I}). Note that in this case, the prior lacks parameters. We let p𝜽​(𝐱|𝐳)subscript𝑝𝜽conditional𝐱𝐳p_{\\boldsymbol{\\theta}}(\\mathbf{x}|\\mathbf{z}) be a multivariate Gaussian (in case of real-valued data) or Bernoulli (in case of binary data) whose distribution parameters are computed from 𝐳𝐳\\mathbf{z} with a MLP (a fully-connected neural network with a single hidden layer, see appendix C). Note the true posterior p𝜽​(𝐳|𝐱)subscript𝑝𝜽conditional𝐳𝐱p_{\\boldsymbol{\\theta}}(\\mathbf{z}|\\mathbf{x}) is in this case intractable. While there is much freedom in the form qϕ​(𝐳|𝐱)subscript𝑞bold-italic-ϕconditional𝐳𝐱q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}), we’ll assume the true (but intractable) posterior takes on a approximate Gaussian form with an approximately diagonal covariance. In this case, we can let the variational approximate posterior be a multivariate Gaussian with a diagonal covariance structure222Note that this is just a (simplifying) choice, and not a limitation of our method.: log⁡qϕ​(𝐳|𝐱(i))subscript𝑞bold-italic-ϕconditional𝐳superscript𝐱𝑖\\displaystyle\\log q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}^{(i)}) =log⁡𝒩​(𝐳;𝝁(i),𝝈2​(i)​𝐈)absent𝒩𝐳superscript𝝁𝑖superscript𝝈2𝑖𝐈\\displaystyle=\\log\\mathcal{N}(\\mathbf{z};\\boldsymbol{\\mu}^{(i)},\\boldsymbol{\\sigma}^{2(i)}\\mathbf{I}) (9) where the mean and s.d. of the approximate posterior, 𝝁(i)superscript𝝁𝑖\\boldsymbol{\\mu}^{(i)} and 𝝈(i)superscript𝝈𝑖\\boldsymbol{\\sigma}^{(i)}, are outputs of the encoding MLP, i.e. nonlinear functions of datapoint 𝐱(i)superscript𝐱𝑖\\mathbf{x}^{(i)} and the variational parameters ϕbold-italic-ϕ\\boldsymbol{\\phi} (see appendix C). 
", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_19", "text": " As explained in section 2.4, we sample from the posterior 𝐳(i,l)∼qϕ​(𝐳|𝐱(i))similar-tosuperscript𝐳𝑖𝑙subscript𝑞bold-italic-ϕconditional𝐳superscript𝐱𝑖\\mathbf{z}^{(i,l)}\\sim q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}^{(i)}) using 𝐳(i,l)=gϕ​(𝐱(i),ϵ(l))=𝝁(i)+𝝈(i)⊙ϵ(l)superscript𝐳𝑖𝑙subscript𝑔bold-italic-ϕsuperscript𝐱𝑖superscriptbold-italic-ϵ𝑙superscript𝝁𝑖direct-productsuperscript𝝈𝑖superscriptbold-italic-ϵ𝑙\\mathbf{z}^{(i,l)}=g_{\\boldsymbol{\\phi}}(\\mathbf{x}^{(i)},\\boldsymbol{\\epsilon}^{(l)})=\\boldsymbol{\\mu}^{(i)}+\\boldsymbol{\\sigma}^{(i)}\\odot\\boldsymbol{\\epsilon}^{(l)} where ϵ(l)∼𝒩​(𝟎,𝐈)similar-tosuperscriptbold-italic-ϵ𝑙𝒩0𝐈\\boldsymbol{\\epsilon}^{(l)}\\sim\\mathcal{N}(\\mathbf{0},\\mathbf{I}). With ⊙direct-product\\odot we signify an element-wise product. In this model both p𝜽​(𝐳)subscript𝑝𝜽𝐳p_{\\boldsymbol{\\theta}}(\\mathbf{z}) (the prior) and qϕ​(𝐳|𝐱)subscript𝑞bold-italic-ϕconditional𝐳𝐱q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}) are Gaussian; in this case, we can use the estimator of eq. (7) where the KL divergence can be computed and differentiated without estimation (see appendix B). The resulting estimator for this model and datapoint 𝐱(i)superscript𝐱𝑖\\mathbf{x}^{(i)} is: ℒ​(𝜽,ϕ;𝐱(i))ℒ𝜽bold-italic-ϕsuperscript𝐱𝑖\\displaystyle\\mathcal{L}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)}) ≃12​∑j=1J(1+log⁡((σj(i))2)−(μj(i))2−(σj(i))2)+1L​∑l=1Llog⁡p𝜽​(𝐱(i)|𝐳(i,l))similar-to-or-equalsabsent12superscriptsubscript𝑗1𝐽1superscriptsuperscriptsubscript𝜎𝑗𝑖2superscriptsuperscriptsubscript𝜇𝑗𝑖2superscriptsuperscriptsubscript𝜎𝑗𝑖21𝐿superscriptsubscript𝑙1𝐿subscript𝑝𝜽conditionalsuperscript𝐱𝑖superscript𝐳𝑖𝑙\\displaystyle\\simeq\\frac{1}{2}\\sum_{j=1}^{J}\\left(1+\\log((\\sigma_{j}^{(i)})^{2})-(\\mu_{j}^{(i)})^{2}-(\\sigma_{j}^{(i)})^{2}\\right)+\\frac{1}{L}\\sum_{l=1}^{L}\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x}^{(i)}|\\mathbf{z}^{(i,l)}) where ​𝐳(i,l)where superscript𝐳𝑖𝑙\\displaystyle\\text{where\\quad}\\mathbf{z}^{(i,l)} =𝝁(i)+𝝈(i)⊙ϵ(l)​ and ​ϵ(l)∼𝒩​(0,𝐈)absentsuperscript𝝁𝑖direct-productsuperscript𝝈𝑖superscriptbold-italic-ϵ𝑙 and superscriptbold-italic-ϵ𝑙similar-to𝒩0𝐈\\displaystyle=\\boldsymbol{\\mu}^{(i)}+\\boldsymbol{\\sigma}^{(i)}\\odot\\boldsymbol{\\epsilon}^{(l)}\\text{\\quad and \\quad}\\boldsymbol{\\epsilon}^{(l)}\\sim\\mathcal{N}(0,\\mathbf{I}) (10) As explained above and in appendix C, the decoding term log⁡p𝜽​(𝐱(i)|𝐳(i,l))subscript𝑝𝜽conditionalsuperscript𝐱𝑖superscript𝐳𝑖𝑙\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x}^{(i)}|\\mathbf{z}^{(i,l)}) is a Bernoulli or Gaussian MLP, depending on the type of data we are modelling. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_20", "text": " The wake-sleep algorithm (HDFN95) is, to the best of our knowledge, the only other on-line learning method in the literature that is applicable to the same general class of continuous latent variable models. Like our method, the wake-sleep algorithm employs a recognition model that approximates the true posterior. A drawback of the wake-sleep algorithm is that it requires a concurrent optimization of two objective functions, which together do not correspond to optimization of (a bound of) the marginal likelihood. An advantage of wake-sleep is that it also applies to models with discrete latent variables. Wake-Sleep has the same computational complexity as AEVB per datapoint. 
", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_21", "text": " Stochastic variational inference (HBWP13) has recently received increasing interest. Recently, (BJP12) introduced a control variate schemes to reduce the high variance of the naïve gradient estimator discussed in section 2.1, and applied to exponential family approximations of the posterior. In (RGB13) some general methods, i.e. a control variate scheme, were introduced for reducing the variance of the original gradient estimator. In (SK13), a similar reparameterization as in this paper was used in an efficient version of a stochastic variational inference algorithm for learning the natural parameters of exponential-family approximating distributions. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_22", "text": " The AEVB algorithm exposes a connection between directed probabilistic models (trained with a variational objective) and auto-encoders. A connection between linear auto-encoders and a certain class of generative linear-Gaussian models has long been known. In  (Row98) it was shown that PCA corresponds to the maximum-likelihood (ML) solution of a special case of the linear-Gaussian model with a prior p​(𝐳)=𝒩​(0,𝐈)𝑝𝐳𝒩0𝐈p(\\mathbf{z})=\\mathcal{N}(0,\\mathbf{I}) and a conditional distribution p​(𝐱|𝐳)=𝒩​(𝐱;𝐖𝐳,ϵ​𝐈)𝑝conditional𝐱𝐳𝒩𝐱𝐖𝐳italic-ϵ𝐈p(\\mathbf{x}|\\mathbf{z})=\\mathcal{N}(\\mathbf{x};\\mathbf{W}\\mathbf{z},\\epsilon\\mathbf{I}), specifically the case with infinitesimally small ϵitalic-ϵ\\epsilon. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_23", "text": " In relevant recent work on autoencoders (VLL+10) it was shown that the training criterion of unregularized autoencoders corresponds to maximization of a lower bound (see the infomax principle (Lin89)) of the mutual information between input X𝑋X and latent representation Z𝑍Z. Maximizing (w.r.t. parameters) of the mutual information is equivalent to maximizing the conditional entropy, which is lower bounded by the expected loglikelihood of the data under the autoencoding model (VLL+10), i.e. the negative reconstrution error. However, it is well known that this reconstruction criterion is in itself not sufficient for learning useful representations (BCV13). Regularization techniques have been proposed to make autoencoders learn useful representations, such as denoising, contractive and sparse autoencoder variants  (BCV13). The SGVB objective contains a regularization term dictated by the variational bound (e.g. eq. (10)), lacking the usual nuisance regularization hyperparameter required to learn useful representations. Related are also encoder-decoder architectures such as the predictive sparse decomposition (PSD) (KRL08), from which we drew some inspiration. Also relevant are the recently introduced Generative Stochastic Networks (BTL13) where noisy auto-encoders learn the transition operator of a Markov chain that samples from the data distribution. In (SL10) a recognition model was employed for efficient learning with Deep Boltzmann Machines. These methods are targeted at either unnormalized models (i.e. undirected models like Boltzmann machines) or limited to sparse coding models, in contrast to our proposed algorithm for learning a general class of directed probabilistic models. 
", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_24", "text": " The recently proposed DARN method  (GMW13), also learns a directed probabilistic model using an auto-encoding structure, however their method applies to binary latent variables. Even more recently,  (RMW14) also make the connection between auto-encoders, directed proabilistic models and stochastic variational inference using the reparameterization trick we describe in this paper. Their work was developed independently of ours and provides an additional perspective on AEVB. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_25", "text": " We trained generative models of images from the MNIST and Frey Face datasets333Available at http://www.cs.nyu.edu/~roweis/data.html and compared learning algorithms in terms of the variational lower bound, and the estimated marginal likelihood. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_26", "text": " The generative model (encoder) and variational approximation (decoder) from section 3 were used, where the described encoder and decoder have an equal number of hidden units. Since the Frey Face data are continuous, we used a decoder with Gaussian outputs, identical to the encoder, except that the means were constrained to the interval (0,1)01(0,1) using a sigmoidal activation function at the decoder output. Note that with hidden units we refer to the hidden layer of the neural networks of the encoder and decoder. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_27", "text": " Parameters are updated using stochastic gradient ascent where gradients are computed by differentiating the lower bound estimator ∇𝜽,ϕℒ​(𝜽,ϕ;𝐗)subscript∇𝜽bold-italic-ϕℒ𝜽bold-italic-ϕ𝐗\\nabla_{\\boldsymbol{\\theta},\\boldsymbol{\\phi}}\\mathcal{L}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{X}) (see algorithm  1), plus a small weight decay term corresponding to a prior p​(𝜽)=𝒩​(0,𝐈)𝑝𝜽𝒩0𝐈p(\\boldsymbol{\\theta})=\\mathcal{N}(0,\\mathbf{I}). Optimization of this objective is equivalent to approximate MAP estimation, where the likelihood gradient is approximated by the gradient of the lower bound. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_28", "text": " We compared performance of AEVB to the wake-sleep algorithm (HDFN95). We employed the same encoder (also called recognition model) for the wake-sleep algorithm and the variational auto-encoder. All parameters, both variational and generative, were initialized by random sampling from 𝒩​(0,0.01)𝒩00.01\\mathcal{N}(0,0.01), and were jointly stochastically optimized using the MAP criterion. Stepsizes were adapted with Adagrad (DHS10); the Adagrad global stepsize parameters were chosen from {0.01, 0.02, 0.1} based on performance on the training set in the first few iterations. Minibatches of size M=100𝑀100M=100 were used, with L=1𝐿1L=1 samples per datapoint. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_29", "text": " We trained generative models (decoders) and corresponding encoders (a.k.a. recognition models) having 500500500 hidden units in case of MNIST, and 200200200 hidden units in case of the Frey Face dataset (to prevent overfitting, since it is a considerably smaller dataset). The chosen number of hidden units is based on prior literature on auto-encoders, and the relative performance of different algorithms was not very sensitive to these choices. Figure 2 shows the results when comparing the lower bounds. 
Interestingly, superfluous latent variables did not result in overfitting, which is explained by the regularizing nature of the variational bound. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_30", "text": " For very low-dimensional latent space it is possible to estimate the marginal likelihood of the learned generative models using an MCMC estimator. More information about the marginal likelihood estimator is available in the appendix. For the encoder and decoder we again used neural networks, this time with 100 hidden units, and 3 latent variables; for higher dimensional latent space the estimates became unreliable. Again, the MNIST dataset was used. The AEVB and Wake-Sleep methods were compared to Monte Carlo EM (MCEM) with a Hybrid Monte Carlo (HMC) (DKPR87) sampler; details are in the appendix. We compared the convergence speed for the three algorithms, for a small and large training set size. Results are in figure 3. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_31", "text": " If we choose a low-dimensional latent space (e.g. 2D), we can use the learned encoders (recognition model) to project high-dimensional data to a low-dimensional manifold. See appendix A for visualisations of the 2D latent manifolds for the MNIST and Frey Face datasets. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_32", "text": " We have introduced a novel estimator of the variational lower bound, Stochastic Gradient VB (SGVB), for efficient approximate inference with continuous latent variables. The proposed estimator can be straightforwardly differentiated and optimized using standard stochastic gradient methods. For the case of i.i.d. datasets and continuous latent variables per datapoint we introduce an efficient algorithm for efficient inference and learning, Auto-Encoding VB (AEVB), that learns an approximate inference model using the SGVB estimator. The theoretical advantages are reflected in experimental results. ", "title": "Auto-Encoding Variational Bayes" }, { "id": "1312.6114_all_33", "text": " Since the SGVB estimator and the AEVB algorithm can be applied to almost any inference and learning problem with continuous latent variables, there are plenty of future directions: (i) learning hierarchical generative architectures with deep neural networks (e.g. convolutional networks) used for the encoders and decoders, trained jointly with AEVB; (ii) time-series models (i.e. dynamic Bayesian networks); (iii) application of SGVB to the global parameters; (iv) supervised models with latent variables, useful for learning complicated noise distributions. ", "title": "Auto-Encoding Variational Bayes" } ]
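Putting the preceding VAE pieces together, the sketch below is one way to implement the per-datapoint lower-bound estimator with the analytic KL term (eq. (10)) for a Gaussian encoder and Bernoulli decoder on MNIST-like data, with a single sample L = 1 and minibatch size M = 100 as in the experiments above. The exact MLP layout, latent dimensionality, and tanh nonlinearity are assumptions, not the paper's precise architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=500, z_dim=20):
        super().__init__()
        self.enc = nn.Linear(x_dim, h_dim)
        self.enc_mu = nn.Linear(h_dim, z_dim)
        self.enc_logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.Tanh(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = torch.tanh(self.enc(x))
        mu, logvar = self.enc_mu(h), self.enc_logvar(h)
        z = mu + (0.5 * logvar).exp() * torch.randn_like(mu)   # reparameterized sample, L = 1
        x_logits = self.dec(z)
        # Expected reconstruction term: log p_theta(x | z) for a Bernoulli decoder.
        rec = -F.binary_cross_entropy_with_logits(x_logits, x, reduction="none").sum(dim=1)
        # Analytic KL(q_phi(z|x) || N(0, I)) = -0.5 * sum(1 + logvar - mu^2 - sigma^2).
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1)
        return (rec - kl).mean()          # lower bound, averaged over the minibatch

vae = VAE()
x = torch.rand(100, 784).round()          # fake binarized minibatch, M = 100
elbo = vae(x)
(-elbo).backward()                        # maximize the bound with any SGD-style optimizer
print(elbo.item())
```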
Does this bring better performance? If so, what is the reasoning for this?
Given the limited volume of videos available at training time, conditioning on a varying number of frames-per-second enables an additional augmentation method and provides additional control over the generated video at inference time [2]. In human evaluation experiments, raters judged this method to have more realistic motion more than half of the time [24]. It is observed that this method excels when there are large differences between frames, where having real-world knowledge of how objects move is crucial [30]. Table 1 presents the quantitative results of Make-A-Video [31].
[ 2, 24, 30, 31 ]
[ { "id": "2209.14792_all_0", "text": " The Internet has fueled collecting billions of (alt-text, image) pairs from HTML pages (Schuhmann et al., 2022), enabling the recent breakthroughs in Text-to-Image (T2I) modeling. However, replicating this success for videos is limited since a similarly sized (text, video) dataset cannot be easily collected. It would be wasteful to train Text-to-Video (T2V) models from scratch when there already exist models that can generate images. Moreover, unsupervised learning enables networks to learn from orders of magnitude more data. This large quantity of data is important to learn representations of more subtle, less common concepts in the world. Unsupervised learning has long had great success in advancing the field of natural language processing (NLP) (Liu et al., 2019a; Brown et al., 2020). Models pre-trained this way yield considerably higher performance than when solely trained in a supervised manner. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_1", "text": " Inspired by these motivations, we propose Make-A-Video. Make-A-Video leverages T2I models to learn the correspondence between text and the visual world, and uses unsupervised learning on unlabeled (unpaired) video data, to learn realistic motion. Together, Make-A-Video generates videos from text without leveraging paired text-video data. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_2", "text": " Clearly, text describing images does not capture the entirety of phenomena observed in videos. That said, one can often infer actions and events from static images (e.g. a woman drinking coffee, or an elephant kicking a football) as done in image-based action recognition systems (Girish et al., 2020). Moreover, even without text descriptions, unsupervised videos are sufficient to learn how different entities in the world move and interact (e.g. the motion of waves at the beach, or of an elephant’s trunk). As a result, a model that has only seen text describing images is surprisingly effective at generating short videos, as demonstrated by our temporal diffusion-based method. Make-A-Video sets the new state-of-the-art in T2V generation. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_3", "text": " Using function-preserving transformations, we extend the spatial layers at the model initialization stage, to include temporal information. The extended spatial-temporal network includes new attention modules that learn temporal world dynamics from a collection of videos. This procedure significantly accelerates the T2V training process by instantaneously transferring the knowledge from a previously trained T2I network to a new T2V one. To enhance the visual quality, we train spatial super-resolution models as well as frame interpolation models. This increases the resolution of the generated videos, as well as enables a higher (controllable) frame rate. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_4", "text": " Our main contributions are: • We present Make-A-Video – an effective method that extends a diffusion-based T2I model to T2V through a spatiotemporally factorized diffusion model. 
• We leverage joint text-image priors to bypass the need for paired text-video data, which in turn allows us to potentially scale to larger quantities of video data. • We present super-resolution strategies in space and time that, for the first time, generate high-definition, high frame-rate videos given a user-provided textual input. • We evaluate Make-A-Video against existing T2V systems and present: (a) State-of-the-art results in quantitative as well as qualitative measures, and (b) A more thorough evaluation than existing literature in T2V. We also collect a test set of 300 prompts for zero-shot T2V human evaluation which we plan to release. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_5", "text": " Text-to-Image Generation.  (Reed et al., 2016) is among the first methods to extend unconditional Generative Adversairal Network (GAN) (Goodfellow et al., 2014) to T2I generation. Later GAN variants have focused on progressive generation (Zhang et al., 2017; Hong et al., 2018), or better text-image alignment (Xu et al., 2018; Zhang et al., 2021). The pioneering work of DALL-E (Ramesh et al., 2021) considers T2I generation as a sequence-to-sequence translation problem using a discrete variational auto-encoder (VQVAE) and Transformer (Vaswani et al., 2017). Additional variants (Ding et al., 2022) have been proposed since then. For example, Make-A-Scene (Gafni et al., 2022) explores controllable T2I generation using semantic maps. Parti (Yu et al., 2022a) aims for more diverse content generation through an encoder-decoder architecture and an improved image tokenizer (Yu et al., 2021). On the other hand, Denoising Diffusion Probabilistic Models (DDPMs) (Ho et al., 2020) are successfully leveraged for T2I generation. GLIDE (Nichol et al., 2021) trained a T2I and an upsampling diffusion model for cascade generation. GLIDE’s proposed classifier-free guidance has been widely adopted in T2I generation to improve image quality and text faithfulness. DALLE-2 (Ramesh et al., 2022) leverages the CLIP (Radford et al., 2021) latent space and a prior model. VQ-diffusion (Gu et al., 2022) and stable diffusion (Rombach et al., 2022) performs T2I generation in the latent space instead of pixel space to improve efficiency. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_6", "text": " Text-to-Video Generation. While there is remarkable progress in T2I generation, the progress of T2V generation lags behind largely due to two main reasons: the lack of large-scale datasets with high-quality text-video pairs, and the complexity of modeling higher-dimensional video data. Early works (Mittal et al., 2017; Pan et al., 2017; Marwah et al., 2017; Li et al., 2018; Gupta et al., 2018; Liu et al., 2019b) are mainly focused on video generation in simple domains, such as moving digits or specific human actions. To our knowledge, Sync-DRAW (Mittal et al., 2017) is the first T2V generation approach that leverages a VAE with recurrent attention. (Pan et al., 2017) and (Li et al., 2018) extend GANs from image generation to T2V generation. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_7", "text": " More recently, GODIVA (Wu et al., 2021a) is the first to use 2D VQVAE and sparse attention for T2V generation supporting more realistic scenes. 
NÜWA (Wu et al., 2021b) extends GODIVA, and presents a unified representation for various generation tasks in a multitask learning scheme. To further improve the performance of T2V generation, CogVideo (Hong et al., 2022) is built on top of a frozen CogView-2 (Ding et al., 2022) T2I model by adding additional temporal attention modules. Video Diffusion Models (VDM) (Ho et al., 2022) uses a space-time factorized U-Net with joint image and video data training. While both CogVideo and VDM collected 10M private text-video pairs for training, our work uses solely open-source datasets, making it easier to reproduce. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_8", "text": " Leveraging Image Priors for Video Generation. Due to the complexity of modeling videos and the challenges in high-quality video data collection, it is natural to consider leveraging image priors for videos to simplifying the learning process. After all, an image is a video with a single frame (Bain et al., 2021). In unconditional video generation, MoCoGAN-HD (Tian et al., 2021) formulates video generation as the task of finding a trajectory in the latent space of a pre-trained and fixed image generation model. In T2V generation, NÜWA (Wu et al., 2021b) combines image and video datasets in a multitask pre-training stage to improve model generalization for fine-tuning. CogVideo (Hong et al., 2022) uses a pre-trained and fixed T2I model for T2V generation with only a small number of trainable parameters to reduce memory usage during training. But the fixed autoencoder and T2I models can be restrictive for T2V generation. The architecture of VDM (Ho et al., 2022) can enable joint image and video generation. However, they sample random independent images from random videos as their source of images, and do not leverage the massive text-image datasets. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_9", "text": " Make-A-Video differs from previous works in several aspects. First, our architecture breaks the dependency on text-video pairs for T2V generation. This is a significant advantage compared to prior work, that has to be restricted to narrow domains (Mittal et al., 2017; Gupta et al., 2018; Ge et al., 2022; Hayes et al., 2022), or require large-scale paired text-video data (Hong et al., 2022; Ho et al., 2022). Second, we fine-tune the T2I model for video generation, gaining the advantage of adapting the model weights effectively, compared to freezing the weights as in CogVideo (Hong et al., 2022). Third, motivated from prior work on efficient architectures for video and 3D vision tasks (Ye et al., 2019; Qiu et al., 2017; Xie et al., 2018), our use of pseudo-3D convolution (Qiu et al., 2017) and temporal attention layers not only better leverage a T2I architecture, it also allows for better temporal information fusion compared to VDM (Ho et al., 2022). ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_10", "text": " Make-A-Video consists of three main components: (i) A base T2I model trained on text-image pairs (Sec. 3.1), (ii) spatiotemporal convolution and attention layers that extend the networks’ building blocks to the temporal dimension (Sec. 
3.2), and (iii) spatiotemporal networks that consist of both spatiotemporal layers, as well as another crucial element needed for T2V generation - a frame interpolation network for high frame rate generation (Sec. 3.3). ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_11", "text": " Make-A-Video’s final T2V inference scheme (depicted in Fig. 2) can be formulated as: yt^=SRh∘SRlt∘↑F∘Dt∘P∘(x^,Cx(x)),\\hat{y_{t}}=\\operatorname{SR}_{h}\\circ\\operatorname{SR}_{l}^{t}\\circ\\uparrow_{F}\\circ\\operatorname{D}^{t}\\circ\\operatorname{P}\\circ(\\hat{x},\\operatorname{C}_{x}(x)), (1) where yt^^subscript𝑦𝑡\\hat{y_{t}} is the generated video, SRh,SRlsubscriptSRℎsubscriptSR𝑙\\operatorname{SR}_{h},\\operatorname{SR}_{l} are the spatial and spatiotemporal super-resolution networks (Sec. 3.2), ↑Fsubscript↑𝐹\\uparrow_{F} is a frame interpolation network (Sec. 3.3), DtsuperscriptD𝑡\\operatorname{D}^{t} is the spatiotemporal decoder (Sec. 3.2), PP\\operatorname{P} is the prior (Sec. 3.1), x^^𝑥\\hat{x} is the BPE-encoded text, CxsubscriptC𝑥\\operatorname{C}_{x} is the CLIP text encoder (Radford et al., 2021), and x𝑥x is the input text. The three main components are described in detail in the following sections. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_12", "text": " Prior to the addition of the temporal components, we train the backbone of our method: a T2I model trained on text-image pairs, sharing the core components with the work of (Ramesh et al., 2022). ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_13", "text": " We use the following networks to produce high-resolution images from text: (i) A prior network PP\\operatorname{\\textbf{P}}, that during inference generates image embeddings yesubscript𝑦𝑒y_{e} given text embeddings xesubscript𝑥𝑒x_{e} and BPE encoded text tokens x^^𝑥\\hat{x}, (ii) a decoder network DD\\operatorname{\\textbf{D}} that generates a low-resolution 64×64646464\\times 64 RGB image y^lsubscript^𝑦𝑙\\hat{y}_{l}, conditioned on the image embeddings yesubscript𝑦𝑒y_{e}, and (iii) two super-resolution networks SRlsubscriptSRl\\operatorname{\\textbf{SR}}_{\\textbf{l}},SRhsubscriptSRh\\operatorname{\\textbf{SR}}_{\\textbf{h}} that increase the generated image y^lsubscript^𝑦𝑙\\hat{y}_{l} resolution to 256×256256256256\\times 256 and 768×768768768768\\times 768 pixels respectively, resulting in the final222We then downsample to 512 using bicubic interpolation for a cleaner aesthetic. Maintaining a clean aesthetic for high definition videos is part of future work. generated image y^^𝑦\\hat{y}. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_14", "text": " In order to expand the two-dimensional (2D) conditional network into the temporal dimension, we modify the two key building blocks that now require not just spatial but also temporal dimensions in order to generate videos: (i) Convolutional layers (Sec. 3.2.1), and (ii) attention layers (Sec. 3.2.2), discussed in the following two subsections. Other layers, such as fully-connected layers, do not require specific handling when adding an additional dimension, as they are agnostic to structured spatial and temporal information. 
Temporal modifications are made in most U-Net-based diffusion networks: the spatiotemporal decoder DtsuperscriptDt\\operatorname{D^{t}} now generating 161616 RGB frames, each of size 64×64646464\\times 64, the newly added frame interpolation network ↑Fsubscript↑𝐹\\uparrow_{F}, increasing the effective frame rate by interpolating between the 161616 generated frames (as depicted in Fig. 2), and the super-resolution networks SRltsuperscriptsubscriptSR𝑙𝑡\\operatorname{SR}_{l}^{t}. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_15", "text": " Note that super resolution involves hallucinating information. In order to not have flickering artifacts, the hallucination must be consistent across frames. As a result, our SRltsuperscriptsubscriptSR𝑙𝑡\\operatorname{SR}_{l}^{t} module operates across spatial and temporal dimensions. In qualitative inspection we found this to significantly outperform per-frame super resolution. It is challenging to extend SRhsubscriptSRℎ\\operatorname{SR}_{h} to the temporal dimension due to memory and compute constraints, as well as a scarcity of high resolution video data. So SRhsubscriptSRℎ\\operatorname{SR}_{h} operates only along the spatial dimensions. But to encourage consistent detail hallucination across frames, we use the same noise initialization for each frame. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_16", "text": " Motivated by separable convolutions (Chollet, 2017), we stack a 1D convolution following each 2D convolutional (conv) layer, as shown in Fig. 3. This facilitates information sharing between the spatial and temporal axes, without succumbing to the heavy computational load of 3D conv layers. In addition, it creates a concrete partition between the pre-trained 2D conv layers and the newly initialized 1D conv layers, allowing us to train the temporal convolutions from scratch, while retaining the previously learned spatial knowledge in the spatial convolutions’ weights. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_17", "text": " Given an input tensor h∈ℝB×C×F×H×Wℎsuperscriptℝ𝐵𝐶𝐹𝐻𝑊h\\in\\mathbb{R}^{B\\times C\\times F\\times H\\times W}, where B𝐵B, C𝐶C, F𝐹F, H𝐻H, W𝑊W are the batch, channels, frames, height, and width dimensions respectively, the Pseudo-3D convolutional layer is defined as: ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_18", "text": " C​o​n​vP​3​D​(h):=C​o​n​v1​D​(C​o​n​v2​D​(h)∘T)∘T,assign𝐶𝑜𝑛subscript𝑣𝑃3𝐷ℎ𝐶𝑜𝑛subscript𝑣1𝐷𝐶𝑜𝑛subscript𝑣2𝐷ℎ𝑇𝑇Conv_{P3D}(h):=Conv_{1D}(Conv_{2D}(h)\\circ T)\\circ T, (2) ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_19", "text": " where the transpose operator ∘Tabsent𝑇\\circ T swaps between the spatial and temporal dimensions. For smooth initialization, while the C​o​n​v2​D𝐶𝑜𝑛subscript𝑣2𝐷Conv_{2D} layer is initialized from the pre-trained T2I model, the C​o​n​v1​D𝐶𝑜𝑛subscript𝑣1𝐷Conv_{1D} layer is initialized as the identity function, enabling a seamless transition from training spatial-only layers, to spatiotemporal layers. Note that at initialization, the network will generate K different images (due to random noise), each faithful to the input text but lacking temporal coherence. 
", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_20", "text": " A crucial component of T2I networks is the attention layer, where in addition to self-attending to extracted features, text information is injected to several network hierarchies, alongside other relevant information, such as the diffusion time-step. While using 3D convolutional layers is computationally heavy, adding the temporal dimension to attention layers is outright infeasible in terms of memory consumption. Inspired by the work of (Ho et al., 2022), we extend our dimension decomposition strategy to attention layers as well. Following each (pre-trained) spatial attention layer, we stack a temporal attention layer, which as with the convolutional layers, approximates a full spatiotemporal attention layer. Specifically, given an input tensor hℎh, we define f​l​a​t​t​e​n𝑓𝑙𝑎𝑡𝑡𝑒𝑛flatten as a matrix operator that flattens the spatial dimension into h′∈RB×C×F×H​Wsuperscriptℎ′superscript𝑅𝐵𝐶𝐹𝐻𝑊h^{\\prime}\\in R^{B\\times C\\times F\\times HW}. u​n​f​l​a​t​t​e​n𝑢𝑛𝑓𝑙𝑎𝑡𝑡𝑒𝑛unflatten is defined as the inverse matrix operator. The Pseudo-3D attention layer therefore is therefore defined as: ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_21", "text": " A​T​T​NP​3​D​(h)=u​n​f​l​a​t​t​e​n​(A​T​T​N1​D​(A​T​T​N2​D​(f​l​a​t​t​e​n​(h))∘T)∘T).𝐴𝑇𝑇subscript𝑁𝑃3𝐷ℎ𝑢𝑛𝑓𝑙𝑎𝑡𝑡𝑒𝑛𝐴𝑇𝑇subscript𝑁1𝐷𝐴𝑇𝑇subscript𝑁2𝐷𝑓𝑙𝑎𝑡𝑡𝑒𝑛ℎ𝑇𝑇ATTN_{P3D}(h)=unflatten(ATTN_{1D}(ATTN_{2D}(flatten(h))\\circ T)\\circ T). (3) ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_22", "text": " Similarly to C​o​n​vP​3​D𝐶𝑜𝑛subscript𝑣𝑃3𝐷Conv_{P3D}, to allow for smooth spatiotemporal initialization, the A​T​T​N2​D𝐴𝑇𝑇subscript𝑁2𝐷ATTN_{2D} layer is initialized from the pre-trained T2I model and the A​T​T​N1​D𝐴𝑇𝑇subscript𝑁1𝐷ATTN_{1D} layer is initialized as the identity function. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_23", "text": " Factorized space-time attention layers have also been used in VDM (Ho et al., 2022) and CogVideo (Hong et al., 2022). CogVideo has added temporal layers to each (frozen) spatial layers whereas we train them jointly. In order to force their network to train for images and videos interchangeably, VDM has extended their 2D U-Net to 3D through unflattened 1x3x3 convolution filters, such that the subsequent spatial attention remains 2D, and added 1D temporal attention through relative position embeddings. In contrast, we apply an additional 3x1x1 convolution projection (after each 1x3x3) such that the temporal information will also be passed through each convolution layer. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_24", "text": " Frame rate conditioning. In addition to the T2I conditionings, similar to CogVideo (Hong et al., 2022), we add an additional conditioning parameter f​p​s𝑓𝑝𝑠fps, representing the number of frames-per-second in a generated video. Conditioning on a varying number of frames-per-second, enables an additional augmentation method to tackle the limited volume of available videos at training time, and provides additional control on the generated video at inference time. 
", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_25", "text": " In addition to the spatiotemporal modifications discussed in Sec. 3.2, we train a new masked frame interpolation and extrapolation network ↑Fsubscript↑𝐹\\uparrow_{F}, capable of increasing the number of frames of the generated video either by frame interpolation for a smoother generated video, or by pre/post frame extrapolation for extending the video length. In order to increase the frame rate within memory and compute constraints, we fine-tune a spatiotemporal decoder DtsuperscriptDt\\operatorname{D^{t}} on the task of masked frame interpolation, by zero-padding the masked input frames, enabling video upsampling. When fine-tuning on masked frame interpolation, we add an additional 4 channels to the input of the U-Net: 3 channels for the RGB masked video input and an additional binary channel indicating which frames are masked. We fine-tune with variable frame-skips and f​p​s𝑓𝑝𝑠fps conditioning to enable multiple temporal upsample rates at inference time. We denote ↑Fsubscript↑𝐹\\uparrow_{F} as the operator that expands the given video tensor through masked frame interpolation. For all of our experiments we applied ↑Fsubscript↑𝐹\\uparrow_{F} with frame skip 5 to upsample a 16 frame video to 76 frames ((16-1)×\\times5+1). Note that we can use the same architecture for video extrapolation or image animation by masking frames at the beginning or end of a video. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_26", "text": " The different components of Make-A-Video described above are trained independently. The only component that receives text as input is the prior PP\\operatorname{P}. We train it on paired text-image data and do not fine-tune it on videos. The decoder, prior, and two super-resolution components are first trained on images alone (no aligned text). Recall that the decoder receives CLIP image embedding as input, and the super-resolution components receive downsampled images as input during training. After training on images, we add and initialize the new temporal layers and fine-tune them over unlabeled video data. 16 frames are sampled from the original video with random f​p​s𝑓𝑝𝑠fps ranging from 111 to 303030. We use the beta function for sampling and while training the decoder, start from higher FPS ranges (less motion) and then transition to lower FPS ranges (more motion). The masked-frame-interpolation component is fine-tuned from the temporal decoder. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_27", "text": " Datasets. To train the image models, we use a 2.32.32.3B subset of the dataset from  (Schuhmann et al., ) where the text is English. We filter out sample pairs with NSFW images 333We used this model: https://github.com/GantMan/nsfw_model, toxic words in the text, or images with a watermark probability larger than 0.50.50.5. We use WebVid-10M (Bain et al., 2021) and a 101010M subset from HD-VILA-100M (Xue et al., 2022) 444These 100100100M clips are sourced from 3.13.13.1M videos. We randomly downloaded 333 clips per video to form our HD-VILA-10M subset. to train our video generation models. Note that only the videos (no aligned text) are used. The decoder DtsuperscriptD𝑡\\operatorname{D}^{t} and the interpolation model is trained on WebVid-10M. 
SRltsuperscriptsubscriptSR𝑙𝑡\\operatorname{SR}_{l}^{t} is trained on both WebVid-10M and HD-VILA-10M. While prior work (Hong et al., 2022; Ho et al., 2022) have collected private text-video pairs for T2V generation, we use only public datasets (and no paired text for videos). We conduct automatic evaluation on UCF-101 (Soomro et al., 2012) and MSR-VTT (Xu et al., 2016) in a zero-shot setting. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_28", "text": " Automatic Metrics. For UCF-101, we write one template sentence for each class (without generating any video) and fix it for evaluation. We report Frechet Video Distance (FVD) and Inception Score (IS) on 101010K samples following (Ho et al., 2022). We generate samples that follow the same class distribution as the training set. For MSR-VTT, we report Frechet Inception Distance (FID) (Parmar et al., 2022) and CLIPSIM (average CLIP similarity between video frames and text) (Wu et al., 2021a), where all 59,7945979459,794 captions from the test set are used, following (Wu et al., 2021b). ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_29", "text": " Human Evaluation Set and Metrics. We collect an evaluation set from Amazon Mechanical Turk (AMT) that consists of 300300300 prompts. We asked annotators what they would be interested in generating if there were a T2V system. We filtered out prompts that were incomplete (e.g., “jump into water”), too abstract (e.g., “climate change”), or offensive. We then identified 555 categories (animals, fantasy, people, nature and scenes, food and beverage) and selected prompts for these categories. These prompts were selected without generating any videos for them, and were kept fixed. In addition, we also used the DrawBench prompts from Imagen (Saharia et al., 2022) for human evaluation. We evaluate video quality and text-video faithfulness. For video quality, we show two videos in random order and ask annotators which one is of higher quality. For faithfulness, we additionally show the text and ask annotators which video has a better correspondence with the text (we suggest them to ignore quality issues). In addition, we also conducted human evaluation to compare video motion realism of our interpolation model and FILM (Reda et al., 2022). For each comparison, we use the majority vote from 555 different annotators as the final result. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_30", "text": " Automatic Evaluation on MSR-VTT. In addition to GODIVA and NÜWA that report on MSR-VTT, we also perform inference on the officially released CogVideo model with both Chinese and English inputs for comparison. For CogVideo and Make-A-Video, we only generate one sample for each prompt in a zero-shot setting. We only generate videos that are at 16×256×2561625625616\\times 256\\times 256 as the evaluation models do not expect higher resolutions and frame rate. The results are shown in Table 1. Make-A-Video’s zero-shot performance is much better than GODIVA and NÜWA which are trained on MSR-VTT. We also outperform CogVideo in both Chinese and English settings. Thus, Make-A-Video has significantly better generalization capabilities than prior work. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_31", "text": " Automatic Evaluation on UCF-101. 
UCF-101 is a popular benchmark to evaluate video generation and has been recently used in T2V models. CogVideo performed finetuning of their pretrained model for class-conditional video generation. VDM (Ho et al., 2022) performed unconditional video generation and trained from scratch on UCF-101. We argue that both settings are not ideal and is not a direct evaluation of the T2V generation capabilities. Moreover, the FVD evaluation model expects the videos to be 0.50.50.5 second (161616 frames), which is too short to be used for video generation in practice. Nevertheless, in order to compare to prior work, we conducted evaluation on UCF-101 in both zero-shot and finetuning settings. As shown in Table 2, Make-A-Video’s zero-shot performance is already competitive than other approaches that are trained on UCF-101, and is much better than CogVideo, which indicates that Make-A-Video can generalize better even to such a specific domain. Our finetuning setting achieves state-of-the-art results with a significant reduction in FVD, which suggests that Make-A-Video can generate more coherent videos than prior work. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_32", "text": " Human Evaluation. We compare to CogVideo (the only public zero-shot T2V generation model) on DrawBench and our test set. We also evaluate on the 282828 videos shown on the webpage of VDM (Ho et al., 2022) (which may be biased towards showcasing the model’s strengths). Since this is a very small test set, we randomly generate 888 videos for each input and perform evaluation 888 times and report the average results. We generate videos at 76×256×2567625625676\\times 256\\times 256 resolution for human evaluation. The results are shown in Table 3. Make-A-Video achieves much better performance in both video quality and text-video faithfulness in all benchmarks and comparisons. For CogVideo, the results are similar on DrawBench and our evaluation set. For VDM, it is worth noting that we have achieved significantly better results without any cherry-picking. We also evaluate our frame interpolation network in comparison to FILM (Reda et al., 2022). We first generate low frame rate videos (1 FPS) from text prompts in DrawBench and our evaluation set, then use each method to upsample to 4 FPS. Raters choose our method for more realistic motion 62% of the time on our evaluation set and 54% of the time on DrawBench. We observe that our method excels when there are large differences between frames where having real-world knowledge of how objects move is crucial. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_33", "text": " Examples of Make-A-Video’s generations are shown in Figure 1. In this section, we will show T2V generation comparison to CogVideo (Hong et al., 2022) and VDM (Ho et al., 2022), and video interpolation comparison to FILM (Reda et al., 2022). In addition, our models can be used for a variety of other tasks such as image animation, video variation, etc. Due to space constraint, we only show a single example of each. Figure 4 (a) shows the comparison of Make-A-Video to CogVideo and VDM. Make-A-Video can generate richer content with motion consistency and text correspondence. 
Figure 4 (b) shows an example of image animation where we condition the masked frame interpolation and extrapolation network ↑Fsubscript↑𝐹\\uparrow_{F} on the image and CLIP image embedding to extrapolate the rest of the video. This allows a user to generate a video using their own image – giving them the opportunity to personalize and directly control the generated video. Figure 4 (c) shows a comparison of our approach to FILM (Reda et al., 2022) on the task of interpolation between two images. We achieve this by using the interpolation model that takes the two images as the beginning and end frames and masks 141414 frames in between for generation. Our model generates more semantically meaningful interpolation while FILM seems to primarily smoothly transition between frames without semantic real-world understanding of what is moving. Figure 4 (d) shows an example for video variation. We take the average CLIP embedding of all frames from a video as the condition to generate a semantically similar video. More video generation examples and applications can be found here: make-a-video.github.io. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_34", "text": " Learning from the world around us is one of the greatest strengths of human intelligence. Just as we quickly learn to recognize people, places, things, and actions through observation, generative systems will be more creative and useful if they can mimic the way humans learn. Learning world dynamics from orders of magnitude more videos using unsupervised learning helps researchers break away from the reliance on labeled data. The presented work has shown how labeled images combined effectively with unlabeled video footage can achieve that. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_35", "text": " As a next step we plan to address several of the technical limitations. As discussed earlier, our approach can not learn associations between text and phenomenon that can only be inferred in videos. How to incorporate these (e.g., generating a video of a person waving their hand left-to-right or right-to-left), along with generating longer videos, with multiple scenes and events, depicting more detailed stories, is left for future work. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_36", "text": " As with all large-scale models trained on data from the web, our models have learnt and likely exaggerated social biases, including harmful ones. Our T2I generation model was trained on data that removed NSFW content and toxic words. All our data (image as well as videos) is publicly available, adding a layer of transparency to our models, and making it possible for the community to reproduce our work. ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" }, { "id": "2209.14792_all_37", "text": " Mustafa Said Mehmetoglu, Jacob Xu, Katayoun Zand, Jia-Bin-Huang, Jiebo Luo, Shelly Sheynin, Angela Fan, Kelly Freed. Thank you for your contributions! ", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA" } ]
How does the use of shortcut connections in this paper compare to previous practices and theories, such as those used in MLPs and highway networks?
The shortcut connections presented in this paper are parameter-free and always learn residual functions [11]. The identity shortcuts are never closed, so all information is always passed through, with additional residual functions to be learned [12].
[ 11, 12 ]
[ { "id": "1512.03385_all_0", "text": " Deep convolutional neural networks (22, 21) have led to a series of breakthroughs for image classification (21, 50, 40). Deep networks naturally integrate low/mid/high-level features and classifiers in an end-to-end multi-layer fashion, and the “levels” of features can be enriched by the number of stacked layers (depth). Recent evidence (41, 44) reveals that network depth is of crucial importance, and the leading results (41, 44, 13, 16) on the challenging ImageNet dataset all exploit “very deep” models, with a depth of sixteen to thirty . Many other nontrivial visual recognition tasks (8, 12, 7, 32, 27) have also greatly benefited from very deep models. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_1", "text": " Driven by the significance of depth, a question arises: Is learning better networks as easy as stacking more layers? An obstacle to answering this question was the notorious problem of vanishing/exploding gradients (1, 9), which hamper convergence from the beginning. This problem, however, has been largely addressed by normalized initialization (23, 9, 37, 13) and intermediate normalization layers , which enable networks with tens of layers to start converging for stochastic gradient descent (SGD) with backpropagation . ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_2", "text": " When deeper networks are able to start converging, a degradation problem has been exposed: with the network depth increasing, accuracy gets saturated (which might be unsurprising) and then degrades rapidly. Unexpectedly, such degradation is not caused by overfitting, and adding more layers to a suitably deep model leads to higher training error, as reported in (11, 42) and thoroughly verified by our experiments. Fig. 1 shows a typical example. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_3", "text": " The degradation (of training accuracy) indicates that not all systems are similarly easy to optimize. Let us consider a shallower architecture and its deeper counterpart that adds more layers onto it. There exists a solution by construction to the deeper model: the added layers are identity mapping, and the other layers are copied from the learned shallower model. The existence of this constructed solution indicates that a deeper model should produce no higher training error than its shallower counterpart. But experiments show that our current solvers on hand are unable to find solutions that are comparably good or better than the constructed solution (or unable to do so in feasible time). ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_4", "text": " In this paper, we address the degradation problem by introducing a deep residual learning framework. Instead of hoping each few stacked layers directly fit a desired underlying mapping, we explicitly let these layers fit a residual mapping. Formally, denoting the desired underlying mapping as ℋ​(𝐱)ℋ𝐱\\mathcal{H}(\\mathbf{x}), we let the stacked nonlinear layers fit another mapping of ℱ​(𝐱):=ℋ​(𝐱)−𝐱assignℱ𝐱ℋ𝐱𝐱\\mathcal{F}(\\mathbf{x}):=\\mathcal{H}(\\mathbf{x})-\\mathbf{x}. The original mapping is recast into ℱ​(𝐱)+𝐱ℱ𝐱𝐱\\mathcal{F}(\\mathbf{x})+\\mathbf{x}. We hypothesize that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. 
To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_5", "text": " The formulation of ℱ​(𝐱)+𝐱ℱ𝐱𝐱\\mathcal{F}(\\mathbf{x})+\\mathbf{x} can be realized by feedforward neural networks with “shortcut connections” (Fig. 2). Shortcut connections (2, 34, 49) are those skipping one or more layers. In our case, the shortcut connections simply perform identity mapping, and their outputs are added to the outputs of the stacked layers (Fig. 2). Identity shortcut connections add neither extra parameter nor computational complexity. The entire network can still be trained end-to-end by SGD with backpropagation, and can be easily implemented using common libraries (e.g., Caffe ) without modifying the solvers. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_6", "text": " We present comprehensive experiments on ImageNet to show the degradation problem and evaluate our method. We show that: 1) Our extremely deep residual nets are easy to optimize, but the counterpart “plain” nets (that simply stack layers) exhibit higher training error when the depth increases; 2) Our deep residual nets can easily enjoy accuracy gains from greatly increased depth, producing results substantially better than previous networks. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_7", "text": " Similar phenomena are also shown on the CIFAR-10 set , suggesting that the optimization difficulties and the effects of our method are not just akin to a particular dataset. We present successfully trained models on this dataset with over 100 layers, and explore models with over 1000 layers. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_8", "text": " On the ImageNet classification dataset , we obtain excellent results by extremely deep residual nets. Our 152-layer residual net is the deepest network ever presented on ImageNet, while still having lower complexity than VGG nets . Our ensemble has 3.57% top-5 error on the ImageNet test set, and won the 1st place in the ILSVRC 2015 classification competition. The extremely deep representations also have excellent generalization performance on other recognition tasks, and lead us to further win the 1st places on: ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation in ILSVRC & COCO 2015 competitions. This strong evidence shows that the residual learning principle is generic, and we expect that it is applicable in other vision and non-vision problems. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_9", "text": " Residual Representations. In image recognition, VLAD is a representation that encodes by the residual vectors with respect to a dictionary, and Fisher Vector can be formulated as a probabilistic version of VLAD. Both of them are powerful shallow representations for image retrieval and classification (4, 48). For vector quantization, encoding residual vectors is shown to be more effective than encoding original vectors. 
", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_10", "text": " In low-level vision and computer graphics, for solving Partial Differential Equations (PDEs), the widely used Multigrid method reformulates the system as subproblems at multiple scales, where each subproblem is responsible for the residual solution between a coarser and a finer scale. An alternative to Multigrid is hierarchical basis preconditioning (45, 46), which relies on variables that represent residual vectors between two scales. It has been shown (3, 45, 46) that these solvers converge much faster than standard solvers that are unaware of the residual nature of the solutions. These methods suggest that a good reformulation or preconditioning can simplify the optimization. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_11", "text": " Shortcut Connections. Practices and theories that lead to shortcut connections (2, 34, 49) have been studied for a long time. An early practice of training multi-layer perceptrons (MLPs) is to add a linear layer connected from the network input to the output (34, 49). In (44, 24), a few intermediate layers are directly connected to auxiliary classifiers for addressing vanishing/exploding gradients. The papers of (39, 38, 31, 47) propose methods for centering layer responses, gradients, and propagated errors, implemented by shortcut connections. In , an “inception” layer is composed of a shortcut branch and a few deeper branches. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_12", "text": " Concurrent with our work, “highway networks” (42, 43) present shortcut connections with gating functions . These gates are data-dependent and have parameters, in contrast to our identity shortcuts that are parameter-free. When a gated shortcut is “closed” (approaching zero), the layers in highway networks represent non-residual functions. On the contrary, our formulation always learns residual functions; our identity shortcuts are never closed, and all information is always passed through, with additional residual functions to be learned. In addition, highway networks have not demonstrated accuracy gains with extremely increased depth (e.g., over 100 layers). ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_13", "text": " Let us consider ℋ​(𝐱)ℋ𝐱\\mathcal{H}(\\mathbf{x}) as an underlying mapping to be fit by a few stacked layers (not necessarily the entire net), with 𝐱𝐱\\mathbf{x} denoting the inputs to the first of these layers. If one hypothesizes that multiple nonlinear layers can asymptotically approximate complicated functions222This hypothesis, however, is still an open question. See ., then it is equivalent to hypothesize that they can asymptotically approximate the residual functions, i.e., ℋ​(𝐱)−𝐱ℋ𝐱𝐱\\mathcal{H}(\\mathbf{x})-\\mathbf{x} (assuming that the input and output are of the same dimensions). So rather than expect stacked layers to approximate ℋ​(𝐱)ℋ𝐱\\mathcal{H}(\\mathbf{x}), we explicitly let these layers approximate a residual function ℱ​(𝐱):=ℋ​(𝐱)−𝐱assignℱ𝐱ℋ𝐱𝐱\\mathcal{F}(\\mathbf{x}):=\\mathcal{H}(\\mathbf{x})-\\mathbf{x}. The original function thus becomes ℱ​(𝐱)+𝐱ℱ𝐱𝐱\\mathcal{F}(\\mathbf{x})+\\mathbf{x}. Although both forms should be able to asymptotically approximate the desired functions (as hypothesized), the ease of learning might be different. 
", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_14", "text": " This reformulation is motivated by the counterintuitive phenomena about the degradation problem (Fig. 1, left). As we discussed in the introduction, if the added layers can be constructed as identity mappings, a deeper model should have training error no greater than its shallower counterpart. The degradation problem suggests that the solvers might have difficulties in approximating identity mappings by multiple nonlinear layers. With the residual learning reformulation, if identity mappings are optimal, the solvers may simply drive the weights of the multiple nonlinear layers toward zero to approach identity mappings. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_15", "text": " In real cases, it is unlikely that identity mappings are optimal, but our reformulation may help to precondition the problem. If the optimal function is closer to an identity mapping than to a zero mapping, it should be easier for the solver to find the perturbations with reference to an identity mapping, than to learn the function as a new one. We show by experiments (Fig. 7) that the learned residual functions in general have small responses, suggesting that identity mappings provide reasonable preconditioning. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_16", "text": " We adopt residual learning to every few stacked layers. A building block is shown in Fig. 2. Formally, in this paper we consider a building block defined as: 𝐲=ℱ​(𝐱,{Wi})+𝐱.𝐲ℱ𝐱subscript𝑊𝑖𝐱\\mathbf{y}=\\mathcal{F}(\\mathbf{x},\\{W_{i}\\})+\\mathbf{x}. (1) Here 𝐱𝐱\\mathbf{x} and 𝐲𝐲\\mathbf{y} are the input and output vectors of the layers considered. The function ℱ​(𝐱,{Wi})ℱ𝐱subscript𝑊𝑖\\mathcal{F}(\\mathbf{x},\\{W_{i}\\}) represents the residual mapping to be learned. For the example in Fig. 2 that has two layers, ℱ=W2​σ​(W1​𝐱)ℱsubscript𝑊2𝜎subscript𝑊1𝐱\\mathcal{F}=W_{2}\\sigma(W_{1}\\mathbf{x}) in which σ𝜎\\sigma denotes ReLU and the biases are omitted for simplifying notations. The operation ℱ+𝐱ℱ𝐱\\mathcal{F}+\\mathbf{x} is performed by a shortcut connection and element-wise addition. We adopt the second nonlinearity after the addition (i.e., σ​(𝐲)𝜎𝐲\\sigma(\\mathbf{y}), see Fig. 2). ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_17", "text": " The shortcut connections in Eqn.(1) introduce neither extra parameter nor computation complexity. This is not only attractive in practice but also important in our comparisons between plain and residual networks. We can fairly compare plain/residual networks that simultaneously have the same number of parameters, depth, width, and computational cost (except for the negligible element-wise addition). ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_18", "text": " The dimensions of 𝐱𝐱\\mathbf{x} and ℱℱ\\mathcal{F} must be equal in Eqn.(1). If this is not the case (e.g., when changing the input/output channels), we can perform a linear projection Wssubscript𝑊𝑠W_{s} by the shortcut connections to match the dimensions: 𝐲=ℱ​(𝐱,{Wi})+Ws​𝐱.𝐲ℱ𝐱subscript𝑊𝑖subscript𝑊𝑠𝐱\\mathbf{y}=\\mathcal{F}(\\mathbf{x},\\{W_{i}\\})+W_{s}\\mathbf{x}. (2) We can also use a square matrix Wssubscript𝑊𝑠W_{s} in Eqn.(1). 
But we will show by experiments that the identity mapping is sufficient for addressing the degradation problem and is economical, and thus Wssubscript𝑊𝑠W_{s} is only used when matching dimensions. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_19", "text": " The form of the residual function ℱℱ\\mathcal{F} is flexible. Experiments in this paper involve a function ℱℱ\\mathcal{F} that has two or three layers (Fig. 5), while more layers are possible. But if ℱℱ\\mathcal{F} has only a single layer, Eqn.(1) is similar to a linear layer: 𝐲=W1​𝐱+𝐱𝐲subscript𝑊1𝐱𝐱\\mathbf{y}=W_{1}\\mathbf{x}+\\mathbf{x}, for which we have not observed advantages. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_20", "text": " We also note that although the above notations are about fully-connected layers for simplicity, they are applicable to convolutional layers. The function ℱ​(𝐱,{Wi})ℱ𝐱subscript𝑊𝑖\\mathcal{F}(\\mathbf{x},\\{W_{i}\\}) can represent multiple convolutional layers. The element-wise addition is performed on two feature maps, channel by channel. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_21", "text": " We have tested various plain/residual nets, and have observed consistent phenomena. To provide instances for discussion, we describe two models for ImageNet as follows. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_22", "text": " Plain Network. Our plain baselines (Fig. 3, middle) are mainly inspired by the philosophy of VGG nets (Fig. 3, left). The convolutional layers mostly have 3×\\times3 filters and follow two simple design rules: (i) for the same output feature map size, the layers have the same number of filters; and (ii) if the feature map size is halved, the number of filters is doubled so as to preserve the time complexity per layer. We perform downsampling directly by convolutional layers that have a stride of 2. The network ends with a global average pooling layer and a 1000-way fully-connected layer with softmax. The total number of weighted layers is 34 in Fig. 3 (middle). ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_23", "text": " It is worth noticing that our model has fewer filters and lower complexity than VGG nets (Fig. 3, left). Our 34-layer baseline has 3.6 billion FLOPs (multiply-adds), which is only 18% of VGG-19 (19.6 billion FLOPs). ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_24", "text": " Residual Network. Based on the above plain network, we insert shortcut connections (Fig. 3, right) which turn the network into its counterpart residual version. The identity shortcuts (Eqn.(1)) can be directly used when the input and output are of the same dimensions (solid line shortcuts in Fig. 3). When the dimensions increase (dotted line shortcuts in Fig. 3), we consider two options: (A) The shortcut still performs identity mapping, with extra zero entries padded for increasing dimensions. This option introduces no extra parameter; (B) The projection shortcut in Eqn.(2) is used to match dimensions (done by 1×\\times1 convolutions). For both options, when the shortcuts go across feature maps of two sizes, they are performed with a stride of 2. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_25", "text": " Our implementation for ImageNet follows the practice in (21, 41). 
The image is resized with its shorter side randomly sampled in (256,480)256480(256,480) for scale augmentation . A 224×\\times224 crop is randomly sampled from an image or its horizontal flip, with the per-pixel mean subtracted . The standard color augmentation in is used. We adopt batch normalization (BN) right after each convolution and before activation, following . We initialize the weights as in and train all plain/residual nets from scratch. We use SGD with a mini-batch size of 256. The learning rate starts from 0.1 and is divided by 10 when the error plateaus, and the models are trained for up to 60×10460superscript10460\\times 10^{4} iterations. We use a weight decay of 0.0001 and a momentum of 0.9. We do not use dropout , following the practice in . ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_26", "text": " In testing, for comparison studies we adopt the standard 10-crop testing . For best results, we adopt the fully-convolutional form as in (41, 13), and average the scores at multiple scales (images are resized such that the shorter side is in {224,256,384,480,640}224256384480640\\{224,256,384,480,640\\}). ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_27", "text": " We evaluate our method on the ImageNet 2012 classification dataset that consists of 1000 classes. The models are trained on the 1.28 million training images, and evaluated on the 50k validation images. We also obtain a final result on the 100k test images, reported by the test server. We evaluate both top-1 and top-5 error rates. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_28", "text": " Plain Networks. We first evaluate 18-layer and 34-layer plain nets. The 34-layer plain net is in Fig. 3 (middle). The 18-layer plain net is of a similar form. See Table 1 for detailed architectures. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_29", "text": " The results in Table 2 show that the deeper 34-layer plain net has higher validation error than the shallower 18-layer plain net. To reveal the reasons, in Fig. 4 (left) we compare their training/validation errors during the training procedure. We have observed the degradation problem - the 34-layer plain net has higher training error throughout the whole training procedure, even though the solution space of the 18-layer plain network is a subspace of that of the 34-layer one. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_30", "text": " We argue that this optimization difficulty is unlikely to be caused by vanishing gradients. These plain networks are trained with BN , which ensures forward propagated signals to have non-zero variances. We also verify that the backward propagated gradients exhibit healthy norms with BN. So neither forward nor backward signals vanish. In fact, the 34-layer plain net is still able to achieve competitive accuracy (Table 3), suggesting that the solver works to some extent. We conjecture that the deep plain nets may have exponentially low convergence rates, which impact the reducing of the training error333We have experimented with more training iterations (3×\\times) and still observed the degradation problem, suggesting that this problem cannot be feasibly addressed by simply using more iterations.. The reason for such optimization difficulties will be studied in the future. 
", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_31", "text": " Residual Networks. Next we evaluate 18-layer and 34-layer residual nets (ResNets). The baseline architectures are the same as the above plain nets, expect that a shortcut connection is added to each pair of 3×\\times3 filters as in Fig. 3 (right). In the first comparison (Table 2 and Fig. 4 right), we use identity mapping for all shortcuts and zero-padding for increasing dimensions (option A). So they have no extra parameter compared to the plain counterparts. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_32", "text": " We have three major observations from Table 2 and Fig. 4. First, the situation is reversed with residual learning – the 34-layer ResNet is better than the 18-layer ResNet (by 2.8%). More importantly, the 34-layer ResNet exhibits considerably lower training error and is generalizable to the validation data. This indicates that the degradation problem is well addressed in this setting and we manage to obtain accuracy gains from increased depth. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_33", "text": " Second, compared to its plain counterpart, the 34-layer ResNet reduces the top-1 error by 3.5% (Table 2), resulting from the successfully reduced training error (Fig. 4 right vs. left). This comparison verifies the effectiveness of residual learning on extremely deep systems. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_34", "text": " Last, we also note that the 18-layer plain/residual nets are comparably accurate (Table 2), but the 18-layer ResNet converges faster (Fig. 4 right vs. left). When the net is “not overly deep” (18 layers here), the current SGD solver is still able to find good solutions to the plain net. In this case, the ResNet eases the optimization by providing faster convergence at the early stage. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_35", "text": " Identity vs. Projection Shortcuts. We have shown that parameter-free, identity shortcuts help with training. Next we investigate projection shortcuts (Eqn.(2)). In Table 3 we compare three options: (A) zero-padding shortcuts are used for increasing dimensions, and all shortcuts are parameter-free (the same as Table 2 and Fig. 4 right); (B) projection shortcuts are used for increasing dimensions, and other shortcuts are identity; and (C) all shortcuts are projections. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_36", "text": " Table 3 shows that all three options are considerably better than the plain counterpart. B is slightly better than A. We argue that this is because the zero-padded dimensions in A indeed have no residual learning. C is marginally better than B, and we attribute this to the extra parameters introduced by many (thirteen) projection shortcuts. But the small differences among A/B/C indicate that projection shortcuts are not essential for addressing the degradation problem. So we do not use option C in the rest of this paper, to reduce memory/time complexity and model sizes. Identity shortcuts are particularly important for not increasing the complexity of the bottleneck architectures that are introduced below. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_37", "text": " Deeper Bottleneck Architectures. Next we describe our deeper nets for ImageNet. 
Because of concerns on the training time that we can afford, we modify the building block as a bottleneck design444Deeper non-bottleneck ResNets (e.g., Fig. 5 left) also gain accuracy from increased depth (as shown on CIFAR-10), but are not as economical as the bottleneck ResNets. So the usage of bottleneck designs is mainly due to practical considerations. We further note that the degradation problem of plain nets is also witnessed for the bottleneck designs.. For each residual function ℱℱ\\mathcal{F}, we use a stack of 3 layers instead of 2 (Fig. 5). The three layers are 1×\\times1, 3×\\times3, and 1×\\times1 convolutions, where the 1×\\times1 layers are responsible for reducing and then increasing (restoring) dimensions, leaving the 3×\\times3 layer a bottleneck with smaller input/output dimensions. Fig. 5 shows an example, where both designs have similar time complexity. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_38", "text": " The parameter-free identity shortcuts are particularly important for the bottleneck architectures. If the identity shortcut in Fig. 5 (right) is replaced with projection, one can show that the time complexity and model size are doubled, as the shortcut is connected to the two high-dimensional ends. So identity shortcuts lead to more efficient models for the bottleneck designs. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_39", "text": " 50-layer ResNet: We replace each 2-layer block in the 34-layer net with this 3-layer bottleneck block, resulting in a 50-layer ResNet (Table 1). We use option B for increasing dimensions. This model has 3.8 billion FLOPs. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_40", "text": " 101-layer and 152-layer ResNets: We construct 101-layer and 152-layer ResNets by using more 3-layer blocks (Table 1). Remarkably, although the depth is significantly increased, the 152-layer ResNet (11.3 billion FLOPs) still has lower complexity than VGG-16/19 nets (15.3/19.6 billion FLOPs). ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_41", "text": " The 50/101/152-layer ResNets are more accurate than the 34-layer ones by considerable margins (Table 3 and 5). We do not observe the degradation problem and thus enjoy significant accuracy gains from considerably increased depth. The benefits of depth are witnessed for all evaluation metrics (Table 3 and 5). ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_42", "text": " Comparisons with State-of-the-art Methods. In Table 5 we compare with the previous best single-model results. Our baseline 34-layer ResNets have achieved very competitive accuracy. Our 152-layer ResNet has a single-model top-5 validation error of 4.49%. This single-model result outperforms all previous ensemble results (Table 5). We combine six models of different depth to form an ensemble (only with two 152-layer ones at the time of submitting). This leads to 3.57% top-5 error on the test set (Table 5). This entry won the 1st place in ILSVRC 2015. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_43", "text": " We conducted more studies on the CIFAR-10 dataset , which consists of 50k training images and 10k testing images in 10 classes. We present experiments trained on the training set and evaluated on the test set. 
Our focus is on the behaviors of extremely deep networks, but not on pushing the state-of-the-art results, so we intentionally use simple architectures as follows. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_44", "text": " The plain/residual architectures follow the form in Fig. 3 (middle/right). The network inputs are 32×\\times32 images, with the per-pixel mean subtracted. The first layer is 3×\\times3 convolutions. Then we use a stack of 6​n6𝑛6n layers with 3×\\times3 convolutions on the feature maps of sizes {32,16,8}32168\\{32,16,8\\} respectively, with 2n𝑛n layers for each feature map size. The numbers of filters are {16,32,64}163264\\{16,32,64\\} respectively. The subsampling is performed by convolutions with a stride of 2. The network ends with a global average pooling, a 10-way fully-connected layer, and softmax. There are totally 6n𝑛n+2 stacked weighted layers. The following table summarizes the architecture: output map size 32×\\times32 16×\\times16 8×\\times8 # layers 1+2n𝑛n 2n𝑛n 2n𝑛n # filters 16 32 64 When shortcut connections are used, they are connected to the pairs of 3×\\times3 layers (totally 3​n3𝑛3n shortcuts). On this dataset we use identity shortcuts in all cases (i.e., option A), so our residual models have exactly the same depth, width, and number of parameters as the plain counterparts. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_45", "text": " We use a weight decay of 0.0001 and momentum of 0.9, and adopt the weight initialization in and BN but with no dropout. These models are trained with a mini-batch size of 128 on two GPUs. We start with a learning rate of 0.1, divide it by 10 at 32k and 48k iterations, and terminate training at 64k iterations, which is determined on a 45k/5k train/val split. We follow the simple data augmentation in for training: 4 pixels are padded on each side, and a 32×\\times32 crop is randomly sampled from the padded image or its horizontal flip. For testing, we only evaluate the single view of the original 32×\\times32 image. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_46", "text": " We compare n={3,5,7,9}𝑛3579n=\\{3,5,7,9\\}, leading to 20, 32, 44, and 56-layer networks. Fig. 6 (left) shows the behaviors of the plain nets. The deep plain nets suffer from increased depth, and exhibit higher training error when going deeper. This phenomenon is similar to that on ImageNet (Fig. 4, left) and on MNIST (see ), suggesting that such an optimization difficulty is a fundamental problem. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_47", "text": " Fig. 6 (middle) shows the behaviors of ResNets. Also similar to the ImageNet cases (Fig. 4, right), our ResNets manage to overcome the optimization difficulty and demonstrate accuracy gains when the depth increases. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_48", "text": " We further explore n=18𝑛18n=18 that leads to a 110-layer ResNet. In this case, we find that the initial learning rate of 0.1 is slightly too large to start converging555With an initial learning rate of 0.1, it starts converging (<<90% error) after several epochs, but still reaches similar accuracy.. So we use 0.01 to warm up the training until the training error is below 80% (about 400 iterations), and then go back to 0.1 and continue training. The rest of the learning schedule is as done previously. 
This 110-layer network converges well (Fig. 6, middle). It has fewer parameters than other deep and thin networks such as FitNet and Highway (Table 6), yet is among the state-of-the-art results (6.43%, Table 6). ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_49", "text": " Analysis of Layer Responses. Fig. 7 shows the standard deviations (std) of the layer responses. The responses are the outputs of each 3×\\times3 layer, after BN and before other nonlinearity (ReLU/addition). For ResNets, this analysis reveals the response strength of the residual functions. Fig. 7 shows that ResNets have generally smaller responses than their plain counterparts. These results support our basic motivation (Sec.3.1) that the residual functions might be generally closer to zero than the non-residual functions. We also notice that the deeper ResNet has smaller magnitudes of responses, as evidenced by the comparisons among ResNet-20, 56, and 110 in Fig. 7. When there are more layers, an individual layer of ResNets tends to modify the signal less. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_50", "text": " Exploring Over 1000 layers. We explore an aggressively deep model of over 1000 layers. We set n=200𝑛200n=200 that leads to a 1202-layer network, which is trained as described above. Our method shows no optimization difficulty, and this 103superscript10310^{3}-layer network is able to achieve training error <<0.1% (Fig. 6, right). Its test error is still fairly good (7.93%, Table 6). ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_51", "text": " But there are still open problems on such aggressively deep models. The testing result of this 1202-layer network is worse than that of our 110-layer network, although both have similar training error. We argue that this is because of overfitting. The 1202-layer network may be unnecessarily large (19.4M) for this small dataset. Strong regularization such as maxout or dropout is applied to obtain the best results ((10, 25, 24, 35)) on this dataset. In this paper, we use no maxout/dropout and just simply impose regularization via deep and thin architectures by design, without distracting from the focus on the difficulties of optimization. But combining with stronger regularization may improve results, which we will study in the future. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_52", "text": " Our method has good generalization performance on other recognition tasks. Table 8 and  8 show the object detection baseline results on PASCAL VOC 2007 and 2012 and COCO . We adopt Faster R-CNN as the detection method. Here we are interested in the improvements of replacing VGG-16 with ResNet-101. The detection implementation (see appendix) of using both models is the same, so the gains can only be attributed to better networks. Most remarkably, on the challenging COCO dataset we obtain a 6.0% increase in COCO’s standard metric (mAP@(.5, .95)), which is a 28% relative improvement. This gain is solely due to the learned representations. ", "title": "Deep Residual Learning for Image Recognition" }, { "id": "1512.03385_all_53", "text": " Based on deep residual nets, we won the 1st places in several tracks in ILSVRC & COCO 2015 competitions: ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. The details are in the appendix. ", "title": "Deep Residual Learning for Image Recognition" } ]
Is weak supervision a subset or type of regular supervised learning?
In this work, fine-tuning an LM only on labeled examples is considered to be supervised learning [41].
[ 41 ]
[ { "id": "1801.06146_all_0", "text": " Inductive transfer learning has had a large impact on computer vision (CV). Applied CV models (including object detection, classification, and segmentation) are rarely trained from scratch, but instead are fine-tuned from models that have been pretrained on ImageNet, MS-COCO, and other datasets Sharif Razavian et al. (2014); Long et al. (2015a); He et al. (2016); Huang et al. (2017). ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_1", "text": " Text classification is a category of Natural Language Processing (NLP) tasks with real-world applications such as spam, fraud, and bot detection Jindal and Liu (2007); Ngai et al. (2011); Chu et al. (2012), emergency response Caragea et al. (2011), and commercial document classification, such as for legal discovery Roitblat et al. (2010). ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_2", "text": " While Deep Learning models have achieved state-of-the-art on many NLP tasks, these models are trained from scratch, requiring large datasets, and days to converge. Research in NLP focused mostly on transductive transfer Blitzer et al. (2007). For inductive transfer, fine-tuning pretrained word embeddings Mikolov et al. (2013), a simple transfer technique that only targets a model’s first layer, has had a large impact in practice and is used in most state-of-the-art models. Recent approaches that concatenate embeddings derived from other tasks with the input at different layers Peters et al. (2017); McCann et al. (2017); Peters et al. (2018) still train the main task model from scratch and treat pretrained embeddings as fixed parameters, limiting their usefulness. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_3", "text": " In light of the benefits of pretraining Erhan et al. (2010), we should be able to do better than randomly initializing the remaining parameters of our models. However, inductive transfer via fine-tuning has been unsuccessful for NLP Mou et al. (2016). Dai and Le (2015) first proposed fine-tuning a language model (LM) but require millions of in-domain documents to achieve good performance, which severely limits its applicability. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_4", "text": " We show that not the idea of LM fine-tuning but our lack of knowledge of how to train them effectively has been hindering wider adoption. LMs overfit to small datasets and suffered catastrophic forgetting when fine-tuned with a classifier. Compared to CV, NLP models are typically more shallow and thus require different fine-tuning methods. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_5", "text": " We propose a new method, Universal Language Model Fine-tuning (ULMFiT) that addresses these issues and enables robust inductive transfer learning for any NLP task, akin to fine-tuning ImageNet models: The same 3-layer LSTM architecture—with the same hyperparameters and no additions other than tuned dropout hyperparameters—outperforms highly engineered models and transfer learning approaches on six widely studied text classification tasks. On IMDb, with 100100100 labeled examples, ULMFiT matches the performance of training from scratch with 10×10\\times and—given 505050k unlabeled examples—with 100×100\\times more data. 
", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_6", "text": " Our contributions are the following: 1) We propose Universal Language Model Fine-tuning (ULMFiT), a method that can be used to achieve CV-like transfer learning for any task for NLP. 2) We propose discriminative fine-tuning, slanted triangular learning rates, and gradual unfreezing, novel techniques to retain previous knowledge and avoid catastrophic forgetting during fine-tuning. 3) We significantly outperform the state-of-the-art on six representative text classification datasets, with an error reduction of 18-24% on the majority of datasets. 4) We show that our method enables extremely sample-efficient transfer learning and perform an extensive ablation analysis. 5) We make the pretrained models and our code available to enable wider adoption. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_7", "text": " Features in deep neural networks in CV have been observed to transition from general to task-specific from the first to the last layer Yosinski et al. (2014). For this reason, most work in CV focuses on transferring the first layers of the model Long et al. (2015b). Sharif Razavian et al. (2014) achieve state-of-the-art results using features of an ImageNet model as input to a simple classifier. In recent years, this approach has been superseded by fine-tuning either the last Donahue et al. (2014) or several of the last layers of a pretrained model and leaving the remaining layers frozen Long et al. (2015a). ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_8", "text": " In NLP, only recently have methods been proposed that go beyond transferring word embeddings. The prevailing approach is to pretrain embeddings that capture additional context via other tasks. Embeddings at different levels are then used as features, concatenated either with the word embeddings or with the inputs at intermediate layers. This method is known as hypercolumns Hariharan et al. (2015) in CV333A hypercolumn at a pixel in CV is the vector of activations of all CNN units above that pixel. In analogy, a hypercolumn for a word or sentence in NLP is the concatenation of embeddings at different layers in a pretrained model. and is used by Peters et al. (2017), Peters et al. (2018), Wieting and Gimpel (2017), Conneau et al. (2017), and McCann et al. (2017) who use language modeling, paraphrasing, entailment, and Machine Translation (MT) respectively for pretraining. Specifically, Peters et al. (2018) require engineered custom architectures, while we show state-of-the-art performance with the same basic architecture across a range of tasks. In CV, hypercolumns have been nearly entirely superseded by end-to-end fine-tuning Long et al. (2015a). ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_9", "text": " A related direction is multi-task learning (MTL) Caruana (1993). This is the approach taken by Rei (2017) and Liu et al. (2018) who add a language modeling objective to the model that is trained jointly with the main task model. MTL requires the tasks to be trained from scratch every time, which makes it inefficient and often requires careful weighting of the task-specific objective functions Chen et al. (2017). 
", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_10", "text": " Fine-tuning has been used successfully to transfer between similar tasks, e.g. in QA Min et al. (2017), for distantly supervised sentiment analysis Severyn and Moschitti (2015), or MT domains Sennrich et al. (2015) but has been shown to fail between unrelated ones Mou et al. (2016). Dai and Le (2015) also fine-tune a language model, but overfit with 101010k labeled examples and require millions of in-domain documents for good performance. In contrast, ULMFiT leverages general-domain pretraining and novel fine-tuning techniques to prevent overfitting even with only 100100100 labeled examples and achieves state-of-the-art results also on small datasets. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_11", "text": " We are interested in the most general inductive transfer learning setting for NLP Pan and Yang (2010): Given a static source task 𝒯Ssubscript𝒯𝑆\\mathcal{T}_{S} and any target task 𝒯Tsubscript𝒯𝑇\\mathcal{T}_{T} with 𝒯S≠𝒯Tsubscript𝒯𝑆subscript𝒯𝑇\\mathcal{T}_{S}\\neq\\mathcal{T}_{T}, we would like to improve performance on 𝒯Tsubscript𝒯𝑇\\mathcal{T}_{T}. Language modeling can be seen as the ideal source task and a counterpart of ImageNet for NLP: It captures many facets of language relevant for downstream tasks, such as long-term dependencies Linzen et al. (2016), hierarchical relations Gulordava et al. (2018), and sentiment Radford et al. (2017). In contrast to tasks like MT McCann et al. (2017) and entailment Conneau et al. (2017), it provides data in near-unlimited quantities for most domains and languages. Additionally, a pretrained LM can be easily adapted to the idiosyncrasies of a target task, which we show significantly improves performance (see Section 5). Moreover, language modeling already is a key component of existing tasks such as MT and dialogue modeling. Formally, language modeling induces a hypothesis space ℋℋ\\mathcal{H} that should be useful for many other NLP tasks Vapnik and Kotz (1982); Baxter (2000). ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_12", "text": " We propose Universal Language Model Fine-tuning (ULMFiT), which pretrains a language model (LM) on a large general-domain corpus and fine-tunes it on the target task using novel techniques. The method is universal in the sense that it meets these practical criteria: 1) It works across tasks varying in document size, number, and label type; 2) it uses a single architecture and training process; 3) it requires no custom feature engineering or preprocessing; and 4) it does not require additional in-domain documents or labels. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_13", "text": " In our experiments, we use the state-of-the-art language model AWD-LSTM Merity et al. (2017a), a regular LSTM (with no attention, short-cut connections, or other sophisticated additions) with various tuned dropout hyperparameters. Analogous to CV, we expect that downstream performance can be improved by using higher-performance language models in the future. 
", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_14", "text": " ULMFiT consists of the following steps, which we show in Figure 1: a) General-domain LM pretraining (§3.1); b) target task LM fine-tuning (§3.2); and c) target task classifier fine-tuning (§3.3). We discuss these in the following sections. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_15", "text": " An ImageNet-like corpus for language should be large and capture general properties of language. We pretrain the language model on Wikitext-103 Merity et al. (2017b) consisting of 28,595 preprocessed Wikipedia articles and 103 million words. Pretraining is most beneficial for tasks with small datasets and enables generalization even with 100100100 labeled examples. We leave the exploration of more diverse pretraining corpora to future work, but expect that they would boost performance. While this stage is the most expensive, it only needs to be performed once and improves performance and convergence of downstream models. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_16", "text": " No matter how diverse the general-domain data used for pretraining is, the data of the target task will likely come from a different distribution. We thus fine-tune the LM on data of the target task. Given a pretrained general-domain LM, this stage converges faster as it only needs to adapt to the idiosyncrasies of the target data, and it allows us to train a robust LM even for small datasets. We propose discriminative fine-tuning and slanted triangular learning rates for fine-tuning the LM, which we introduce in the following. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_17", "text": " As different layers capture different types of information Yosinski et al. (2014), they should be fine-tuned to different extents. To this end, we propose a novel fine-tuning method, discriminative fine-tuning444 An unrelated method of the same name exists for deep Boltzmann machines Salakhutdinov and Hinton (2009).. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_18", "text": " Instead of using the same learning rate for all layers of the model, discriminative fine-tuning allows us to tune each layer with different learning rates. For context, the regular stochastic gradient descent (SGD) update of a model’s parameters θ𝜃\\theta at time step t𝑡t looks like the following Ruder (2016): θt=θt−1−η⋅∇θJ​(θ)subscript𝜃𝑡subscript𝜃𝑡1⋅𝜂subscript∇𝜃𝐽𝜃\\theta_{t}=\\theta_{t-1}-\\eta\\cdot\\nabla_{\\theta}J(\\theta) (1) where η𝜂\\eta is the learning rate and ∇θJ​(θ)subscript∇𝜃𝐽𝜃\\nabla_{\\theta}J(\\theta) is the gradient with regard to the model’s objective function. For discriminative fine-tuning, we split the parameters θ𝜃\\theta into {θ1,…,θL}superscript𝜃1…superscript𝜃𝐿\\{\\theta^{1},\\ldots,\\theta^{L}\\} where θlsuperscript𝜃𝑙\\theta^{l} contains the parameters of the model at the l𝑙l-th layer and L𝐿L is the number of layers of the model. Similarly, we obtain {η1,…,ηL}superscript𝜂1…superscript𝜂𝐿\\{\\eta^{1},\\ldots,\\eta^{L}\\} where ηlsuperscript𝜂𝑙\\eta^{l} is the learning rate of the l𝑙l-th layer. 
", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_19", "text": " The SGD update with discriminative fine-tuning is then the following: θtl=θt−1l−ηl⋅∇θlJ​(θ)superscriptsubscript𝜃𝑡𝑙superscriptsubscript𝜃𝑡1𝑙⋅superscript𝜂𝑙subscript∇superscript𝜃𝑙𝐽𝜃\\theta_{t}^{l}=\\theta_{t-1}^{l}-\\eta^{l}\\cdot\\nabla_{\\theta^{l}}J(\\theta) (2) We empirically found it to work well to first choose the learning rate ηLsuperscript𝜂𝐿\\eta^{L} of the last layer by fine-tuning only the last layer and using ηl−1=ηl/2.6superscript𝜂𝑙1superscript𝜂𝑙2.6\\eta^{l-1}=\\eta^{l}/2.6 as the learning rate for lower layers. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_20", "text": " For adapting its parameters to task-specific features, we would like the model to quickly converge to a suitable region of the parameter space in the beginning of training and then refine its parameters. Using the same learning rate (LR) or an annealed learning rate throughout training is not the best way to achieve this behaviour. Instead, we propose slanted triangular learning rates (STLR), which first linearly increases the learning rate and then linearly decays it according to the following update schedule, which can be seen in Figure 2: c​u​t=⌊T⋅c​u​t​_​f​r​a​c⌋p={t/c​u​t,if​t<c​u​t1−t−c​u​tc​u​t⋅(1/c​u​t​_​f​r​a​c−1),otherwiseηt=ηm​a​x⋅1+p⋅(r​a​t​i​o−1)r​a​t​i​o𝑐𝑢𝑡⋅𝑇𝑐𝑢𝑡_𝑓𝑟𝑎𝑐𝑝cases𝑡𝑐𝑢𝑡if𝑡𝑐𝑢𝑡1𝑡𝑐𝑢𝑡⋅𝑐𝑢𝑡1𝑐𝑢𝑡_𝑓𝑟𝑎𝑐1otherwisesubscript𝜂𝑡⋅subscript𝜂𝑚𝑎𝑥1⋅𝑝𝑟𝑎𝑡𝑖𝑜1𝑟𝑎𝑡𝑖𝑜\\begin{split}cut&=\\lfloor T\\cdot cut\\_frac\\rfloor\\\\ p&=\\begin{cases}t/cut,&\\text{if}\\ t<cut\\\\ 1-\\frac{t-cut}{cut\\cdot(1/cut\\_frac-1)},&\\text{otherwise}\\end{cases}\\\\ \\eta_{t}&=\\eta_{max}\\cdot\\frac{1+p\\cdot(ratio-1)}{ratio}\\end{split} (3) where T𝑇T is the number of training iterations555In other words, the number of epochs times the number of updates per epoch., c​u​t​_​f​r​a​c𝑐𝑢𝑡_𝑓𝑟𝑎𝑐cut\\_frac is the fraction of iterations we increase the LR, c​u​t𝑐𝑢𝑡cut is the iteration when we switch from increasing to decreasing the LR, p𝑝p is the fraction of the number of iterations we have increased or will decrease the LR respectively, r​a​t​i​o𝑟𝑎𝑡𝑖𝑜ratio specifies how much smaller the lowest LR is from the maximum LR ηm​a​xsubscript𝜂𝑚𝑎𝑥\\eta_{max}, and ηtsubscript𝜂𝑡\\eta_{t} is the learning rate at iteration t𝑡t. We generally use c​u​t​_​f​r​a​c=0.1𝑐𝑢𝑡_𝑓𝑟𝑎𝑐0.1cut\\_frac=0.1, r​a​t​i​o=32𝑟𝑎𝑡𝑖𝑜32ratio=32 and ηm​a​x=0.01subscript𝜂𝑚𝑎𝑥0.01\\eta_{max}=0.01. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_21", "text": " STLR modifies triangular learning rates Smith (2017) with a short increase and a long decay period, which we found key for good performance.666We also credit personal communication with the author. In Section 5, we compare against aggressive cosine annealing, a similar schedule that has recently been used to achieve state-of-the-art performance in CV Loshchilov and Hutter (2017).777While Loshchilov and Hutter (2017) use multiple annealing cycles, we generally found one cycle to work best. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_22", "text": " Finally, for fine-tuning the classifier, we augment the pretrained language model with two additional linear blocks. 
Following standard practice for CV classifiers, each block uses batch normalization Ioffe and Szegedy (2015) and dropout, with ReLU activations for the intermediate layer and a softmax activation that outputs a probability distribution over target classes at the last layer. Note that the parameters in these task-specific classifier layers are the only ones that are learned from scratch. The first linear layer takes as the input the pooled last hidden layer states. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_23", "text": " The signal in text classification tasks is often contained in a few words, which may occur anywhere in the document. As input documents can consist of hundreds of words, information may get lost if we only consider the last hidden state of the model. For this reason, we concatenate the hidden state at the last time step 𝐡Tsubscript𝐡𝑇\\mathbf{h}_{T} of the document with both the max-pooled and the mean-pooled representation of the hidden states over as many time steps as fit in GPU memory 𝐇={𝐡1,…,𝐡T}𝐇subscript𝐡1…subscript𝐡𝑇\\mathbf{H}=\\{\\mathbf{h}_{1},\\ldots,\\mathbf{h}_{T}\\}: 𝐡c=(𝐡T,𝚖𝚊𝚡𝚙𝚘𝚘𝚕​(𝐇),𝚖𝚎𝚊𝚗𝚙𝚘𝚘𝚕​(𝐇))subscript𝐡𝑐subscript𝐡𝑇𝚖𝚊𝚡𝚙𝚘𝚘𝚕𝐇𝚖𝚎𝚊𝚗𝚙𝚘𝚘𝚕𝐇\\mathbf{h}_{c}=(\\mathbf{h}_{T},\\mathtt{maxpool}(\\mathbf{H}),\\mathtt{meanpool}(\\mathbf{H})) (4) where ()() is concatenation. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_24", "text": " Fine-tuning the target classifier is the most critical part of the transfer learning method. Overly aggressive fine-tuning will cause catastrophic forgetting, eliminating the benefit of the information captured through language modeling; too cautious fine-tuning will lead to slow convergence (and resultant overfitting). Besides discriminative fine-tuning and triangular learning rates, we propose gradual unfreezing for fine-tuning the classifier. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_25", "text": " Rather than fine-tuning all layers at once, which risks catastrophic forgetting, we propose to gradually unfreeze the model starting from the last layer as this contains the least general knowledge Yosinski et al. (2014): We first unfreeze the last layer and fine-tune all unfrozen layers for one epoch. We then unfreeze the next lower frozen layer and repeat, until we fine-tune all layers until convergence at the last iteration. This is similar to ‘chain-thaw’ Felbo et al. (2017), except that we add a layer at a time to the set of ‘thawed’ layers, rather than only training a single layer at a time. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_26", "text": " While discriminative fine-tuning, slanted triangular learning rates, and gradual unfreezing all are beneficial on their own, we show in Section 5 that they complement each other and enable our method to perform well across diverse datasets. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_27", "text": " Language models are trained with backpropagation through time (BPTT) to enable gradient propagation for large input sequences. In order to make fine-tuning a classifier for large documents feasible, we propose BPTT for Text Classification (BPT3C): We divide the document into fixed-length batches of size b𝑏b. 
At the beginning of each batch, the model is initialized with the final state of the previous batch; we keep track of the hidden states for mean and max-pooling; gradients are back-propagated to the batches whose hidden states contributed to the final prediction. In practice, we use variable length backpropagation sequences Merity et al. (2017a). ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_28", "text": " Similar to existing work Peters et al. (2017, 2018), we are not limited to fine-tuning a unidirectional language model. For all our experiments, we pretrain both a forward and a backward LM. We fine-tune a classifier for each LM independently using BPT3C and average the classifier predictions. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_29", "text": " While our approach is equally applicable to sequence labeling tasks, we focus on text classification tasks in this work due to their important real-world applications. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_30", "text": " We evaluate our method on six widely-studied datasets, with varying numbers of documents and varying document length, used by state-of-the-art text classification and transfer learning approaches Johnson and Zhang (2017); McCann et al. (2017) as instances of three common text classification tasks: sentiment analysis, question classification, and topic classification. We show the statistics for each dataset and task in Table 1. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_31", "text": " For sentiment analysis, we evaluate our approach on the binary movie review IMDb dataset Maas et al. (2011) and on the binary and five-class version of the Yelp review dataset compiled by Zhang et al. (2015). ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_32", "text": " We use the six-class version of the small TREC dataset Voorhees and Tice (1999) dataset of open-domain, fact-based questions divided into broad semantic categories. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_33", "text": " For topic classification, we evaluate on the large-scale AG news and DBpedia ontology datasets created by Zhang et al. (2015). ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_34", "text": " We use the same pre-processing as in earlier work Johnson and Zhang (2017); McCann et al. (2017). In addition, to allow the language model to capture aspects that might be relevant for classification, we add special tokens for upper-case words, elongation, and repetition. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_35", "text": " We are interested in a model that performs robustly across a diverse set of tasks. To this end, if not mentioned otherwise, we use the same set of hyperparameters across tasks, which we tune on the IMDb validation set. We use the AWD-LSTM language model Merity et al. (2017a) with an embedding size of 400400400, 333 layers, 115011501150 hidden activations per layer, and a BPTT batch size of 707070. 
We apply dropout of 0.40.40.4 to layers, 0.30.30.3 to RNN layers, 0.40.40.4 to input embedding layers, 0.050.050.05 to embedding layers, and weight dropout of 0.50.50.5 to the RNN hidden-to-hidden matrix. The classifier has a hidden layer of size 505050. We use Adam with β1=0.7subscript𝛽10.7\\beta_{1}=0.7 instead of the default β1=0.9subscript𝛽10.9\\beta_{1}=0.9 and β2=0.99subscript𝛽20.99\\beta_{2}=0.99, similar to Dozat and Manning (2017). We use a batch size of 646464, a base learning rate of 0.0040.0040.004 and 0.010.010.01 for fine-tuning the LM and the classifier respectively, and tune the number of epochs on the validation set of each task888On small datasets such as TREC-6, we fine-tune the LM only for 151515 epochs without overfitting, while we can fine-tune longer on larger datasets. We found 505050 epochs to be a good default for fine-tuning the classifier.. We otherwise use the same practices used in Merity et al. (2017a). ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_36", "text": " For each task, we compare against the current state-of-the-art. For the IMDb and TREC-6 datasets, we compare against CoVe McCann et al. (2017), a state-of-the-art transfer learning method for NLP. For the AG, Yelp, and DBpedia datasets, we compare against the state-of-the-art text categorization method by Johnson and Zhang (2017). ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_37", "text": " For consistency, we report all results as error rates (lower is better). We show the test error rates on the IMDb and TREC-6 datasets used by McCann et al. (2017) in Table 2. Our method outperforms both CoVe, a state-of-the-art transfer learning method based on hypercolumns, as well as the state-of-the-art on both datasets. On IMDb, we reduce the error dramatically by 43.9% and 22% with regard to CoVe and the state-of-the-art respectively. This is promising as the existing state-of-the-art requires complex architectures Peters et al. (2018), multiple forms of attention McCann et al. (2017) and sophisticated embedding schemes Johnson and Zhang (2016), while our method employs a regular LSTM with dropout. We note that the language model fine-tuning approach of Dai and Le (2015) only achieves an error of 7.64 vs. 4.6 for our method on IMDb, demonstrating the benefit of transferring knowledge from a large ImageNet-like corpus using our fine-tuning techniques. IMDb in particular is reflective of real-world datasets: Its documents are generally a few paragraphs long—similar to emails (e.g for legal discovery) and online comments (e.g for community management); and sentiment analysis is similar to many commercial applications, e.g. product response tracking and support email routing. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_38", "text": " On TREC-6, our improvement—similar as the improvements of state-of-the-art approaches—is not statistically significant, due to the small size of the 500-examples test set. Nevertheless, the competitive performance on TREC-6 demonstrates that our model performs well across different dataset sizes and can deal with examples that range from single sentences—in the case of TREC-6—to several paragraphs for IMDb. Note that despite pretraining on more than two orders of magnitude less data than the 7 million sentence pairs used by McCann et al. (2017), we consistently outperform their approach on both datasets. 
", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_39", "text": " We show the test error rates on the larger AG, DBpedia, Yelp-bi, and Yelp-full datasets in Table 3. Our method again outperforms the state-of-the-art significantly. On AG, we observe a similarly dramatic error reduction by 23.7% compared to the state-of-the-art. On DBpedia, Yelp-bi, and Yelp-full, we reduce the error by 4.8%, 18.2%, 2.0% respectively. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_40", "text": " In order to assess the impact of each contribution, we perform a series of analyses and ablations. We run experiments on three corpora, IMDb, TREC-6, and AG that are representative of different tasks, genres, and sizes. For all experiments, we split off 10%percent1010\\% of the training set and report error rates on this validation set with unidirectional LMs. We fine-tune the classifier for 505050 epochs and train all methods but ULMFiT with early stopping. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_41", "text": " One of the main benefits of transfer learning is being able to train a model for a task with a small number of labels. We evaluate ULMFiT on different numbers of labeled examples in two settings: only labeled examples are used for LM fine-tuning (‘supervised’); and all task data is available and can be used to fine-tune the LM (‘semi-supervised’). We compare ULMFiT to training from scratch—which is necessary for hypercolumn-based approaches. We split off balanced fractions of the training data, keep the validation set fixed, and use the same hyperparameters as before. We show the results in Figure 3. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_42", "text": " On IMDb and AG, supervised ULMFiT with only 100100100 labeled examples matches the performance of training from scratch with 10×10\\times and 20×20\\times more data respectively, clearly demonstrating the benefit of general-domain LM pretraining. If we allow ULMFiT to also utilize unlabeled examples (505050k for IMDb, 100100100k for AG), at 100100100 labeled examples, we match the performance of training from scratch with 50×50\\times and 100×100\\times more data on AG and IMDb respectively. On TREC-6, ULMFiT significantly improves upon training from scratch; as examples are shorter and fewer, supervised and semi-supervised ULMFiT achieve similar results. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_43", "text": " We compare using no pretraining with pretraining on WikiText-103 Merity et al. (2017b) in Table 4. Pretraining is most useful for small and medium-sized datasets, which are most common in commercial applications. However, even for large datasets, pretraining improves performance. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_44", "text": " In order to gauge the importance of choosing an appropriate LM, we compare a vanilla LM with the same hyperparameters without any dropout999To avoid overfitting, we only train the vanilla LM classifier for 555 epochs and keep dropout of 0.40.40.4 in the classifier. with the AWD-LSTM LM with tuned dropout parameters in Table 5. Using our fine-tuning techniques, even a regular LM reaches surprisingly good performance on the larger datasets. 
On the smaller TREC-6, a vanilla LM without dropout runs the risk of overfitting, which decreases performance. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_45", "text": " We compare no fine-tuning against fine-tuning the full model Erhan et al. (2010) (‘Full’), the most commonly used fine-tuning method, with and without discriminative fine-tuning (‘Discr’) and slanted triangular learning rates (‘Stlr’) in Table 6. Fine-tuning the LM is most beneficial for larger datasets. ‘Discr’ and ‘Stlr’ improve performance across all three datasets and are necessary on the smaller TREC-6, where regular fine-tuning is not beneficial. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_46", "text": " We compare training from scratch, fine-tuning the full model (‘Full’), only fine-tuning the last layer (‘Last’) Donahue et al. (2014), ‘Chain-thaw’ Felbo et al. (2017), and gradual unfreezing (‘Freez’). We furthermore assess the importance of discriminative fine-tuning (‘Discr’) and slanted triangular learning rates (‘Stlr’). We compare the latter to an alternative, aggressive cosine annealing schedule (‘Cos’) Loshchilov and Hutter (2017). We use a learning rate ηL=0.01superscript𝜂𝐿0.01\\eta^{L}=0.01 for ‘Discr’, learning rates of 0.0010.0010.001 and 0.00010.00010.0001 for the last and all other layers respectively for ‘Chain-thaw’ as in Felbo et al. (2017), and a learning rate of 0.0010.0010.001 otherwise. We show the results in Table 7. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_47", "text": " Fine-tuning the classifier significantly improves over training from scratch, particularly on the small TREC-6. ‘Last’, the standard fine-tuning method in CV, severely underfits and is never able to lower the training error to 00. ‘Chain-thaw’ achieves competitive performance on the smaller datasets, but is outperformed significantly on the large AG. ‘Freez’ provides similar performance as ‘Full’. ‘Discr’ consistently boosts the performance of ‘Full’ and ‘Freez’, except for the large AG. Cosine annealing is competitive with slanted triangular learning rates on large data, but under-performs on smaller datasets. Finally, full ULMFiT classifier fine-tuning (bottom row) achieves the best performance on IMDB and TREC-6 and competitive performance on AG. Importantly, ULMFiT is the only method that shows excellent performance across the board—and is therefore the only universal method. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_48", "text": " While our results demonstrate that how we fine-tune the classifier makes a significant difference, fine-tuning for inductive transfer is currently under-explored in NLP as it mostly has been thought to be unhelpful Mou et al. (2016). To better understand the fine-tuning behavior of our model, we compare the validation error of the classifier fine-tuned with ULMFiT and ‘Full’ during training in Figure 4. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_49", "text": " On all datasets, fine-tuning the full model leads to the lowest error comparatively early in training, e.g. already after the first epoch on IMDb. The error then increases as the model starts to overfit and knowledge captured through pretraining is lost. 
In contrast, ULMFiT is more stable and suffers from no such catastrophic forgetting; performance remains similar or improves until late epochs, which shows the positive effect of the learning rate schedule. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_50", "text": " At the cost of training a second model, ensembling the predictions of a forward and backwards LM-classifier brings a performance boost of around 0.50.50.5–0.70.70.7. On IMDb we lower the test error from 5.305.305.30 of a single model to 4.584.584.58 for the bidirectional model. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_51", "text": " While we have shown that ULMFiT can achieve state-of-the-art performance on widely used text classification tasks, we believe that language model fine-tuning will be particularly useful in the following settings compared to existing transfer learning approaches Conneau et al. (2017); McCann et al. (2017); Peters et al. (2018): a) NLP for non-English languages, where training data for supervised pretraining tasks is scarce; b) new NLP tasks where no state-of-the-art architecture exists; and c) tasks with limited amounts of labeled data (and some amounts of unlabeled data). ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_52", "text": " Given that transfer learning and particularly fine-tuning for NLP is under-explored, many future directions are possible. One possible direction is to improve language model pretraining and fine-tuning and make them more scalable: for ImageNet, predicting far fewer classes only incurs a small performance drop Huh et al. (2016), while recent work shows that an alignment between source and target task label sets is important Mahajan et al. (2018)—focusing on predicting a subset of words such as the most frequent ones might retain most of the performance while speeding up training. Language modeling can also be augmented with additional tasks in a multi-task learning fashion Caruana (1993) or enriched with additional supervision, e.g. syntax-sensitive dependencies Linzen et al. (2016) to create a model that is more general or better suited for certain downstream tasks, ideally in a weakly-supervised manner to retain its universal properties. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_53", "text": " Another direction is to apply the method to novel tasks and models. While an extension to sequence labeling is straightforward, other tasks with more complex interactions such as entailment or question answering may require novel ways to pretrain and fine-tune. Finally, while we have provided a series of analyses and ablations, more studies are required to better understand what knowledge a pretrained language model captures, how this changes during fine-tuning, and what information different tasks require. ", "title": "Universal Language Model Fine-tuning for Text Classification" }, { "id": "1801.06146_all_54", "text": " We have proposed ULMFiT, an effective and extremely sample-efficient transfer learning method that can be applied to any NLP task. We have also proposed several novel fine-tuning techniques that in conjunction prevent catastrophic forgetting and enable robust learning across a diverse range of tasks. 
Our method significantly outperformed existing transfer learning techniques and the state-of-the-art on six representative text classification tasks. We hope that our results will catalyze new developments in transfer learning for NLP. ", "title": "Universal Language Model Fine-tuning for Text Classification" } ]
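The ULMFiT fine-tuning recipe quoted above combines discriminative per-layer learning rates (each lower layer tuned at the rate of the layer above divided by 2.6) with the slanted triangular learning rate (STLR) schedule. Below is a small illustrative Python sketch of both, using the defaults reported in the passage (cut_frac = 0.1, ratio = 32, eta_max = 0.01); the function names are hypothetical and not taken from the released code.

```python
import math

def stlr(t, T, cut_frac=0.1, ratio=32, eta_max=0.01):
    """Slanted triangular learning rate at iteration t of T: linear increase
    for the first cut_frac of iterations, then linear decay."""
    cut = math.floor(T * cut_frac)
    if t < cut:
        p = t / cut
    else:
        p = 1 - (t - cut) / (cut * (1 / cut_frac - 1))
    return eta_max * (1 + p * (ratio - 1)) / ratio

def discriminative_lrs(eta_last, num_layers, decay=2.6):
    """Per-layer learning rates: the last layer gets eta_last, each lower layer
    gets the rate above it divided by 2.6, as described in the passage."""
    return [eta_last / decay ** (num_layers - 1 - l) for l in range(num_layers)]

T = 1000
print([round(stlr(t, T), 5) for t in (0, 50, 100, 500, 999)])
# rises from eta_max/ratio to eta_max at t = cut, then decays slowly back down
print(discriminative_lrs(0.01, 4))  # lowest layer trains at 0.01 / 2.6**3
```

The schedule rises for the first 10% of iterations and then decays linearly toward eta_max / ratio, matching the short-increase, long-decay shape the passage describes as key for good performance.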
What is 'cumulative depth up to a specific stage'?
The total number of blocks starting from the very first block in the network up to the last block in a specific stage [17].
[ 17 ]
[ { "id": "2009.02009_all_0", "text": " As there are growing needs of deep learning applications based on convolutional neural network(CNN) in embedded systems, improving the accuracy of CNNs under a given set of constraints on latency and energy consumption has brought keen interest to researchers as a challenging problem in various areas. A popular hardware solution is to develop a hardware accelerator, called neural processing unit (NPU), that achieves higher performance per watt than CPUs or GPUs. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_1", "text": " For a given hardware platform, several software techniques have been proposed to accelerate CNNs by approximate computing since deep learning applications can tolerate a certain range of computation inaccuracy. Some examples in this software approach are filter pruning (Li et al., 2016), quantization (Park et al., 2017), low-rank approximation (Kim et al., 2015). Accelerating CNNs is helpful to improve the accuracy by running a more compute-intensive CNN with higher accuracy within a given time budget. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_2", "text": " On the other hand, various algorithmic solutions have been proposed to improve the CNN architecture by introducing new operations, optimizing the hyper-parameters, or searching for better network architecture. New operations such as depth-wise convolution(DWConv) (Chollet, 2017) and mobile inverted bottleneck (MBConv) (Sandler et al., 2018) have been developed to replace the regular full convolution. Recently, automated neural architecture search (NAS) emerges as the default technique to find a CNN architecture with higher accuracy than manually-designed architectures, particularly image classification. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_3", "text": " A NAS technique explores a predefined search space and estimates the performance for each candidate architecture to find an optimal one with the highest accuracy under a given latency constraint. Thus there are three factors that affect the performance of NAS, as shown in Figure 1: search space, search strategy, and performance estimation. The search space of a NAS technique is usually restricted by a supernet that defines the topology of the largest network to explore. Since the performance of a network depends on the hardware platform, the NAS technique needs to be customized to a given hardware platform. While numerous NAS techniques have been proposed with various search strategies recently, their assumed hardware platforms are mostly GPUs. In this paper, we present a customized NAS technique for an NPU, which produces a CNN architecture with a better accuracy-latency tradeoff than existing models. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_4", "text": " One of the most closely related work is the recently proposed NAS technique tailored for Google’s Edge-TPU (Gupta and Akin, 2020). While MBConv is widely used for GPU-aware NAS techniques, they prefer to use a single full convolution by fusing expansion layer and DWConv layer in some parts of the network, observing that the Edge-TPU runs the fused full convolution faster even though the required number of MAC (multiply-accumulate) operations is much larger. 
It confirms that the number of MAC operations is not a proper measure of latency, and platform-specific performance estimation is required. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_5", "text": " Since an NPU is much faster than a GPU, it enables us to explore the wider search space for NAS under a given latency constraint. Since there are many factors to define the search space, such as the number of layers, channels, kernel sizes, and so on, the search space grows exponentially as the allowed computation complexity grows. Hence, reducing the search space, as well as the search time, is very challenging for NPU-aware NAS techniques. While the aforementioned work for Google’s Edge TPU trains each architecture candidate from scratch to estimate the performance, it is not computationally efficient. In contrast, we adopt a fast differentiable hardware-aware One-Shot NAS, called Single-Path NAS (Stamoulis et al., 2019), in order to reduce the search time. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_6", "text": " Figure 2 shows an overview of the proposed NAS methodology that consists of three steps. In the first step, we change the supernet structure of the Single-Path NAS, which has a hierarchical structure based on MobileNetV2 (Sandler et al., 2018): A supernet structure consists of a series of stages that contain a series of blocks containing an MBConv micro-architecture inside. Since the network accuracy depends on the supernet structure, we make two extensions on the supernet structure to widen the search space. First, we allow stages to have a different number of blocks, called depth of the stage, considering the effect of stage depth on the accuracy and the latency. Second, we add parallel layers with different kernel sizes in each block, adopting the idea of mixed depthwise convolution (Tan and Le, 2019b) (MixConv). ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_7", "text": " With the extended supernet structure, we apply the Single-Path NAS, which is also extended to support the extended supernet structure. In this step, we assume a shorter latency constraint than the required to reduce the search space and the search time. The last step is to scale up the baseline CNN adopting the compound scaling technique proposed in  (Tan and Le, 2019a) until the latency constraint is met. The proposed NAS methodology is named as S3NAS since it consists of 3 steps: Supernet design, SinglePath NAS, and Scaling and post-processing. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_8", "text": " For accurate latency estimation, an analytical latency estimator is devised, based on a cycle-level NPU simulator that runs an entire CNN considering the memory access overhead accurately. Since the NPU assumed in this paper can execute depth-wise separable convolution (DWConv), squeeze-and-excitation (SE), and h-swish activation function efficiently, the proposed supernet prefers DWConv to regular convolution. Observing that the accuracy is improved by around 1% if SE and h-swish activation function are used, we add a post-processing phase after a CNN network is found by NAS to add SE layers and to replace ReLU to h-swish activation function. 
", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_9", "text": " Experiments show that the proposed NAS technique could improve the accuracy-latency tradeoff over existing SoTA CNN models. Our best model achieves 82.72% top-1 accuracy on ImageNet with 11.66ms latency without any special data augmentation. Note that the latency is estimated by cycle-accurate simulation. For a fair comparison with the related work, the latency of each compared network is also estimated with the same simulator. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_10", "text": " After an automated NAS technique based on reinforcement learning successfully found a better CNN architecture than manually-designed architectures (Zoph and Le, 2016), extensive research has been conducted to develop various NAS techniques based on reinforcement learning (Zoph et al., 2018; Tan et al., 2019). However, these NAS techniques are computationally intensive because they train each candidate architectures from scratch to estimate the goodness of it. Thus, one-shot neural architecture search approach (Pham et al., 2018) was introduced to reduce the search cost. In this approach, an over-parameterized super-model network is defined, and architecture search is performed by parameter optimization to reduce the complexity of the network. Gradient-based differentiable search has gained increasing popularity, and various NAS techniques have been proposed with different super-models and hyper-parameters (Pham et al., 2018; Guo et al., 2019; Chu et al., 2019; Liu et al., 2018; Cai et al., 2018). ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_11", "text": " Among diverse techniques to decrease the search cost, Single-Path NAS (Stamoulis et al., 2019) was recently proposed to find a good architecture faster than the existing differentiable NAS techniques. This technique is extended to broaden the search space by including the squeeze-and-excitation (SE) block in the search space (Stamoulis et al., 2020). Our work is grounded on the original Single-Path NAS technique. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_12", "text": " Finding a hardware-friendly neural architecture has been facilitated as NAS algorithm improved. MNASNet (Tan et al., 2019) added a latency term in the objective function to discover better architectures with a given latency constraint on their target hardware platform. EfficientNet (Tan and Le, 2019a), whose search method is similar to MNASNet, introduced a novel scaling method, called compound scaling, to find more accurate networks as the latency constraint or FLOPS increases. Instead of finding a network directly for a given long latency constraint, they scale up the depth and the width of a small network with shorter latency and the input image size in a balanced way. They could achieve a set of networks with state-of-the-art performance over a range of latency constraints. They removed SE blocks and swish activation function from their search space for hardware platforms that do not support them efficiently to name the resultant network as EfficientNet-lite. 
", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_13", "text": " While EfficientNet searches a set of networks over a range of latency constraints by scaling up, Once-For-All (Cai et al., 2019) network takes an opposite approach, scaling down. They first train a super-graph architecture by a novel method called progressive shrinking and search a sub-graph network that achieves good accuracy for a given latency constraint without re-training but cheap fine-tuning. They claim that a scaled-down network from the super-graph gives better accuracy than a network that is trained from scratch. They could find more accurate networks than EfficientNet for small latency constraints. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_14", "text": " To explore more efficient neural architectures on specific hardware, some NAS methods have proposed to define the design space of architecture exploration, tailored for the hardware platform. Gupta et al. (Gupta and Akin, 2020) devised a building block named fused inverted bottleneck convolution block and showed that this block is often more efficient than MBConv on their target NPU, Edge-TPU. They adopted compound scaling method to find high-performing architectures on Edge-TPU. Our work is closely related to this method. We devise a building block that consists of parallel DWConv layers with different kernel sizes, based on a preliminary experiment to find that it is better than the other alternative building blocks in terms of performance per latency (Tan and Le, 2019b). And we increase the search space by allowing stages to have a different number of blocks in the baseline supernet. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_15", "text": " A neural network typically consists of multiple stages, a sequence of blocks with the same number of output channels (width). There are studies on how to assign the number of blocks (depth) to each stage. Meng et al. (Meng et al., 2020) observed that the way of assigning depth to each stage affects the accuracy. Moreover, they argued that the good depth assignment of each stage could be inherited from the shallow ones as the total depth is increased, and proposed a layer-growing NAS method that could significantly reduce the search space. Furthermore, Radosavovic et al. (Radosavovic et al., 2020) discovered that among neural architectures with similar computational complexity, the ones whose stage width and depth have a quantized linear relationship tend to have higher accuracy. Based on similar observations, we apply this design principle to change the structure of the conventional One-Shot NAS supernet. In addition, we argue that placing more blocks in a stage with a larger width is beneficial. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_16", "text": " While the original DWConv block uses a single kernel size for depthwise convolution, mixing multiple kernel sizes for depthwise convolution was recently proposed, named as MixConv (Tan and Le, 2019b). Mixing multiple kernel sizes can be understood as having parallel branches inside a block. It is shown that MixConv is more efficient than ordinary DWConv (Tan and Le, 2019b). 
There exist some recent NAS methods (Mei et al., 2019; Chu et al., 2020) that also broaden their search space using DWConv with multiple kernel sizes to find better neural architectures. We adopt this approach in the supernet and formulate a differentiable latency model of this operation, enabling a latency-aware differentiable One-Shot NAS with MixConv. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_17", "text": " In this section, we will briefly review the Single-Path NAS technique and our target NPU. Before going further, we define some terminologies used in this paper, as shown in Figure 3. A neural architecture consists of stages at the top level. A stage consists of a sequence of blocks whose output feature maps have the same dimension. In the proposed supernet, a block is defined as MBConv that typically starts with 1×1 conv (expansion layer) and ends with 1×1 conv. Adopting the MixConv approach, the depthwise convolution layer consists of parallel superkernels whose kernel size will be determined during the NAS process. The width of block denotes the number of channels in the final output feature map of the block, and the width of stage is the width of the final block in the stage. We will call the total number of blocks starting from the very first block in the network up to the last block in a specific stage S, as the cumulative depth up to stage S. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_18", "text": " Differentiable NAS methods usually define architecture parameters to choose which convolution layer to use in the block, training each convolution layer independently. Single-Path NAS (Stamoulis et al., 2019) reduce the search cost by decreasing the number of trainable parameters by sharing the kernel weights between convolution layers. The key idea is designing an over-parameterized depthwise convolution kernel named superkernel, and letting each depthwise convolution kernel of candidate MBConvs directly inherit the weights of this superkernel. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_19", "text": " Let 𝐰k,esubscript𝐰𝑘𝑒\\mathbf{w}_{k,e} denote the depthwise convolution kernel of candidate MBConv with kernel size k and expansion ratio e (MBConvk,e). First, they introduce a large 𝐰5,6subscript𝐰56\\mathbf{w}_{5,6}, which is the DWConv kernel of MBConv5,6. Then, the inner core of 𝐰5,6subscript𝐰56\\mathbf{w}_{5,6} can be considered as 𝐰3,6subscript𝐰36\\mathbf{w}_{3,6}, a DWConv kernel of MBConv3,6. A superkernel containing these two kernel size options can be expressed as Figure 4: (1) 𝐰∗,6=𝐰3,6+𝟙​(use​kernel​size​ 5)⋅𝐰5\\3,6subscript𝐰6subscript𝐰36⋅1usekernelsize5subscript𝐰\\536\\mathbf{w}_{*,6}=\\mathbf{w}_{3,6}+\\mathbbm{1}(\\rm{use\\leavevmode\\nobreak\\ kernel\\leavevmode\\nobreak\\ size\\leavevmode\\nobreak\\ 5})\\cdot\\mathbf{w}_{5\\backslash 3,6} where 𝐰5\\3,esubscript𝐰\\53𝑒\\mathbf{w}_{5\\backslash 3,e} means the outer part, 𝐰5,e−𝐰3,esubscript𝐰5𝑒subscript𝐰3𝑒\\mathbf{w}_{5,e}-\\mathbf{w}_{3,e}. Next, they formulate conditions to determine the kernel size. They define a certain threshold value t𝑡t and compare the norm of the kernel weights with the threshold. If the norm of a subset weight is larger than the threshold, it remains in the supernet. To this end, Eq. 
(1) is changed as follows: (2) 𝐰∗,6​(tk=5)=𝐰3,6+𝟙​(∥𝐰5\\3,6∥2>tk=5)⋅𝐰5\\3,6subscript𝐰6subscript𝑡𝑘5subscript𝐰36⋅1superscriptdelimited-∥∥subscript𝐰\\5362subscript𝑡𝑘5subscript𝐰\\536\\mathbf{w}_{*,6}(t_{k=5})=\\mathbf{w}_{3,6}+\\mathbbm{1}(\\lVert\\mathbf{w}_{5\\backslash 3,6}\\rVert^{2}>t_{k=5})\\cdot\\mathbf{w}_{5\\backslash 3,6} ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_20", "text": " The threshold value is also trainable to be automatically chosen during training. To enable back-propagation, they relax 𝟙​(x>t)1𝑥𝑡\\mathbbm{1}(x>t) to σ​(x−t)𝜎𝑥𝑡\\sigma(x-t) when computing gradients. In addition, they optimize kernel weights and threshold values simultaneously. For a given tight search time, this method is shown to be more effective than the other methods (Stamoulis et al., 2020). ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_21", "text": " Moreover, we can vary the number of channels by varying the expansion ratio of each block: we can use only the first half channels of 𝐰5,6subscript𝐰56\\mathbf{w}_{5,6} and 𝐰3,6subscript𝐰36\\mathbf{w}_{3,6} as 𝐰5,3subscript𝐰53\\mathbf{w}_{5,3} and 𝐰3,3subscript𝐰33\\mathbf{w}_{3,3}, respectively. By defining another set of trainable thresholds, the following formula is defined to determine the expansion ratio: (3) 𝐰∗,∗​(te=3,te=6,tk=5)=𝟙​(∥𝐰∗,3​(tk=5)∥2>te=3)⋅𝐰∗,3​(tk=5)+𝟙​(∥𝐰∗,3​(tk=5)∥2>te=3)⋅𝟙​(∥𝐰∗,6\\3​(tk=5)∥2>te=6)⋅𝐰∗,6\\3​(tk=5)subscript𝐰subscript𝑡𝑒3subscript𝑡𝑒6subscript𝑡𝑘5⋅1superscriptdelimited-∥∥subscript𝐰3subscript𝑡𝑘52subscript𝑡𝑒3subscript𝐰3subscript𝑡𝑘5⋅⋅1superscriptdelimited-∥∥subscript𝐰3subscript𝑡𝑘52subscript𝑡𝑒31superscriptdelimited-∥∥subscript𝐰\\63subscript𝑡𝑘52subscript𝑡𝑒6subscript𝐰\\63subscript𝑡𝑘5\\mathbf{w}_{*,*}(t_{e=3},t_{e=6},t_{k=5})=\\mathbbm{1}(\\lVert\\mathbf{w}_{*,3}(t_{k=5})\\rVert^{2}>t_{e=3})\\cdot\\mathbf{w}_{*,3}(t_{k=5})+\\\\ \\mathbbm{1}(\\lVert\\mathbf{w}_{*,3}(t_{k=5})\\rVert^{2}>t_{e=3})\\cdot\\mathbbm{1}(\\lVert\\mathbf{w}_{*,6\\backslash 3}(t_{k=5})\\rVert^{2}>t_{e=6})\\cdot\\mathbf{w}_{*,6\\backslash 3}(t_{k=5}) where 𝐰k,6\\3subscript𝐰𝑘\\63\\mathbf{w}_{k,6\\backslash 3} means the remaining half of channels, 𝐰k,6−𝐰k,3subscript𝐰𝑘6subscript𝐰𝑘3\\mathbf{w}_{k,6}-\\mathbf{w}_{k,3}. Note that if te=3subscript𝑡𝑒3t_{e=3} is sufficiently large, all channels can be removed to make the block a plain skip connection. Thus, they replace the original depthwise convolution kernel of MBConv5,6 with 𝐰∗,∗subscript𝐰\\mathbf{w}_{*,*}, yielding a differentiable and searchable MBConv with respect to the kernel size and expansion ratio. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_22", "text": " They also design a differentiable latency-aware loss function to consider hardware latency in the search algorithm. 
To this end, they define a function to estimate latency as follows: ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_23", "text": " (4) Lel=𝟙(∥𝐰∗,3∥2>te=3)⋅(P5,3l+𝟙(∥𝐰∗,6\\3∥2>te=6)⋅(P5,6l−P5,3l))subscriptsuperscript𝐿𝑙𝑒⋅1superscriptdelimited-∥∥subscript𝐰32subscript𝑡𝑒3subscriptsuperscript𝑃𝑙53⋅1superscriptdelimited-∥∥subscript𝐰\\632subscript𝑡𝑒6subscriptsuperscript𝑃𝑙56subscriptsuperscript𝑃𝑙53\\begin{split}L^{l}_{e}=&\\mathbbm{1}(\\lVert\\mathbf{w}_{*,3}\\rVert^{2}>t_{e=3})\\cdot(P^{l}_{5,3}+\\\\ &\\mathbbm{1}(\\lVert\\mathbf{w}_{*,6\\backslash 3}\\rVert^{2}>t_{e=6})\\cdot(P^{l}_{5,6}-P^{l}_{5,3}))\\end{split} ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_24", "text": " (5) Ll=P3,6l/P5,6l⋅Lel+𝟙​(∥𝐰5\\3,6∥2>tk=5)⋅Lel⋅(1−P3,6l/P5,6l)superscript𝐿𝑙⋅subscriptsuperscript𝑃𝑙36subscriptsuperscript𝑃𝑙56subscriptsuperscript𝐿𝑙𝑒⋅1superscriptdelimited-∥∥subscript𝐰\\5362subscript𝑡𝑘5subscriptsuperscript𝐿𝑙𝑒1subscriptsuperscript𝑃𝑙36subscriptsuperscript𝑃𝑙56\\begin{split}L^{l}=&P^{l}_{3,6}/P^{l}_{5,6}\\cdot L^{l}_{e}+\\\\ &\\mathbbm{1}(\\lVert\\mathbf{w}_{5\\backslash 3,6}\\rVert^{2}>t_{k=5})\\cdot L^{l}_{e}\\cdot(1-P^{l}_{3,6}/P^{l}_{5,6})\\end{split} where Pk,elsubscriptsuperscript𝑃𝑙𝑘𝑒P^{l}_{k,e} is a profiled latency value for MBConvk,e for the l𝑙lth block in the supernet. Note that they used P3,6lsubscriptsuperscript𝑃𝑙36P^{l}_{3,6}, P5,3lsubscriptsuperscript𝑃𝑙53P^{l}_{5,3}, and P5,6lsubscriptsuperscript𝑃𝑙56P^{l}_{5,6} only to formulate Llsuperscript𝐿𝑙L^{l}, and the latency for MBConv3,3 is approximated using these values. Here is the latency-aware loss function designed: ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_25", "text": " (6) C​E+λ⋅l​o​g​(∑lLl)𝐶𝐸⋅𝜆𝑙𝑜𝑔subscript𝑙superscript𝐿𝑙CE+\\lambda\\cdot log(\\sum_{l}L^{l}) Finally, they search for a neural architecture in two phases. First, they train the supernet by randomly choosing one of the candidate subgraphs in each training step. In this phase, they use CrossEntropy loss only. Next, they enable latency-aware loss function and train the supernet with the loss function, to decide the threshold values. By doing this, they could get a high-quality neural architecture with only eight epochs of ImageNet training set.111In our implementation, we changed the probability of selecting each candidate MBConvs to be equal. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_26", "text": " Even though the proposed methodology can be applied to any type of NPU, the current implementation is made for an adder-tree type NPU, called MIDAP (Kang et al., 2019). It has a fully-pipelined micro-architecture that consists of separate hardware modules and memory modules for convolution, activation function, and various reduction operations. Since it enables us to make a fully static schedule of operations without resource contention in the data path, we can estimate the end-to-end latency of a CNN quite accurately analytically. Unexpected delay may incur from off-chip DRAM delay that is not fully hidden by double buffering. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_27", "text": " Another good feature of MIDAP is that it efficiently supports the following operations that would lower the MAC (multiply-accumulate) utilization in other NPUs that have many MAC units: pooling, DWConv, and squeeze-and-excitation (SE). 
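Referring back to the latency formulation above, here is a small Python sketch of the per-block latency estimate of Eqs. (4)-(5) and the latency-aware loss of Eq. (6). The profiled latencies and all names are illustrative placeholders, and in the actual search the hard indicators are again relaxed for back-propagation.

```python
import numpy as np

def block_latency(w_first_half, w_second_half, w_outer_ring,
                  t_e3, t_e6, t_k5, P_36, P_53, P_56):
    """Sketch of Eqs. (4)-(5), assuming profiled latencies P_36, P_53, P_56
    for MBConv3,6 / MBConv5,3 / MBConv5,6 of this block."""
    sq = lambda w: float(np.sum(w ** 2))
    # Eq. (4): latency contribution of the expansion-ratio decisions.
    L_e = 0.0
    if sq(w_first_half) > t_e3:
        L_e = P_53
        if sq(w_second_half) > t_e6:
            L_e += P_56 - P_53
    # Eq. (5): scale down to the 3x3 cost unless the 5x5 ring is kept.
    L = P_36 / P_56 * L_e
    if sq(w_outer_ring) > t_k5:
        L += L_e * (1.0 - P_36 / P_56)
    return L

def latency_aware_loss(cross_entropy, block_latencies, lam=1.0):
    """Eq. (6): CE + lambda * log(sum of estimated block latencies).
    The default lambda here is arbitrary and only for illustration."""
    return cross_entropy + lam * np.log(np.sum(block_latencies))
```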
For DWConv operation, it does not use an adder tree but an alternative hardware logic that consists of a set of individual accumulators connected to the multiply units. For pooling and SE operations, reduction logic is included in the pipeline. Note that MIDAP has not been implemented as a real hardware chip yet but as a virtual prototype with a cycle-accurate simulator. Thanks to the cycle-accurate simulator that considers the DRAM access contention and parametrized DRAM access delay, we could build an accurate analytical model for end-to-end latency estimation, based on the profiling result with the simulator. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_28", "text": " Inverted bottleneck with depth-wise convolution (MBConv) (Sandler et al., 2018) is a popular building block in recent mobile-friendly networks. However, it is not efficiently supported in existing NPUs that do not have specialized hardware units for DWConv (Gholami et al., 2018; Gupta and Akin, 2020). Thus Gupta et al. (Gupta and Akin, 2020) replaced an MBConv block with a fused building block that fuses an expansion layer and DWConv in MBConv into a single full convolution. Even though the fused block increases the number of multiplications significantly, it improves the MAC utilization larger so that the fused block is observed faster than MBConv on their target NPU, EdgeTPU. By adding this building block to their search space, they could successfully obtain different neural architectures for EdgeTPU from those for GPUs. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_29", "text": " Since DWConv is efficiently supported in MIDAP, however, the improvement of MAC utilization by fusing does not outweigh the increased computation complexity, which is observed in preliminary experiments. The experiment setup is similar to main experiment setup that will be explained in section 5.2. The experimental result is shown in Table 1. The latency constraint for fused block experiment is set to 7.0ms, while others are set to 2.15ms. In the combined experiment, we use the fused block in the 1st and the 2nd stages, and MBConv for the remaining stages since the latency gap between two building blocks is too high. As shown in the table, MBConv block shows the best tradeoff between accuracy and latency. Hence we prefer MBConv to the fused building block as the basic building block in the supernet for MIDAP. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_30", "text": " In this section, we explain the proposed S3NAS methodology that consists of three steps as displayed in Figure 2. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_31", "text": " The number of blocks is one of the key parameters in neural networks. It is observed that the total number of blocks affects the accuracy of neural architecture (He et al., 2016; Tan and Le, 2019a). In conventional One-Shot NAS methods, each stage in the supernet has the same number of blocks (Cai et al., 2018; Stamoulis et al., 2019; Wu et al., 2019). On the other hand, some recent studies (Meng et al., 2020; Radosavovic et al., 2020) report that the way of assigning the number of blocks in each stage has a noticeable impact on the accuracy, even with the same number of blocks in total. Hence we allow stages in the supernet to have a different number of blocks. 
", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_32", "text": " We investigate the impact of assigning the number of blocks in the supernet with another preliminary experiment. We construct a network based on MobileNetV2, which has four blocks in every stage, and observe the change of accuracy as we reduce two blocks in a different stage in each experiment. Figure 5 shows that MBConvs with larger width has more impact on accuracy. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_33", "text": " As the number of multiplications in a DWConv is W×H×C×K2𝑊𝐻𝐶superscript𝐾2W\\times H\\times C\\times K^{2}, the later stage of DWConv tends to have shorter latency since the reduction of H×W𝐻𝑊H\\times W is larger than the increase of C𝐶C. Thus the impact on the latency by increasing the number of blocks in a later stage is not significant as displayed in Figure 5. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_34", "text": " Thus, we place more blocks to stages with larger width in the supernet, making the cumulative depth up to a specific stage is proportional to the width of the stage, which is similar to PyramidNet (Han et al., 2017). A recent study (Radosavovic et al., 2020) also claims that neural architectures with a linear relationship between the cumulative depth and the width tend to have higher accuracy with a similar amount of computation complexity. Our experiment shows that our modification to supernet enhances the efficiency of the search result in terms of accuracy as well as latency (Table 4). ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_35", "text": " Another feature of the proposed supernet is to use mixed convolution (MixConv) that mixes different kernel sizes in the depth-wise convolution layer (Tan and Le, 2019b). Some recent NAS methods (Mei et al., 2019; Chu et al., 2020) also broaden their search space using DWConv with various kernel sizes and could find better neural architectures. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_36", "text": " Figure 6 depicts our building block structure. This block starts and ends with 1×1 convolution, with N𝑁N searchable superkernels in the middle. Each searchable superkernel is designed similarly to Eq. (3), while we may use different threshold values in each superkernel. The kernel sizes and expansion ratios are selected among predetermined values. If the j𝑗j-th searchable superkernel chooses an expansion ratio ejsubscript𝑒𝑗e_{j}, the j𝑗j-th kernel has ejsubscript𝑒𝑗e_{j} times more channels than the first 1×1 convolution. Compared with the original MixConv suggested in (Tan and Le, 2019b), the proposed building block supports more diverse combinations of kernel sizes and expansion ratios. It enhances the efficiency of search results on our target NPU (Table 5). ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_37", "text": " We finish this subsection by highlighting the merit of Single-Path NAS on building a MixConv-based differentiable NAS. Conventional multi-path NAS methods would have difficulties when adding inverted bottleneck convolution with MixConv to their search space. 
Since the number of possible choices of such blocks grows proportionally to the partition number, multi-path NAS methods would introduce a significant increase in memory requirements and the search time. On the contrary, MixConv can be efficiently supported in Single-Path NAS, as explained below. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_38", "text": " We use a different latency estimation model, and a loss formula from the original SinglePath NAS technique explained in section 3.1. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_39", "text": " Suppose we concatenate N𝑁N searchable superkernels to build a MixConv-based building block, and let k→=(k1,⋯,kN),e→=(e1,⋯,eN)formulae-sequence→𝑘subscript𝑘1⋯subscript𝑘𝑁→𝑒subscript𝑒1⋯subscript𝑒𝑁\\vec{k}=(k_{1},\\cdots,k_{N}),\\vec{e}=(e_{1},\\cdots,e_{N}) where kj,ejsubscript𝑘𝑗subscript𝑒𝑗k_{j},e_{j} denote the kernel size and the expansion ratio of the j𝑗jth searchable superkernel. The estimated latency of a DWConv operation depends on the kernel size and the expansion ratio. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_40", "text": " For latency formulation, we first define two condition variables, Fj,kjsubscript𝐹𝑗subscript𝑘𝑗F_{j,k_{j}} and Gj,ejsubscript𝐺𝑗subscript𝑒𝑗G_{j,e_{j}}, that denote whether the j𝑗jth searchable superkernel chooses the kernel size kjsubscript𝑘𝑗k_{j} and the expansion ratio ejsubscript𝑒𝑗e_{j}, respectively; For example, Fj,kjsubscript𝐹𝑗subscript𝑘𝑗F_{j,k_{j}} is 1 if and only if the j𝑗jth searchable superkernel chooses kjsubscript𝑘𝑗k_{j}, and 0 otherwise. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_41", "text": " Let κ1<⋯<κKsubscript𝜅1⋯subscript𝜅𝐾\\kappa_{1}<\\cdots<\\kappa_{K} be the candidate kernel sizes, and 0=ϵ1<⋯<ϵE0subscriptitalic-ϵ1⋯subscriptitalic-ϵ𝐸0=\\epsilon_{1}<\\cdots<\\epsilon_{E} denote the candidate expansion ratios of the j𝑗jth searchable superkernel, respectively. 
Suppose $k_{j}=\\kappa_{c}$, then $F_{j,k_{j}}$ can be formulated as follows: ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_42", "text": " (7) $F_{j,k_{j}}=\\left(\\prod_{2\\leq i\\leq c}\\mathbbm{1}(\\lVert\\mathbf{w}_{j,\\kappa_{i}\\backslash\\kappa_{i-1},\\epsilon_{E}}\\rVert^{2}>t_{j,\\kappa_{i}})\\right)\\cdot f_{j,k_{j}}$, where $f_{j,k_{j}}=\\mathbbm{1}(\\lVert\\mathbf{w}_{j,\\kappa_{c+1}\\backslash\\kappa_{c},\\epsilon_{E}}\\rVert^{2}<t_{j,\\kappa_{c+1}})$ if $c<K$, and $f_{j,k_{j}}=1$ if $c=K$. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_43", "text": " Figure 7 depicts an example of this formula when the $j$th searchable superkernel that has four candidate kernel sizes $\\kappa_{1}<\\cdots<\\kappa_{4}$ chooses $\\kappa_{2}$ as the kernel size: $k_{j}=\\kappa_{2}$. It means that weight $\\mathbf{w}_{j,\\kappa_{1},\\epsilon_{E}}$ and $\\mathbf{w}_{j,\\kappa_{2}\\backslash\\kappa_{1},\\epsilon_{E}}$ are used, but the remaining weights starting from $\\mathbf{w}_{j,\\kappa_{3}\\backslash\\kappa_{2},\\epsilon_{E}}$ are not used. Since $\\mathbf{w}_{j,\\kappa_{1},\\epsilon_{E}}$ is always used, it is not included in the formula. To use $\\mathbf{w}_{j,\\kappa_{2}\\backslash\\kappa_{1},\\epsilon_{E}}$, the norm of it has to be larger than $t_{j,\\kappa_{2}}$ while the norm of $\\mathbf{w}_{j,\\kappa_{3}\\backslash\\kappa_{2},\\epsilon_{E}}$ should not be larger than $t_{j,\\kappa_{3}}$ to avoid the use of larger kernel sizes.
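The nested-indicator structure of Eq. (7) can be checked with a few lines of Python; the argument names and toy numbers below are illustrative, and the example reproduces the Figure 7 case where the second-smallest kernel size is selected.

```python
def kernel_size_indicator(ring_norms_sq, thresholds_sq, c):
    """Sketch of Eq. (7): F_{j, kappa_c} = 1 iff every outer ring up to kappa_c
    has a squared norm above its threshold and the next ring (if any) does not.

    ring_norms_sq[i] ~ ||w_{j, kappa_{i+2} \\ kappa_{i+1}, eps_E}||^2  (i = 0..K-2)
    thresholds_sq[i] ~ t_{j, kappa_{i+2}}
    c is 1-based: c = 1 selects the smallest candidate kernel size."""
    K = len(ring_norms_sq) + 1
    keep_up_to_c = all(ring_norms_sq[i] > thresholds_sq[i] for i in range(c - 1))
    stop_at_c = True if c == K else ring_norms_sq[c - 1] < thresholds_sq[c - 1]
    return 1 if (keep_up_to_c and stop_at_c) else 0

# Example with four candidate kernel sizes (three outer rings), choosing kappa_2:
rings = [0.9, 0.1, 0.05]       # only the innermost ring exceeds its threshold
thresh = [0.5, 0.5, 0.5]
print([kernel_size_indicator(rings, thresh, c) for c in (1, 2, 3, 4)])  # -> [0, 1, 0, 0]
```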
", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_44", "text": " We can formulate Gj,ejsubscript𝐺𝑗subscript𝑒𝑗G_{j,e_{j}} similarly: Gj,ejsubscript𝐺𝑗subscript𝑒𝑗\\displaystyle G_{j,e_{j}} =(∏2≤i≤d𝟙​(∥𝐰j,∗,ϵi\\ϵi−1∥2>tj,ϵi))⋅gj,ej​, whereabsent⋅subscriptproduct2𝑖𝑑1superscriptdelimited-∥∥subscript𝐰𝑗\\subscriptitalic-ϵ𝑖subscriptitalic-ϵ𝑖12subscript𝑡𝑗subscriptitalic-ϵ𝑖subscript𝑔𝑗subscript𝑒𝑗, where\\displaystyle=\\left(\\prod_{2\\leq i\\leq d}\\mathbbm{1}(\\lVert\\mathbf{w}_{j,*,\\epsilon_{i}\\backslash\\epsilon_{i-1}}\\rVert^{2}>t_{j,\\epsilon_{i}})\\right)\\cdot g_{j,e_{j}}\\text{, where} gj,ejsubscript𝑔𝑗subscript𝑒𝑗\\displaystyle g_{j,e_{j}} ={𝟙​(∥𝐰j,∗,ϵd+1\\ϵd∥2<tj,ϵd+1),if ​d<E1,if ​d=Eabsentcases1superscriptdelimited-∥∥subscript𝐰𝑗\\subscriptitalic-ϵ𝑑1subscriptitalic-ϵ𝑑2subscript𝑡𝑗subscriptitalic-ϵ𝑑1if 𝑑𝐸1if 𝑑𝐸\\displaystyle=\\begin{cases}\\mathbbm{1}(\\lVert\\mathbf{w}_{j,*,\\epsilon_{d+1}\\backslash\\epsilon_{d}}\\rVert^{2}<t_{j,\\epsilon_{d+1}}),&\\text{if }d<E\\\\ 1,&\\text{if }d=E\\end{cases} when ej=ϵdsubscript𝑒𝑗subscriptitalic-ϵ𝑑e_{j}=\\epsilon_{d}. Then the condition for a MixConv-based building block to choose k→,e→→𝑘→𝑒\\vec{k},\\vec{e} can be expressed as ∏jNFj,kj​Gj,ejsuperscriptsubscriptproduct𝑗𝑁subscript𝐹𝑗subscript𝑘𝑗subscript𝐺𝑗subscript𝑒𝑗\\prod_{j}^{N}F_{j,k_{j}}G_{j,e_{j}}. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_45", "text": " Now, the estimated latency of a single block is formulated as follows: (8) L=∑k→,e→(P​(k→,e→)​∏jNFj,kj​Gj,ej)𝐿subscript→𝑘→𝑒𝑃→𝑘→𝑒superscriptsubscriptproduct𝑗𝑁subscript𝐹𝑗subscript𝑘𝑗subscript𝐺𝑗subscript𝑒𝑗L=\\sum_{\\vec{k},\\vec{e}}(P(\\vec{k},\\vec{e})\\prod_{j}^{N}F_{j,k_{j}}G_{j,e_{j}}) where P​(k→,e→)𝑃→𝑘→𝑒P(\\vec{k},\\vec{e}) denotes the profiled latency value of a MixConv-based building block corresponding to k→,e→→𝑘→𝑒\\vec{k},\\vec{e}. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_46", "text": " Unlike the original Single-Path NAS that approximates the latency in Eq. (5) in some cases, we use the profiled latency value in all cases. Note that an expansion ratio can be zero, and if only one superkernel has a nonzero expansion ratio, the MixConv block is reduced to a plain MBConv block. Finally, we can estimate the latency by summing up these estimated latencies for all superkernels in the block, ∑L𝐿\\sum L. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_47", "text": " Since each superkernel is treated independently, some superkernels may have the same kernel size and expansion ratio. Then, even if two superkernel configurations express an equivalent block, as illustrated in Figure 8, they may have different estimated latency values, which is an artifact of the proposed profiling-based latency estimation method. To avoid this artifact, we enforce that there is only one kernel for each kernel size in the MixConv block. That is, we merge two kernels of the same size into one; For instance, the left MixConv is translated to the right MixConv in Figure 8 before latency estimation. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_48", "text": " Figure 9 shows the estimated latency and simulated latency of randomly generated 100 models on our search space. It validates the accuracy of the proposed latency model, whose mean absolute percentage error(MAPE) is about 0.16%. 
", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_49", "text": " The existing hardware-aware differentiable NAS methods mostly define some hyperparameters to balance between accuracy and latency, including SinglePath NAS, whose loss function is defined as Eq. (6). Since there is no information on the target latency in the loss function, in case there is a strict latency constraint, they have to pay additional search costs for the hyperparameters to let the final architecture have no larger latency than the constraint. In addition, this process needs to be repeated whenever the target latency is changed. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_50", "text": " We propose to modify the loss function to activate the latency-aware loss term only when the estimated latency is larger than the latency constraint as follows: (9) C​E+λ1⋅l​o​g​(1+λ2⋅R​e​L​U​((∑L)−T))𝐶𝐸⋅subscript𝜆1𝑙𝑜𝑔1⋅subscript𝜆2𝑅𝑒𝐿𝑈𝐿𝑇CE+\\lambda_{1}\\cdot log(1+\\lambda_{2}\\cdot ReLU((\\sum L)-T)) Although this is not a panacea, this modification significantly eases the search process, which will be discussed in section 5.2 with various experiments. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_51", "text": " In the second step, we intentionally use shorter latency to reduce the search space for the baseline network. After finding the baseline network with a shorter latency, we apply compound scaling to find an architecture with the final latency constraint. In this step, we conduct post-processing to add SE block and h-swish activation function if beneficial. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_52", "text": " It is well known that increasing depth (He et al., 2016), width (Zagoruyko and Komodakis, 2016), or input image size improves accuracy while it increases latency. However, if only one of these three factors is increased, the accuracy improvement is quickly saturated. Observing this fact, Tan et al. (Tan and Le, 2019a) proposed a compound scaling method that increases all three factors together. A scaling coefficient is defined for each factor. By judiciously assigning the scaling coefficients in a balanced fashion, they could improve the accuracy much larger than scaling a single factor only. Adopting this approach, we apply the compound scaling to the baseline architecture obtained in the previous step. Based on the ratio between the true latency constraint and the assumed latency constraint in the second step, we find the scaling coefficients considering the estimated latency increment. To keep the linear relationship between the width and cumulative depth, we use the same scaling coefficient for width and depth, differently from (Tan and Le, 2019a). Note that how to realize scaling depends on the baseline architecture. While the baseline architecture assumed in (Tan and Le, 2019a) has a series of identical blocks in each stage, a stage consists of heterogeneous blocks in our baseline architecture. Thus depth scaling is not realized by merely adding new blocks in each stage. We need to choose what types of blocks to add in each stage. We increase the number of blocks with more parameters first. To compute how many blocks to add in a stage, we multiply the depth of the stage by depth coefficient and round the multiplication result. Width scaling is applied to all blocks equally. 
Finally, we consider latency when we scale. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_53", "text": " In addition to compound scaling, we add two components in the post-processing step: h-swish activation function and squeeze-and-excitation (SE) block. A recent study (Park and Yoo, 2020) reports that SE and the h-swish activation function are no hurdles for 8-bit quantization. They could quantize a network with SE and h-swish without noticeable accuracy loss. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_54", "text": " Extensive studies have been conducted to find a better activation function than ReLU, and the swish activation function (Ramachandran et al., 2017) was found. Several neural networks (Tan and Le, 2019b; Mei et al., 2019; Tan and Le, 2019a) use swish activation function instead of ReLU to improve accuracy. Howard et al. (Howard et al., 2019) proposed a quantization-friendly version of the swish activation function called h-swish that has a similar impact on accuracy. So, we replace ReLU with h-swish (Howard et al., 2019) activation function. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_55", "text": " Squeeze-and-Excitation(SE) is a lightweight operation which is shown to be beneficial to accuracy (Hu et al., 2018). Figure 10 depicts the structure of a SE block. For a given input feature map, it first computes the importance of the feature channels a representative value for global spatial information of each feature channel by global average pooling. After such squeeze operation generates channel-wise statistics, excitation operation captures channel-wise dependencies by two cascaded fully-connected layers to produce activation values, which represents the importance of each feature channel. Finally, channel-wise multiplication is performed between the activation values induced by the excitation operation and the input feature map for each channel. SE block is used in many recent architectures (Tan and Le, 2019a; Howard et al., 2019; Radosavovic et al., 2020). By adding SE blocks to the baseline network, we also observe the accuracy improvement. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_56", "text": " Figure 11 depicts an example distribution of activation values produced by two different SE blocks for three different images. The authors of the original paper (Hu et al., 2018) conjectured that if such distribution from a SE block does not differ widely between image classes, the SE block is not important. Thus, after training, they obtained averaged activation values of a SE block over multiple images in the same class. They compared the distributions of the averaged values over different image classes. They observed that removing the SE blocks that have similar distributions over different image classes incurs only a marginal loss in accuracy. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_57", "text": " Inspired by this observation, we propose to remove SE blocks selectively to minimize the additional computation cost caused by SE blocks. We obtain activation values from a SE block for each input image and measure how the distribution of activation values varies over different input images. 
For each channel c, we calculate the standard deviation σcsubscript𝜎𝑐\\sigma_{c} of activation values over different images. If σcsubscript𝜎𝑐\\sigma_{c} is small in most channels, the activation values from the SE block does not differ much over images. Conceptually, it implies that the SE block does not help to discriminate further which channel is more influential. From the engineering perspective, it means that channel-wise multiplication of a SE block is similar to constant multiplication, which can be handled by the following convolutional layer. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_58", "text": " We define a metric as the average of standard deviation values σcsubscript𝜎𝑐\\sigma_{c} over all channels that represent the diverseness of the activation distribution over different images. If the metric value is small, we remove the SE block. For example, in Figure 11, our metric of the SE block on the left side has a value of 0.021, while the right side has a value of 0.118, more than 5x larger than the left side; The left side is a better candidate for SE block removal. When we remove SE blocks according to this metric, the accuracy is found to be similar, while the latency got shorter (Table 6). ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_59", "text": " We evaluate the proposed NAS technique for image classification with the ImageNet dataset. The current implementation is made for MIDAP (Kang et al., 2019) that can perform DWConv and SE operations efficiently so that MBConv is preferred to full 3-D convolution as the basic building block, as explained above. Latencies on the target NPU are obtained with the cycle-accurate simulator222https://github.com/cap-lab/MidapSim. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_60", "text": " A superkernel has two parameters to search: expansion ratio and kernel size. To limit the search space, we choose the expansion ratio among 0, 2, 4, and 6, and the kernel size between 3 and 5 when MBConv or full convolution is used as the building block. In the case of the MixConv-based building block, we use N𝑁N=3 superkenels whose expansion ratio is 0 or 2; The sum of the expansion ratio of three superkernels has the same range as the expansion ratio of a single MBConv block. To allow three superkernels to have different kernel sizes, we let one of three superkernels be able to have 7 as the kernel size. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_61", "text": " In the first phase of the neural architecture search, we train the supernet by randomly choosing one of the candidate subgraphs in each training step. We train the supernet for 8 epochs, with λ1=0subscript𝜆10\\lambda_{1}=0 in the loss function of Eq. 9, focusing only on the accuracy. We decrease the learning rate by 0.97 every 2.4 epochs, starting from 0.064. The other setting for network training is displayed in Table 4. Gradient clipping with a value of 10 is used in this phase. In the second phase, we set λ1=15,λ2=100formulae-sequencesubscript𝜆115subscript𝜆2100\\lambda_{1}=15,\\lambda_{2}=100 to consider latency in the loss function, and optimize the weights and threshold values of supernet for 2 epochs. After this second phase finishes, the final architecture topology is decided. 
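The SE-removal metric described above (the per-channel standard deviation of SE activation values across images, averaged over channels) can be sketched as follows; the toy inputs only mimic the flat-versus-diverse contrast of Figure 11 and do not reproduce the 0.021/0.118 values reported in the text.

```python
import numpy as np

def se_diverseness(activations):
    """Sketch of the SE-removal metric: std of SE activations over images,
    computed per channel and averaged over channels. `activations` has shape
    (num_images, num_channels); a low score suggests the SE block behaves like
    a constant scale and is a candidate for removal."""
    per_channel_std = np.std(activations, axis=0)   # sigma_c over images
    return float(np.mean(per_channel_std))

# Toy example: one nearly constant SE block versus one with diverse responses.
rng = np.random.default_rng(0)
flat = 0.5 + 0.02 * rng.standard_normal((10000, 64))
diverse = rng.uniform(0.0, 1.0, size=(10000, 64))
print(se_diverseness(flat), se_diverseness(diverse))   # low vs. high score
```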
", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_62", "text": " Next, we train the final architecture again to determine the filter weights for 350 epochs with the ImageNet again, using the same setting described in Table 4. Unlike the search phase, the learning rate is increased from 0 to 0.064 in the first 5 epochs, then decayed by 0.97 every 2.4 epochs. Since we observed that the batch size is critical to accuracy when using the EfficientNet training code, we use a large batch size. Both network architecture search and final training are conducted on Google Cloud TPUs. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_63", "text": " In the proposed NAS technique, two major extensions are made to the supernet, compared with the original SinglePath NAS technique. Table 3 shows the proposed supernet architecture with configuration parameters, block types and depths. It starts with a 7x7 convolution layer, followed by 5 stages that have a different number of blocks for feature extraction and 2 fully-connected networks for classification. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_64", "text": " The first extension is to allow stages to have a different number of blocks. To verify the goodness of this extension, we design two kinds of MBConv-based supernet with 20 blocks in total: a supernet with constant depth(baseline), a supernet with linear depth where the cumulative depth up to a specific stage is proportional to the width of the stage. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_65", "text": " As shown in Table 4, a supernet with linear depth outperforms a supernet with constant depth in terms of accuracy with similar latency. It confirms that this simple change of block assignment in supernet gives notable accuracy boost with the same latency constraint, without any additional optimization techniques. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_66", "text": " The second extension is to use multiple parallel superkernels in an MBConv block. To verify the benefit of it, we compare two different supernets with the same number of blocks in each stage. The accuracy and latency performance of the baseline supernet is the same as the previous experimental result shown in Table 4. Table 5 shows that the extended supernet with MixConv-based building blocks gives a better accuracy-latency tradeoff. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_67", "text": " We apply the proposed NAS method with the supernet architecture described above. The depth of 5 stages is set to 3,4,7,4,113474113,4,7,4,11, respectively. The latency constraint is set to 2.5 ms that corresponds to the latency of EfficientNet-B1 on our target NPU, MIDAP. Table 6 compares our search results with the state-of-the-art models: EdgeTPU (Gupta and Akin, 2020), EfficientNet (Tan and Le, 2019a), Once-For-All (Cai et al., 2019). The latency of the other models is obtained by running the network on the MIDAP cycle-accurate simulator. We compare the accuracy without quantization, assuming that quantization effects will be similar to all models. 
", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_68", "text": " As shown in Table 6, the baseline model, ours-M, found by the proposed NAS technique has higher accuracy than the other models on our target NPU; ours-M achieves more than 1.7% higher top-1 accuracy than EfficientNet-lite2 with similar latency. Moreover, it is 0.5% higher than EfficientNet-B1, even without using SE and h-swish activation function. Note that the number of parameters and the number of FLOPS in ours-M is larger than EfficientNet-B1. It implies that the complexity of the network is not a direct indicator of the end-to-end latency of the network. The end-to-end latency depends on the NPU architecture, and the proposed NAS technique could find a larger network with shorter latency by adding the latency factor to the loss function directly. The main benefit comes from different block assignment to stages. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_69", "text": " We improve the baseline network by adding the h-swish activation function and squeeze-and-excitation(SE) block to get the ours-M+ model. Figure 12 shows the topology of ours-M+ architecture in which the height of each block is proportional to the expansion ratio of the block. Compared with the baseline network, ours-M, we achieve around 1% accuracy boost with ours-M+, paying the cost of 16% latency increase. This model outperforms the other models, 0.5% higher accuracy and 14% faster than EfficientNet-B2. Since EfficientNet-B2 is too large to run with the default configuration on MIDAP, we increase the memory size for filter weights. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_70", "text": " Next, we applied compound scaling (Tan and Le, 2019a) to ours-M+ to obtain ours-L+ and ours-XL+. When we determine scaling coefficients, we keep the linear relationship between the cumulative depth and width of each stage, and scale the input image size more aggressively than (Tan and Le, 2019a). We make the number of filters to be multiples of 16 to maximize the MAC unit utilization on MIDAP. When we train our scaled model, we set the dropout ratio to 0.4, similar to EfficientNet-B4 training. The accuracy of ours-L+ is higher than EfficientNet-B3 and EfficientNet-lite4, while the accuracy of ours-XL+ is similar to EfficientNet-B4. Note that the difference between the searched network and the EfficientNet decreases as the network size increases. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_71", "text": " Finally, we selectively removed SE blocks from ours-XL+, resulting in ours-XL-rmSE+. We collected the activation values using randomly sampled 10K images from the training dataset and calculated the metric explained in Sec. 4.3.3. After removing SE blocks from ours-XL+ based on the metric, only about 60% of the blocks in the network have SE blocks. As a result, we could make the latency shorter, while the accuracy was slightly improved than ours-XL+. This model achieves 82.72% top-1 accuracy with only 11.66ms latency. It is much better than EfficientNet-EdgeTPU-L (Gupta and Akin, 2020) that achieves 80.62% FP32 top-1 accuracy with more than 20ms on EdgeTPU. Our architecture on MIDAP is about 2 times faster with 2.1% higher accuracy. 
", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_72", "text": " Finally, we compare the search time. Since the TPU is faster than GPU, we report the wall clock time and the estimated GPU time (in parenthesis) that is 10 times longer than the wall clock time in the last column of Table 6 Our method takes 3 hours, which is much faster than the other methods. Note that we compare the total time to get one architecture from scratch without trained weights. Once-For-All (Cai et al., 2019) would require only short fine-tuning time after a neural architecture is searched. In contrast, we need to train the network after a network architecture is found. It took 40 hours on TPUv3 to train ours-M+. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_73", "text": " While most NAS techniques are not compared with a random search method, the authors (Li and Talwalkar, 2019) reported that a random search method is highly competitive. So we conducted an experiment to compare the proposed NAS technique with two random search methods, exploring the same search space defined by the supernet structure of ours-M. First, we designed a simple random search method that has the similar time complexity of the proposed technique. In this method, we randomly generate 15 models having a similar latency with ours-M, from the same search space. Then we train each of them for 1 epoch with cosine learning rate decay. After evaluating each of them, we choose the architecture with the topmost top-1 accuracy and fully train it. In the second method, called random selection, we randomly generate 20 models having a similar latency with ours-M and train them fully and take the architecture with the highest top-1 accuracy. Since the random selection method performs search and training simultaneously, it is slower than the proposed technique by the number of randomly generated models. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_74", "text": " Comparison results are reported in Table 6. It is confirmed that both random selection and random search are quite competitive, but noticeably inferior to ours-M in terms of accuracy. In detail, the worst case of random selection showed 0.8% lower accuracy than ours-M. The best performance obtained from 20 randomly generated models is 79.19%, still lower than the accuracy of ours-M. Note that random search and random selection show similar performance that is no smaller than the other networks. It means that the search space defined by the supernet architecture has a more significant effect on the accuracy than the search method. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_75", "text": " There are two methods to find an architecture with a loose latency constraint. One is to use compound scaling that scales a small network with shorter latency, and the other is to search a network directly. To compare these two methods, we first scaled ours-M using the same scaling coefficients that we used to scale ours-M+ to ours-L+ and trained it. When conducting a direct search, we scaled the depth and width of the supernet and the input image size first and applied the proposed NAS technique for the scaled supernet. We used batch size 512 instead of 1024 during the architecture search due to the memory limitation of TPU. 
The comparison result is shown in Table 7 in terms of top-1 accuracy(%) and the latency on the target NPU(ms). Two results were similar while direct search needed 10 hours on TPUv3; It means that compound scaling is an effective method to find a large network fast. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_76", "text": " To examine how SE and h-swish impact accuracy individually, we compare four combinations as displayed in Table 8. The baseline is ours-M that does not use SE and h-swish activation function. Replacing ReLU with h-swish gives a marginal improvement on accuracy while adding SE blocks improves the accuracy noticeably. Adding both SE and h-swish activation function improves the accuracy by around 1%. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" }, { "id": "2009.02009_all_77", "text": " In this work, we propose a fast NPU-aware NAS methodology extending the Single-Path NAS technique (Stamoulis et al., 2019). We modify the supernet architecture by varying the number of blocks in stages and adding mixed depthwise convolution (Tan and Le, 2019b) to the search space. By modifying the loss function to directly include the target latency estimated by a cycle-accurate simulator of the target NPU, we could find a better baseline architecture with a shorter latency than the latency constraint. Using a tight latency constraint, we can reduce the search space to find the baseline network fast. Afterward, we apply compound scaling to find a larger network than the baseline network, and add SE blocks and h-swish activation functions in the post-processing step. Through the proposed NAS methodology, we could obtain a network with 82.72% accuracy with 11.66ms latency on our target NPU, without special data augmentation in training. It dominates the existing network models on the target NPU. It confirms the importance of supernet architecture design for a given NPU and effectiveness of the three-step approach in the proposed NAS methodology: supernet design, SinglePath NAS with a tighter latency constraint, and compound scaling and post-processing. ", "title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology" } ]
What are the metrics used to compare the efficiency of different methods which compute the adversarial perturbations?
The metrics that are used to compare different methods of finding adversarial perturbations are: the average robustness of the model estimated in some type of norm (2-norm or infinity-norm in the paper); and the average running time needed to find the estimated minimal perturbation [15].
[ 15 ]
[ { "id": "1511.04599_all_0", "text": " Deep neural networks are powerful learning models that achieve state-of-the-art pattern recognition performance in many research areas such as bioinformatics (1, 16), speech (12, 6), and computer vision (10, 8). Though deep networks have exhibited very good performance in classification tasks, they have recently been shown to be particularly unstable to adversarial perturbations of the data . In fact, very small and often imperceptible perturbations of the data samples are sufficient to fool state-of-the-art classifiers and result in incorrect classification. (e.g., Figure 1). Formally, for a given classifier, we define an adversarial perturbation as the minimal perturbation 𝒓𝒓\\bm{r} that is sufficient to change the estimated label k^​(𝒙)^𝑘𝒙\\hat{k}(\\bm{x}): Δ​(𝒙;k^):=min𝒓⁡‖𝒓‖2​ subject to ​k^​(𝒙+𝒓)≠k^​(𝒙),assignΔ𝒙^𝑘subscript𝒓subscriptnorm𝒓2 subject to ^𝑘𝒙𝒓^𝑘𝒙\\displaystyle\\Delta(\\bm{x};\\hat{k}):=\\min_{\\bm{r}}\\|\\bm{r}\\|_{2}\\text{ subject to }\\hat{k}(\\bm{x}+\\bm{r})\\neq\\hat{k}(\\bm{x}), (1) where 𝒙𝒙\\bm{x} is an image and k^​(𝒙)^𝑘𝒙\\hat{k}(\\bm{x}) is the estimated label. We call Δ​(𝒙;k^)Δ𝒙^𝑘\\Delta(\\bm{x};\\hat{k}) the robustness of k^^𝑘\\hat{k} at point 𝒙𝒙\\bm{x}. The robustness of classifier k^^𝑘\\hat{k} is then defined as ", "title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks" }, { "id": "1511.04599_all_1", "text": " ρadv​(k^)=𝔼𝒙​Δ​(𝒙;k^)‖𝒙‖2,subscript𝜌adv^𝑘subscript𝔼𝒙Δ𝒙^𝑘subscriptnorm𝒙2\\rho_{\\text{adv}}(\\hat{k})=\\mathbb{E}_{\\bm{x}}\\frac{\\Delta(\\bm{x};\\hat{k})}{\\|\\bm{x}\\|_{2}}, (2) where 𝔼𝒙subscript𝔼𝒙\\mathbb{E}_{\\bm{x}} is the expectation over the distribution of data. The study of adversarial perturbations helps us understand what features are used by a classifier. The existence of such examples is seemingly in contradiction with the generalization ability of the learning algorithms. While deep networks achieve state-of-the-art performance in image classification tasks, they are not robust at all to small adversarial perturbations and tend to misclassify minimally perturbed data that looks visually similar to clean samples. Though adversarial attacks are specific to the classifier, it seems that the adversarial perturbations are generalizable across different models . This can actually become a real concern from a security point of view. ", "title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks" }, { "id": "1511.04599_all_2", "text": " An accurate method for finding the adversarial perturbations is thus necessary to study and compare the robustness of different classifiers to adversarial perturbations. It might be the key to a better understanding of the limits of current architectures and to design methods to increase robustness. Despite the importance of the vulnerability of state-of-the-art classifiers to adversarial instability, no well-founded method has been proposed to compute adversarial perturbations and we fill this gap in this paper. ", "title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks" }, { "id": "1511.04599_all_3", "text": " Our main contributions are the following: • We propose a simple yet accurate method for computing and comparing the robustness of different classifiers to adversarial perturbations. 
• We perform an extensive experimental comparison, and show that 1) our method computes adversarial perturbations more reliably and efficiently than existing methods 2) augmenting training data with adversarial examples significantly increases the robustness to adversarial perturbations. • We show that using imprecise approaches for the computation of adversarial perturbations could lead to different and sometimes misleading conclusions about the robustness. Hence, our method provides a better understanding of this intriguing phenomenon and of its influence factors. We now review some of the relevant work. The phenomenon of adversarial instability was first introduced and studied in . The authors estimated adversarial examples by solving penalized optimization problems and presented an analysis showing that the high complexity of neural networks might be a reason explaining the presence of adversarial examples. Unfortunately, the optimization method employed in is time-consuming and therefore does not scale to large datasets. In , the authors showed that convolutional networks are not invariant to some sort of transformations based on the experiments done on Pascal3D+ annotations. Recently, Tsai et al. provided a software to misclassify a given image in a specified class, without necessarily finding the smallest perturbation. Nguyen et al. generated synthetic unrecognizable images, which are classified with high confidence. The authors of also studied a related problem of finding the minimal geometric transformation that fools image classifiers, and provided quantitative measure of the robustness of classifiers to geometric transformations. Closer to our work, the authors of introduced the “fast gradient sign” method, which computes the adversarial perturbations for a given classifier very efficiently. Despite its efficiency, this method provides only a coarse approximation of the optimal perturbation vectors. In fact, it performs a unique gradient step, which often leads to sub-optimal solutions. Then in an attempt to build more robust classifiers to adversarial perturbations, introduced a smoothness penalty in the training procedure that allows to boost the robustness of the classifier. Notably, the method in was applied in order to generate adversarial perturbations. We should finally mention that the phenomenon of adversarial instability also led to theoretical work in that studied the problem of adversarial perturbations on some families of classifiers, and provided upper bounds on the robustness of these classifiers. A deeper understanding of the phenomenon of adversarial instability for more complex classifiers is however needed; the method proposed in this work can be seen as a baseline to efficiently and accurately generate adversarial perturbations in order to better understand this phenomenon. ", "title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks" }, { "id": "1511.04599_all_4", "text": " The rest of paper is organized as follows. In Section 2, we introduce an efficient algorithm to find adversarial perturbations in a binary classifier. The extension to the multiclass problem is provided in Section 3. In Section 4, we propose extensive experiments that confirm the accuracy of our method and outline its benefits in building more robust classifiers. 
", "title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks" }, { "id": "1511.04599_all_5", "text": " As a multiclass classifier can be viewed as aggregation of binary classifiers, we first propose the algorithm for binary classifiers. That is, we assume here k^​(𝒙)=sign​(f​(𝒙))^𝑘𝒙sign𝑓𝒙\\hat{k}(\\bm{x})=\\text{sign}(f(\\bm{x})), where f𝑓f is an arbitrary scalar-valued image classification function f:ℝn→ℝ:𝑓→superscriptℝ𝑛ℝf:\\mathbb{R}^{n}\\rightarrow\\mathbb{R}. We also denote by ℱ≜{𝒙:f​(𝒙)=0}≜ℱconditional-set𝒙𝑓𝒙0\\mathscr{F}\\triangleq\\{\\bm{x}:f(\\bm{x})=0\\} the level set at zero of f𝑓f. We begin by analyzing the case where f𝑓f is an affine classifier f​(𝒙)=𝒘T​𝒙+b𝑓𝒙superscript𝒘𝑇𝒙𝑏f(\\bm{x})=\\bm{w}^{T}\\bm{x}+b, and then derive the general algorithm, which can be applied to any differentiable binary classifier f𝑓f. ", "title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks" }, { "id": "1511.04599_all_6", "text": " In the case where the classifier f𝑓f is affine, it can easily be seen that the robustness of f𝑓f at point 𝒙0subscript𝒙0\\bm{x}_{0}, Δ​(𝒙0;f)Δsubscript𝒙0𝑓\\Delta(\\bm{x}_{0};f)222From now on, we refer to a classifier either by f𝑓f or its corresponding discrete mapping k^^𝑘\\hat{k}. Therefore, ρadv​(k^)=ρadv​(f)subscript𝜌adv^𝑘subscript𝜌adv𝑓\\rho_{\\text{adv}}(\\hat{k})=\\rho_{\\text{adv}}(f) and Δ​(𝒙;k^)=Δ​(𝒙;f)Δ𝒙^𝑘Δ𝒙𝑓\\Delta(\\bm{x};\\hat{k})=\\Delta(\\bm{x};f)., is equal to the distance from 𝒙0subscript𝒙0\\bm{x}_{0} to the separating affine hyperplane ℱ={𝒙:𝒘T​𝒙+b=0}ℱconditional-set𝒙superscript𝒘𝑇𝒙𝑏0\\mathscr{F}=\\{\\bm{x}:\\bm{w}^{T}\\bm{x}+b=0\\} (Figure 2). The minimal perturbation to change the classifier’s decision corresponds to the orthogonal projection of 𝒙0subscript𝒙0\\bm{x}_{0} onto ℱℱ\\mathscr{F}. It is given by the closed-form formula: 𝒓∗​(𝒙0)subscript𝒓subscript𝒙0\\displaystyle\\bm{r}_{*}(\\bm{x}_{0}) :=arg​min⁡‖𝒓‖2assignabsentargminsubscriptnorm𝒓2\\displaystyle:=\\operatorname*{arg\\,min}\\|\\bm{r}\\|_{2} (3) subject to  sign ​(f​(𝒙0+𝒓))≠ sign​(f​(𝒙0))subject to  sign 𝑓subscript𝒙0𝒓 sign𝑓subscript𝒙0\\displaystyle\\text{ subject to }\\text{ sign }(f(\\bm{x}_{0}+\\bm{r}))\\neq\\text{ sign}(f(\\bm{x}_{0})) =−f​(𝒙0)‖𝒘‖22​𝒘.absent𝑓subscript𝒙0superscriptsubscriptnorm𝒘22𝒘\\displaystyle=-\\frac{f(\\bm{x}_{0})}{\\|\\bm{w}\\|_{2}^{2}}\\bm{w}. ", "title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks" }, { "id": "1511.04599_all_7", "text": " Assuming now that f𝑓f is a general binary differentiable classifier, we adopt an iterative procedure to estimate the robustness Δ​(𝒙0;f)Δsubscript𝒙0𝑓\\Delta(\\bm{x}_{0};f). Specifically, at each iteration, f𝑓f is linearized around the current point 𝒙isubscript𝒙𝑖\\bm{x}_{i} and the minimal perturbation of the linearized classifier is computed as arg​min𝒓i⁡‖𝒓i‖2​ subject to ​f​(𝒙i)+∇f​(𝒙i)T​𝒓i=0.subscriptargminsubscript𝒓𝑖subscriptnormsubscript𝒓𝑖2 subject to 𝑓subscript𝒙𝑖∇𝑓superscriptsubscript𝒙𝑖𝑇subscript𝒓𝑖0\\displaystyle\\operatorname*{arg\\,min}_{\\bm{r}_{i}}\\|\\bm{r}_{i}\\|_{2}\\text{ subject to }f(\\bm{x}_{i})+\\nabla f(\\bm{x}_{i})^{T}\\bm{r}_{i}=0. (4) The perturbation 𝒓isubscript𝒓𝑖\\bm{r}_{i} at iteration i𝑖i of the algorithm is computed using the closed form solution in Eq. (3), and the next iterate 𝒙i+1subscript𝒙𝑖1\\bm{x}_{i+1} is updated. The algorithm stops when 𝒙i+1subscript𝒙𝑖1\\bm{x}_{i+1} changes sign of the classifier. The DeepFool algorithm for binary classifiers is summarized in Algorithm 1 and a geometric illustration of the method is shown in Figure 3. 
", "title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks" }, { "id": "1511.04599_all_8", "text": " In practice, the above algorithm can often converge to a point on the zero level set ℱℱ\\mathscr{F}. In order to reach the other side of the classification boundary, the final perturbation vector 𝒓^^𝒓\\hat{\\bm{r}} is multiplied by a constant 1+η1𝜂1+\\eta, with η≪1much-less-than𝜂1\\eta\\ll 1. In our experiments, we have used η=0.02𝜂0.02\\eta=0.02. ", "title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks" }, { "id": "1511.04599_all_9", "text": " We now extend the DeepFool method to the multiclass case. The most common used scheme for multiclass classifiers is one-vs-all. Hence, we also propose our method based on this classification scheme. In this scheme, the classifier has c𝑐c outputs where c𝑐c is the number of classes. Therefore, a classifier can be defined as f:ℝn→ℝc:𝑓→superscriptℝ𝑛superscriptℝ𝑐f:\\mathbb{R}^{n}\\rightarrow\\mathbb{R}^{c} and the classification is done by the following mapping: k^​(𝒙)=arg​maxk⁡fk​(𝒙),^𝑘𝒙subscriptargmax𝑘subscript𝑓𝑘𝒙\\hat{k}(\\bm{x})=\\operatorname*{arg\\,max}_{k}f_{k}(\\bm{x}), (5) where fk​(𝒙)subscript𝑓𝑘𝒙f_{k}(\\bm{x}) is the output of f​(𝒙)𝑓𝒙f(\\bm{x}) that corresponds to the kthsuperscript𝑘thk^{\\text{th}} class. Similarly to the binary case, we first present the proposed approach for the linear case and then we generalize it to other classifiers. ", "title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks" }, { "id": "1511.04599_all_10", "text": " Let f​(𝒙)𝑓𝒙f(\\bm{x}) be an affine classifier, i.e., f​(𝒙)=𝐖⊤​𝒙+𝒃𝑓𝒙superscript𝐖top𝒙𝒃f(\\bm{x})=\\mathbf{W}^{\\top}\\bm{x}+\\bm{b} for a given 𝐖𝐖\\mathbf{W} and 𝒃𝒃\\bm{b}. Since the mapping k^^𝑘\\hat{k} is the outcome of a one-vs-all classification scheme, the minimal perturbation to fool the classifier can be rewritten as follows arg​min𝒓⁡‖𝒓‖2s.t. ​∃k:𝒘k⊤​(𝒙0+𝒓)+bk≥𝒘k^​(𝒙0)⊤​(𝒙0+𝒓)+bk^​(𝒙0),:subscriptargmin𝒓subscriptdelimited-∥∥𝒓2s.t. 𝑘superscriptsubscript𝒘𝑘topsubscript𝒙0𝒓subscript𝑏𝑘superscriptsubscript𝒘^𝑘subscript𝒙0topsubscript𝒙0𝒓subscript𝑏^𝑘subscript𝒙0\\begin{split}&\\operatorname*{arg\\,min}_{\\bm{r}}\\|\\bm{r}\\|_{2}\\\\ &\\text{s.t. }\\exists k:\\bm{w}_{k}^{\\top}(\\bm{x}_{0}+\\bm{r})+b_{k}\\geq\\bm{w}_{\\hat{k}(\\bm{x}_{0})}^{\\top}(\\bm{x}_{0}+\\bm{r})+b_{\\hat{k}(\\bm{x}_{0})},\\end{split} (6) where 𝒘ksubscript𝒘𝑘\\bm{w}_{k} is the kthsuperscript𝑘thk^{\\text{th}} column of 𝐖𝐖\\mathbf{W}. Geometrically, the above problem corresponds to the computation of the distance between 𝒙0subscript𝒙0\\bm{x}_{0} and the complement of the convex polyhedron P𝑃P, P=⋂k=1c{𝒙:fk^​(𝒙0)​(𝒙)≥fk​(𝒙)},𝑃superscriptsubscript𝑘1𝑐conditional-set𝒙subscript𝑓^𝑘subscript𝒙0𝒙subscript𝑓𝑘𝒙\\displaystyle P=\\bigcap_{k=1}^{c}\\{\\bm{x}:f_{\\hat{k}(\\bm{x}_{0})}(\\bm{x})\\geq f_{k}(\\bm{x})\\}, (7) where 𝒙0subscript𝒙0\\bm{x}_{0} is located inside P𝑃P. We denote this distance by dist​(𝒙0,Pc)distsubscript𝒙0superscript𝑃𝑐\\text{{dist}}(\\bm{x}_{0},P^{c}). The polyhedron P𝑃P defines the region of the space where f𝑓f outputs the label k^​(𝒙0)^𝑘subscript𝒙0\\hat{k}(\\bm{x}_{0}). This setting is depicted in Figure 4. The solution to the problem in Eq. (6) can be computed in closed form as follows. Define l^​(𝒙0)^𝑙subscript𝒙0\\hat{l}(\\bm{x}_{0}) to be the closest hyperplane of the boundary of P𝑃P (e.g. l^​(𝒙0)=3^𝑙subscript𝒙03\\hat{l}(\\bm{x}_{0})=3 in Figure 4). 
Formally, l^​(𝒙0)^𝑙subscript𝒙0\\hat{l}(\\bm{x}_{0}) can be computed as follows l^​(𝒙0)=arg​mink≠k^​(𝒙0)⁡|fk​(𝒙0)−fk^​(𝒙0)​(𝒙0)|‖𝒘k−𝒘k^​(𝒙0)‖2.^𝑙subscript𝒙0subscriptargmin𝑘^𝑘subscript𝒙0subscript𝑓𝑘subscript𝒙0subscript𝑓^𝑘subscript𝒙0subscript𝒙0subscriptnormsubscript𝒘𝑘subscript𝒘^𝑘subscript𝒙02\\hat{l}(\\bm{x}_{0})=\\operatorname*{arg\\,min}_{k\\neq{\\hat{k}(\\bm{x}_{0})}}\\frac{\\left|f_{k}(\\bm{x}_{0})-f_{\\hat{k}(\\bm{x}_{0})}(\\bm{x}_{0})\\right|}{\\|\\bm{w}_{k}-\\bm{w}_{\\hat{k}(\\bm{x}_{0})}\\|_{2}}. (8) The minimum perturbation 𝒓∗​(𝒙0)subscript𝒓subscript𝒙0\\bm{r}_{*}(\\bm{x}_{0}) is the vector that projects 𝒙0subscript𝒙0\\bm{x}_{0} on the hyperplane indexed by l^​(𝒙0)^𝑙subscript𝒙0\\hat{l}(\\bm{x}_{0}), i.e., 𝒓∗​(𝒙0)=|fl^​(𝒙0)​(𝒙0)−fk^​(𝒙0)​(𝒙0)|‖𝒘l^​(𝒙0)−𝒘k^​(𝒙0)‖22​(𝒘l^​(𝒙0)−𝒘k^​(𝒙0)).subscript𝒓subscript𝒙0subscript𝑓^𝑙subscript𝒙0subscript𝒙0subscript𝑓^𝑘subscript𝒙0subscript𝒙0superscriptsubscriptnormsubscript𝒘^𝑙subscript𝒙0subscript𝒘^𝑘subscript𝒙022subscript𝒘^𝑙subscript𝒙0subscript𝒘^𝑘subscript𝒙0\\bm{r}_{*}(\\bm{x}_{0})=\\frac{\\left|f_{\\hat{l}(\\bm{x}_{0})}(\\bm{x}_{0})-f_{\\hat{k}(\\bm{x}_{0})}(\\bm{x}_{0})\\right|}{\\|\\bm{w}_{\\hat{l}(\\bm{x}_{0})}-\\bm{w}_{\\hat{k}(\\bm{x}_{0})}\\|_{2}^{2}}(\\bm{w}_{\\hat{l}(\\bm{x}_{0})}-\\bm{w}_{\\hat{k}(\\bm{x}_{0})}). (9) In other words, we find the closest projection of 𝒙0subscript𝒙0\\bm{x}_{0} on faces of P𝑃P. ", "title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks" }, { "id": "1511.04599_all_11", "text": " We now extend the DeepFool algorithm to the general case of multiclass differentiable classifiers. For general non-linear classifiers, the set P𝑃P in Eq. (7) that describes the region of the space where the classifier outputs label k^​(𝒙0)^𝑘subscript𝒙0\\hat{k}(\\bm{x}_{0}) is no longer a polyhedron. Following the explained iterative linearization procedure in the binary case, we approximate the set P𝑃P at iteration i𝑖i by a polyhedron P~isubscript~𝑃𝑖\\tilde{P}_{i} P~i=⋂k=1c{\\displaystyle\\tilde{P}_{i}=\\bigcap_{k=1}^{c}\\Big{\\{} 𝒙:fk​(𝒙i)−fk^​(𝒙0)​(𝒙i):𝒙subscript𝑓𝑘subscript𝒙𝑖subscript𝑓^𝑘subscript𝒙0subscript𝒙𝑖\\displaystyle\\bm{x}:f_{k}(\\bm{x}_{i})-f_{\\hat{k}(\\bm{x}_{0})}(\\bm{x}_{i}) (10) +∇fk(𝒙i)⊤𝒙−∇fk^​(𝒙0)(𝒙i)⊤𝒙≤0}.\\displaystyle+\\nabla f_{k}(\\bm{x}_{i})^{\\top}\\bm{x}-\\nabla f_{\\hat{k}(\\bm{x}_{0})}(\\bm{x}_{i})^{\\top}\\bm{x}\\leq 0\\Big{\\}}. We then approximate, at iteration i𝑖i, the distance between 𝒙isubscript𝒙𝑖\\bm{x}_{i} and the complement of P𝑃P, dist​(𝒙i,Pc)distsubscript𝒙𝑖superscript𝑃𝑐\\text{{dist}}(\\bm{x}_{i},P^{c}), by dist​(𝒙i,P~ic)distsubscript𝒙𝑖superscriptsubscript~𝑃𝑖𝑐\\text{{dist}}(\\bm{x}_{i},\\tilde{P}_{i}^{c}). Specifically, at each iteration of the algorithm, the perturbation vector that reaches the boundary of the polyhedron P~isubscript~𝑃𝑖\\tilde{P}_{i} is computed, and the current estimate updated. The method is given in Algorithm 2. It should be noted that the proposed algorithm operates in a greedy way and is not guaranteed to converge to the optimal perturbation in (1). However, we have observed in practice that our algorithm yields very small perturbations which are believed to be good approximations of the minimal perturbation. ", "title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks" }, { "id": "1511.04599_all_12", "text": " It should be noted that the optimization strategy of DeepFool is strongly tied to existing optimization techniques. 
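The multiclass variant (Algorithm 2) follows the same pattern, projecting onto the closest linearized boundary at each step. The sketch below is again an illustrative NumPy version with caller-supplied score and Jacobian functions, not the authors' code.

```python
import numpy as np

def deepfool_multiclass(x0, f, jac_f, max_iter=50, eta=0.02):
    """Sketch of multiclass DeepFool (Algorithm 2). f(x) returns the vector of
    class scores, jac_f(x) the (num_classes, dim) Jacobian; L2 norm is used."""
    x = x0.astype(float).copy()
    k0 = int(np.argmax(f(x0)))
    r_total = np.zeros_like(x)
    for _ in range(max_iter):
        scores, J = f(x), jac_f(x)
        if int(np.argmax(scores)) != k0:
            break
        # Distance to each linearized boundary f_k - f_{k0} = 0 (Eqs. (8)-(10)).
        w = J - J[k0]                         # rows: gradient differences
        fdiff = scores - scores[k0]
        others = [k for k in range(len(scores)) if k != k0]
        dists = [abs(fdiff[k]) / (np.linalg.norm(w[k]) + 1e-12) for k in others]
        l_hat = others[int(np.argmin(dists))]
        r_i = (abs(fdiff[l_hat]) / (np.dot(w[l_hat], w[l_hat]) + 1e-12)) * w[l_hat]
        x = x + r_i
        r_total = r_total + r_i
    return (1.0 + eta) * r_total
```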
In the binary case, it can be seen as Newton’s iterative algorithm for finding roots of a nonlinear system of equations in the underdetermined case . This algorithm is known as the normal flow method. The convergence analysis of this optimization technique can be found for example in . Our algorithm in the binary case can alternatively be seen as a gradient descent algorithm with an adaptive step size that is automatically chosen at each iteration. The linearization in Algorithm 2 is also similar to a sequential convex programming where the constraints are linearized at each step. ", "title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks" }, { "id": "1511.04599_all_13", "text": " In this paper, we have measured the perturbations using the ℓ2subscriptℓ2\\ell_{2} norm. Our framework is however not limited to this choice, and the proposed algorithm can simply be adapted to find minimal adversarial perturbations for any ℓpsubscriptℓ𝑝\\ell_{p} norm (p∈(1,∞)𝑝1p\\in(1,\\infty)). To do so, the update steps in line 10 and 11 in Algorithm 2 must be respectively substituted by the following updates l^^𝑙\\displaystyle\\hat{l} ←arg​mink≠k^​(𝒙0)⁡|fk′|‖𝒘k′‖q,←absentsubscriptargmin𝑘^𝑘subscript𝒙0subscriptsuperscript𝑓′𝑘subscriptnormsubscriptsuperscript𝒘′𝑘𝑞\\displaystyle\\leftarrow\\operatorname*{arg\\,min}_{k\\neq{\\hat{k}(\\bm{x}_{0})}}\\frac{\\left|f^{\\prime}_{k}\\right|}{\\|\\bm{w}^{\\prime}_{k}\\|_{q}}, (11) 𝒓isubscript𝒓𝑖\\displaystyle\\bm{r}_{i} ←|fl^′|‖𝒘l^′‖qq​|𝒘l^′|q−1⊙sign​(𝒘l^′),←absentdirect-productsubscriptsuperscript𝑓′^𝑙superscriptsubscriptnormsubscriptsuperscript𝒘′^𝑙𝑞𝑞superscriptsubscriptsuperscript𝒘′^𝑙𝑞1signsubscriptsuperscript𝒘′^𝑙\\displaystyle\\leftarrow\\frac{|f^{\\prime}_{\\hat{l}}|}{\\|\\bm{w}^{\\prime}_{\\hat{l}}\\|_{q}^{q}}|\\bm{w}^{\\prime}_{\\hat{l}}|^{q-1}\\odot\\text{sign}(\\bm{w}^{\\prime}_{\\hat{l}}), (12) where ⊙direct-product\\odot is the pointwise product and q=pp−1𝑞𝑝𝑝1q=\\frac{p}{p-1}.333To see this, one can apply Holder’s inequality to obtain a lower bound on the ℓpsubscriptℓ𝑝\\ell_{p} norm of the perturbation. In particular, when p=∞𝑝p=\\infty (i.e., the supremum norm ℓ∞subscriptℓ\\ell_{\\infty}), these update steps become l^^𝑙\\displaystyle\\hat{l} ←arg​mink≠k^​(𝒙0)⁡|fk′|‖𝒘k′‖1,←absentsubscriptargmin𝑘^𝑘subscript𝒙0subscriptsuperscript𝑓′𝑘subscriptnormsubscriptsuperscript𝒘′𝑘1\\displaystyle\\leftarrow\\operatorname*{arg\\,min}_{k\\neq{\\hat{k}(\\bm{x}_{0})}}\\frac{\\left|f^{\\prime}_{k}\\right|}{\\|\\bm{w}^{\\prime}_{k}\\|_{1}}, (13) 𝒓isubscript𝒓𝑖\\displaystyle\\bm{r}_{i} ←|fl^′|‖𝒘l^′‖1​sign​(𝒘l^′).←absentsubscriptsuperscript𝑓′^𝑙subscriptnormsubscriptsuperscript𝒘′^𝑙1signsubscriptsuperscript𝒘′^𝑙\\displaystyle\\leftarrow\\frac{|f^{\\prime}_{\\hat{l}}|}{\\|\\bm{w}^{\\prime}_{\\hat{l}}\\|_{1}}\\text{sign}(\\bm{w}^{\\prime}_{\\hat{l}}). (14) ", "title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks" }, { "id": "1511.04599_all_14", "text": " We now test our DeepFool algorithm on deep convolutional neural networks architectures applied to MNIST, CIFAR-10, and ImageNet image classification datasets. We consider the following deep neural network architectures: • MNIST: A two-layer fully connected network, and a two-layer LeNet convoluational neural network architecture . Both networks are trained with SGD with momentum using the MatConvNet package. • CIFAR-10: We trained a three-layer LeNet architecture, as well as a Network In Network (NIN) architecture . • ILSVRC 2012: We used CaffeNet and GoogLeNet pre-trained models. 
", "title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks" }, { "id": "1511.04599_all_15", "text": " In order to evaluate the robustness to adversarial perturbations of a classifier f𝑓f, we compute the average robustness ρ^adv​(f)subscript^𝜌adv𝑓\\hat{\\rho}_{\\text{adv}}(f), defined by ρ^adv​(f)=1|𝒟|​∑𝒙∈𝒟‖𝒓^​(𝒙)‖2‖𝒙‖2,subscript^𝜌adv𝑓1𝒟subscript𝒙𝒟subscriptnorm^𝒓𝒙2subscriptnorm𝒙2\\hat{\\rho}_{\\text{adv}}(f)=\\frac{1}{|\\mathscr{D}|}\\sum_{\\bm{x}\\in\\mathscr{D}}\\frac{\\|\\hat{\\bm{r}}(\\bm{x})\\|_{2}}{\\|\\bm{x}\\|_{2}}, (15) where 𝒓^​(𝒙)^𝒓𝒙\\hat{\\bm{r}}(\\bm{x}) is the estimated minimal perturbation obtained using DeepFool, and 𝒟𝒟\\mathscr{D} denotes the test set444For ILSVRC2012, we used the validation data.. ", "title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks" }, { "id": "1511.04599_all_16", "text": " We compare the proposed DeepFool approach to state-of-the-art techniques to compute adversarial perturbations in and . The method in solves a series of penalized optimization problems to find the minimal perturbation, whereas estimates the minimal perturbation by taking the sign of the gradient 𝒓^​(𝒙)=ϵ​sign​(∇𝒙J​(𝜽,𝒙,y)),^𝒓𝒙italic-ϵsignsubscript∇𝒙𝐽𝜽𝒙𝑦\\displaystyle\\hat{\\bm{r}}(\\bm{x})=\\epsilon\\,\\text{sign}\\left(\\nabla_{\\bm{x}}J(\\bm{\\theta},\\bm{x},y)\\right), with J𝐽J the cost used to train the neural network, 𝜽𝜽\\bm{\\theta} is the model parameters, and y𝑦y is the label of 𝒙𝒙\\bm{x}. The method is called fast gradient sign method. In practice, in the absence of general rules to choose the parameter ϵitalic-ϵ\\epsilon, we chose the smallest ϵitalic-ϵ\\epsilon such that 90%percent9090\\% of the data are misclassified after perturbation.555Using this method, we observed empirically that one cannot reach 100%percent100100\\% misclassification rate on some datasets. In fact, even by increasing ϵitalic-ϵ\\epsilon to be very large, this method can fail in misclassifying all samples. ", "title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks" }, { "id": "1511.04599_all_17", "text": " We report in Table 1 the accuracy and average robustness ρ^advsubscript^𝜌adv\\hat{\\rho}_{\\text{adv}} of each classifier computed using different methods. We also show the running time required for each method to compute one adversarial sample. ", "title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks" }, { "id": "1511.04599_all_18", "text": " It can be seen that DeepFool estimates smaller perturbations (hence closer to minimal perturbation defined in (1)) than the ones computed using the competitive approaches. For example, the average perturbation obtained using DeepFool is 555 times lower than the one estimated with . On the ILSVRC2012 challenge dataset, the average perturbation is one order of magnitude smaller compared to the fast gradient method. It should be noted moreover that the proposed approach also yields slightly smaller perturbation vectors than the method in . The proposed approach is hence more accurate in detecting directions that can potentially fool neural networks. As a result, DeepFool can be used as a valuable tool to accurately assess the robustness of classifiers. On the complexity aspect, the proposed approach is substantially faster than the standard method proposed in . 
In fact, while the approach involves a costly minimization of a series of objective functions, we observed empirically that DeepFool converges in a few iterations (i.e., less than 333) to a perturbation vector that fools the classifier. Hence, the proposed approach reaches a more accurate perturbation vector compared to state-of-the-art methods, while being computationally efficient. This makes it readily suitable to be used as a baseline method to estimate the robustness of very deep neural networks on large-scale datasets. In that context, we provide the first quantitative evaluation of the robustness of state-of-the-art classifiers on the large-scale ImageNet dataset. It can be seen that despite their very good test accuracy, these methods are extremely unstable to adversarial perturbations: a perturbation that is 100010001000 smaller in magnitude than the original image is sufficient to fool state-of-the-art deep neural networks. ", "title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks" }, { "id": "1511.04599_all_19", "text": " We illustrate in Figure 1 perturbed images generated by the fast gradient sign and DeepFool. It can be observed that the proposed method generates adversarial perturbations which are hardly perceptible, while the fast gradient sign method outputs a perturbation image with higher norm. ", "title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks" }, { "id": "1511.04599_all_20", "text": " It should be noted that, when perturbations are measured using the ℓ∞subscriptℓ\\ell_{\\infty} norm, the above conclusions remain unchanged: DeepFool yields adversarial perturbations that are smaller (hence closer to the optimum) compared to other methods for computing adversarial examples. Table 2 reports the ℓ∞subscriptℓ\\ell_{\\infty} robustness to adversarial perturbations measured by ρ^adv∞​(f)=1|𝒟|​∑𝒙∈𝒟‖𝒓^​(𝒙)‖∞‖𝒙‖∞superscriptsubscript^𝜌adv𝑓1𝒟subscript𝒙𝒟subscriptnorm^𝒓𝒙subscriptnorm𝒙\\hat{\\rho}_{\\text{adv}}^{\\infty}(f)=\\frac{1}{|\\mathscr{D}|}\\sum_{\\bm{x}\\in\\mathscr{D}}\\frac{\\|\\hat{\\bm{r}}(\\bm{x})\\|_{\\infty}}{\\|\\bm{x}\\|_{\\infty}}, where 𝒓^​(𝒙)^𝒓𝒙\\hat{\\bm{r}}(\\bm{x}) is computed respectively using DeepFool (with p=∞𝑝p=\\infty, see Section 3.3), and the Fast gradient sign method for MNIST and CIFAR-10 tasks. ", "title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks" }, { "id": "1511.04599_all_21", "text": " Fine-tuning using adversarial examples ", "title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks" }, { "id": "1511.04599_all_22", "text": " In this section, we fine-tune the networks of Table 1 on adversarial examples to build more robust classifiers for the MNIST and CIFAR-10 tasks. Specifically, for each network, we performed two experiments: (i) Fine-tuning the network on DeepFool’s adversarial examples, (ii) Fine-tuning the network on the fast gradient sign adversarial examples. We fine-tune the networks by performing 5 additional epochs, with a 50%percent5050\\% decreased learning rate only on the perturbed training set. For each experiment, the same training data was used through all 555 extra epochs. For the sake of completeness, we also performed 555 extra epochs on the original data. 
The evolution of ρ^advsubscript^𝜌adv\\hat{\\rho}_{\\text{adv}} for the different fine-tuning strategies is shown in Figures 6(a) to 6(d), where the robustness ρ^advsubscript^𝜌adv\\hat{\\rho}_{\\text{adv}} is estimated using DeepFool, since this is the most accurate method, as shown in Table 1. Observe that fine-tuning with DeepFool adversarial examples significantly increases the robustness of the networks to adversarial perturbations even after one extra epoch. For example, the robustness of the networks on MNIST is improved by 50% and NIN’s robustness is increased by about 40%. On the other hand, quite surprisingly, the method in can lead to a decreased robustness to adversarial perturbations of the network. We hypothesize that this behavior is due to the fact that perturbations estimated using the fast gradient sign method are much larger than minimal adversarial perturbations. Fine-tuning the network with overly perturbed images decreases the robustness of the networks to adversarial perturbations. To verify this hypothesis, we compare in Figure 7 the adversarial robustness of a network that is fine-tuned with the adversarial examples obtained using DeepFool, where norms of perturbations have been deliberately multiplied by α=1,2,3𝛼123\\alpha=1,2,3. Interestingly, we see that by magnifying the norms of the adversarial perturbations, the robustness of the fine-tuned network is decreased. This might explain why overly perturbed images decrease the robustness of MNIST networks: these perturbations can really change the class of the digits, hence fine-tuning based on these examples can lead to a drop of the robustness (for an illustration, see Figure 8). This lends credence to our hypothesis, and further shows the importance of designing accurate methods to compute minimal perturbations. ", "title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks" }, { "id": "1511.04599_all_23", "text": " Table 3 lists the accuracies of the fine-tuned networks. It can be seen that fine-tuning with DeepFool can improve the accuracy of the networks. Conversely, fine-tuning with the approach in has led to a decrease of the test accuracy in all our experiments. This confirms the explanation that the fast gradient sign method outputs overly perturbed images that lead to images that are unlikely to occur in the test data. Hence, it decreases the performance of the method as it acts as a regularizer that does not represent the distribution of the original data. This effect is analogous to geometric data augmentation schemes, where large transformations of the original samples have a counter-productive effect on generalization.666While the authors of reported an increased generalization performance on the MNIST task (from 0.94%percent0.940.94\\% to 0.84%percent0.840.84\\%) using adversarial regularization, it should be noted that the their experimental setup is significantly different as trained the network based on a modified cost function, while we performed straightforward fine-tuning. ", "title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks" }, { "id": "1511.04599_all_24", "text": " To emphasize the importance of a correct estimation of the minimal perturbation, we now show that using approximate methods can lead to wrong conclusions regarding the adversarial robustness of networks. We fine-tune the NIN classifier on the fast gradient sign adversarial examples. We follow the procedure described earlier but this time, we decreased the learning rate by 90%. 
We have evaluated the adversarial robustness of this network at different extra epochs using DeepFool and the fast gradient sign method. As one can see in Figure 9, the red plot exaggerates the effect of training on the adversarial examples. Moreover, it is not sensitive enough to demonstrate the loss of robustness at the first extra epoch. These observations confirm that using an accurate tool to measure the robustness of classifiers is crucial to derive conclusions about the robustness of networks. ", "title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks" }, { "id": "1511.04599_all_25", "text": " In this work, we proposed an algorithm, DeepFool, to compute adversarial examples that fool state-of-the-art classifiers. It is based on an iterative linearization of the classifier to generate minimal perturbations that are sufficient to change classification labels. We provided extensive experimental evidence on three datasets and eight classifiers, showing the superiority of the proposed method over state-of-the-art methods to compute adversarial perturbations, as well as the efficiency of the proposed approach. Due to its accurate estimation of the adversarial perturbations, the proposed DeepFool algorithm provides an efficient and accurate way to evaluate the robustness of classifiers and to enhance their performance by proper fine-tuning. The proposed approach can therefore be used as a reliable tool to accurately estimate the minimal perturbation vectors, and build more robust classifiers. ", "title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks" }, { "id": "1511.04599_all_26", "text": " This work has been partly supported by the Hasler Foundation, Switzerland, in the framework of the CORA project. ", "title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks" } ]
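The closed-form projection in Eqs. (8) and (9) of the DeepFool passages above is compact enough to illustrate directly. Below is a minimal NumPy sketch of that step for the affine multiclass case, including the (1 + η) overshoot with η = 0.02 mentioned in the text; the function name, the toy classifier, and the small numerical-stability constant are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of the multiclass DeepFool step for an affine classifier
# f(x) = W.T @ x + b, following Eqs. (8)-(9) and the (1 + eta) overshoot.
import numpy as np

def deepfool_affine(x0, W, b, eta=0.02):
    """Return the perturbation r pushing x0 across the nearest face of P."""
    f = W.T @ x0 + b                      # class scores, shape (c,)
    k_hat = int(np.argmax(f))             # current predicted label k^(x0)

    best_ratio, best_k = np.inf, None
    for k in range(len(f)):               # find the closest hyperplane l^(x0)
        if k == k_hat:
            continue
        w_diff = W[:, k] - W[:, k_hat]
        ratio = abs(f[k] - f[k_hat]) / (np.linalg.norm(w_diff) + 1e-12)  # Eq. (8)
        if ratio < best_ratio:
            best_ratio, best_k = ratio, k

    w_l = W[:, best_k] - W[:, k_hat]
    r = abs(f[best_k] - f[k_hat]) / (np.linalg.norm(w_l) ** 2 + 1e-12) * w_l  # Eq. (9)
    return (1.0 + eta) * r                # small overshoot to cross the boundary

# Toy usage: a random 3-class affine classifier in 2-D.
rng = np.random.default_rng(0)
W, b = rng.normal(size=(2, 3)), rng.normal(size=3)
x0 = rng.normal(size=2)
r = deepfool_affine(x0, W, b)
print("label before:", np.argmax(W.T @ x0 + b),
      "| label after:", np.argmax(W.T @ (x0 + r) + b),
      "| ||r||_2:", np.linalg.norm(r))
```

For general non-linear classifiers the passages replace W and b with the per-iteration linearization (gradients at the current iterate), iterating this step until the predicted label changes.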
Why are the gains of instruction-tuning higher for larger models?
The authors theorize that their approach likely works better for larger models because it depends on the inductive biases extracted from language models, but they offer no further explanation or detail to support this stance [43].
[ 43 ]
[ { "id": "2212.10560_all_0", "text": " The recent NLP literature has witnessed a tremendous amount of activity in building models that can follow natural language instructions (Mishra et al., 2022; Wei et al., 2022; Sanh et al., 2022; Wang et al., 2022; Ouyang et al., 2022; Chung et al., 2022, i.a.). These developments are powered by two key components: large pretrained language models (LM) and human-written instruction data (e.g., PromptSource (Bach et al., 2022) and Super-NaturalInstructions (Wang et al., 2022, SuperNI for short)). However, collecting such instruction data is costly and often suffers limited diversity given that most human generations tend to be popular NLP tasks, falling short of covering a true variety of tasks and different ways to describe them. Continuing to improve the quality and coverage of instruction-tuned models necessitates the development of alternative approaches for supervising the instruction tuning process. ", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" }, { "id": "2212.10560_all_1", "text": " In this work, we introduce Self-Instruct, a semi-automated process for instruction-tuning a pretrained LM using instructional signals from the model itself. The overall process is an iterative bootstrapping algorithm (see Figure 2), which starts off with a limited (e.g., 175 in our study) seed set of manually-written tasks that are used to guide the overall generation. In the first phase, the model is prompted to generate instructions for new tasks. This step leverages the existing collection of instructions to create more broad-coverage instructions that define (often new) tasks. Given the newly-generated set of instructions, the framework also creates input-output instances for them, which can be later used for supervising the instruction tuning. Finally, various heuristics are used to automatically filter low-quality or repeated instructions, before adding the remaining valid tasks to the task pool. This process can be repeated for many iterations until reaching a large number of tasks. ", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" }, { "id": "2212.10560_all_2", "text": " To evaluate Self-Instruct empirically, we run this framework on GPT3 Brown et al. (2020), which is a vanilla LM (§3). The iterative Self-Instruct process on this model leads to about 52k instructions, paired with about 82K instance inputs and target outputs. We observe that the resulting data provides a diverse range of creative tasks, as is demonstrated by examples in Figure 1. These generated tasks deviate from the distribution of typical NLP tasks, and also have fairly small overlap with the seed tasks (§3.2). On this resulting data, we build GPT3Self-InstSelf-Inst{}_{\\textsc{Self-Inst}} by finetuning GPT3 (i.e., the same model used for generating the instruction data). We evaluate GPT3Self-InstSelf-Inst{}_{\\textsc{Self-Inst}} in comparison to various other models on both typical NLP tasks included in SuperNI Wang et al. (2022), and a set of new instructions that are created for novel usage of instruction-following models (§4). The results indicate that GPT3Self-InstSelf-Inst{}_{\\textsc{Self-Inst}} outperforms GPT3 (the original model) by a large margin (+33.1%) and nearly matches the performance of InstructGPT001subscriptInstructGPT001\\text{InstructGPT}_{\\text{001}}. 
Moreover, our human evaluation on the newly-created instruction set shows that GPT3Self-InstSelf-Inst{}_{\\textsc{Self-Inst}} demonstrates a broad range of instruction following ability, outperforming models trained on other publicly available instruction datasets and leaving only a 5% gap behind InstructGPT001subscriptInstructGPT001\\text{InstructGPT}_{\\text{001}}. ", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" }, { "id": "2212.10560_all_3", "text": " In summary, our contributions are: (1) we introduce Self-Instruct, a method for inducing instruction following capabilities with minimal human-labeled data; (2) we demonstrate its effectiveness via extensive instruction-tuning experiments; and (3) we release a large synthetic dataset of 52K instructions and a set of manually-written novel tasks for building and evaluating future instruction-following models. ", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" }, { "id": "2212.10560_all_4", "text": " Annotating large-scale instruction data can be challenging for humans because it requires 1) creativity to come up with novel tasks and 2) expertise for writing the solutions to each task. Here, we detail our process for Self-Instruct, which refers to the pipeline of generating tasks with a vanilla pretrained language model itself, filtering the generated data, and then conducting instruction tuning with this generated data in order to align the LM to follow instructions better. This pipeline is depicted in Figure 2. ", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" }, { "id": "2212.10560_all_5", "text": " The instruction data we want to generate contains a set of instructions {It}subscript𝐼𝑡\\{I_{t}\\}, each of which defines a task t𝑡t in natural language. Task t𝑡t has nt≥1subscript𝑛𝑡1n_{t}\\geq 1 input-output instances {(Xt,i,Yt,i)}i=1ntsuperscriptsubscriptsubscript𝑋𝑡𝑖subscript𝑌𝑡𝑖𝑖1subscript𝑛𝑡\\{(X_{t,i},Y_{t,i})\\}_{i=1}^{n_{t}}. A model M𝑀M is expected to produce the output, given the task instruction and the corresponding input: M​(It,Xt,i)=Yt,i𝑀subscript𝐼𝑡subscript𝑋𝑡𝑖subscript𝑌𝑡𝑖M(I_{t},X_{t,i})=Y_{t,i}, for i∈{1,…,nt}𝑖1…subscript𝑛𝑡i\\in\\{1,\\ldots,n_{t}\\}. Note that the instruction and instance input does not have a strict boundary in many cases. For example, “write an essay about school safety” can be a valid instruction that we expect models to respond to directly, while it can also be formulated as “write an essay about the following topic” as the instruction, and “school safety” as an instance input. To encourage the diversity of the data format, we allow such instructions that do not require additional input (i.e., X𝑋X is empty). ", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" }, { "id": "2212.10560_all_6", "text": " Our pipeline for data generation consists of four steps: 1) generating task instructions, 2) determining if the instruction represents a classification task, 3) instance generation with either an input-first or output-first approach, and 4) filtering low-quality data. ", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" }, { "id": "2212.10560_all_7", "text": " At the first step, Self-Instruct generates new instructions from a small set of seed human-written instructions in a bootstrapping fashion. 
We initiate the task pool with 175 tasks (1 instruction and 1 instance for each task).333These tasks were newly written by the authors and their labmates at UW, without reference to existing datasets or the test set used in this work. We provide more details about these tasks and analyze their similarity to the test tasks in Appendix §A.1. For every step, we sample 8 task instructions from this pool as in-context examples. Of the 8 instructions, 6 are from the human-written tasks, and 2 are from the model-generated tasks in previous steps to promote diversity. The prompting template is shown in Table 5. ", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" }, { "id": "2212.10560_all_8", "text": " Because we need two different approaches for classification and non-classification tasks, we next identify whether the generated instruction represents a classification task or not.444More concretely, we regard tasks that have a small limited output label space as classification tasks. We prompt the LM in a few-shot way to determine this, using 12 classification instructions and 19 non-classification instructions from the seed tasks. The prompting template is shown in Table 6. ", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" }, { "id": "2212.10560_all_9", "text": " Given the instructions and their task type, we generate instances for each instruction independently. This is challenging because it requires the model to understand what the target task is, based on the instruction, figure out what additional input fields are needed and generate them, and finally complete the task by producing the output. We found that pretrained LMs can achieve this to a large extent when prompted with instruction-input-output in-context examples from other tasks. A natural way to do this is the Input-first Approach, where we can ask an LM to come up with the input fields first based on the instruction, and then produce the corresponding output. This generation order is similar to how models are used to respond to instruction and input, but here with in-context examples from other tasks. The prompting template is shown in Table 7. ", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" }, { "id": "2212.10560_all_10", "text": " However, we found that this approach can generate inputs biased toward one label, especially for classification tasks (e.g., for grammar error detection, it usually generates grammatical input). Therefore, we additionally propose an Output-first Approach for classification tasks, where we first generate the possible class labels, and then condition the input generation on each class label. The prompting template is shown in Table 8.555In this work, we use a fixed set of seed tasks for prompting the instance generation, and thus only generate a small number of instances per task in one round. Future work can use randomly sampled tasks to prompt the model to generate a larger number of instances in multiple rounds. We apply the output-first approach to the classification tasks identified in the former step, and the input-first approach to the remaining non-classification tasks. ", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" }, { "id": "2212.10560_all_11", "text": " To encourage diversity, a new instruction is added to the task pool only when its ROUGE-L similarity with any existing instruction is less than 0.7. 
We also exclude instructions that contain some specific keywords (e.g., image, picture, graph) that usually can not be processed by LMs. When generating new instances for each instruction, we filter out instances that are exactly the same or those with the same input but different outputs. Invalid generations are identified and filtered out based on heuristics (e.g., instruction is too long or too short, instance output is a repetition of the input). ", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" }, { "id": "2212.10560_all_12", "text": " After creating large-scale instruction data, we use it to finetune the original LM (i.e., Self-Instruct). To do this, we concatenate the instruction and instance input as a prompt and train the model to generate the instance output in a standard supervised way. To make the model robust to different formats, we use multiple templates to encode the instruction and instance input together. For example, the instruction can be prefixed with “Task:” or not, the input can be prefixed with “Input:” or not, “Output:” can be appended at the end of the prompt or not, and different numbers of break lines can be put in the middle, etc. ", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" }, { "id": "2212.10560_all_13", "text": " In this section, we apply our method for inducing instruction data to GPT3 as a case study. We use the largest GPT3 LM (“davinci” engine) accessed through the OpenAI API.666https://openai.com/api/ The parameters for making queries are described in Appendix A.2. Here we present an overview of the generated data. ", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" }, { "id": "2212.10560_all_14", "text": " Table 1 describes the basic statistics of the generated data. We generate a total of over 52K instructions and more than 82K instances corresponding to these instructions after filtering. ", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" }, { "id": "2212.10560_all_15", "text": " To study what types of instructions are generated and how diverse they are, we identify the verb-noun structure in the generated instructions. We use the Berkeley Neural Parser777https://parser.kitaev.io/ (Kitaev and Klein, 2018; Kitaev et al., 2019) to parse the instructions and then extract the verb that is closest to the root as well as its first direct noun object. 26,559 out of the 52,445 instructions contain such structure; other instructions usually contain more complex clauses (e.g., “Classify whether this tweet contains political content or not.”) or are framed as questions (e.g., “Which of these statements are true?”). We plot the top 20 most common root verbs and their top 4 direct noun objects in Figure 5, which account for 14% of the entire set. Overall, we see quite diverse intents and textual formats in these instructions. ", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" }, { "id": "2212.10560_all_16", "text": " We further study how the generated instructions differ from the seed instructions used to prompt the generation. For each generated instruction, we compute its highest ROUGE-L overlap with the 175 seed instructions. We plot the distribution of these ROUGE-L scores in Figure 5. The results indicate a decent number of new instructions were generated, which do not have much overlap with the seeds. 
We also demonstrate diversity in the length of the instructions, instance inputs, and instance outputs in Figure 5. ", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" }, { "id": "2212.10560_all_17", "text": " So far, we have shown the quantity and diversity of the generated data, but its quality remains uncertain. To investigate this, we randomly sample 200 instructions and randomly select 1 instance per instruction. We asked an expert annotator (author of this work) to label whether each instance is correct or not, in terms of the instruction, the instance input, and the instance output. Evaluation results in Table 2 show that most of the generated instructions are meaningful, while the generated instances may contain more noise (to a reasonable extent). However, we found that even though the generations may contain errors, most of them are still in the correct format or partially correct, which can provide useful guidance for training models to follow instructions. We listed a number of good examples and bad examples in Table 10 and 11, respectively. ", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" }, { "id": "2212.10560_all_18", "text": " We conduct experiments to measure and compare the performance of models under various instruction tuning setups. We first describe our models and other baselines, followed by our experiments. ", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" }, { "id": "2212.10560_all_19", "text": " Given the instruction-generated instruction data, we conduct instruction tuning with the GPT3 model itself (“davinci” engine). As described in §2.3, we use various templates to concatenate the instruction and input, and train the model to generate the output. This finetuning is done through the OpenAI finetuning API.888See OpenAI’s documentation on finetuning. We use the default hyper-parameters, except that we set the prompt loss weight to 0, and we train the model for 2 epochs. We refer the reader to Appendix A.3 for additional finetuning details. The resulting model is denoted by GPT3Self-InstSelf-Inst{}_{\\textsc{Self-Inst}}. ", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" }, { "id": "2212.10560_all_20", "text": " We evaluate T5-LM Lester et al. (2021); Raffel et al. (2020) and GPT3 Brown et al. (2020) as the vanilla LM baselines (only pretraining, no additional finetuning). These baselines will indicate the extent to which off-the-shelf LMs are capable of following instructions naturally immediately after pretraining. ", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" }, { "id": "2212.10560_all_21", "text": " T00 and Tk𝑘k-Instruct are two instruction-tuned models proposed in Sanh et al. (2022) and Wang et al. (2022), respectively, and are demonstrated to be able to follow instructions for many NLP tasks. Both of these models are finetuned from the T5 Raffel et al. (2020) checkpoints and are publicly available.999 T00 is available at here and Tk𝑘k-Instruct is here. For both of these models, we use their largest version with 11B parameters. ", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" }, { "id": "2212.10560_all_22", "text": " We evaluate InstructGPTsubscriptInstructGPT\\text{InstructGPT}_{\\text{}} Ouyang et al. 
(2022), which is developed by OpenAI based on GPT3 to follow human instructions better and has been found by the community to have impressive zero-shot abilities. There are various generations of these models, where newer ones use more expansive data or algorithmic novelties.101010 See OpenAI’s documentation on their models. For our SuperNI experiments in §4.3, we only compare with their text-davinci-001 engine, because their newer engines are trained with the latest user data and are likely to have already seen the SuperNI test set. For our human evaluation on newly written instructions, we include their 001, 002 and 003 engines for completeness. ", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" }, { "id": "2212.10560_all_23", "text": " Additionally, to compare Self-Instruct training with other publicly available instruction tuning data, we further finetune GPT3 model with data from PromptSource and SuperNI, which are used to train the T00 and Tk𝑘k-Instruct models. We call them T00 training and SuperNI training for short, respectively. To save the training budget, we sampled 50K instances (but covering all their instructions) for each dataset, which has a comparable size to the instruction data we generated. Based on the findings from Wang et al. (2022) and our early experiments, reducing the number of instances per training task does not degrade the model’s generalization performance to unseen tasks. ", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" }, { "id": "2212.10560_all_24", "text": " We first evaluate the models’ ability to follow instructions on typical NLP tasks in a zero-shot fashion. We use the evaluation set of SuperNI  Wang et al. (2022), which consists of 119 tasks with 100 instances in each task. In this work, we mainly focus on the zero-shot setup, i.e., the model is prompted with the definition of the tasks only, without in-context demonstration examples. For all our requests to the GPT3 variants, we use the deterministic generation mode (temperature as 0 and no nucleus sampling) without specific stop sequences. ", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" }, { "id": "2212.10560_all_25", "text": " We make the following observations from the results in Table 3. Self-Instruct boosts the instruction-following ability of GPT3 by a large margin. The vanilla GPT3 model basically cannot follow human instructions at all. Upon manual analysis, we find that it usually generates irrelevant and repetitive text, and does not know when to stop generation. Compared with other models that are not specifically trained for SuperNI, GPT3Self-InstSelf-Inst{}_{\\textsc{Self-Inst}} achieves better performance than T00 or the GPT3 finetuned on the T00 training set, which takes tremendous human labeling efforts. Notably, GPT3Self-InstSelf-Inst{}_{\\textsc{Self-Inst}} also nearly matches the performance of InstructGPT001subscriptInstructGPT001\\text{InstructGPT}_{\\text{001}}, which is trained with private user data and human-annotated labels. ", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" }, { "id": "2212.10560_all_26", "text": " Models trained on the SuperNI training set still achieve better performance on its evaluation set, which we attribute to the similar instruction style and formatting. However, we show that Self-Instruct still brings in additional gains when combined with the SuperNI training set, proving its value as complementary data. 
", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" }, { "id": "2212.10560_all_27", "text": " Despite the comprehensiveness of SuperNI in collecting existing NLP tasks, most of these NLP tasks were proposed for research purposes and skewed toward classification. To better access the practical value of instruction-following models, a subset of the authors curate a new set of instructions motivated by user-oriented applications. We first brainstorm various domains where large LMs may be useful (e.g., email writing, social media, productivity tools, entertainment, programming), then craft instructions related to each domain along with an input-output instance (again, input is optional). We aim to diversify the styles and formats of these tasks (e.g., instructions may be long or short; input/output may take the form of bullet points, tables, codes, equations, etc.). In total, we create 252 instructions with 1 instance per instruction. We believe it can serve as a testbed for evaluating how instruction-based models handle diverse and unfamiliar instructions. subsection B.3 presents a small portion of them. The entire set is available in our GitHub repository. We analyze the overlap between this set set and the seed instructions in §A.1. ", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" }, { "id": "2212.10560_all_28", "text": " Evaluating models’ performance on this evaluation set of diverse tasks is extremely challenging because different tasks require different expertise. Indeed, many of these tasks cannot be measured by automatic metrics or even be judged by normal crowdworkers (e.g., writing a program, or converting first-order logic into natural language). To get a more faithful evaluation, we asked the authors of the instructions to judge model predictions. Details on how we set up this human evaluation are described in Appendix B. The evaluators were asked to rate the output based on whether it accurately and effectively completes the task. We implemented a four-level rating system for categorizing the quality of the models’ outputs: • Rating-A: The response is valid and satisfying. • Rating-B: The response is acceptable but has minor errors or imperfections. • Rating-C: The response is relevant and responds to the instruction, but it has significant errors in the content. For example, GPT3 might generate a valid output first, but continue to generate other irrelevant things. • Rating-D: The response is irrelevant or completely invalid. ", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" }, { "id": "2212.10560_all_29", "text": " Figure 6 shows the performance of GPT3 model and its instruction-tuned counterparts on this newly written instruction set (w. inter-rater agreement κ=0.57𝜅0.57\\kappa=0.57 on the 4-class categorical scale, see Appendix B for details). As anticipated, the vanilla GPT3 LM is largely unable to respond to instructions, and all instruction-tuned models demonstrate comparatively higher performance. Nonetheless, GPT3Self-InstSelf-Inst{}_{\\textsc{Self-Inst}} (i.e., GPT3 model finetuned with Self-Instruct) outperforms those counterparts trained on T00 or SuperNI data by a large margin, demonstrating the value of the generated data despite the noise. 
Compared with InstructGPT001subscriptInstructGPT001\\text{InstructGPT}_{\\text{001}}, GPT3Self-InstSelf-Inst{}_{\\textsc{Self-Inst}} is quite close in performance—if we count acceptable response with minor imperfections (Rating-B) as valid, GPT3Self-InstSelf-Inst{}_{\\textsc{Self-Inst}} is only 5% behind InstructGPT001subscriptInstructGPT001\\text{InstructGPT}_{\\text{001}}. Lastly, our evaluation confirms the impressive instruction-following ability of InstructGPT002subscriptInstructGPT002\\text{InstructGPT}_{\\text{002}} and InstructGPT003subscriptInstructGPT003\\text{InstructGPT}_{\\text{003}}. Although there are many factors behind this success, we conjecture that future work can largely benefit from improving the quality of our generated data by using human annotators or training a reward model to select better generations, similar to the algorithm used by Ouyang et al. (2022). ", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" }, { "id": "2212.10560_all_30", "text": " Self-Instruct provides a way to grow instruction data at a low cost with almost no human labeling; could more of this generated data lead to better instruction-following ability? We conduct an analysis of the size of generated data by subsampling different numbers of instructions from the generated dataset, finetuning GPT3 on the sampled subsets, and evaluating how the resulting models perform on the 252 user-oriented instruction set. We conduct the same human evaluation as in §4.4. Figure 7 presents the performance of GPT3Self-InstSelf-Inst{}_{\\textsc{Self-Inst}} models finetuned with different sizes of generated data. Overall, we see consistent improvement as we grow the data size. However, this improvement almost plateaus after 16K. This is in-line with the data scaling experiments in Wang et al. (2022, Fig. 5). Interestingly, when evaluating on SuperNI we found the model’s performance gain plateaus earlier at around hundreds of instructions. This may be due to the fact that the new generated data is distinct from typical NLP tasks in SuperNI, indicating that future research may benefit from using a combination of different instruction data for better performance on various types of tasks. ", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" }, { "id": "2212.10560_all_31", "text": " Another direction to improve the model’s performance is to take our generated data and get better supervision (with less noise). We explore this idea by using InstructGPT003subscriptInstructGPT003\\text{InstructGPT}_{\\text{003}} (the best available general-purpose model) to regenerate the output field of all our instances given the instruction and input. We then use this improved version of our data to finetune GPT3. This can be regarded as a distillation of InstructGPT003subscriptInstructGPT003\\text{InstructGPT}_{\\text{003}} with our data. As is shown in Figure 7, the resulting model outperforms the counterpart trained with the original data by 10%, which suggests big room for future work on using our generation pipeline to get initial data and then improving the data quality with human experts or distillation from better models. 
", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" }, { "id": "2212.10560_all_32", "text": " A series of works have found evidence that vanilla LMs can be effective at following general language instructions if tuned with annotated “instructional” data—datasets containing language instructional commands and their desired outcomes based on human annotation (Weller et al., 2020; Mishra et al., 2022; Wei et al., 2022; Sanh et al., 2022, i.a.). Additionally, they show a direct correlation between the size and diversity of the “instructional” data and the generalizability of resulting models to unseen tasks (Wang et al., 2022; Chung et al., 2022). However, since these developments largely focus on existing NLP tasks and depend on human-annotated instructions, this poses a bottleneck for progress toward more generalizable models (e.g., see Fig. 5a in Wang et al., 2022). Our work aims to move beyond classical NLP tasks and tackle the challenges of creating diverse instruction data by employing pretrained LMs. InstructGPTsubscriptInstructGPT\\text{InstructGPT}_{\\text{}} Ouyang et al. (2022) shares a similar goal as ours in building more general-purpose LMs, and has demonstrated remarkable performance in following diverse user instructions. However, as a commercial system, their construction process still remains quite opaque. In particular, the role of data has remained understudied due to limited transparency and the private user data they used in their study. Addressing such challenges necessitates the creation of a large-scale, public dataset covering a broad range of tasks. ", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" }, { "id": "2212.10560_all_33", "text": " A variety of works have proposed using LMs for data generation Schick and Schütze (2021); Wang et al. (2021); Liu et al. (2022); Meng et al. (2023) or augmentation Feng et al. (2021); Yang et al. (2020); Mekala et al. (2022). Our work differs from this line in that it is not specific to a particular task (say, QA or NLI). In contrast, a distinct motivation for Self-Instruct is to bootstrap new task definitions that may not have been defined before by NLP practitioners (though potentially still important for real users). In parallel with our work, Honovich et al. (2022a) also propose to generate large-scale instruction data (so-called Unnatural Instructions) with GPT3 models. The major differences are that 1) they use tasks in SuperNI (Wang et al., 2022) as their seed tasks, resulting in a different distribution of generated tasks; 2) they employ InstructGPT002subscriptInstructGPT002\\text{InstructGPT}_{\\text{002}} for generating the data, in which sense they are distilling knowledge from an already instruction-tuned model, while we solely rely on the vanilla LM; 3) the detailed generation pipeline and templates are different. Nevertheless, we believe that both efforts in expanding instruction data are complementary, and the community will benefit from these diverse datasets. ", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" }, { "id": "2212.10560_all_34", "text": " A series of recent works Zhou et al. (2022b); Ye et al. (2022); Singh et al. (2022); Honovich et al. (2022b) generate instructions of a task given a few examples. While Self-Instruct also involves instruction generation, a major difference in our case is it is task-agnostic; we generate new tasks (instructions along with instances) from scratch. 
", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" }, { "id": "2212.10560_all_35", "text": " A typical self-training framework  He et al. (2019); Xie et al. (2020); Du et al. (2021); Amini et al. (2022); Huang et al. (2022) uses trained models to assign labels to unlabeled data and then leverages the newly labeled data to improve the model. In a similar line, Zhou et al. (2022a) use multiple prompts to specify a single task and propose to regularize via prompt consistency, encouraging consistent predictions over the prompts. This allows either finetuning the model with extra unlabeled training data, or direct application at inference time. While Self-Instruct has similarities with the self-training literature, most self-training methods assume a specific target task as well as unlabeled examples under it; in contrast, Self-Instruct produces a variety of tasks from scratch. ", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" }, { "id": "2212.10560_all_36", "text": " Knowledge distillation Hinton et al. (2015); Sanh et al. (2019); West et al. (2021); Magister et al. (2022) often involves the transfer of knowledge from larger models to smaller ones. Self-Instruct can also be viewed as a form of “knowledge distillation\", however, it differs from this line in the following ways: (1) the source and target of distillation are the same, i.e., a model’s knowledge is distilled to itself; (2) the content of distillation is in the form of an instruction task (i.e., instructions that define a task, and a set of examples that instantiate it). ", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" }, { "id": "2212.10560_all_37", "text": " A series of recent works use language models to bootstrap some inferences using specialized methods. NPPrompt Zhao et al. (2022) provides a method to generate predictions for semantic labels without any finetuning. It uses a model’s own embeddings to automatically find words relevant to the label of the data sample and hence reduces the dependency on manual mapping from model prediction to label (verbalizers). STAR Zelikman et al. (2022) iteratively leverages a small number of rationale examples and a large dataset without rationales, to bootstrap a model’s ability to perform reasoning. Self-Correction Welleck et al. (2023) decouples an imperfect base generator (model) from a separate corrector that learns to iteratively correct imperfect generations and demonstrates improvement over the base generator. Our work instead focuses on bootstrapping new tasks in the instruction paradigm. ", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" }, { "id": "2212.10560_all_38", "text": " Instruction-following models have also been of interest in the multi-modal learning literature Fried et al. (2018); Shridhar et al. (2020); Min et al. (2022); Weir et al. (2022). Self-Instruct, as a general approach to expanding data, can potentially also be helpful in those settings, which we leave to future work. ", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" }, { "id": "2212.10560_all_39", "text": " We introduce Self-Instruct, a method to improve the instruction-following ability of LMs via their own generation of instruction data. 
On experimenting with vanilla GPT3, we automatically construct a large-scale dataset of 52K instructions for diverse tasks, and finetuning GPT3 on this data leads to a 33% absolute improvement on SuperNI over the original GPT3. Furthermore, we curate a set of expert-written instructions for novel tasks. Human evaluation on this set shows that tuning GPT3 with Self-Instruct outperforms using existing public instruction datasets by a large margin and performs closely to InstructGPT001subscriptInstructGPT001\\text{InstructGPT}_{\\text{001}}. We hope Self-Instruct can serve as the first step to align pretrained LMs to follow human instructions, and future work can build on top of this data to improve instruction-following models. ", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" }, { "id": "2212.10560_all_40", "text": " Beyond the immediate focus of this paper, we believe that Self-Instruct may help bring more transparency to what happens “behind the scenes” of widely-used instruction-tuned models like InstructGPTsubscriptInstructGPT\\text{InstructGPT}_{\\text{}} or ChatGPT. Unfortunately, such industrial models remain behind API walls as their datasets are not released, and hence there is little understanding of their construction and why they demonstrate impressive capabilities. The burden now falls on academia to better understand the source of success in these models and strive for better—and more open—models. We believe our findings in this paper demonstrate the importance of diverse instruction data, and our large synthetic dataset can be the first step toward higher-quality data for building better instruction-following models. At this writing, the central idea of this paper has been adopted in several follow-up works for such endeavors (Taori et al., 2023; Xu et al., 2023; Sun et al., 2023, i.a.). ", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" }, { "id": "2212.10560_all_41", "text": " Here, we discuss some limitations of this work to inspire future research in this direction. ", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" }, { "id": "2212.10560_all_42", "text": " Self-Instruct depends on LMs, and it will inherit all the limitations that carry over with LMs. As recent studies have shown Razeghi et al. (2022); Kandpal et al. (2022), tail phenomena pose a serious challenge to the success of LMs. In other words, LMs’ largest gains correspond to the frequent uses of languages (head of the language use distribution), and there might be minimal gains in the low-frequency contexts. Similarly, in the context of this work, it would not be surprising if the majority of the gains by Self-Instruct are skewed toward tasks or instructions that present more frequently in the pretraining corpus. As a consequence, the approach might show brittleness with respect to uncommon and creative instructions. ", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" }, { "id": "2212.10560_all_43", "text": " Because of Self-Instruct’s dependence on the inductive biases extracted from LMs, it might work best for larger models. If true, this may create barriers to access for those who may not have large computing resources. We hope future studies will carefully study the gains as a function of model size or various other parameters. 
It is worthwhile to note that instruction-tuning with human annotation also suffers from a similar limitation: gains of instruction-tuning are higher for larger models Wei et al. (2022). ", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" }, { "id": "2212.10560_all_44", "text": " A point of concern for the authors is the unintended consequences of this iterative algorithm, such as the amplification of problematic social biases (stereotypes or slurs about gender, race, etc.). Relatedly, one observed challenge in this process is the algorithm’s difficulty in producing balanced labels, which reflected models’ prior biases. We hope future work will lead to better understanding of the pros and cons of the approach. ", "title": "Self-Instruct: Aligning Language Model with Self Generated Instructions" } ]
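The diversity filter described in the Self-Instruct passages above (keep a generated instruction only if its ROUGE-L similarity to every instruction already in the pool is below 0.7, and drop instructions containing keywords such as image, picture, or graph) can be sketched as follows. This is a hedged illustration: the word-level LCS-based ROUGE-L, the keyword list beyond the three examples given, and the function names are assumptions rather than the authors' implementation.

```python
# Sketch of the keyword and ROUGE-L < 0.7 filters used when growing the task pool.
BLOCKED_KEYWORDS = {"image", "images", "picture", "pictures", "graph", "graphs"}

def lcs_length(a, b):
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ta in enumerate(a, 1):
        for j, tb in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if ta == tb else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def rouge_l(a, b):
    """Word-level ROUGE-L F-measure between two strings."""
    ta, tb = a.lower().split(), b.lower().split()
    if not ta or not tb:
        return 0.0
    lcs = lcs_length(ta, tb)
    prec, rec = lcs / len(tb), lcs / len(ta)
    return 0.0 if prec + rec == 0 else 2 * prec * rec / (prec + rec)

def keep_instruction(candidate, pool, threshold=0.7):
    """Keep a candidate only if it passes the keyword and diversity filters."""
    if any(kw in candidate.lower().split() for kw in BLOCKED_KEYWORDS):
        return False
    return all(rouge_l(candidate, existing) < threshold for existing in pool)

# Toy usage with a tiny seed pool.
pool = ["Write an essay about school safety.", "Translate the sentence into French."]
for cand in ["Write an essay about climate change.", "Describe the picture in detail."]:
    if keep_instruction(cand, pool):
        pool.append(cand)
print(pool)  # the "picture" instruction is rejected by the keyword filter
```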
The authors claim that brain MRI scans are often anisotropic; is that true?
The authors state that most of the sequences within their TBI dataset are anisotropic [47].
[ 47 ]
[ { "id": "1603.05959_all_0", "text": " Segmentation and the subsequent quantitative assessment of lesions in medical images provide valuable information for the analysis of neuropathologies and are important for planning of treatment strategies, monitoring of disease progression and prediction of patient outcome. For a better understanding of the pathophysiology of diseases, quantitative imaging can reveal clues about the disease characteristics and effects on particular anatomical structures. For example, the associations of different lesion types, their spatial distribution and extent with acute and chronic sequelae after traumatic brain injury (TBI) are still poorly understood (Maas et al. (2015)). However, there is growing evidence that quantification of lesion burden may add insight into the functional outcome of patients (Ding et al. (2008); Moen et al. (2012)). Additionally, exact locations of injuries relate to particular deficits depending on the brain structure that is affected (Lehtonen et al. (2005); Warner et al. (2010); Sharp et al. (2011)). This is in line with estimates that functional deficits caused by stroke are associated with the extent of damage to particular parts of the brain (Carey et al. (2013)). Lesion burden is commonly quantified by means of volume and number of lesions, biomarkers that have been shown to be related to cognitive deficits. For example, volume of white matter lesions (WML) correlates with cognitive decline and increased risk of dementia (Ikram et al. (2010)). In clinical research on multiple sclerosis (MS), lesion count and volume are used to analyse disease progression and effectiveness of pharmaceutical treatment (Rovira and León (2008); Kappos et al. (2007)). Finally, accurate delineation of the pathology is important in the case of brain tumors, where estimation of the relative volume of a tumor’s sub-components is required for planning radiotherapy and treatment follow-up (Wen et al. (2010)). ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_1", "text": " The quantitative analysis of lesions requires accurate lesion segmentation in multi-modal, three-dimensional images which is a challenging task for a number of reasons. The heterogeneous appearance of lesions including the large variability in location, size, shape and frequency make it difficult to devise effective segmentation rules. It is thus highly non-trivial to delineate contusions, edema and haemorrhages in TBI (Irimia et al. (2012)), or sub-components of brain tumors such as proliferating cells and necrotic core (Menze et al. (2015)). The arguably most accurate segmentation results can be obtained through manual delineation by a human expert which is tedious, expensive, time-consuming, impractical in larger studies, and introduces inter-observer variability. Additionally, for deciding whether a particular region is part of a lesion multiple image sequences with varying contrasts need to be considered, and the level of expert knowledge and experience are important factors that impact segmentation accuracy. Hence, in clinical routine often only qualitative, visual inspection, or at best crude measures like approximate lesion volume and number of lesions are used (Yuh et al. (2012); Wen et al. (2010)). 
In order to capture and better understand the complexity of brain pathologies it is important to conduct large studies with many subjects to gain the statistical power for drawing conclusions across a whole patient population. The development of accurate, automatic segmentation algorithms has therefore become a major research focus in medical image computing with the potential to offer objective, reproducible, and scalable approaches to quantitative assessment of brain lesions. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_2", "text": " Figure 1 illustrates some of the challenges that arise when devising a computational approach for the task of automatic lesion segmentation. The figure summarizes statistics and shows examples of brain lesions in the case of TBI, but is representative of other pathologies such as brain tumors and ischemic stroke. Lesions can occur at multiple sites, with varying shapes and sizes, and their image intensity profiles largely overlap with non-affected, healthy parts of the brain or lesions which are not in the focus of interest. For example, stroke and MS lesions have a similar hyper-intense appearance in FLAIR sequences as other WMLs (Mitra et al. (2014); Schmidt et al. (2012)). It is generally difficult to derive statistical prior information about lesion shape and appearance. On the other hand, in some applications there is an expectation on the spatial configuration of segmentation labels, for example there is a hierarchical layout of sub-components in brain tumors. Ideally, a computational approach is able to adjust itself to application specific characteristics by learning from a set of a few example images. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_3", "text": " A multitude of automatic lesion segmentation methods have been proposed over the last decade, and several main categories of approaches can be identified. One group of methods poses the lesion segmentation task as an abnormality detection problem, for example by employing image registration. The early work of Prastawa et al. (2004) and more recent ones by Schmidt et al. (2012) and Doyle et al. (2013) align the pathological scan to a healthy atlas and lesions are detected based on deviations in tissue appearance between the patient and the atlas image. Lesions, however, may cause large structural deformations that may lead to incorrect segmentation due to incorrect registration. Gooya et al. (2011); Parisot et al. (2012) alleviate this problem by jointly solving the segmentation and registration tasks. Liu et al. (2014) showed that registration together with a low-rank decomposition gives as a by-product the abnormal structures in the sparse components, although, this may not be precise enough for detection of small lesions. Abnormality detection has also been proposed within image synthesis works. Representative approaches are those of Weiss et al. (2013) using dictionary learning and Ye et al. (2013) using a patch-based approach. The idea is to synthesize pseudo-healthy images that when compared to the patient scan allow to highlight abnormal regions. In this context, Cardoso et al. (2015) present a generative model for image synthesis that yields a probabilistic segmentation of abnormalities. Another unsupervised technique is proposed by Erihov et al. 
(2015), a saliency-based method that exploits brain asymmetry in pathological cases. A common advantage of the above methods is that they do not require a training dataset with corresponding manual annotations. In general, these approaches are more suitable for detecting lesions rather than accurately segmenting them. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_4", "text": " Some of the most successful, supervised segmentation methods for brain lesions are based on voxel-wise classifiers, such as Random Forests. Representative work is that of Geremia et al. (2010) on MS lesions, employing intensity features to capture the appearance of the region around each voxel. Zikic et al. (2012) combine this with a generative Gaussian Mixture Model (GMM) to obtain tissue-specific probabilistic priors (Van Leemput et al. (1999)). This framework was adopted in multiple works, with representative pipelines for brain tumors by Tustison et al. (2013) and TBI by Rao et al. (2014). Both works incorporate morphological and contextual features to better capture the heterogeneity of lesions. Rao et al. (2014) also incorporate brain structure segmentation results obtained from a multi-atlas label propagation approach (Ledig et al. (2015)) to provide strong tissue-class priors to the Random Forests. Tustison et al. (2013) additionally use a Markov Random Field (MRF) to incorporate spatial regularization. MRFs are commonly used to encourage spatial continuity of the segmentation (Schmidt et al. (2012); Mitra et al. (2014)). Although those methods have been very successful, it appears that their modeling capabilities still have significant limitations. This is confirmed by the results of the most recent challenges 111links: http://braintumorsegmentation.org/, www.isles-challenge.org, and also by our own experience and experimentation with such approaches. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_5", "text": " At the same time, deep learning techniques have emerged as a powerful alternative for supervised learning with great model capacity and the ability to learn highly discriminative features for the task at hand. These features often outperform hand-crafted and pre-defined feature sets. In particular, Convolutional Neural Networks (CNNs) (LeCun et al. (1998); Krizhevsky et al. (2012)) have been applied with promising results on a variety of biomedical imaging problems. Ciresan et al. (2012) presented the first GPU implementation of a two-dimensional CNN for the segmentation of neural membranes. From the CNN based work that followed, related to our approach are the methods of Zikic et al. (2014); Havaei et al. (2015); Pereira et al. (2015), with the latter being the best performing automatic approach in the BRATS 2015 challenge (Menze et al. (2015)). These methods are based on 2D CNNs, which have been used extensively in computer vision applications on natural images. Here, the segmentation of a 3D brain scan is achieved by processing each 2D slice independently, which is arguably a non-optimal use of the volumetric medical image data. Despite the simplicity in the architecture, the promising results obtained by these methods indicate the potential of CNNs. 
", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_6", "text": " Fully 3D CNNs come with an increased number of parameters and significant memory and computational requirements. Previous work discusses problems and apparent limitations when employing a 3D CNN on medical imaging data (Prasoon et al. (2013); Li et al. (2014); Roth et al. (2014)). To incorporate 3D contextual information, multiple works used 2D CNNs on three orthogonal 2D patches (Prasoon et al. (2013); Roth et al. (2014); Lyksborg et al. (2015)). In their work for structural brain segmentation, Brebisson and Montana (2015) extracted large 2D patches from multiple scales of the image and combined them with small single-scale 3D patches, in order to avoid the memory requirements of fully 3D networks. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_7", "text": " One of the reasons that discouraged the use of 3D CNNs is the slow inference due to the computationally expensive 3D convolutions. In contrast to the 2D/3D hybrid variants (Roth et al. (2014); Brebisson and Montana (2015)), 3D CNNs can fully exploit dense-inference (LeCun et al. (1998); Sermanet et al. (2014)), a technique that greatly decreases inference times and which we will further discuss in section 2.1. By employing dense-inference with 3D CNNs, Brosch et al. (2015) and Urban et al. (2014) reported computation times of a few seconds and approximately a minute respectively for the processing of a single brain scan. Even though the size of their developed networks was limited, a factor that is directly related to a network’s representational power, their results on MS and brain tumor segmentation respectively were very promising. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_8", "text": " Performance of CNNs is significantly influenced by the strategy for extracting training samples. A commonly adopted approach is training on image patches that are equally sampled from each class. This, however, biases the classifier towards rare classes and may result in over-segmentation. To counter this, Cireşan et al. (2013) proposes to train a second CNN on samples with a class distribution close to the real one, but oversample pixels that were incorrectly classified in the first stage. A secondary training stage was also suggested by Havaei et al. (2015), who retrain the classification layer on patches extracted uniformly from the image. In practice, two stage training schemes can be prone to overfitting and sensitive to the state of the first classifier. Alternatively, dense training (Long et al. (2015)) has been used to train a network on multiple or all voxels of a single image per optimisation step (Urban et al. (2014); Brosch et al. (2015); Ronneberger et al. (2015)). This can introduce severe class imbalance, similarly to uniform sampling. Weighted cost functions have been proposed in the two latter works to alleviate this problem. Brosch et al. (2015) manually adjusted the sensitivity of the network, but the method can become difficult to calibrate for multi-class problems. Ronneberger et al. (2015) first balance the cost from each class, which has an effect similar to equal sampling, and further adjust it for the specific task by estimating the difficulty of segmenting each pixel. 
", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_9", "text": " We present a fully automatic approach for lesion segmentation in multi-modal brain MRI based on an 11-layers deep, multi-scale, 3D CNN with the following main contributions: ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_10", "text": " 1. We propose an efficient hybrid training scheme, utilizing dense training (Long et al. (2015)) on sampled image segments, and analyze its behaviour in adapting to class imbalance of the segmentation problem at hand. 2. We analyze in depth the development of deeper, thus more discriminative, yet computationally efficient 3D CNNs. We exploit the utilization of small kernels, a design approach previously found beneficial in 2D networks (Simonyan and Zisserman (2014)) that impacts 3D CNNs even more, and present adopted solutions that enable training deeper networks. 3. We employ parallel convolutional pathways for multi-scale processing, a solution to efficiently incorporate both local and contextual information which greatly improves segmentation results. 4. We demonstrate the generalization capabilities of our system, which without significant modifications outperforms the state-of-the-art on a variety of challenging segmentation tasks, with top ranking results in two MICCAI challenges, ISLES and BRATS. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_11", "text": " Furthermore, a detailed analysis of the network reveals valuable insights into the powerful black box of deep learning with CNNs. For example, we have found that our network is capable of learning very complex, high level features that separate gray matter (GM), cerebrospinal fluid (CSF) and other anatomical structures to identify the image regions corresponding to lesions. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_12", "text": " Additionally, we have extended the fully-connected Conditional Random Field (CRF) model by Krähenbühl and Koltun (2011) to 3D which we use for final post-processing of the CNN’s soft segmentation maps. This CRF overcomes limitations of previous models as it can handle arbitrarily large neighborhoods while preserving fast inference times. To the best of our knowledge, this is the first use of a fully connected CRF on medical data. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_13", "text": " To facilitate further research and encourage other researchers to build upon our results, the source code of our lesion segmentation method including the CNN and the 3D fully connected CRF is made publicly available on https://biomedia.doc.ic.ac.uk/software/deepmedic/. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_14", "text": " Our proposed lesion segmentation method consists of two main components, a 3D CNN that produces highly accurate, soft segmentation maps, and a fully connected 3D CRF that imposes regularization constraints on the CNN output and produces the final hard segmentation labels. The main contributions of our work are within the CNN component which we describe first in the following. 
", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_15", "text": " CNNs produce estimates for the voxel-wise segmentation labels by classifying each voxel in an image independently taking the neighborhood, i.e. local and contextual image information, into account. This is achieved by sequential convolutions of the input with multiple filters at the cascaded layers of the network. Each layer l∈(1,L)𝑙1𝐿l\\in(1,L) consists of Clsubscript𝐶𝑙C_{l} feature maps (FMs), also referred to as channels. Every FM is a group of neurons that detects a particular pattern, i.e. a feature, in the channels of the previous layer. The pattern is defined by the kernel weights associated with the FM. If the neurons of the m𝑚m-th FM in the l𝑙l-th layer are arranged in a 3D grid, their activations constitute the image 𝐲lm=f​(∑n=1Cl−1𝐤lm,n⋆𝐲l−1n+blm)subscriptsuperscript𝐲𝑚𝑙𝑓superscriptsubscript𝑛1subscript𝐶𝑙1⋆subscriptsuperscript𝐤𝑚𝑛𝑙subscriptsuperscript𝐲𝑛𝑙1subscriptsuperscript𝑏𝑚𝑙\\mathbf{y}^{m}_{l}=f(\\sum_{n=1}^{C_{l-1}}{\\mathbf{k}^{m,n}_{l}\\star\\mathbf{y}^{n}_{l-1}}+b^{m}_{l}). This is the result of convolving each of the previous layer’s channels with a 3-dimensional kernel 𝐤lm,nsubscriptsuperscript𝐤𝑚𝑛𝑙\\mathbf{k}^{m,n}_{l}, adding a learned bias blmsubscriptsuperscript𝑏𝑚𝑙b^{m}_{l} and applying a non-linearity f𝑓f. Each kernel is a matrix of learned hidden weights 𝐖lm,nsubscriptsuperscript𝐖𝑚𝑛𝑙\\mathbf{W}^{m,n}_{l}. The images 𝐲0nsubscriptsuperscript𝐲𝑛0\\mathbf{y}^{n}_{0}, input to the first layer, correspond to the channels of the original input image, for instance a multi-sequence 3D MRI scan of the brain. The concatenation of the kernels 𝐤l=(𝐤lm,1,…,𝐤lm,Cl−1)subscript𝐤𝑙subscriptsuperscript𝐤𝑚1𝑙…subscriptsuperscript𝐤𝑚subscript𝐶𝑙1𝑙\\mathbf{k}_{l}=(\\mathbf{k}^{m,1}_{l},...,\\mathbf{k}^{m,C_{l-1}}_{l}) can be viewed as a 4-dimensional kernel convolving the concatenated channels 𝐲l−1=(𝐲l−11,…,𝐲l−1Cl−1)subscript𝐲𝑙1subscriptsuperscript𝐲1𝑙1…subscriptsuperscript𝐲subscript𝐶𝑙1𝑙1\\mathbf{y}_{l-1}=(\\mathbf{y}^{1}_{l-1},...,\\mathbf{y}^{C_{l-1}}_{l-1}), which then intuitively expresses that the neurons of higher layers combine the patterns extracted in previous layers, which results in the detection of increasingly more complex patterns. The activations of the neurons in the last layer L𝐿L correspond to particular segmentation class labels, hence this layer is also referred to as the classification layer. The neurons are thus grouped in CLsubscript𝐶𝐿C_{L} FMs, one for each of the segmentation classes. Their activations are fed into a position-wise softmax function that produces the predicted posterior pc​(𝐱)=exp⁡(𝐲Lc​(𝐱))/∑c=1CLexp⁡(𝐲Lc​(𝐱))subscript𝑝𝑐𝐱superscriptsubscript𝐲𝐿𝑐𝐱superscriptsubscript𝑐1subscript𝐶𝐿superscriptsubscript𝐲𝐿𝑐𝐱p_{c}(\\mathbf{x})=\\exp(\\mathbf{y}_{L}^{c}(\\mathbf{x}))/\\sum_{c=1}^{C_{L}}\\exp(\\mathbf{y}_{L}^{c}(\\mathbf{x})) for each class c𝑐c, which form soft segmentation maps with (pseudo-)probabilities. 𝐲Lc​(𝐱)superscriptsubscript𝐲𝐿𝑐𝐱\\mathbf{y}_{L}^{c}(\\mathbf{x}) is the activation of the c𝑐c-th classification FM at position 𝐱∈ℕ3𝐱superscriptℕ3\\mathbf{x}\\in\\mathbb{N}^{3}. This baseline network is depicted in Fig. 2. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_16", "text": " The neighborhood of voxels in the input that influence the activation of a neuron is its receptive field. 
Its size, $\bm{\varphi}_{l}$, increases at each subsequent layer $l$ and is given by the 3-dimensional vector: $\bm{\varphi}_{l}^{\{x,y,z\}} = \bm{\varphi}_{l-1}^{\{x,y,z\}} + (\bm{\kappa}_{l}^{\{x,y,z\}} - 1)\,\bm{\tau}_{l}^{\{x,y,z\}}$ (1), where $\bm{\kappa}_{l}, \bm{\tau}_{l} \in \mathbb{N}^{3}$ are vectors expressing the size of the kernels and stride of the receptive field at layer $l$. $\bm{\tau}_{l}$ is given by the product of the strides of kernels in layers preceding $l$. In this work only unary strides are used, as larger strides downsample the FMs (Springenberg et al. (2014)), which is unwanted behaviour for accurate segmentation. Thus in our system $\bm{\tau}_{l} = (1,1,1)$. The receptive field of a neuron in the classification layer corresponds to the image patch that influences the prediction for its central voxel. This is called the CNN's receptive field, with $\bm{\varphi}_{CNN} = \bm{\varphi}_{L}$. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_17", "text": " If input of size $\bm{\delta}_{in}$ is provided, the dimensions of the FMs in layer $l$ are given by: $\bm{\delta}_{l}^{\{x,y,z\}} = \lfloor(\bm{\delta}_{in}^{\{x,y,z\}} - \bm{\varphi}_{l}^{\{x,y,z\}}) / \bm{\tau}_{l}^{\{x,y,z\}} + 1\rfloor$ (2) ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_18", "text": " In the common patch-wise classification setting, an input patch of size $\bm{\delta}_{in} = \bm{\varphi}_{CNN}$ is provided and the network outputs a single prediction for its central voxel. In this case the classification layer consists of FMs with size $1^{3}$. Networks that are implemented as fully-convolutionals are capable of dense-inference, which is performed when input of size greater than $\bm{\varphi}_{CNN}$ is provided (Sermanet et al. (2014)). In this case, the dimensions of FMs increase according to Eq. (2). This includes the classification FMs which then output multiple predictions simultaneously, one for each stride of the CNN's receptive field on the input (Fig. 2). All predictions are equally trustworthy, as long as the receptive field is fully contained within the input and captures only original content, i.e. no padding is used. This strategy significantly reduces the computational costs and memory loads since the otherwise repeated computations of convolutions on the same voxels in overlapping patches are avoided. Optimal performance is achieved if the whole image is scanned in one forward pass. If GPU memory constraints do not allow it, such as in the case of large 3D networks where a large number of FMs need to be cached, the volume is tiled in multiple image-segments, which are larger than individual patches, but small enough to fit into memory. 
", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_19", "text": " Before analyzing how we exploit the above dense-inference technique for training, which is the first main contribution of our work, we present the commonly used setting in which CNNs are trained patch-by-patch. Random patches of size 𝝋C​N​Nsubscript𝝋𝐶𝑁𝑁\\bm{\\varphi}_{CNN} are extracted from the training images. A batch is formed out of B𝐵B of these samples, which is then processed by the network for one training iteration of Stochastic Gradient Descent (SGD). This step aims to alter the network’s parameters 𝚯𝚯\\mathbf{\\Theta}, such as weights and biases, in order to maximize the log likelihood of the data or, equally, minimize the Cross Entropy via the cost function: ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_20", "text": " J​(𝚯;𝐈i,ci)=−1B​∑i=1Blog⁡(P​(Y=ci|𝐈i,𝚯))=−1B​∑i=1Blog⁡(pci)​ ,𝐽𝚯superscript𝐈𝑖superscript𝑐𝑖1𝐵superscriptsubscript𝑖1𝐵𝑃𝑌conditionalsuperscript𝑐𝑖superscript𝐈𝑖𝚯1𝐵superscriptsubscript𝑖1𝐵subscript𝑝superscript𝑐𝑖 ,J(\\mathbf{\\Theta};\\mathbf{I}^{i},c^{i})=-\\frac{1}{B}\\sum_{i=1}^{B}\\log\\left(P(Y=c^{i}|\\mathbf{I}^{i},\\mathbf{\\Theta})\\right)=-\\frac{1}{B}\\sum_{i=1}^{B}\\log(p_{c^{i}})\\textrm{ ,} (3) where the pair (𝐈i,ci),∀i∈(1,B)superscript𝐈𝑖superscript𝑐𝑖for-all𝑖1𝐵(\\mathbf{I}^{i},c^{i}),\\forall{i}\\in{(1,B)} is the i𝑖i-th patch in the batch and the true label of its central voxel, while the scalar value pcisubscript𝑝superscript𝑐𝑖p_{c^{i}} is the predicted posterior for class cisuperscript𝑐𝑖c^{i}. Regularization terms were omitted for simplicity. Multiple sequential optimization steps over different batches gradually lead to convergence. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_21", "text": " Larger training batch sizes B𝐵B are preferred as they approximate the overall data more accurately and lead to better estimation of the true gradient by SGD. However, the memory requirement and computation time increase with the batch size. This limitation is especially relevant for 3D CNNs, where only a few dozens of patches can be processed within reasonable time on modern GPUs. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_22", "text": " To overcome this problem, we devise a training strategy that exploits the dense inference technique on image segments. Following from Eq. (2), if an image segment of size greater than 𝝋C​N​Nsubscript𝝋𝐶𝑁𝑁\\bm{\\varphi}_{CNN} is given as input to our network, the output is a posterior probability for multiple voxels V=∏i={x,y,z}𝜹L(i)𝑉subscriptproduct𝑖𝑥𝑦𝑧superscriptsubscript𝜹𝐿𝑖V=\\prod_{i=\\{x,y,z\\}}{\\bm{\\delta}_{L}^{(i)}}. 
If the training batches are formed of B𝐵B segments extracted from the training images, the cost function (3) in the case of dense-training becomes: ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_23", "text": " JD​(𝚯;𝐈s,𝐜s)=−1B⋅V​∑s=1B∑v=1Vlog⁡(pcsv​(𝐱v))​ ,subscript𝐽𝐷𝚯subscript𝐈𝑠subscript𝐜𝑠1⋅𝐵𝑉superscriptsubscript𝑠1𝐵superscriptsubscript𝑣1𝑉subscript𝑝superscriptsubscript𝑐𝑠𝑣superscript𝐱𝑣 ,J_{D}(\\mathbf{\\Theta};\\mathbf{I}_{s},\\mathbf{c}_{s})=-\\frac{1}{B\\cdot V}\\sum_{s=1}^{B}\\sum_{v=1}^{V}\\log(p_{c_{s}^{v}}(\\mathbf{x}^{v}))\\textrm{ ,} (4) ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_24", "text": " where 𝐈ssubscript𝐈𝑠\\mathbf{I}_{s} and 𝐜ssubscript𝐜𝑠\\mathbf{c}_{s} are the s𝑠s-th segment of the batch and the true labels of its V𝑉V predicted voxels respectively. csvsuperscriptsubscript𝑐𝑠𝑣c_{s}^{v} is the true label of the v𝑣v-th voxel, 𝐱vsuperscript𝐱𝑣\\mathbf{x}^{v} the corresponding position in the classification FMs and pcsvsubscript𝑝superscriptsubscript𝑐𝑠𝑣p_{c_{s}^{v}} the output of the softmax function. The effective batch size is increased by a factor of V𝑉V without a corresponding increase in computational and memory requirements, as earlier discussed in Sec. 2.1. Notice that this is a hybrid scheme between the commonly used training on individual patches and the dense training scheme on a whole image (Long et al. (2015)), with the latter being problematic to apply for training large 3D CNNs on volumes of high resolution due to memory limitations. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_25", "text": " An appealing consequence of this scheme is that the sampling of input segments provides a flexible and automatic way to balance the distribution of training samples from different segmentation classes which is an important issue that directly impacts the segmentation accuracy. Specifically, we build the training batches by extracting segments from the training images with 50% probability being centred on a foreground or background voxel, alleviating class-imbalance. Note that the predicted voxels V𝑉V in a segment do not have to be of the same class, something that occurs when a segment is sampled from a region near class boundaries (Fig. 3). Hence, the sampling rate of the proposed hybrid method adjusts to the true distribution of the segmentation task’s classes. Specifically, the smaller a labelled object, the more background voxels will be captured within segments centred on the foreground voxel. Implicitly, this yields a balance between sensitivity and specificity in the case of binary segmentation tasks. In multi-class problems, the rate at which different classes are captured within a segment centred on foreground reflects the real relative distribution of the foreground classes, while adjusting their frequency relatively to the background. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_26", "text": " Deeper networks have greater discriminative power due to the additional non-linearities and better quality of local optima (Choromanska et al. (2015)). However, convolutions with 3D kernels are computationally expensive in comparison to the 2D variants, which hampers the addition of more layers. 
Additionally, 3D architectures have a larger number of trainable parameters, with each layer adding Cl​Cl−1​∏i={x,y,z}𝜿l(i)subscript𝐶𝑙subscript𝐶𝑙1subscriptproduct𝑖𝑥𝑦𝑧superscriptsubscript𝜿𝑙𝑖C_{l}C_{l-1}\\prod_{i=\\{x,y,z\\}}{\\bm{\\kappa}_{l}^{(i)}} weights to the model. Clsubscript𝐶𝑙C_{l} is the number of FMs in layer l𝑙l and 𝜿l{x,y,z}superscriptsubscript𝜿𝑙𝑥𝑦𝑧\\bm{\\kappa}_{l}^{\\{x,y,z\\}} the size of its kernel in the respective spatial dimension. Overall this makes the network increasingly prone to over-fitting. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_27", "text": " In order to build a deeper 3D architecture, we adopt the sole use of small 33superscript333^{3} kernels that are faster to convolve with and contain less weights. This design approach was previously found beneficial for classification of natural images (Simonyan and Zisserman (2014)) but its effect is even more drastic on 3D networks. When compared to common kernel choices of 53superscript535^{3} (Zikic et al. (2014); Urban et al. (2014); Prasoon et al. (2013)) and in our baseline CNN, the smaller 33superscript333^{3} kernels reduce the element-wise multiplications by a factor of approximately 53/33≈4.6superscript53superscript334.65^{3}/3^{3}\\approx 4.6 while reducing the number of trainable parameters by the same factor. Thus deeper network variants that are implicitly regularised and more efficient can be designed by simply replacing each layer of common architectures with more layers that use smaller kernels (Fig. 4). ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_28", "text": " However, deeper networks are more difficult to train. It has been shown that the forward (neuron activations) and backwards (gradients) propagated signal may explode or vanish if care is not given to retain its variance (Glorot and Bengio (2010)). This occurs because at every successive layer l𝑙l, the variance of the signal is multiplied by nli​n⋅v​a​r​(𝐖l)⋅subscriptsuperscript𝑛𝑖𝑛𝑙𝑣𝑎𝑟subscript𝐖𝑙n^{in}_{l}\\cdot var(\\mathbf{W}_{l}), where nli​n=Cl−1​∏i={x,y,z}𝜿l(i)subscriptsuperscript𝑛𝑖𝑛𝑙subscript𝐶𝑙1subscriptproduct𝑖𝑥𝑦𝑧superscriptsubscript𝜿𝑙𝑖n^{in}_{l}=C_{l-1}\\prod_{i=\\{x,y,z\\}}{\\bm{\\kappa}_{l}^{(i)}} is the number of weights through which a neuron of layer l𝑙l is connected to its input and v​a​r​(𝐖l)𝑣𝑎𝑟subscript𝐖𝑙var(\\mathbf{W}_{l}) is the variance of the layer’s weights. To better preserve the signal in the initial training stage we adopt a scheme recently derived for ReLu-based networks by He et al. (2015) and initialize the kernel weights of our system by sampling from the normal distribution 𝒩​(0,2/nli​n)𝒩02subscriptsuperscript𝑛𝑖𝑛𝑙\\mathcal{N}(0,\\sqrt{2/n^{in}_{l}}). ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_29", "text": " A phenomenon of similar nature that hinders the network’s performance is the “internal covariate shift” (Ioffe and Szegedy (2015)). It occurs throughout training, because the weight updates to deeper layers result in a continuously changing distribution of signal at higher layers, which hinders the convergence of their weights. Specifically, at training iteration t𝑡t the weight updates may cause deviation ϵl,tsubscriptitalic-ϵ𝑙𝑡\\epsilon_{l,t} to the variance of the weights. 
At the next iteration the signal will be amplified by nli​n⋅v​a​r​(𝐖l,t+1)=nli​n⋅(v​a​r​(𝐖l,t)+ϵl,t)⋅subscriptsuperscript𝑛𝑖𝑛𝑙𝑣𝑎𝑟subscript𝐖𝑙𝑡1⋅subscriptsuperscript𝑛𝑖𝑛𝑙𝑣𝑎𝑟subscript𝐖𝑙𝑡subscriptitalic-ϵ𝑙𝑡n^{in}_{l}\\cdot var(\\mathbf{W}_{l,t+1})=n^{in}_{l}\\cdot(var(\\mathbf{W}_{l,t})+\\epsilon_{l,t}). Thus before influencing the signal, any deviation ϵl,tsubscriptitalic-ϵ𝑙𝑡\\epsilon_{l,t} is amplified by nli​nsubscriptsuperscript𝑛𝑖𝑛𝑙n^{in}_{l} which is exponential in the number of dimensions. For this reason the problem affects training of 3D CNNs more severely than conventional 2D systems. For countering it, we adopt the recently proposed Batch Normalisation (BN) technique to all hidden layers (Ioffe and Szegedy (2015)), which allows normalization of the FM activations at every optimization step in order to better preserve the signal. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_30", "text": " The segmentation of each voxel is performed by taking into account the contextual information that is captured by the receptive field of the CNN when it is centred on the voxel. The spatial context is providing important information for being able to discriminate voxels that otherwise appear very similar when considering only local appearance. From Eq. (1) follows that an increase of the CNN’s receptive field requires bigger kernels or more convolutional layers, which increases computation and memory requirements. An alternative would be the use of pooling (LeCun et al. (1998)), which however leads to loss of the exact position of the segmented voxel and thus can negatively impact accuracy. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_31", "text": " In order to incorporate both local and larger contextual information into our 3D CNN, we add a second pathway that operates on down-sampled images. Thus, our dual pathway 3D CNN simultaneously processes the input image at multiple scales (Fig. 5). Higher level features such as the location within the brain are learned in the second pathway, while the detailed local appearance of structures is captured in the first. As the two pathways are decoupled in this architecture, arbitrarily large context can be processed by the second pathway by simply adjusting the down-sampling factor FDsubscript𝐹𝐷F_{D}. The size of the pathways can be independently adjusted according to the computational capacity and the task at hand, which may require relatively more or less filters focused on the down-sampled context. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_32", "text": " To preserve the capability of dense inference, spatial correspondence of the activations in the FMs of the last convolutional layers of the two pathways, L​1𝐿1L1 and L​2𝐿2L2, should be ensured. In networks where only unary kernel strides are used, such as the proposed architecture, this requires that for every FDsubscript𝐹𝐷F_{D} shifts of the receptive field 𝝋L​1subscript𝝋𝐿1\\bm{\\varphi}_{L1} over the normal resolution input, only one shift is performed by 𝝋L​2subscript𝝋𝐿2\\bm{\\varphi}_{L2} over the down-sampled input. 
Hence it is required that the dimensions of the FMs in L​2𝐿2L2 are 𝜹L​2{x,y,z}=⌈𝜹L​1{x,y,z}/FD⌉superscriptsubscript𝜹𝐿2𝑥𝑦𝑧superscriptsubscript𝜹𝐿1𝑥𝑦𝑧subscript𝐹𝐷\\bm{\\delta}_{L2}^{\\{x,y,z\\}}=\\lceil\\bm{\\delta}_{L1}^{\\{x,y,z\\}}/F_{D}\\rceil. From Eq. (2), the size of the input to the second pathway is 𝜹i​n​2{x,y,z}=𝝋L​2{x,y,z}+𝜹L​2{x,y,z}−1superscriptsubscript𝜹𝑖𝑛2𝑥𝑦𝑧superscriptsubscript𝝋𝐿2𝑥𝑦𝑧superscriptsubscript𝜹𝐿2𝑥𝑦𝑧1\\bm{\\delta}_{in2}^{\\{x,y,z\\}}=\\bm{\\varphi}_{L2}^{\\{x,y,z\\}}+\\bm{\\delta}_{L2}^{\\{x,y,z\\}}-1 and similar is the relation between 𝜹i​n​1subscript𝜹𝑖𝑛1\\bm{\\delta}_{in1} and 𝜹L​1subscript𝜹𝐿1\\bm{\\delta}_{L1}. These establish the relation between the required dimensions of the input segments from the two resolutions, which can then be extracted centered on the same image location. The FMs of L​2𝐿2L2 are up-sampled to match the dimensions of L​1𝐿1L1’s FMs and are then concatenated together. We add two more hidden layers for combining the multi-scale features before the final classification, as shown in Fig. 5. Integration of the multi-scale parallel pathways in architectures with non-unary strides is discussed in A. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_33", "text": " Combining multi-scale features has been found beneficial in other recent works (Long et al. (2015); Ronneberger et al. (2015)), in which whole 2D images are processed in the network by applying a few number of convolutions and then down-sampling the FMs for further processing at various scales. Our decoupled pathways allow arbitrarily large context to be provided while avoiding the need to load large parts of the 3D volume into memory. Additionally, our architecture extracts features completely independently from the multiple resolutions. This way, the features learned by the first pathway retain finest details, as they are not involved in processing low resolution context. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_34", "text": " Because neighboring voxels share substantial spatial context, the soft segmentation maps produced by the CNN tend to be smooth, even though neighborhood dependencies are not modeled directly. However, local minima in training and noise in the input images can still result in some spurious outputs, with small isolated regions or holes in the predictions. We employ a fully connected CRF (Krähenbühl and Koltun (2011)) as a post-processing step to achieve more structured predictions. As we describe below, this CRF is capable of modeling arbitrarily large voxel-neighborhoods but is also computationally efficient, making it ideal for processing 3D multi-modal medical scans. 
", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_35", "text": " For an input image 𝐈𝐈\\mathbf{I} and the label configuration (segmentation) 𝐳𝐳\\mathbf{z}, the Gibbs energy in a CRF model is given by ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_36", "text": " E​(𝐳)=∑iψu​(zi)+∑i​j,i≠jψp​(zi,zj)​ .𝐸𝐳subscript𝑖subscript𝜓𝑢subscript𝑧𝑖subscript𝑖𝑗𝑖𝑗subscript𝜓𝑝subscript𝑧𝑖subscript𝑧𝑗 .E(\\mathbf{z})=\\sum_{i}{\\psi_{u}(z_{i})}+\\sum_{ij,i\\neq j}{\\psi_{p}(z_{i},z_{j})}\\textrm{ .} (5) ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_37", "text": " The unary potential is the negative log-likelihood ψu​(zi)=−l​o​g​P​(zi|𝐈)subscript𝜓𝑢subscript𝑧𝑖𝑙𝑜𝑔𝑃conditionalsubscript𝑧𝑖𝐈\\psi_{u}(z_{i})=-logP(z_{i}|\\mathbf{I}), where in our case P​(zi|𝐈)𝑃conditionalsubscript𝑧𝑖𝐈P(z_{i}|\\mathbf{I}) is the CNN’s output for voxel i𝑖i. In a fully connected CRF, the pairwise potential is of form ψp​(zi,zj)=μ​(zi,zj)​k​(𝐟𝐢,𝐟𝐣)subscript𝜓𝑝subscript𝑧𝑖subscript𝑧𝑗𝜇subscript𝑧𝑖subscript𝑧𝑗𝑘subscript𝐟𝐢subscript𝐟𝐣\\psi_{p}(z_{i},z_{j})=\\mu(z_{i},z_{j})k(\\mathbf{f_{i}},\\mathbf{f_{j}}) between any pair of voxels, regardless of their spatial distance. The Pott’s Model is commonly used as the label compatibility function, giving μ​(zi,zj)=(zi≠zj)𝜇subscript𝑧𝑖subscript𝑧𝑗delimited-()subscript𝑧𝑖subscript𝑧𝑗\\mu(z_{i},z_{j})=(z_{i}\\neq z_{j}). The corresponding energy penalty is given by the function k𝑘k, which is defined over an arbitrary feature space, with 𝐟𝐢,𝐟𝐣subscript𝐟𝐢subscript𝐟𝐣\\mathbf{f_{i}},\\mathbf{f_{j}} being the feature vectors of the pair of voxels. Krähenbühl and Koltun (2011) observed that if the penalty function is defined as a linear combination of Gaussian kernels, k​(𝐟𝐢,𝐟𝐣)=∑m=1Mw(m)​k(m)​(𝐟𝐢,𝐟𝐣)𝑘subscript𝐟𝐢subscript𝐟𝐣superscriptsubscript𝑚1𝑀superscript𝑤𝑚superscript𝑘𝑚subscript𝐟𝐢subscript𝐟𝐣k(\\mathbf{f_{i}},\\mathbf{f_{j}})=\\sum_{m=1}^{M}{w^{(m)}k^{(m)}(\\mathbf{f_{i}},\\mathbf{f_{j}})}, the model lends itself for very efficient inference with mean field approximation, after expressing message passing as convolutions with the Gaussian kernels in the space of the feature vectors 𝐟𝐢,𝐟𝐣subscript𝐟𝐢subscript𝐟𝐣\\mathbf{f_{i}},\\mathbf{f_{j}}. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_38", "text": " We extended the work of the original authors and implemented a 3D version of the CRF for processing multi-modal scans. We make use of two Gaussian kernels, which operate in the feature space defined by the voxel coordinates pi,dsubscript𝑝𝑖𝑑p_{i,d} and the intensities of the c𝑐c-th modality-channel Ii,csubscript𝐼𝑖𝑐I_{i,c} for voxel i𝑖i. The smoothness kernel, k(1)​(𝐟𝐢,𝐟𝐣)=e​x​p​(−∑d={x,y,z}|pi,d−pj,d|22​σα,d2)superscript𝑘1subscript𝐟𝐢subscript𝐟𝐣𝑒𝑥𝑝subscript𝑑𝑥𝑦𝑧superscriptsubscript𝑝𝑖𝑑subscript𝑝𝑗𝑑22superscriptsubscript𝜎𝛼𝑑2k^{(1)}(\\mathbf{f_{i}},\\mathbf{f_{j}})=exp\\Big{(}-\\sum_{d=\\{x,y,z\\}}{\\frac{|p_{i,d}-p_{j,d}|^{2}}{2\\sigma_{\\alpha,d}^{2}}}\\Big{)}, is defined by a diagonal covariance matrix with elements the configurable parameters σα,dsubscript𝜎𝛼𝑑\\sigma_{\\alpha,d}, one for each axis. These parameters express the size and shape of neighborhoods that homogeneous labels are encouraged. 
The appearance kernel k(2)​(𝐟𝐢,𝐟𝐣)=e​x​p​(−∑d={x,y,z}|pi,d−pj,d|22​σβ,d2−∑c=1C|Ii,c−Ij,c|22​σγ,c2)superscript𝑘2subscript𝐟𝐢subscript𝐟𝐣𝑒𝑥𝑝subscript𝑑𝑥𝑦𝑧superscriptsubscript𝑝𝑖𝑑subscript𝑝𝑗𝑑22superscriptsubscript𝜎𝛽𝑑2superscriptsubscript𝑐1𝐶superscriptsubscript𝐼𝑖𝑐subscript𝐼𝑗𝑐22superscriptsubscript𝜎𝛾𝑐2k^{(2)}(\\mathbf{f_{i}},\\mathbf{f_{j}})=exp\\Big{(}-\\sum_{d=\\{x,y,z\\}}{\\frac{|p_{i,d}-p_{j,d}|^{2}}{2\\sigma_{\\beta,d}^{2}}}-\\sum_{c=1}^{C}{\\frac{|I_{i,c}-I_{j,c}|^{2}}{2\\sigma_{\\gamma,c}^{2}}}\\Big{)} is defined similarly. The additional parameters σγ,csubscript𝜎𝛾𝑐\\sigma_{\\gamma,c} can be interpreted as how strongly to enforce homogeneous appearance in the C𝐶C input channels, when voxels in an area spatially defined by σβ,dsubscript𝜎𝛽𝑑\\sigma_{\\beta,d} are identically labelled. Finally, the configurable weights w(1),w(2)superscript𝑤1superscript𝑤2w^{(1)},w^{(2)} define the relative strength of the two factors. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_39", "text": " In this section we present a series of experiments in order to analyze the impact of each of the main contributions and to justify the choices made in the design of the proposed 11-layers, multi-scale 3D CNN architecture, referred to as the DeepMedic. Starting from the CNN baseline as discussed in Sec. 2.1, we first explore the benefit of our proposed dense training scheme (cf. Sec. 2.2), then investigate the use of deeper models (cf. Sec. 2.3) and then evaluate the influence of the multi-scale dual pathway (cf. Sec. 2.4). Finally, we compare our method with corresponding 2D variants to assess the benefit of processing 3D context. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_40", "text": " The following experiments are conducted using the TBI dataset with 61 multi-channel MRIs which is described in more detail later in Sec. 4.1. Here, the images are randomly split into a validation and training set, with 15 and 46 images each. The same sets are used in all analyses. To monitor the progress of segmentation accuracy during training, we extract 10k random patches at regular intervals, with equal numbers extracted from each of the validation images. The patches are uniformly sampled from the brain region in order to approximate the true distribution of lesions and healthy tissue. Full segmentation of the validation datasets is performed every five epochs and the mean Dice similarity coefficient (DSC) is determined. Details on the configuration of the networks are provided in B. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_41", "text": " We compare our proposed dense training method with two other commonly used training schemes on the 5-layers baseline CNN (see Fig. 2). The first common scheme trains on 173superscript17317^{3} patches extracted uniformly from the brain region, and the second scheme samples patches equally from the lesion and background class. We refer to these schemes as Puniuni{}_{\\text{uni}} and Peqeq{}_{\\text{eq}}. The results shown in Fig. 6 show a correlation of sensitivity and specificity with the percentage of training samples that come from the lesion class. Peqeq{}_{\\text{eq}} performs poorly because of over-segmentation (high sensitivity, low specificity). 
Puniuni{}_{\\text{uni}} has better classification on the background class (high specificity), which leads to high mean voxel-wise accuracy since the majority corresponds to background, but not particularly high DSC scores due to under-segmentation (low sensitivity). ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_42", "text": " To evaluate our dense training scheme, we train multiple models with varying sized image segments, equally sampled from lesions and background. The tested sizes of the segments go from 193superscript19319^{3} upwards to 293superscript29329^{3}. The models are referred to as “S-d𝑑d”, where d𝑑d is the side length of the cubic segments. For fair comparison, the batch sizes in all the experiments are adjusted to have a similar memory footprint and lead to similar training times as compared to training on Puni and Peq222Dense training on a whole volume was inapplicable in these experimental settings due to memory limitations but was previously shown to give similar results as training on uniformly sampled patches (Long et al. (2015)).. We observe a great performance increase for model S-1919{19} over Peqeq{}_{\\text{eq}}. We account this partly to the efficient increase of the effective batch size (B⋅V⋅𝐵𝑉B\\cdot V in Eq. (4)), but also to the altered distribution of training samples. As we increase the size of the training segments further, we quickly reach a balance between the sensitivity of Peq and the specificity of Puni, which results in improved segmentation as expressed by the DSC. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_43", "text": " The segment size is a hyper-parameter in our model. We observe that the increase in performance with increasing segment size quickly levels off, and similar performance is obtained for a wide range of segment sizes, which allows for easy configuration. For the remaining experiments, all models were trained on segments of size 253superscript25325^{3}. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_44", "text": " The 5-layers baseline CNN (Fig. 2), here referred to as the “Shallow” model, is extended to 9-layers by replacing each convolutional layer that uses 53superscript535^{3} kernels with two layers that use 33superscript333^{3} kernels (Fig. 4). This model is referred to as “Deep”. Training the latter, however, utterly fails with the model making only predictions corresponding to the background class. This problem is related to the challenge of preserving the signal as it propagates through deep networks and its variance gets multiplied with the variance of the weights, as previously discussed in Sec. 2.3. One of the causes is that the weights of both models have been initialized with the commonly used scheme of sampling from the normal distribution 𝒩​(0,0.01)𝒩00.01\\mathcal{N}(0,0.01) (cf. Krizhevsky et al. (2012)). In comparison, the initialization scheme by He et al. (2015), derived for preserving the signal in the initial stage of training, results in higher values and overcomes this problem. Further preservation of the signal is obtained by employing Batch Normalization. This results in an enhanced 9-layers model which we refer to as “Deep+”, and using the same enhancements on the Shallow model yields “Shallow+”. 
The significant performance improvement of Deep+ over Shallow+, as shown in Fig. 7, is the result of the greater representational power of the deeper network. The two models need similar computational times, which highlights the benefits of utilizing small kernels in the design of 3D CNNs. Although the deeper model requires more sequential (layer by layer) computations on the GPU, those are faster due to the smaller kernel size. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_45", "text": " The final version of the proposed network architecture, referred to as “DeepMedic”, is built by extending the Deep+ model with a second convolutional pathway that is identical to the first one. Two hidden layers are added for combining the multi-scale features before the classification layer, resulting in a deep network of 11-layers (cf. Fig. 5). The input segments to the second pathway are extracted from the images down-sampled by a factor of three. Thus, the network is capable of capturing context in a 513superscript51351^{3} area of the original image through the 173superscript17317^{3} receptive field of the lower-resolution pathway, while only doubling the computational and memory requirements over the single pathway CNN. In comparison, the most recent 2D CNN systems proposed for lesion segmentation (Havaei et al. (2015); Pereira et al. (2015)) have a receptive field limited to 332superscript33233^{2} voxels. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_46", "text": " Figure 8 shows the improvement DeepMedic achieves over the single pathway model Deep+. In Fig. 9 we show two representative visual examples of this improvement when using the multi-scale CNN. Finally, we confirm that the performance increase can be accounted to the additional context and not the additional capacity of DeepMedic. To this end, we build a big single-scale model by doubling the FMs at each of the 9-layers of Deep+ and adding two hidden layers. This 11-layers deep and wide model, referred to as “BigDeep+”, has the same number of parameters as DeepMedic. The performance of the model is not improved, while showing signs of over-fitting. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_47", "text": " Acquired brain MRI scans are often anisotropic. Such is the case for most sequences in our TBI dataset, which have been acquired with lower axial resolution, except for the isotropic MPRAGE. We perform a series of experiments to investigate the behaviour of 2D networks and assess the benefit of processing 3D context in this setting. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_48", "text": " DeepMedic can be converted to 2D by setting the third dimension of each kernel to one. This way only information from the surrounding context on the axial plane influences the classification of each voxel. If 2D segments are given as input, the dimensionality of the feature maps decreases and so does the memory required. This allows developing 2D variants with increased width, depth and size of training batch with similar requirements as the 3D version, which are valid candidates for model selection in practical scenarios. 
We assess various configurations and present some representatives in Table 1(b) along with their performance. Best segmentation among investigated 2D variants is achieved by a 19-layers, multi-scale network, reaching 61.5% average DSC on the validation fold. The decline from the 66.6% DSC achieved by the 3D version of DeepMedic indicates the importance of processing 3D context even in settings where most acquired sequences have low resolution along a certain axis. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_49", "text": " The proposed system consisting of the DeepMedic CNN architecture, optionally coupled with a fully connected CRF, is evaluated on three lesion segmentation tasks including challenging clinical data from patients with traumatic brain injuries, brain tumors, and ischemic stroke. Quantitative evaluation and comparisons with state-of-the-art are reported for each of the tasks. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_50", "text": " Sixty-six patients with moderate-to-severe TBI who required admission to the Neurosciences Critical Care Unit at Addenbrooke’s Hospital, Cambridge, UK, underwent imaging using a 3-Tesla Siemens Magnetom TIM Trio within the first week of injury. Ethical approval was obtained from the Local Research Ethics Committee (LREC 97/290) and written assent via consultee agreement was obtained for all patients. The structural MRI sequences that are used in this work are isotropic MPRAGE (1mm×mm\\times1mm×mm\\times1m​m𝑚𝑚mm), axial FLAIR, T2 and Proton Density (PD) (0.7mm×mm\\times0.7mm×mm\\times5m​m𝑚𝑚mm), and Gradient-Echo (GE) (0.86mm×mm\\times0.86mm×mm\\times5m​m𝑚𝑚mm). All visible lesions were manually annotated on the FLAIR and GE sequences with separate labeling for each lesion type. In nine patients the presence of hyperintense white matter lesions that were felt to be chronic in nature were also annotated. Artifacts, for example, signal loss secondary to intraparenchymal pressure probes, were also noted. For the purpose of this study we focus on binary segmentation of all abnormalities within the brain tissue. Thus, we merged all classes that correspond to intra-cerebral abnormalities into a single “lesion” label. Extra-cerebral pathologies such as epidural and subdural hematoma were treated as background. We excluded two datasets because of corrupted FLAIR images, two cases because no lesions were found and one case because of a major scanning artifact corrupting the images. This results in a total of 61 cases used for quantitative evaluation. Brain masks were obtained using the ROBEX tool (Iglesias et al. (2011)). All images were resampled to an isotropic 1​m​m31𝑚superscript𝑚31mm^{3} resolution, with dimensions 193×\\times229×\\times193 and affinely registered (Studholme et al. (1999)) to MNI space using the atlas by Grabner et al. (2006). No bias field correction was used as preliminary results showed that this can negatively affect lesion appearance. Image intensities were normalized to have zero-mean and unit variance, as it has been reported that this improves CNN results (Jarrett et al. (2009)). ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_51", "text": " Network configuration and training: The network architecture corresponds to the one described in Sec. 3.4, i.e. 
a dual-pathway, 11-layers deep CNN. The training data is augmented by adding images reflected along the sagittal axis. To make the network invariant to absolute intensities we also shift the intensities of each MR channel c𝑐c of every training segment by ic=rc​σcsubscript𝑖𝑐subscript𝑟𝑐subscript𝜎𝑐i_{c}=r_{c}\\sigma_{c}. rcsubscript𝑟𝑐r_{c} is sampled for every segment from 𝒩​(0,0.1)𝒩00.1\\mathcal{N}(0,0.1) and σcsubscript𝜎𝑐\\sigma_{c} is the standard deviation of intensities under the brain mask in the corresponding image. The network is regularized using dropout (Hinton et al. (2012)) with a rate of 2% on all convolutional layers, which is in addition to a 50% rate used on the last two layers. The network is evaluated with 5-fold cross-validation on the 61 subjects. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_52", "text": " CRF configuration: The parameters of the fully connected CRF are determined in a configuration experiment using random-search and 15 randomly selected subjects from the TBI database with predictions from a preliminary version of the corresponding model. The 15 subjects are reshuffled into the 5-folds used for subsequent evaluation. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_53", "text": " Random Forest baseline: We have done our best to set up a competitive baseline for comparison. We employ a context-sensitive Random Forest, similar to the model presented by Zikic et al. (2012) for brain tumors except that we apply the forest to the MR images without additional tissue specific priors. We train a forest with 50 trees and maximum depth of 30. Larger size did not improve results. Training data points are approximately equally sampled from lesion and background classes, with the optimal balance empirically chosen. Two hundred randomized cross-channel box features are evaluated at each split node with maximum offsets and box sizes of 20mm. The same folds of training and test sets are used as for our CNN approach. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_54", "text": " Table 1 summarizes the results on TBI. Our CNN significantly outperforms the Random Forest baseline, while the relatively overall low DSC values indicate the difficulty of the task. Due to randomness during training the local minima where a network converges are different between training sessions and some errors they produce differ (Choromanska et al. (2015)). To clear the unbiased errors of the network we form an ensemble of three similar networks, aggregating their output by averaging. This ensemble yields better performance in all metrics but also allows us to investigate the behaviour of our network focusing only on the biased errors. Fig. 10 shows the DSC obtained by the ensemble on each subject in relation to the manually segmented and predicted lesion volume. The network is capable of segmenting cases with very small lesions, although, performance is less robust in these cases as even small errors have large influence on the DSC metric. Investigation of the predicted lesion volume, which is an important biomarker for prognostication, shows that the network is neither biased towards the lesion nor background class, with promising results even on cases with very small lesions. 
Furthermore, we separately evaluate the influence of the post-processing with the fully connected CRF. As shown in Table 1, the CRF yields improvements over all classifiers. Effects are more prominent when the performance of the primary segmenter degrades, which shows the robustness of this regulariser. Fig. 11 shows three representative cases. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_55", "text": " For brain tumors, we evaluate our system on the data from the 2015 Brain Tumor Segmentation Challenge (BRATS) (Menze et al. (2015)). The training set consists of 220 cases with high grade (HG) and 54 cases with low grade (LG) glioma for which corresponding reference segmentations are provided. The segmentations include the following tumor tissue classes: 1) necrotic core, 2) edema, 3) non-enhancing and 4) enhancing core. The test set consists of 110 cases of both HG and LG but the grade is not revealed. Reference segmentations for the test set are hidden and evaluation is carried out via an online system. For evaluation, the four predicted labels are merged into different sets of whole tumor (all four classes), the core (classes 1,3,4), and the enhancing tumor (class 4)333For interpretation of the results note that, to the best of our knowledge, cases where the “enhancing tumor” class is not present in the manual segmentation are considered as zeros for the calculation of average performance by the evaluation platform, lowering the upper bound for this class.. For each subject, four MRI sequences are available, FLAIR, T1, T1-contrast and T2. The datasets are pre-processed by the organizers and provided as skull-stripped, registered to a common space and resampled to isotropic 1​m​m31𝑚superscript𝑚31mm^{3} resolution. Dimensions of each volume are 240×\\times240×\\times155. We add minimal pre-processing of normalizing the brain-tissue intensities of each sequence to have zero-mean and unit variance. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_56", "text": " Network configuration and training: We modify the DeepMedic architecture to handle multi-class problems by extending the classification layer to five feature maps (four tumor classes plus background). The rest of the configuration remains unchanged. We enrich the dataset with sagittal reflections. Opposite to the experiments on TBI, we do not employ the intensity perturbation and dropout on convolutional layers, because the network should not require as much regularisation with this large database. The network is trained on image segments extracted with equal probability centred on the whole tumor and healthy tissue. The distribution of the classes captured by our training scheme is provided in C. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_57", "text": " To examine our network’s behaviour, we first evaluate it on the training data of the challenge. For this, we run a 5-fold cross validation where each fold contains both HG and LG images. We then retrain the network using all training images, before applying it on the test data. 
", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_58", "text": " CRF configuration: For the multi-class problem it is challenging to find a global set of parameters for the CRF which can consistently improve the segmentation of all classes. So instead we merge the four predicted probability maps into a single “whole tumor” map for CRF post-processing. The CRF then only refines the boundaries between tumor and background and additionally removes isolated false positives. Similarly to the experiments on TBI, the CRF is configured on a random subset of 44 HG and 18 LG training images, which are then reshuffled into the subsequent 5-fold cross validation. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_59", "text": " Quantitative results from the application of the DeepMedic, the CRF and an ensemble of three similar networks on the training data are presented in Table 2. The latter two offer an improvement, albeit fairly small since the performance of DeepMedic is already rather high in this task. Also shown are results from previous works, as reported on the online evaluation platform. Various settings may vary among submissions, such as the pre-processing pipeline or the number of folds used for cross-validation. Still it appears that our system performs favourably compared to previous state-of-the-art, including the semi-automatic system of Bakas et al. (2015) (bakas1) who won the latest challenge and the method of Pereira et al. (2015) (peres1), which is based on grade-specific 2D CNNs and requires visual inspection of the tumor and identification of the grade by the user prior to segmentation. Examples of segmentations obtained with our method are shown in Fig. 12. DeepMedic behaves very well in preserving the hierarchical structure of the tumor, which we account to the large context processed by our multi-scale network. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_60", "text": " Table 3 shows the results of our method on the BRATS test data. Results of other submissions are not accessible. The decrease in performance is possibly due to the the inclusion of test images that vary significantly from the training data, such as cases acquired in clinical centers that did not provide any of the training images, something that was confirmed by the organisers. Note that performance gains obtained with the CRF are larger in this case. This indicates not only that its configuration has not overfitted to the training database but also that the CRF is robust to factors of variation between acquisition sites, which complements nicely the more sensitive CNN. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_61", "text": " We participated in the 2015 Ischemic Stroke Lesion Segmentation (ISLES) challenge, where our system achieved the best results among all participants on sub-acute ischemic stroke lesions (Maier et al. (2017)). In the training phase of the challenge, 28 datasets have been made available, along with manual segmentations. Each dataset included T1, T1-contrast, FLAIR and DWI sequences. All images were provided as skull-stripped and resampled to isotropic 1​m​m31𝑚superscript𝑚31mm^{3} voxel resolution. Each volume is of size 230×\\times230×\\times154. 
In the testing stage, teams were provided with 36 datasets for evaluation. The test data were acquired in two clinical centers, with one of them being the same that provided all training images. Corresponding expert segmentations were hidden and results had to be submitted to an online evaluation platform. Similar to BRATS, the only pre-processing that we applied is the normalization of each image to the zero-mean and unit variance. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_62", "text": " Network Configuration and Training: The configuration of the network employed is described in Kamnitsas et al. (2015). The main difference with the configuration used for TBI and tumors as employed above is the relatively smaller number of FMs in the low-resolution pathway. This choice should not significantly influence accuracy on the generally small SISS lesions but it allowed us to lower the computational cost. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_63", "text": " Similar to the other experiments, we evaluate our network with a 5-fold cross validation on the training datasets. We use data augmentation with sagittal reflections. For the testing phase of the challenge, we trained an ensemble of three networks on all training cases and aggregate their predictions by averaging. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_64", "text": " CRF configuration: The parameters of the CRF were configured via a random search on the whole training dataset. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_65", "text": " The performance of our system on the training data is shown in Table 4. Significant improvement is achieved by the structural regularisation offered by the CRF, although it could be partially accounted for by overfitting the training data during the CRF’s configuration. Examples for visual inspection are shown in Fig. 13. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_66", "text": " For the testing phase of the challenge we formed an ensemble of three networks, coupled with the fully connected CRF. Our submission ranked first, indicating superior performance on this challenging task among 14 submissions. Table 5 shows our results, along with the other two top entries (Feng et al. (2015); Halme et al. (2015)). Among the other participating methods was the CNN of Havaei et al. (2015) with 3 layers of 2D convolutions. That method perfomed less well on this challenging task (Maier et al. (2017)). This points out the advantage offered by 3D context, the large field of view of DeepMedic thanks to multi-scale processing and the representational power of deeper networks. It is important to note the decrease of performance in comparison to the training set. All methods performed worse on the data coming from the second clinical center, including the method of Feng et al. (2015) that is not machine-learning based. This highlights a general difficulty with current approaches when applied on multi-center data. 
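The test-time ensembling mentioned above (several similarly trained networks whose outputs are aggregated by averaging) can be sketched as follows. The "models" here are placeholder callables returning class-probability volumes, not the actual DeepMedic networks.

```python
import numpy as np

def ensemble_predict(models, volume):
    """Average the softmax outputs of several trained models for one volume."""
    probs = [model(volume) for model in models]          # each: (num_classes, x, y, z)
    return np.mean(probs, axis=0)

# toy usage with stand-in "models"
def fake_model(seed):
    rng = np.random.default_rng(seed)
    def predict(volume):
        logits = rng.standard_normal((2,) + volume.shape)
        e = np.exp(logits - logits.max(axis=0, keepdims=True))
        return e / e.sum(axis=0, keepdims=True)          # softmax over classes
    return predict

volume = np.zeros((8, 8, 8), dtype=np.float32)
averaged = ensemble_predict([fake_model(s) for s in range(3)], volume)
segmentation = averaged.argmax(axis=0)                   # final label map
```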
", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_67", "text": " Our CNN is implemented using the Theano library (Bastien et al. (2012)). Each training session requires approximately one day on an NVIDIA GTX Titan X GPU using cuDNN v5.0. The efficient architecture of DeepMedic also allows models to be trained on GPUs with only 3GB of memory. Note that although dimensions of the volumes in the processed databases do not allow dense training on whole volumes for this size of network, dense inference on a whole volume is still possible, as it requires only a forward-pass and thus less memory. In this fashion segmentation of a volume takes less than 30 seconds but requires 12 GB of GPU memory. Tiling the volume into multiple segments of size 353superscript35335^{3} allows inference on 3 GB GPUs in less than three minutes. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_68", "text": " Our 3D fully connected CRF is implemented by extending the original source code by Krähenbühl and Koltun (2011). A CPU implementation is fast, capable of processing a five-channel brain scan in under three minutes. Further speed-up could be achieved with a GPU implementation, but was not found necessary in the scope of this work. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_69", "text": " We have presented DeepMedic, a 3D CNN architecture for automatic lesion segmentation that surpasses state-of-the-art on challenging data. The proposed novel training scheme is not only computationally efficient but also offers an adaptive way of partially alleviating the inherent class-imbalance of segmentation problems. We analyzed the benefits of using small convolutional kernels in 3D CNNs, which allowed us to develop a deeper and thus more discriminative network, without increasing the computational cost and number of trainable parameters. We discussed the challenges of training deep neural networks and the adopted solutions from the latest advances in deep learning. Furthermore, we proposed an efficient solution for processing large image context by the use of parallel convolutional pathways for multi-scale processing, alleviating one of the main computational limitations of previous 3D CNNs. Finally, we presented the first application of a 3D fully connected CRF on medical data, employed as a post-processing step to refine the network’s output, a method that has also been shown promising for processing 2D natural images (Chen et al. (2014)). The design of the proposed system is well suited for processing medical volumes thanks to its generic 3D nature. The capabilities of DeepMedic and the employed CRF for capturing 3D patterns exceed those of 2D networks and locally connected random fields, models that have been commonly used in previous work. At the same time, our system is very efficient at inference time, which allows its adoption in a variety of research and clinical settings. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_70", "text": " The generic nature of our system allows its straightforward application for different lesion segmentation tasks without major adaptations. 
To the best of our knowledge, our system achieved the highest reported accuracy on a cohort of patients with severe TBI. As a comparison, we improved over the reported performance of the pipeline in Rao et al. (2014). Important to note is that the latter work focused only on segmentation of contusions, while our system has been shown capable of segmenting even small and diffused pathologies. Additionally, our pipeline achieved state-of-the-art performance on both public benchmarks of brain tumors (BRATS 2015) and stroke lesions (SISS ISLES 2015). We believe performance can be further improved with task- and data-specific adjustments, for instance in the pre-processing, but our results show the potential of this generically designed segmentation system. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_71", "text": " When applying our pipeline to new tasks, a laborious process is the reconfiguration of the CRF. The model improved our system’s performance with statistical significance in all investigated tasks, most profoundly when the performance of the underlying classifier degrades, proving its flexibility and robustness. Finding optimal parameters for each task, however, can be challenging. This became most obvious on the task of multi-class tumor segmentation. Because the tumor’s substructures vary significantly in appearance, finding a global set of parameters that yields improvements on all classes proved difficult. Instead, we applied the CRF in a binary fashion. This CRF model can be configured with a separate set of parameters for each class. However the larger parameter space would complicate its configuration further. Recent work from Zheng et al. (2015) showed that this particular CRF can be casted as a neural network and its parameters can be learned with regular gradient descent. Training it in an end-to-end fashion on top of a neural network would alleviate the discussed problems. This will be explored as part of future work. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_72", "text": " The discriminative power of the learned features is indicated by the success of recent CNN-based systems in matching human performance in domains where it was previously considered too ambitious (He et al. (2015); Silver et al. (2016)). Analysis of the automatically extracted information could potentially provide novel insights and facilitate research on pathologies for which little prior knowledge is currently available. In an attempt to illustrate this, we explore what patterns have been learned automatically for the lesion segmentation tasks. We visualize the activations of DeepMedic’s FMs when processing a subject from our TBI database. Many appearing patterns are difficult to interpret, especially in deeper layers. In Fig. 14 we provide some examples that have an intuitive explanation. One of the most interesting findings is that the network learns to identify the ventricles, CSF, white and gray matter. This reveals that differentiation of tissue type is beneficial for lesion segmentation. This is in line with findings in the literature, where segmentation performance of traditional classifiers was significantly improved by incorporation of tissue priors (Van Leemput et al. (1999); Zikic et al. (2012)). 
It is intuitive that different types of lesions affect different parts of the brain depending on the underlying mechanisms of the pathology. A rigorous analysis of spatial cues extracted by the network may reveal correlations that are not well defined yet. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_73", "text": " Similarly intriguing is the information extracted in the low-resolution pathway. As they process greater context, these neurons gain additional localization capabilities. The activations of certain FMs form fields in the surrounding areas of the brain. These patterns are preserved in the deepest hidden layers, which indicates they are beneficial for the final segmentation (see two last rows of Fig. 14). We believe these cues provide a spatial bias to the system, for instance that large TBI contusions tend to occur towards the front and sides of the brain (see Fig. 1(c)). Furthermore, the interaction of the multi-resolution features can be observed in FMs of the hidden layer that follows the concatenation of the pathways. The network learns to weight the output of the two pathways, preserving low resolution in certain parts and show fine details in others (bottom row of Fig. 14, first three FMs). Our assumption is that the low-resolution pathway provides a rough localization of large pathologies and brain areas that are challenging to segment, which reserves the rest of the network’s capacity for learning detailed patterns associated with the detection of smaller lesions, fine structures and ambiguous areas. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_74", "text": " The findings of the above exploration lead us to believe that great potential lies into fusing the discriminative power of the “deep black box” with the knowledge acquired over years of targeted biomedical research. Clinical knowledge is available for certain pathologies, such as spatial priors for white matter lesions. Previously engineered models have been proven effective in tackling fundamental imaging problems, such as brain extraction, tissue segmentation and bias field correction. We show that a network is capable of automatically extracting some of this information. It would be interesting, however, to investigate structured ways for incorporating such existing information as priors into the network’s feature space, which should simplify the optimization problem while letting a specialist guide the network towards an optimal solution. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_75", "text": " Although neural networks seem promising for medical image analysis, making the inference process more interpretable is required. This would allow understanding when the network fails, an important aspect in biomedical applications. Although the output is bounded in the (0,1)01(0,1) range and commonly referred to as probability for convenience, it is not a true probability in a Bayesian sense. Research towards Bayesian networks aims to alleviate this limitation. An example is the recent work of Gal and Ghahramani (2015) who show that model confidence can be estimated via sampling the dropout mask. 
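As a toy illustration of the dropout-sampling idea referenced above (Gal and Ghahramani), the sketch below keeps dropout active at test time and uses the spread of repeated stochastic forward passes as a confidence proxy. The two-layer network and the 50% rate are purely illustrative, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((16, 8)), np.zeros(16)
W2, b2 = rng.standard_normal((2, 16)), np.zeros(2)

def stochastic_forward(x, drop_rate=0.5):
    """One forward pass of a toy MLP with a fresh dropout mask on the hidden layer."""
    h = np.maximum(W1 @ x + b1, 0.0)                 # ReLU hidden layer
    mask = rng.random(h.shape) >= drop_rate          # Bernoulli dropout mask
    h = h * mask / (1.0 - drop_rate)                 # inverted dropout scaling
    logits = W2 @ h + b2
    e = np.exp(logits - logits.max())
    return e / e.sum()                               # softmax probabilities

x = rng.standard_normal(8)
samples = np.stack([stochastic_forward(x) for _ in range(100)])  # T stochastic passes
mean_prob = samples.mean(axis=0)    # Monte Carlo estimate of the predictive distribution
uncertainty = samples.std(axis=0)   # spread across dropout masks as a confidence proxy
```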
", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_76", "text": " A general point should be made about the performance drop observed when our system is applied on test datasets of BRATS and ISLES in comparison to its cross-validated performance on the training data. In both cases, subsets of the test images were acquired in clinical centers different from the ones of training datasets. Differences in scanner type and acquisition protocols have significant impact on the appearance of the images. The issue of multi-center data heterogeneity is considered a major bottleneck for enabling large-scale imaging studies. This is not specific to our approach, but a general problem in medical image analysis. One possible way of making the CNN invariant to the data heterogeneity is to learn a generative model for the data acquisition process, and use this model in the data augmentation step. This is a direction we explore as part of future work. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" }, { "id": "1603.05959_all_77", "text": " In order to facilitate further research in this area and to provide a baseline for future evaluations, we make the source code of the entire system publicly available. ", "title": "Efficient multi‐scale 3D CNN with fully connected CRF for accurate brain lesion segmentation" } ]
What is an effective small value of k?
We find that optimal performance occurs around k=3 for both k-subtree and k-subgraph extractors [39].
[ 39 ]
[ { "id": "2202.03036_all_0", "text": " Graph neural networks (GNNs) have been established as powerful and flexible tools for graph representation learning, with successful applications in drug discovery (Gaudelet et al., 2021), protein design (Ingraham et al., 2019), social network analysis (Fan et al., 2019), and so on. A large class of GNNs build multilayer models, where each layer operates on the previous layer to generate new representations using a message-passing mechanism (Gilmer et al., 2017) to aggregate local neighborhood information. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_1", "text": " While many different message-passing strategies have been proposed, some critical limitations have been uncovered in this class of GNNs. These include the limited expressiveness of GNNs (Xu et al., 2019; Morris et al., 2019), as well as known problems such as over-smoothing (Li et al., 2018, 2019; Chen et al., 2020; Oono & Suzuki, 2020) and over-squashing (Alon & Yahav, 2021). Over-smoothing manifests as all node representations converging to a constant after sufficiently many layers, while over-squashing occurs when messages from distant nodes are not effectively propagated through certain “bottlenecks” in a graph, since too many messages get compressed into a single fixed-length vector. Designing new architectures beyond neighborhood aggregation is thus essential to solve these problems. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_2", "text": " Transformers (Vaswani et al., 2017), which have proved to be successful in natural language understanding (Vaswani et al., 2017), computer vision (Dosovitskiy et al., 2020), and biological sequence modeling (Rives et al., 2021), offer the potential to address these issues. Rather than only aggregating local neighborhood information in the message-passing mechanism, the Transformer architecture is able to capture interaction information between any node pair via a single self-attention layer. Moreover, in contrast to GNNs, the Transformer avoids introducing any structural inductive bias at intermediate layers, addressing the expressivity limitation of GNNs. Instead, it encodes structural or positional information about nodes only into input node features, albeit limiting how much information it can learn from the graph structure. Integrating information about the graph structure into the Transformer architecture has thus gained growing attention in the graph representation learning field. However, most existing approaches only encode positional relationships between nodes, rather than explicitly encoding the structural relationships. As a result, they may not identify structural similarities between nodes and could fail to model the structural interaction between nodes (see Figure 1). This could explain why their performance was dominated by sparse GNNs in several tasks (Dwivedi et al., 2022). ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_3", "text": " In this work, we address the critical question of how to encode structural information into a Transformer architecture. Our principal contribution is to introduce a flexible structure-aware self-attention mechanism that explicitly considers the graph structure and thus captures structural interaction between nodes. 
The resulting class of Transformers, which we call the Structure-Aware Transformer (SAT), can provide structure-aware representations of graphs, in contrast to most existing position-aware Transformers for graph-structured data. Specifically: • We reformulate the self-attention mechanism in Vaswani et al. (2017) as a kernel smoother and extend the original exponential kernel on node features to also account for local structures, by extracting a subgraph representation centered around each node. • We propose several methods for automatically generating the subgraph representations, enabling the resulting kernel smoother to simultaneously capture structural and attributed similarities between nodes. The resulting representations are theoretically guaranteed to be at least as expressive as the subgraph representations. • We demonstrate the effectiveness of SAT models on five graph and node property prediction benchmarks by showing it achieves better performance than state-of-the-art GNNs and Transformers. Furthermore, we show how SAT can easily leverage any GNN to compute the node representations which incorporate subgraph information and outperform the base GNN, making it an effortless enhancer of any existing GNN. • Finally, we show that we can attribute the performance gains to the structure-aware aspect of our architecture, and showcase how SAT is more interpretable than the classic Transformer with an absolute encoding. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_4", "text": " We will present the related work and relevant background in Sections 2 and 3 before presenting our method in Section 4 and our experimental findings in Section 5. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_5", "text": " We present here the work most related to ours, namely the work stemming from message passing GNNs, positional representations on graphs, and graph Transformers. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_6", "text": " Message passing graph neural networks have recently been one of the leading methods for graph representation learning. An early seminal example is the GCN (Kipf & Welling, 2017), which was based on performing convolutions on the graph. Gilmer et al. (2017) reformulated the early GNNs into a framework of message passing GNNs, which has since then become the predominant framework of GNNs in use today, with extensive examples (Hamilton et al., 2017; Xu et al., 2019; Corso et al., 2020; Hu et al., 2020b; Veličković et al., 2018; Li et al., 2020a; Yang et al., 2022). However, as mentioned above, they suffer from problems of limited expressiveness, over-smoothing, and over-squashing. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_7", "text": " Because of the limited expressiveness of GNNs, there has been some recent research into the use of absolute encoding (Shaw et al., 2018), which consists of adding or concatenating positional or structural representations to the input node features. While it is often called an absolute positional encoding, we refer to it more generally as an absolute encoding to include both positional and structural encoding, which are both important in graph modeling. Absolute encoding primarily considers position or location relationships between nodes. 
Examples of position-based methods include the Laplacian positional encoding (Dwivedi & Bresson, 2021; Kreuzer et al., 2021), Weisfeiler–Lehman-based positional encoding (Zhang et al., 2020), and random walk positional encoding (RWPE) (Li et al., 2020b; Dwivedi et al., 2022), while distance-based methods include distances to a predefined set of nodes (You et al., 2019) and shortest path distances between pairs of nodes (Zhang et al., 2020; Li et al., 2020b). Dwivedi et al. (2022) extend these ideas by using a trainable absolute encoding. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_8", "text": " While the absolute encoding methods listed above can be used with message passing GNNs, they also play a crucial role in the (graph) Transformer architecture. Graph Transformer (Dwivedi & Bresson, 2021) provided an early example of how to generalize the Transformer architecture to graphs, using Laplacian eigenvectors as an absolute encoding and computing attention on the immediate neighborhood of each node, rather than on the full graph. SAN (Kreuzer et al., 2021) also used the Laplacian eigenvectors for computing an absolute encoding, but computed attention on the full graph, while distinguishing between true and created edges. Many graph Transformer methods also use a relative encoding (Shaw et al., 2018) in addition to absolute encoding. This strategy incorporates representations of the relative position or distances between nodes on the graph directly into the self-attention mechanism, as opposed to the absolute encoding which is only applied once to the input node features. Mialon et al. (2021) propose a relative encoding by means of kernels on graphs to bias the self-attention calculation, which is then able to incorporate positional information into Transformers via the choice of kernel function. Other recent work seeks to incorporate structural information into the graph Transformer, for example by encoding some carefully selected graph theoretic properties such as centrality measures and shortest path distances as positional representations (Ying et al., 2021) or by using GNNs to integrate the graph structure (Rong et al., 2020; Jain et al., 2021; Mialon et al., 2021; Shi et al., 2021). ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_9", "text": " In this work, we combine the best of both worlds from message passing GNNs and from the Transformer architecture. We incorporate both an absolute as well as a novel relative encoding that explicitly incorporates the graph structure, thereby designing a Transformer architecture that takes both local and global information into account. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_10", "text": " In the following, we refer to a graph as G=(V,E,𝐗)𝐺𝑉𝐸𝐗G=(V,E,\\mathbf{X}), where the node attributes for node u∈V𝑢𝑉u\\in V is denoted by xu∈𝒳⊂dsubscript𝑥𝑢𝒳superscript𝑑absentx_{u}\\in{\\mathcal{X}}\\subset^{d} and the node attributes for all nodes are stored in 𝐗∈n×dsuperscript𝑛𝑑𝐗absent\\mathbf{X}\\in^{n\\times d} for a graph with n𝑛n nodes. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_11", "text": " While GNNs use the graph structure explicitly, Transformers remove that explicit structure, and instead infer relations between nodes by leveraging the node attributes. 
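A simplified sketch of a random-walk style absolute encoding such as the RWPE mentioned above: per-node features are built from the self-return probabilities of 1..K-step random walks. The exact construction in the cited works may differ in details such as normalisation; this is only an illustration.

```python
import numpy as np

def random_walk_pe(A, K=8):
    """Per-node positional features from diagonals of random-walk matrix powers.

    A: dense adjacency matrix (n, n). Returns an (n, K) matrix whose k-th column
    holds each node's probability of returning to itself after k random-walk steps.
    """
    deg = A.sum(axis=1, keepdims=True)
    RW = A / np.clip(deg, 1, None)          # row-normalised transition matrix
    pe, M = [], np.eye(A.shape[0])
    for _ in range(K):
        M = M @ RW
        pe.append(np.diag(M))
    return np.stack(pe, axis=1)

# toy usage on a 4-cycle; these features would be added/concatenated to node inputs
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
pe = random_walk_pe(A, K=4)
```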
In this sense, the Transformer (Vaswani et al., 2017) ignores the graph structure and rather considers the graph as a (multi-) set of nodes, and uses the self-attention mechanism to infer the similarity between nodes. The Transformer itself is composed of two main blocks: a self-attention module followed by a feed-forward neural network. In the self-attention module, the input node features 𝐗𝐗\\mathbf{X} are first projected to query (𝐐𝐐\\mathbf{Q}), key (𝐊𝐊\\mathbf{K}) and value (𝐕𝐕\\mathbf{V}) matrices through a linear projection such that 𝐐=𝐗𝐖𝐐𝐐subscript𝐗𝐖𝐐\\mathbf{Q}=\\mathbf{X}\\mathbf{W_{Q}}, 𝐊=𝐗𝐖𝐊𝐊subscript𝐗𝐖𝐊\\mathbf{K}=\\mathbf{X}\\mathbf{W_{K}} and 𝐕=𝐗𝐖𝐕𝐕subscript𝐗𝐖𝐕\\mathbf{V}=\\mathbf{X}\\mathbf{W_{V}} respectively. We can compute the self-attention via Attn​(𝐗):=softmax​(𝐐𝐊Tdo​u​t)​𝐕∈n×do​u​t,assignAttn𝐗softmaxsuperscript𝐐𝐊𝑇subscript𝑑𝑜𝑢𝑡𝐕superscript𝑛subscript𝑑𝑜𝑢𝑡absent\\mathrm{Attn}(\\mathbf{X}):=\\mathrm{softmax}(\\frac{\\mathbf{Q}\\mathbf{K}^{T}}{\\sqrt{d_{out}}})\\mathbf{V}\\in^{n\\times d_{out}}, (1) where do​u​tsubscript𝑑𝑜𝑢𝑡d_{out} refers to the dimension of 𝐐𝐐\\mathbf{Q}, and 𝐖𝐐,𝐖𝐊,𝐖𝐕subscript𝐖𝐐subscript𝐖𝐊subscript𝐖𝐕\\mathbf{W_{Q}},\\mathbf{W_{K}},\\mathbf{W_{V}} are trainable parameters. It is common to use multi-head attention, which concatenates multiple instances of Eq. (1) and has shown to be effective in practice (Vaswani et al., 2017). Then, the output of the self-attention is followed by a skip-connection and a feed-forward network (FFN), which jointly compose a Transformer layer, as shown below: 𝐗′superscript𝐗′\\displaystyle\\mathbf{X}^{\\prime} =𝐗+Attn​(𝐗),absent𝐗Attn𝐗\\displaystyle=\\mathbf{X}+\\mathrm{Attn}(\\mathbf{X}), (2) 𝐗′′superscript𝐗′′\\displaystyle\\mathbf{X}^{\\prime\\prime} =FFN​(𝐗′):=ReLU​(𝐗′​W1)​W2.absentFFNsuperscript𝐗′assignReLUsuperscript𝐗′subscript𝑊1subscript𝑊2\\displaystyle=\\mathrm{FFN}(\\mathbf{X}^{\\prime}):=\\text{ReLU}(\\mathbf{X}^{\\prime}W_{1})W_{2}. Multiple layers can be stacked to form a Transformer model, which ultimately provides node-level representations of the graph. As the self-attention is equivariant to permutations of the input nodes, the Transformer will always generate the same representations for nodes with the same attributes regardless of their locations and surrounding structures in the graph. It is thus necessary to incorporate such information into the Transformer, generally via absolute encoding. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_12", "text": " Absolute encoding refers to adding or concatenating the positional or structural representations of the graph to the input node features before the main Transformer model, such as the Laplacian positional encoding (Dwivedi & Bresson, 2021) or RWPE (Dwivedi et al., 2022). The main shortcoming of these encoding methods is that they generally do not provide a measure of the structural similarity between nodes and their neighborhoods. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_13", "text": " As noticed by Mialon et al. (2021), the self-attention in Eq. 
(1) can be rewritten as a kernel smoother Attn​(xv)=∑u∈Vκexp​(xv,xu)∑w∈Vκexp​(xv,xw)​f​(xu),∀v∈V,formulae-sequenceAttnsubscript𝑥𝑣subscript𝑢𝑉subscript𝜅subscript𝑥𝑣subscript𝑥𝑢subscript𝑤𝑉subscript𝜅subscript𝑥𝑣subscript𝑥𝑤𝑓subscript𝑥𝑢for-all𝑣𝑉\\mathrm{Attn}(x_{v})=\\sum_{u\\in V}\\frac{\\kappa_{\\exp}(x_{v},x_{u})}{\\sum_{w\\in V}\\kappa_{\\exp}(x_{v},x_{w})}f(x_{u}),~{}\\forall v\\in V, (3) where f​(x)=𝐖𝐕​x𝑓𝑥subscript𝐖𝐕𝑥f(x)=\\mathbf{W_{V}}x is the linear value function and κexpsubscript𝜅\\kappa_{\\exp} is a (non-symmetric) exponential kernel on ×dd{}^{d}\\times^{d} parameterized by 𝐖𝐐subscript𝐖𝐐\\mathbf{W_{Q}} and 𝐖𝐊subscript𝐖𝐊\\mathbf{W_{K}}: κexp​(x,x′):=exp⁡(⟨𝐖𝐐​x,𝐖𝐊​x′⟩/do​u​t),assignsubscript𝜅𝑥superscript𝑥′subscript𝐖𝐐𝑥subscript𝐖𝐊superscript𝑥′subscript𝑑𝑜𝑢𝑡\\kappa_{\\exp}(x,x^{\\prime}):=\\exp\\left(\\langle\\mathbf{W_{Q}}x,\\mathbf{W_{K}}x^{\\prime}\\rangle/\\sqrt{d_{out}}\\right), (4) where ⟨⋅,⋅⟩⋅⋅\\langle\\cdot,\\cdot\\rangle is the dot product on d. With this form, Mialon et al. (2021) propose a relative positional encoding strategy via the product of this kernel and a diffusion kernel on the graph, which consequently captures the positional similarity between nodes. However, this method is only position-aware, in contrast to our structure-aware encoding that will be presented in Section 4. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_14", "text": " In this section, we will describe how to encode the graph structure into the self-attention mechanism and provide a class of Transformer models based on this framework. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_15", "text": " As presented above, self-attention in the Transformer can be rewritten as a kernel smoother where the kernel is a trainable exponential kernel defined on node features, and which only captures attributed similarity between a pair of nodes. The problem with this kernel smoother is that it cannot filter out nodes that are structurally different from the node of interest when they have the same or similar node features. In order to also incorporate the structural similarity between nodes, we consider a more generalized kernel that additionally accounts for the local substructures around each node. By introducing a set of subgraphs centered at each node, we define our structure-aware attention as: SA-attn​(v):=∑u∈Vκgraph​(SG​(v),SG​(u))∑w∈Vκgraph​(SG​(v),SG​(w))​f​(xu),assignSA-attn𝑣subscript𝑢𝑉subscript𝜅graphsubscript𝑆𝐺𝑣subscript𝑆𝐺𝑢subscript𝑤𝑉subscript𝜅graphsubscript𝑆𝐺𝑣subscript𝑆𝐺𝑤𝑓subscript𝑥𝑢\\text{SA-attn}(v):=\\sum_{u\\in V}\\frac{\\kappa_{\\text{graph}}(S_{G}(v),S_{G}(u))}{\\sum_{w\\in V}\\kappa_{\\text{graph}}(S_{G}(v),S_{G}(w))}f(x_{u}), (5) where SG​(v)subscript𝑆𝐺𝑣S_{G}(v) denotes a subgraph in G𝐺G centered at a node v𝑣v associated with node features 𝐗𝐗\\mathbf{X} and κgraphsubscript𝜅graph\\kappa_{\\text{graph}} can be any kernel that compares a pair of subgraphs. This new self-attention function not only takes the attributed similarity into account but also the structural similarity between subgraphs. It thus generates more expressive node representations than the original self-attention, as we will show in Section 4.4. Moreover, this self-attention is no longer equivariant to any permutation of nodes but only to nodes whose features and subgraphs coincide, which is a desirable property. 
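To make the kernel-smoother view of the structure-aware attention in Eq. (5) concrete, here is a small NumPy sketch that computes SA-attn for each node given precomputed subgraph representations φ(u, G). The dimensions and random projection matrices are placeholders, and φ is assumed to come from some structure extractor.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, d_out = 6, 8, 8
X   = rng.standard_normal((n, d))   # node features x_u
phi = rng.standard_normal((n, d))   # subgraph representations phi(u, G) from a structure extractor
W_Q, W_K, W_V = [rng.standard_normal((d, d_out)) for _ in range(3)]

def sa_attention(v):
    """SA-attn(v): kernel smoother over nodes, with the exponential kernel
    evaluated on subgraph representations instead of raw node features."""
    q = phi[v] @ W_Q                              # query from phi(v, G)
    k = phi @ W_K                                 # keys from phi(u, G)
    scores = np.exp(k @ q / np.sqrt(d_out))       # kappa_exp(phi(v, G), phi(u, G))
    weights = scores / scores.sum()               # normalised kernel weights
    return weights @ (X @ W_V)                    # weighted sum of value vectors f(x_u)

out = np.stack([sa_attention(v) for v in range(n)])   # one structure-aware update per node
```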
", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_16", "text": " In the rest of the paper, we will consider the following form of κgraphsubscript𝜅graph\\kappa_{\\text{graph}} that already includes a large class of expressive and computationally tractable models: κgraph​(SG​(v),SG​(u))=κexp​(φ​(v,G),φ​(u,G)),subscript𝜅graphsubscript𝑆𝐺𝑣subscript𝑆𝐺𝑢subscript𝜅𝜑𝑣𝐺𝜑𝑢𝐺\\kappa_{\\text{graph}}(S_{G}(v),S_{G}(u))=\\kappa_{\\exp}(\\varphi(v,G),\\varphi(u,G)), (6) where φ​(u,G)𝜑𝑢𝐺\\varphi(u,G) is a structure extractor that extracts vector representations of some subgraph centered at u𝑢u with node features 𝐗𝐗\\mathbf{X}. We provide several alternatives of the structure extractor below. It is worth noting that our structure-aware self-attention is flexible enough to be combined with any model that generates representations of subgraphs, including GNNs and (differentiable) graph kernels. For notational simplicity, we assume there are no edge attributes, but our method can easily incorporate edge attributes as long as the structure extractor can accommodate them. The edge attributes are consequently not considered in the self-attention computation, but are incorporated into the structure-aware node representations. In the structure extractors presented in this paper, this means that edge attributes were included whenever the base GNN was able to handle edge attributes. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_17", "text": " A straightforward way to extract local structural information at node u𝑢u is to apply any existing GNN model to the input graph with node features 𝐗𝐗\\mathbf{X} and take the output node representation at u𝑢u as the subgraph representation at u𝑢u. More formally, if we denote by GNNG(k)superscriptsubscriptGNN𝐺𝑘\\text{GNN}_{G}^{(k)} an arbitrary GNN model with k𝑘k layers applied to G𝐺G with node features 𝐗𝐗\\mathbf{X}, then φ​(u,G)=GNNG(k)​(u).𝜑𝑢𝐺subscriptsuperscriptGNN𝑘𝐺𝑢\\varphi(u,G)=\\text{GNN}^{(k)}_{G}(u). (7) This extractor is able to represent the k𝑘k-subtree structure rooted at u𝑢u (Xu et al., 2019). While this class of structure extractors is fast to compute and can flexibly leverage any existing GNN, they cannot be more expressive than the Weisfeiler–Lehman test due to the expressiveness limitation of message passing GNNs (Xu et al., 2019). In practice, a small value of k𝑘k already leads to good performance, while not suffering from over-smoothing or over-squashing. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_18", "text": " A more expressive extractor is to use a GNN to directly compute the representation of the entire k𝑘k-hop subgraph centered at u𝑢u rather than just the node representation u𝑢u. Recent work has explored the idea of using subgraphs rather than subtrees around a node in GNNs, with positive experimental results (Zhang & Li, 2021; Wijesinghe & Wang, 2022), as well as being strictly more powerful than the 1-WL test (Zhang & Li, 2021). We follow the same setup as is done in Zhang & Li (2021), and adapt our GNN extractor to utilize the entire k𝑘k-hop subgraph. The k𝑘k-subgraph GNN extractor aggregates the updated node representations of all nodes within the k𝑘k-hop neighborhood using a pooling function such as summation. 
Formally, if we denote by 𝒩k​(u)subscript𝒩𝑘𝑢{\\mathcal{N}}_{k}(u) the k𝑘k-hop neighborhood of node u𝑢u including itself, the representation of a node u𝑢u is: φ​(u,G)=∑v∈𝒩k​(u)GNNG(k)​(v).𝜑𝑢𝐺subscript𝑣subscript𝒩𝑘𝑢subscriptsuperscriptGNN𝑘𝐺𝑣\\varphi(u,G)=\\sum_{v\\in{\\mathcal{N}}_{k}(u)}\\text{GNN}^{(k)}_{G}(v). (8) ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_19", "text": " We observe that prior to the pooling function, the k𝑘k-subgraph GNN extractor is equivalent to using the k𝑘k-subtree GNN extractor within each k𝑘k-hop subgraph. So as to capture the attributed similarity as well as structural similarity, we augment the node representation from k𝑘k-subgraph GNN extractor with the original node features via concatenation. While this extractor provides more expressive subgraph representations than the k𝑘k-subtree extractor, it requires enumerating all k𝑘k-hop subgraphs, and consequently does not scale as well as the k𝑘k-subtree extractor to large datasets. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_20", "text": " Finally, we present a list of other potential structure extractors for different purposes. One possible choice is to directly learn a number of “hidden graphs” as the “anchor subgraphs” to represent subgraphs for better model interpretability, by using the concepts introduced in Nikolentzos & Vazirgiannis (2020). While Nikolentzos & Vazirgiannis (2020) obtain a vector representation of the input graph by counting the number of matching walks between the whole graph and each of the hidden graphs, one could extend this to the node level by comparing the hidden graphs to the k𝑘k-hop subgraph centered around each node. The adjacency matrix of the hidden graphs is a trainable parameter in the network, thereby enabling end-to-end training to identify which subgraph structures are predictive. Then, for a trained model, visualizing the learned hidden graphs provides useful insights about the structural motifs in the dataset. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_21", "text": " Furthermore, more domain-specific GNNs could also be used to extract potentially more expressive subgraph representations. For instance, Bodnar et al. (2021) recently proposed a new kind of message passing scheme operating on regular cell complexes which benefits from provably stronger expressivity for molecules. Our self-attention mechanism can fully benefit from the development of more domain-specific and expressive GNNs. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_22", "text": " Finally, another possible structure extractor is to use a non-parametric graph kernel (e.g. a Weisfeiler-Lehman graph kernel) on the k𝑘k-hop subgraphs centered around each node. This provides a flexible way to combine graph kernels and deep learning, which might offer new theoretical insights into the link between the self-attention and kernel methods. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_23", "text": " Having defined our structure-aware self-attention function, the other components of the Structure-Aware Transformer follow the Transformer architecture as described in Section 3.1; see Figure 2 for a visual overview. 
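The two extractors described above (Eqs. (7)–(8)) can be sketched with a toy message-passing step standing in for GNN^(k)_G. The mean-over-neighbours update below is only a placeholder GNN, the k-hop neighbourhood is read off powers of the adjacency matrix, and the concatenation with the original node features used for the k-subgraph variant is omitted for brevity.

```python
import numpy as np

def toy_gnn(A, X, k):
    """Placeholder for GNN^(k)_G: k rounds of mean aggregation over {u} and its neighbours."""
    A_hat = A + np.eye(A.shape[0])
    A_norm = A_hat / A_hat.sum(axis=1, keepdims=True)
    H = X
    for _ in range(k):
        H = A_norm @ H
    return H

def k_hop_mask(A, k):
    """Boolean matrix whose (u, v) entry is True if v is within k hops of u (self included)."""
    reach = np.eye(A.shape[0], dtype=bool)
    frontier = np.eye(A.shape[0])
    for _ in range(k):
        frontier = frontier @ A
        reach |= frontier > 0
    return reach

def k_subtree(A, X, k):
    return toy_gnn(A, X, k)                      # phi(u, G) = GNN^(k)_G(u)

def k_subgraph(A, X, k):
    H = toy_gnn(A, X, k)
    return k_hop_mask(A, k).astype(float) @ H    # sum of GNN^(k)_G(v) over v in N_k(u)

# toy 5-node path graph
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1.0
X = np.eye(5)
print(k_subtree(A, X, 3).shape, k_subgraph(A, X, 3).shape)
```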
Specifically, the self-attention function is followed by a skip-connection, a FFN and two normalization layers before and after the FFN. In addition, we also include the degree factor in the skip-connection, which was found useful for reducing the overwhelming influence of highly connected graph components (Mialon et al., 2021), i.e., xv′=xv+1/dv​SA-attn​(v),superscriptsubscript𝑥𝑣′subscript𝑥𝑣1subscript𝑑𝑣SA-attn𝑣x_{v}^{\\prime}=x_{v}+1/\\sqrt{d_{v}}\\,\\text{SA-attn}(v), (9) where dvsubscript𝑑𝑣d_{v} denotes the degree of node v𝑣v. After a Transformer layer, we obtain a new graph with the same structure but different node features G′=(V,E,𝐗′)superscript𝐺′𝑉𝐸superscript𝐗′G^{\\prime}=(V,E,\\mathbf{X}^{\\prime}), where 𝐗′superscript𝐗′\\mathbf{X}^{\\prime} corresponds to the output of the Transformer layer. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_24", "text": " Finally, for graph property prediction, there are various ways to aggregate node-level representations into a graph representation, such as by taking the average or sum. Alternatively, one can use the embedding of a virtual (CLS) node (Jain et al., 2021) that is attached to the input graph without any connectivity to other nodes. We compare these approaches in Section 5. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_25", "text": " While the self-attention in Eq. (5) is structure-aware, most absolute encoding techniques are only position-aware and could therefore provide complementary information. Indeed, we find that the combination leads to further performance improvements, which we show in Section 5. We choose to use the RWPE (Dwivedi et al., 2022), though any other absolute positional representations, including learnable ones, can also be used. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_26", "text": " We further argue that only using absolute positional encoding with the Transformer would exhibit a too relaxed structural inductive bias which is not guaranteed to generate similar node representations even if two nodes have similar local structures. This is due to the fact that distance or Laplacian-based positional representations generally serve as structural or positional signatures but do not provide a measure of structural similarity between nodes, especially in the inductive case where two nodes are from different graphs. This is also empirically affirmed in Section 5 by their relatively worse performance without using our structural encoding. In contrast, the subgraph representations used in the structure-aware attention can be tailored to measure the structural similarity between nodes, and thus generate similar node-level representations if they possess similar attributes and surrounding structures. We can formally state this in the following theorem: ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_27", "text": " The proof is provided in the Appendix. The metric D𝐷D is an optimal matching metric between two multisets which measures how different they are. This theorem shows that two node representations from the SA-attn are similar if the graphs that they belong to have similar multisets of node features and subgraph representations overall, and at the same time, the subgraph representations at these two nodes are similar. In particular, if two nodes belong to the same graph, i.e. 
G=G′𝐺superscript𝐺′G=G^{\\prime}, then the second and last terms on the right side of Eq. (10) are equal to zero and the distance between their representations is thus constrained by the distance between their corresponding subgraph representations. However, for Transformers with absolute positional encoding, the distance between two node representations is not constrained by their structural similarity, as the distance between two positional representations does not necessarily characterize how structurally similar two nodes are. Despite stronger inductive biases, we will show that our model is still sufficiently expressive in the next section. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_28", "text": " The expressive power of graph Transformers compared to classic GNNs has hardly been studied, since the soft structural inductive bias introduced in absolute encoding is generally hard to characterize. Thanks to the unique design of our SAT, which relies on a subgraph structure extractor, it becomes possible to study the expressiveness of the output representations. More specifically, we formally show that the node representation from a structure-aware attention layer is at least as expressive as its subgraph representation given by the structure extractor, following the injectivity of the attention function with respect to the query: ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_29", "text": " Note that the assumptions made in the theorem are mild as one can always add some absolute encoding or random noise to make the attributes of one node different from all other nodes, and similarly for subgraph representations. The countable assumption on 𝒳𝒳{\\mathcal{X}} is generally adopted for expressivity analysis of GNNs (e.g. Xu et al. (2019)). We assume f𝑓f to be any mapping rather than just a linear function as in the definition of the self-attention function since it can be practically approximated by a FFN in multi-layer Transformers through the universal approximation theorem (Hornik, 1991). Theorem 2 suggests that if the structure extractor is sufficiently expressive, the resulting SAT model can also be at least equally expressive. Furthermore, more expressive extractors could lead to more expressively powerful SAT models and thus better prediction performance, which is also empirically confirmed in Section 5. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_30", "text": " In this section, we evaluate SAT models versus several SOTA methods for graph representation learning, including GNNs and Transformers, on five graph and node prediction tasks, as well as analyze the different components of our architecture to identify what drives the performance. In summary, we discovered the following aspects about SAT: ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_31", "text": " • The structure-aware framework achieves SOTA performance on graph and node classification tasks, outperforming SOTA graph Transformers and sparse GNNs. • Both instances of the SAT, namely k𝑘k-subtree and k𝑘k-subgraph SAT, always improve upon the base GNN it is built upon, highlighting the improved expressiveness of our structure-aware approach. 
• We show that incorporating the structure via our structure-aware attention brings a notable improvement relative to the vanilla Transformer with RWPE that just uses node attribute similarity instead of also incorporating structural similarity. We also show that a small value of k𝑘k already leads to good performance, while not suffering from over-smoothing or over-squashing. • We show that choosing a proper absolute positional encoding and a readout method improves performance, but to a much lesser extent than incorporating the structure into the approach. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_32", "text": " Furthermore, we note that SAT achieves SOTA performance while only considering a small hyperparameter search space. Performance could likely be further improved with more hyperparameter tuning. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_33", "text": " We assess the performance of our method with five medium to large benchmark datasets for node and graph property prediction, including ZINC (Dwivedi et al., 2020), CLUSTER (Dwivedi et al., 2020), PATTERN (Dwivedi et al., 2020), OGBG-PPA (Hu et al., 2020a) and OGBG-CODE2 (Hu et al., 2020a). ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_34", "text": " We compare our method to the following GNNs: GCN (Kipf & Welling, 2017), GraphSAGE (Hamilton et al., 2017), GAT (Veličković et al., 2018), GIN (Xu et al., 2019), PNA (Corso et al., 2020), DeeperGCN  (Li et al., 2020a), and ExpC (Yang et al., 2022). Our comparison partners also include several recently proposed Transformers on graphs, including the original Transformer with RWPE (Dwivedi et al., 2022), Graph Transformer (Dwivedi & Bresson, 2021), SAN (Kreuzer et al., 2021), Graphormer (Ying et al., 2021) and GraphTrans (Jain et al., 2021), a model that uses the vanilla Transformer on top of a GNN. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_35", "text": " All results for the comparison methods are either taken from the original paper or from Dwivedi et al. (2020) if not available. We consider k𝑘k-subtree and k𝑘k-subgraph SAT equipped with different GNN extractors, including GCN, GIN, GraphSAGE and PNA. For OGBG-PPA and OGBG-CODE2, we do not run experiments for k𝑘k-subgraph SAT models due to large memory requirements. Full details on the datasets, experimental setup, and hyperparameters are provided in the Appendix. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_36", "text": " We show the performance of SATs compared to other GNNs and Transformers in Table 1 and 2. SAT models consistently outperform SOTA methods on these datasets, showing its ability to combine the benefits of both GNNs and Transformers. In particular, for the CODE2 dataset, our SAT models outperform SOTA methods by a large margin despite a relatively small number of parameters and minimal hyperparameter tuning, which will put it at the first place on the OGB leaderboard. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_37", "text": " Table 3 summarizes the performance of SAT relative to the sparse GNN it uses to extract the subgraph representations, across different GNNs. 
We observe that both variations of SAT consistently bring large performance gains to its base GNN counterpart, making it a systematic enhancer of any GNN model. Furthermore, PNA, which is the most expressive GNN we considered, has consistently the best performance when used with SAT, empirically validating our theoretical finding in Section 4.4. $k$-subgraph SAT also outperforms or performs equally as $k$-subtree SAT in almost all the cases, showing its superior expressiveness. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_38", "text": " While Table 3 showcases the added value of the SAT relative to sparse GNNs, we now dissect the components of SAT on the ZINC dataset to identify which aspects of the architecture bring the biggest performance gains. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_39", "text": " The key contribution of SAT is its ability to explicitly incorporate structural information in the self-attention. Here, we seek to demonstrate that this information provides crucial predictive information, and study how the choice of $k$ affects the results. Figure 3(a) shows how the test MAE is impacted by varying $k$ for $k$-subtree and $k$-subgraph extractors using PNA on the ZINC dataset. All models use the RWPE. $k=0$ corresponds to the vanilla Transformer only using absolute positional encoding, i.e. not using structure. We find that incorporating structural information leads to substantial improvement in performance, with optimal performance around $k=3$ for both $k$-subtree and $k$-subgraph extractors. As $k$ increases beyond $k=4$, the performance in $k$-subtree extractors deteriorated, which is consistent with the observed phenomenon that GNNs work best in shallower networks (Kipf & Welling, 2017). We observe that $k$-subgraph does not suffer as much from this issue, underscoring a new aspect of its usefulness. On the other hand, $k$-subtree extractors are more computationally efficient and scalable to larger OGB datasets. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_40", "text": " We assess here whether the absolute encoding brought complementary information to SAT. In Figure 3(b), we conduct an ablation study showing the results of SAT with and without absolute positional encoding, including RWPE and Laplacian PE (Dwivedi et al., 2020). Our SAT with a positional encoding outperforms its counterpart without it, confirming the complementary nature of the two encodings. However, we also note that the performance gain brought by the absolute encoding is far less than the gain obtained by using our structure-aware attention, as shown in Figure 3(a) (comparing the instance of $k=0$ to $k>0$), emphasizing that our structure-aware attention is the more important aspect of the model. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_41", "text": " Finally, we compare the performance of SAT models using different readout methods for aggregating node-level representations on the ZINC dataset in Figure 3(c), including the CLS pooling discussed in Section 4.2. Unlike the remarkable influence of the readout method in GNNs (Xu et al., 2019), we observe very little impact in SAT models.
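For completeness, the readout step compared in Figure 3(c) amounts to collapsing node-level representations into one graph-level vector; a minimal sketch of mean, sum, and CLS-style readout is shown below, assuming (for illustration only) that the virtual CLS node's representation is stored last.

```python
import numpy as np

def readout(node_reprs, method="mean"):
    """Aggregate node-level representations (n, d) into one graph-level vector (d,)."""
    if method == "mean":
        return node_reprs.mean(axis=0)
    if method == "sum":
        return node_reprs.sum(axis=0)
    if method == "cls":
        return node_reprs[-1]        # representation of the virtual (CLS) node, assumed last
    raise ValueError(f"unknown readout: {method}")

H = np.random.default_rng(0).standard_normal((12, 64))   # 11 graph nodes + 1 CLS node
graph_vec = readout(H, "cls")
```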
", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_42", "text": " In addition to performance improvement, we show that SAT offers better model interpretability compared to the classic Transformer with only absolute positional encoding. We respectively train a SAT model and a Transformer with a CLS readout on the Mutagenicity dataset, and visualize the attention scores between the (CLS) node and other nodes learned by SAT and the Transformer in Figure 4. The salient difference between the two models is that SAT has structure-aware node embeddings, and thus we can attribute the following interpretability gains to that. While both models manage to identify some chemical motifs known for mutagenicity, such as NO2 and NH2, the attention scores learned by SAT are sparser and more informative, meaning that SAT puts more attention weights on these known mutagenic motifs than the Transformer with RWPE. The vanilla Transformer even fails to put attention on some important atoms such as the H atoms in the NH2 group. The only H atoms highlighted by SAT are those in the NH2 group, suggesting that our SAT indeed takes the structure into account. More focus on these discriminative motifs makes the SAT model less influenced by other chemical patterns that commonly exist in the dataset, such as benzene, and thus leads to overall improved performance. More results are provided in the Appendix. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_43", "text": " We introduced the SAT model, which successfully incorporates structural information into the Transformer architecture and overcomes the limitations of the absolute encoding. In addition to SOTA empirical performance with minimal hyperparameter tuning, SAT also provides better interpretability than the Transformer. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_44", "text": " As mentioned above, k𝑘k-subgraph SAT has higher memory requirements than k𝑘k-subtree SAT, which can restrict its applicability if access to high memory GPUs is restricted. We see the main limitation of SAT is that it suffers from the same drawbacks as the Transformer, namely the quadratic complexity of the self-attention computation. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_45", "text": " Because SAT can be combined with any GNN, a natural extension of our work is to combine SAT with structure extractors which have shown to be strictly more expressive than the 1-WL test, such as the recent topological GNN introduced by Horn et al. (2021). Additionally, the SAT framework is flexible and can incorporate any structure extractor which produces structure-aware node representations, and could even be extended beyond using GNNs, such as differentiable graph kernels. ", "title": "Structure-Aware Transformer for Graph Representation Learning" }, { "id": "2202.03036_all_46", "text": " Another important area for future work is to focus on reducing the high memory cost and time complexity of the self-attention computation, as is being done in recent efforts for developing a so-called linear transformer, which has linear complexity in both time and space requirements (Tay et al., 2020; Wang et al., 2020; Qin et al., 2022). ", "title": "Structure-Aware Transformer for Graph Representation Learning" } ]
Does the K-means algorithm discussed in the paper require a labelled dataset?
The K-means algorithm groups classes into sets that the generalist model often predicts together [24]. In this work, this clustering approach did not require true labels [27]. However, the models themselves were trained on examples from a dataset, JFT, which contains labeled images [28]. Thus, although the K-means step itself does not require a labeled dataset, the models whose predictions it clusters were trained on one [26].
[ 24, 27, 28, 26 ]
[ { "id": "1503.02531_all_0", "text": " Many insects have a larval form that is optimized for extracting energy and nutrients from the environment and a completely different adult form that is optimized for the very different requirements of traveling and reproduction. In large-scale machine learning, we typically use very similar models for the training stage and the deployment stage despite their very different requirements: For tasks like speech and object recognition, training must extract structure from very large, highly redundant datasets but it does not need to operate in real time and it can use a huge amount of computation. Deployment to a large number of users, however, has much more stringent requirements on latency and computational resources. The analogy with insects suggests that we should be willing to train very cumbersome models if that makes it easier to extract structure from the data. The cumbersome model could be an ensemble of separately trained models or a single very large model trained with a very strong regularizer such as dropout . Once the cumbersome model has been trained, we can then use a different kind of training, which we call “distillation” to transfer the knowledge from the cumbersome model to a small model that is more suitable for deployment. A version of this strategy has already been pioneered by Rich Caruana and his collaborators . In their important paper they demonstrate convincingly that the knowledge acquired by a large ensemble of models can be transferred to a single small model. ", "title": "Distilling the Knowledge in a Neural Network" }, { "id": "1503.02531_all_1", "text": " A conceptual block that may have prevented more investigation of this very promising approach is that we tend to identify the knowledge in a trained model with the learned parameter values and this makes it hard to see how we can change the form of the model but keep the same knowledge. A more abstract view of the knowledge, that frees it from any particular instantiation, is that it is a learned mapping from input vectors to output vectors. For cumbersome models that learn to discriminate between a large number of classes, the normal training objective is to maximize the average log probability of the correct answer, but a side-effect of the learning is that the trained model assigns probabilities to all of the incorrect answers and even when these probabilities are very small, some of them are much larger than others. The relative probabilities of incorrect answers tell us a lot about how the cumbersome model tends to generalize. An image of a BMW, for example, may only have a very small chance of being mistaken for a garbage truck, but that mistake is still many times more probable than mistaking it for a carrot. ", "title": "Distilling the Knowledge in a Neural Network" }, { "id": "1503.02531_all_2", "text": " It is generally accepted that the objective function used for training should reflect the true objective of the user as closely as possible. Despite this, models are usually trained to optimize performance on the training data when the real objective is to generalize well to new data. It would clearly be better to train models to generalize well, but this requires information about the correct way to generalize and this information is not normally available. When we are distilling the knowledge from a large model into a small one, however, we can train the small model to generalize in the same way as the large model. 
If the cumbersome model generalizes well because, for example, it is the average of a large ensemble of different models, a small model trained to generalize in the same way will typically do much better on test data than a small model that is trained in the normal way on the same training set as was used to train the ensemble. ", "title": "Distilling the Knowledge in a Neural Network" }, { "id": "1503.02531_all_3", "text": " An obvious way to transfer the generalization ability of the cumbersome model to a small model is to use the class probabilities produced by the cumbersome model as “soft targets” for training the small model. For this transfer stage, we could use the same training set or a separate “transfer” set. When the cumbersome model is a large ensemble of simpler models, we can use an arithmetic or geometric mean of their individual predictive distributions as the soft targets. When the soft targets have high entropy, they provide much more information per training case than hard targets and much less variance in the gradient between training cases, so the small model can often be trained on much less data than the original cumbersome model and using a much higher learning rate. ", "title": "Distilling the Knowledge in a Neural Network" }, { "id": "1503.02531_all_4", "text": " For tasks like MNIST in which the cumbersome model almost always produces the correct answer with very high confidence, much of the information about the learned function resides in the ratios of very small probabilities in the soft targets. For example, one version of a 2 may be given a probability of 10−6superscript10610^{-6} of being a 3 and 10−9superscript10910^{-9} of being a 7 whereas for another version it may be the other way around. This is valuable information that defines a rich similarity structure over the data (i. e. it says which 2’s look like 3’s and which look like 7’s) but it has very little influence on the cross-entropy cost function during the transfer stage because the probabilities are so close to zero. Caruana and his collaborators circumvent this problem by using the logits (the inputs to the final softmax) rather than the probabilities produced by the softmax as the targets for learning the small model and they minimize the squared difference between the logits produced by the cumbersome model and the logits produced by the small model. Our more general solution, called “distillation”, is to raise the temperature of the final softmax until the cumbersome model produces a suitably soft set of targets. We then use the same high temperature when training the small model to match these soft targets. We show later that matching the logits of the cumbersome model is actually a special case of distillation. ", "title": "Distilling the Knowledge in a Neural Network" }, { "id": "1503.02531_all_5", "text": " The transfer set that is used to train the small model could consist entirely of unlabeled data or we could use the original training set. We have found that using the original training set works well, especially if we add a small term to the objective function that encourages the small model to predict the true targets as well as matching the soft targets provided by the cumbersome model. Typically, the small model cannot exactly match the soft targets and erring in the direction of the correct answer turns out to be helpful. 
", "title": "Distilling the Knowledge in a Neural Network" }, { "id": "1503.02531_all_6", "text": " Neural networks typically produce class probabilities by using a “softmax” output layer that converts the logit, zisubscript𝑧𝑖z_{i}, computed for each class into a probability, qisubscript𝑞𝑖q_{i}, by comparing zisubscript𝑧𝑖z_{i} with the other logits. qi=e​x​p​(zi/T)∑je​x​p​(zj/T)subscript𝑞𝑖𝑒𝑥𝑝subscript𝑧𝑖𝑇subscript𝑗𝑒𝑥𝑝subscript𝑧𝑗𝑇q_{i}=\\frac{exp(z_{i}/T)}{\\sum_{j}exp(z_{j}/T)} (1) where T𝑇T is a temperature that is normally set to 111. Using a higher value for T𝑇T produces a softer probability distribution over classes. ", "title": "Distilling the Knowledge in a Neural Network" }, { "id": "1503.02531_all_7", "text": " In the simplest form of distillation, knowledge is transferred to the distilled model by training it on a transfer set and using a soft target distribution for each case in the transfer set that is produced by using the cumbersome model with a high temperature in its softmax. The same high temperature is used when training the distilled model, but after it has been trained it uses a temperature of 1. ", "title": "Distilling the Knowledge in a Neural Network" }, { "id": "1503.02531_all_8", "text": " When the correct labels are known for all or some of the transfer set, this method can be significantly improved by also training the distilled model to produce the correct labels. One way to do this is to use the correct labels to modify the soft targets, but we found that a better way is to simply use a weighted average of two different objective functions. The first objective function is the cross entropy with the soft targets and this cross entropy is computed using the same high temperature in the softmax of the distilled model as was used for generating the soft targets from the cumbersome model. The second objective function is the cross entropy with the correct labels. This is computed using exactly the same logits in softmax of the distilled model but at a temperature of 1. We found that the best results were generally obtained by using a condiderably lower weight on the second objective function. Since the magnitudes of the gradients produced by the soft targets scale as 1/T21superscript𝑇21/T^{2} it is important to multiply them by T2superscript𝑇2T^{2} when using both hard and soft targets. This ensures that the relative contributions of the hard and soft targets remain roughly unchanged if the temperature used for distillation is changed while experimenting with meta-parameters. ", "title": "Distilling the Knowledge in a Neural Network" }, { "id": "1503.02531_all_9", "text": " Each case in the transfer set contributes a cross-entropy gradient, d​C/d​zi𝑑𝐶𝑑subscript𝑧𝑖dC/dz_{i}, with respect to each logit, zisubscript𝑧𝑖z_{i} of the distilled model. 
If the cumbersome model has logits $v_i$ which produce soft target probabilities $p_i$ and the transfer training is done at a temperature of $T$, this gradient is given by: $\frac{\partial C}{\partial z_i} = \frac{1}{T}(q_i - p_i) = \frac{1}{T}\left(\frac{e^{z_i/T}}{\sum_j e^{z_j/T}} - \frac{e^{v_i/T}}{\sum_j e^{v_j/T}}\right)$ (2) ", "title": "Distilling the Knowledge in a Neural Network" }, { "id": "1503.02531_all_10", "text": " If the temperature is high compared with the magnitude of the logits, we can approximate: $\frac{\partial C}{\partial z_i} \approx \frac{1}{T}\left(\frac{1 + z_i/T}{N + \sum_j z_j/T} - \frac{1 + v_i/T}{N + \sum_j v_j/T}\right)$ (3) ", "title": "Distilling the Knowledge in a Neural Network" }, { "id": "1503.02531_all_11", "text": " If we now assume that the logits have been zero-meaned separately for each transfer case so that $\sum_j z_j = \sum_j v_j = 0$, Eq. 3 simplifies to: $\frac{\partial C}{\partial z_i} \approx \frac{1}{NT^2}(z_i - v_i)$ (4). So in the high temperature limit, distillation is equivalent to minimizing $\frac{1}{2}(z_i - v_i)^2$, provided the logits are zero-meaned separately for each transfer case. At lower temperatures, distillation pays much less attention to matching logits that are much more negative than the average. This is potentially advantageous because these logits are almost completely unconstrained by the cost function used for training the cumbersome model so they could be very noisy. On the other hand, the very negative logits may convey useful information about the knowledge acquired by the cumbersome model. Which of these effects dominates is an empirical question. We show that when the distilled model is much too small to capture all of the knowledge in the cumbersome model, intermediate temperatures work best, which strongly suggests that ignoring the large negative logits can be helpful. ", "title": "Distilling the Knowledge in a Neural Network" }, { "id": "1503.02531_all_12", "text": " To see how well distillation works, we trained a single large neural net with two hidden layers of 1200 rectified linear hidden units on all 60,000 training cases. The net was strongly regularized using dropout and weight-constraints as described in . Dropout can be viewed as a way of training an exponentially large ensemble of models that share weights. In addition, the input images were jittered by up to two pixels in any direction. This net achieved 67 test errors whereas a smaller net with two hidden layers of 800 rectified linear hidden units and no regularization achieved 146 errors. But if the smaller net was regularized solely by adding the additional task of matching the soft targets produced by the large net at a temperature of 20, it achieved 74 test errors. 
This shows that soft targets can transfer a great deal of knowledge to the distilled model, including the knowledge about how to generalize that is learned from translated training data even though the transfer set does not contain any translations. ", "title": "Distilling the Knowledge in a Neural Network" }, { "id": "1503.02531_all_13", "text": " When the distilled net had 300 or more units in each of its two hidden layers, all temperatures above 8 gave fairly similar results. But when this was radically reduced to 30 units per layer, temperatures in the range 2.5 to 4 worked significantly better than higher or lower temperatures. ", "title": "Distilling the Knowledge in a Neural Network" }, { "id": "1503.02531_all_14", "text": " We then tried omitting all examples of the digit 3 from the transfer set. So from the perspective of the distilled model, 3 is a mythical digit that it has never seen. Despite this, the distilled model only makes 206 test errors of which 133 are on the 1010 threes in the test set. Most of the errors are caused by the fact that the learned bias for the 3 class is much too low. If this bias is increased by 3.5 (which optimizes overall performance on the test set), the distilled model makes 109 errors of which 14 are on 3s. So with the right bias, the distilled model gets 98.6% of the test 3s correct despite never having seen a 3 during training. If the transfer set contains only the 7s and 8s from the training set, the distilled model makes 47.3% test errors, but when the biases for 7 and 8 are reduced by 7.6 to optimize test performance, this falls to 13.2% test errors. ", "title": "Distilling the Knowledge in a Neural Network" }, { "id": "1503.02531_all_15", "text": " In this section, we investigate the effects of ensembling Deep Neural Network (DNN) acoustic models that are used in Automatic Speech Recognition (ASR). We show that the distillation strategy that we propose in this paper achieves the desired effect of distilling an ensemble of models into a single model that works significantly better than a model of the same size that is learned directly from the same training data. ", "title": "Distilling the Knowledge in a Neural Network" }, { "id": "1503.02531_all_16", "text": " State-of-the-art ASR systems currently use DNNs to map a (short) temporal context of features derived from the waveform to a probability distribution over the discrete states of a Hidden Markov Model (HMM) . More specifically, the DNN produces a probability distribution over clusters of tri-phone states at each time and a decoder then finds a path through the HMM states that is the best compromise between using high probability states and producing a transcription that is probable under the language model. 
", "title": "Distilling the Knowledge in a Neural Network" }, { "id": "1503.02531_all_17", "text": " Although it is possible (and desirable) to train the DNN in such a way that the decoder (and, thus, the language model) is taken into account by marginalizing over all possible paths, it is common to train the DNN to perform frame-by-frame classification by (locally) minimizing the cross entropy between the predictions made by the net and the labels given by a forced alignment with the ground truth sequence of states for each observation: 𝜽=arg⁡max𝜽′⁡P​(ht|𝐬t;𝜽′)𝜽subscriptsuperscript𝜽′𝑃conditionalsubscriptℎ𝑡subscript𝐬𝑡superscript𝜽′\\boldsymbol{\\theta}=\\arg\\max_{\\boldsymbol{\\theta}^{\\prime}}P(h_{t}|\\mathbf{s}_{t};\\boldsymbol{\\theta}^{\\prime}) where 𝜽𝜽\\boldsymbol{\\theta} are the parameters of our acoustic model P𝑃P which maps acoustic observations at time t𝑡t, 𝐬tsubscript𝐬𝑡\\mathbf{s}_{t}, to a probability, P​(ht|𝐬t;𝜽′)𝑃conditionalsubscriptℎ𝑡subscript𝐬𝑡superscript𝜽′P(h_{t}|\\mathbf{s}_{t};\\boldsymbol{\\theta}^{\\prime}) , of the “correct” HMM state htsubscriptℎ𝑡h_{t}, which is determined by a forced alignment with the correct sequence of words. The model is trained with a distributed stochastic gradient descent approach. ", "title": "Distilling the Knowledge in a Neural Network" }, { "id": "1503.02531_all_18", "text": " We use an architecture with 8 hidden layers each containing 2560 rectified linear units and a final softmax layer with 14,000 labels (HMM targets htsubscriptℎ𝑡h_{t}). The input is 26 frames of 40 Mel-scaled filterbank coefficients with a 10ms advance per frame and we predict the HMM state of 21st frame. The total number of parameters is about 85M. This is a slightly outdated version of the acoustic model used by Android voice search, and should be considered as a very strong baseline. To train the DNN acoustic model we use about 2000 hours of spoken English data, which yields about 700M training examples. This system achieves a frame accuracy of 58.9%, and a Word Error Rate (WER) of 10.9% on our development set. ", "title": "Distilling the Knowledge in a Neural Network" }, { "id": "1503.02531_all_19", "text": " We trained 10 separate models to predict P​(ht|𝐬t;𝜽)𝑃conditionalsubscriptℎ𝑡subscript𝐬𝑡𝜽P(h_{t}|\\mathbf{s}_{t};\\boldsymbol{\\theta}), using exactly the same architecture and training procedure as the baseline. The models are randomly initialized with different initial parameter values and we find that this creates sufficient diversity in the trained models to allow the averaged predictions of the ensemble to significantly outperform the individual models. We have explored adding diversity to the models by varying the sets of data that each model sees, but we found this to not significantly change our results, so we opted for the simpler approach. For the distillation we tried temperatures of (1,𝟐,5,10)12510(1,{\\bf 2},5,10) and used a relative weight of 0.5 on the cross-entropy for the hard targets, where bold font indicates the best value that was used for table 1 . ", "title": "Distilling the Knowledge in a Neural Network" }, { "id": "1503.02531_all_20", "text": " Table 1 shows that, indeed, our distillation approach is able to extract more useful information from the training set than simply using the hard labels to train a single model. 
More than 80% of the improvement in frame classification accuracy achieved by using an ensemble of 10 models is transferred to the distilled model which is similar to the improvement we observed in our preliminary experiments on MNIST. The ensemble gives a smaller improvement on the ultimate objective of WER (on a 23K-word test set) due to the mismatch in the objective function, but again, the improvement in WER achieved by the ensemble is transferred to the distilled model. ", "title": "Distilling the Knowledge in a Neural Network" }, { "id": "1503.02531_all_21", "text": " We have recently become aware of related work on learning a small acoustic model by matching the class probabilities of an already trained larger model . However, they do the distillation at a temperature of 1 using a large unlabeled dataset and their best distilled model only reduces the error rate of the small model by 28% of the gap between the error rates of the large and small models when they are both trained with hard labels. ", "title": "Distilling the Knowledge in a Neural Network" }, { "id": "1503.02531_all_22", "text": " Training an ensemble of models is a very simple way to take advantage of parallel computation and the usual objection that an ensemble requires too much computation at test time can be dealt with by using distillation. There is, however, another important objection to ensembles: If the individual models are large neural networks and the dataset is very large, the amount of computation required at training time is excessive, even though it is easy to parallelize. ", "title": "Distilling the Knowledge in a Neural Network" }, { "id": "1503.02531_all_23", "text": " In this section we give an example of such a dataset and we show how learning specialist models that each focus on a different confusable subset of the classes can reduce the total amount of computation required to learn an ensemble. The main problem with specialists that focus on making fine-grained distinctions is that they overfit very easily and we describe how this overfitting may be prevented by using soft targets. ", "title": "Distilling the Knowledge in a Neural Network" }, { "id": "1503.02531_all_24", "text": " JFT is an internal Google dataset that has 100 million labeled images with 15,000 labels. When we did this work, Google’s baseline model for JFT was a deep convolutional neural network that had been trained for about six months using asynchronous stochastic gradient descent on a large number of cores. This training used two types of parallelism . First, there were many replicas of the neural net running on different sets of cores and processing different mini-batches from the training set. Each replica computes the average gradient on its current mini-batch and sends this gradient to a sharded parameter server which sends back new values for the parameters. These new values reflect all of the gradients received by the parameter server since the last time it sent parameters to the replica. Second, each replica is spread over multiple cores by putting different subsets of the neurons on each core. Ensemble training is yet a third type of parallelism that can be wrapped around the other two types, but only if a lot more cores are available. Waiting for several years to train an ensemble of models was not an option, so we needed a much faster way to improve the baseline model. 
", "title": "Distilling the Knowledge in a Neural Network" }, { "id": "1503.02531_all_25", "text": " When the number of classes is very large, it makes sense for the cumbersome model to be an ensemble that contains one generalist model trained on all the data and many “specialist” models, each of which is trained on data that is highly enriched in examples from a very confusable subset of the classes (like different types of mushroom). The softmax of this type of specialist can be made much smaller by combining all of the classes it does not care about into a single dustbin class. ", "title": "Distilling the Knowledge in a Neural Network" }, { "id": "1503.02531_all_26", "text": " To reduce overfitting and share the work of learning lower level feature detectors, each specialist model is initialized with the weights of the generalist model. These weights are then slightly modified by training the specialist with half its examples coming from its special subset and half sampled at random from the remainder of the training set. After training, we can correct for the biased training set by incrementing the logit of the dustbin class by the log of the proportion by which the specialist class is oversampled. ", "title": "Distilling the Knowledge in a Neural Network" }, { "id": "1503.02531_all_27", "text": " In order to derive groupings of object categories for the specialists, we decided to focus on categories that our full network often confuses. Even though we could have computed the confusion matrix and used it as a way to find such clusters, we opted for a simpler approach that does not require the true labels to construct the clusters. ", "title": "Distilling the Knowledge in a Neural Network" }, { "id": "1503.02531_all_28", "text": " In particular, we apply a clustering algorithm to the covariance matrix of the predictions of our generalist model, so that a set of classes Smsuperscript𝑆𝑚S^{m} that are often predicted together will be used as targets for one of our specialist models, m𝑚m. We applied an on-line version of the K-means algorithm to the columns of the covariance matrix, and obtained reasonable clusters (shown in Table 2). We tried several clustering algorithms which produced similar results. ", "title": "Distilling the Knowledge in a Neural Network" }, { "id": "1503.02531_all_29", "text": " Before investigating what happens when specialist models are distilled, we wanted to see how well ensembles containing specialists performed. In addition to the specialist models, we always have a generalist model so that we can deal with classes for which we have no specialists and so that we can decide which specialists to use. Given an input image 𝐱𝐱\\mathbf{x}, we do top-one classification in two steps: ", "title": "Distilling the Knowledge in a Neural Network" }, { "id": "1503.02531_all_30", "text": " Step 1: For each test case, we find the n𝑛n most probable classes according to the generalist model. Call this set of classes k𝑘k. In our experiments, we used n=1𝑛1n=1. ", "title": "Distilling the Knowledge in a Neural Network" }, { "id": "1503.02531_all_31", "text": " Step 2: We then take all the specialist models, m𝑚m, whose special subset of confusable classes, Smsuperscript𝑆𝑚S^{m}, has a non-empty intersection with k𝑘k and call this the active set of specialists Aksubscript𝐴𝑘A_{k} (note that this set may be empty). 
We then find the full probability distribution $\mathbf{q}$ over all the classes that minimizes: $KL(\mathbf{p}^g, \mathbf{q}) + \sum_{m \in A_k} KL(\mathbf{p}^m, \mathbf{q})$ (5), where $KL$ denotes the KL divergence, and $\mathbf{p}^m$ and $\mathbf{p}^g$ denote the probability distributions of a specialist model and the generalist full model. The distribution $\mathbf{p}^m$ is a distribution over all the specialist classes of $m$ plus a single dustbin class, so when computing its KL divergence from the full $\mathbf{q}$ distribution we sum all of the probabilities that the full $\mathbf{q}$ distribution assigns to all the classes in $m$'s dustbin. ", "title": "Distilling the Knowledge in a Neural Network" }, { "id": "1503.02531_all_32", "text": " Eq. 5 does not have a general closed form solution, though when all the models produce a single probability for each class the solution is either the arithmetic or geometric mean, depending on whether we use $KL(\mathbf{p}, \mathbf{q})$ or $KL(\mathbf{q}, \mathbf{p})$. We parameterize $\mathbf{q} = \mathrm{softmax}(\mathbf{z})$ (with $T = 1$) and we use gradient descent to optimize the logits $\mathbf{z}$ w.r.t. Eq. 5. Note that this optimization must be carried out for each image. ", "title": "Distilling the Knowledge in a Neural Network" }, { "id": "1503.02531_all_33", "text": " Starting from the trained baseline full network, the specialists train extremely fast (a few days instead of many weeks for JFT). Also, all the specialists are trained completely independently. Table 3 shows the absolute test accuracy for the baseline system and the baseline system combined with the specialist models. With 61 specialist models, there is a 4.4% relative improvement in test accuracy overall. We also report conditional test accuracy, which is the accuracy obtained by only considering examples belonging to the specialist classes, and restricting our predictions to that subset of classes. ", "title": "Distilling the Knowledge in a Neural Network" }, { "id": "1503.02531_all_34", "text": " For our JFT specialist experiments, we trained 61 specialist models, each with 300 classes (plus the dustbin class). Because the sets of classes for the specialists are not disjoint, we often had multiple specialists covering a particular image class. Table 4 shows the number of test set examples, the change in the number of examples correct at position 1 when using the specialist(s), and the relative percentage improvement in top-1 accuracy for the JFT dataset, broken down by the number of specialists covering the class. We are encouraged by the general trend that accuracy improvements are larger when we have more specialists covering a particular class, since training independent specialist models is very easy to parallelize. ", "title": "Distilling the Knowledge in a Neural Network" }, { "id": "1503.02531_all_35", "text": " One of our main claims about using soft targets instead of hard targets is that a lot of helpful information can be carried in soft targets that could not possibly be encoded with a single hard target. In this section we demonstrate that this is a very large effect by using far less data to fit the 85M parameters of the baseline speech model described earlier. 
Table 5 shows that with only 3% of the data (about 20M examples), training the baseline model with hard targets leads to severe overfitting (we did early stopping, as the accuracy drops sharply after reaching 44.5%), whereas the same model trained with soft targets is able to recover almost all the information in the full training set (about 2% shy). It is even more remarkable to note that we did not have to do early stopping: the system with soft targets simply “converged” to 57%. This shows that soft targets are a very effective way of communicating the regularities discovered by a model trained on all of the data to another model. ", "title": "Distilling the Knowledge in a Neural Network" }, { "id": "1503.02531_all_36", "text": " The specialists that we used in our experiments on the JFT dataset collapsed all of their non-specialist classes into a single dustbin class. If we allow specialists to have a full softmax over all classes, there may be a much better way to prevent them overfitting than using early stopping. A specialist is trained on data that is highly enriched in its special classes. This means that the effective size of its training set is much smaller and it has a strong tendency to overfit on its special classes. This problem cannot be solved by making the specialist a lot smaller because then we lose the very helpful transfer effects we get from modeling all of the non-specialist classes. ", "title": "Distilling the Knowledge in a Neural Network" }, { "id": "1503.02531_all_37", "text": " Our experiment using 3% of the speech data strongly suggests that if a specialist is initialized with the weights of the generalist, we can make it retain nearly all of its knowledge about the non-special classes by training it with soft targets for the non-special classes in addition to training it with hard targets. The soft targets can be provided by the generalist. We are currently exploring this approach. ", "title": "Distilling the Knowledge in a Neural Network" }, { "id": "1503.02531_all_38", "text": " The use of specialists that are trained on subsets of the data has some resemblance to mixtures of experts which use a gating network to compute the probability of assigning each example to each expert. At the same time as the experts are learning to deal with the examples assigned to them, the gating network is learning to choose which experts to assign each example to based on the relative discriminative performance of the experts for that example. Using the discriminative performance of the experts to determine the learned assignments is much better than simply clustering the input vectors and assigning an expert to each cluster, but it makes the training hard to parallelize: First, the weighted training set for each expert keeps changing in a way that depends on all the other experts and second, the gating network needs to compare the performance of different experts on the same example to know how to revise its assignment probabilities. These difficulties have meant that mixtures of experts are rarely used in the regime where they might be most beneficial: tasks with huge datasets that contain distinctly different subsets. ", "title": "Distilling the Knowledge in a Neural Network" }, { "id": "1503.02531_all_39", "text": " It is much easier to parallelize the training of multiple specialists. We first train a generalist model and then use the confusion matrix to define the subsets that the specialists are trained on. 
Once these subsets have been defined the specialists can be trained entirely independently. At test time we can use the predictions from the generalist model to decide which specialists are relevant and only these specialists need to be run. ", "title": "Distilling the Knowledge in a Neural Network" }, { "id": "1503.02531_all_40", "text": " We have shown that distilling works very well for transferring knowledge from an ensemble or from a large highly regularized model into a smaller, distilled model. On MNIST distillation works remarkably well even when the transfer set that is used to train the distilled model lacks any examples of one or more of the classes. For a deep acoustic model that is version of the one used by Android voice search, we have shown that nearly all of the improvement that is achieved by training an ensemble of deep neural nets can be distilled into a single neural net of the same size which is far easier to deploy. ", "title": "Distilling the Knowledge in a Neural Network" }, { "id": "1503.02531_all_41", "text": " For really big neural networks, it can be infeasible even to train a full ensemble, but we have shown that the performance of a single really big net that has been trained for a very long time can be significantly improved by learning a large number of specialist nets, each of which learns to discriminate between the classes in a highly confusable cluster. We have not yet shown that we can distill the knowledge in the specialists back into the single large net. ", "title": "Distilling the Knowledge in a Neural Network" }, { "id": "1503.02531_all_42", "text": " We thank Yangqing Jia for assistance with training models on ImageNet and Ilya Sutskever and Yoram Singer for helpful discussions. ", "title": "Distilling the Knowledge in a Neural Network" } ]
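As a complement to the answer above, the clustering step it refers to can be sketched in a few lines. This is a minimal NumPy/scikit-learn sketch under stated assumptions: `generalist_probs` is a hypothetical array of softmax outputs of an already trained generalist model (random placeholders here), batch `KMeans` stands in for the on-line variant mentioned in the paper, and the number of clusters is arbitrary. No ground-truth labels appear anywhere in this step.

```python
import numpy as np
from sklearn.cluster import KMeans

# Softmax outputs of a trained generalist model on unlabeled images:
# shape (num_images, num_classes). Random placeholder values stand in
# for real predictions here.
rng = np.random.default_rng(0)
generalist_probs = rng.dirichlet(alpha=np.ones(50), size=10_000)

# Covariance matrix of the predictions: entry (i, j) is large when
# classes i and j tend to receive probability mass on the same images,
# i.e. when the generalist often confuses them.
cov = np.cov(generalist_probs, rowvar=False)   # (num_classes, num_classes)

# Cluster the columns of the covariance matrix; each column is the
# "co-prediction profile" of one class. Classes in the same cluster
# define the confusable subset handled by one specialist model.
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0).fit(cov.T)

specialist_subsets = [np.flatnonzero(kmeans.labels_ == m) for m in range(10)]
print(specialist_subsets[0])  # class indices assigned to specialist 0
```

Only the generalist's predictions enter this computation; the labels needed to train the generalist itself come from the underlying dataset, which is the distinction the answer draws.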
What considerations would the authors need to take into account to extend this model to languages that have more phonemes than American English (e.g., Indian languages, Chinese, etc.)?
To extend this model to other languages or to datasets other than the TIMIT corpus, the authors would need to extend the phoneme set to cover phonemes that are not included in American English [27].
[ 27 ]
[ { "id": "1506.07503_all_0", "text": " Recently, attention-based recurrent networks have been successfully applied to a wide variety of tasks, such as handwriting synthesis , machine translation , image caption generation  and visual object classification .111An early version of this work was presented at the NIPS 2014 Deep Learning Workshop . Such models iteratively process their input by selecting relevant content at every step. This basic idea significantly extends the applicability range of end-to-end training methods, for instance, making it possible to construct networks with external memory (6, 7). ", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_1", "text": " We introduce extensions to attention-based recurrent networks that make them applicable to speech recognition. Learning to recognize speech can be viewed as learning to generate a sequence (transcription) given another sequence (speech). From this perspective it is similar to machine translation and handwriting synthesis tasks, for which attention-based methods have been found suitable (2, 1). However, compared to machine translation, speech recognition principally differs by requesting much longer input sequences (thousands of frames instead of dozens of words), which introduces a challenge of distinguishing similar speech fragments222Explained in more detail in Sec. 2.1. in a single utterance. It is also different from handwriting synthesis, since the input sequence is much noisier and does not have as clear structure. For these reasons speech recognition is an interesting testbed for developing new attention-based architectures capable of processing long and noisy inputs. ", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_2", "text": " Application of attention-based models to speech recognition is also an important step toward building fully end-to-end trainable speech recognition systems, which is an active area of research. The dominant approach is still based on hybrid systems consisting of a deep neural acoustic model, a triphone HMM model and an n-gram language model (8, 9). This requires dictionaries of hand-crafted pronunciation and phoneme lexicons, and a multi-stage training procedure to make the components work together. Excellent results by an HMM-less recognizer have recently been reported, with the system consisting of a CTC-trained neural network and a language model . Still, the language model was added only at the last stage in that work, thus leaving open a question of how much an acoustic model can benefit from being aware of a language model during training. ", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_3", "text": " In this paper, we evaluate attention-based models on a phoneme recognition task using the widely-used TIMIT dataset. At each time step in generating an output sequence (phonemes), an attention mechanism selects or weighs the signals produced by a trained feature extraction mechanism at potentially all of the time steps in the input sequence (speech frames). The weighted feature vector then helps to condition the generation of the next element of the output sequence. Since the utterances in this dataset are rather short (mostly under 5 seconds), we measure the ability of the considered models in recognizing much longer utterances which were created by artificially concatenating the existing utterances. 
", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_4", "text": " We start with a model proposed in for the machine translation task as the baseline. This model seems entirely vulnerable to the issue of similar speech fragments but despite our expectations it was competitive on the original test set, reaching 18.7% phoneme error rate (PER). However, its performance degraded quickly with longer, concatenated utterances. We provide evidence that this model adapted to track the absolute location in the input sequence of the content it is recognizing, a strategy feasible for short utterances from the original test set but inherently unscalable. ", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_5", "text": " In order to circumvent this undesired behavior, in this paper, we propose to modify the attention mechanism such that it explicitly takes into account both (a) the location of the focus from the previous step, as in  and (b) the features of the input sequence, as in . This is achieved by adding as inputs to the attention mechanism auxiliary convolutional features which are extracted by convolving the attention weights from the previous step with trainable filters. We show that a model with such convolutional features performs significantly better on the considered task (18.0% PER). More importantly, the model with convolutional features robustly recognized utterances many times longer than the ones from the training set, always staying below 20% PER. ", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_6", "text": " Therefore, the contribution of this work is three-fold. For one, we present a novel purely neural speech recognition architecture based on an attention mechanism, whose performance is comparable to that of the conventional approaches on the TIMIT dataset. Moreover, we propose a generic method of adding location awareness to the attention mechanism. Finally, we introduce a modification of the attention mechanism to avoid concentrating the attention on a single frame, and thus avoid obtaining less “effective training examples”, bringing the PER down to 17.6%. ", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_7", "text": " An attention-based recurrent sequence generator (ARSG) is a recurrent neural network that stochastically generates an output sequence (y1,…,yT)subscript𝑦1…subscript𝑦𝑇(y_{1},\\dots,y_{T}) from an input x𝑥x. In practice, x𝑥x is often processed by an encoder which outputs a sequential input representation h=(h1,…,hL)ℎsubscriptℎ1…subscriptℎ𝐿h=(h_{1},\\ldots,h_{L}) more suitable for the attention mechanism to work with. ", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_8", "text": " In the context of this work, the output y𝑦y is a sequence of phonemes, and the input x=(x1,…,xL′)𝑥subscript𝑥1…subscript𝑥superscript𝐿′x=(x_{1},\\ldots,x_{L^{\\prime}}) is a sequence of feature vectors. Each feature vector is extracted from a small overlapping window of audio frames. The encoder is implemented as a deep bidirectional recurrent network (BiRNN), to form a sequential representation hℎh of length L=L′𝐿superscript𝐿′L=L^{\\prime}. 
", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_9", "text": " At the i𝑖i-th step an ARSG generates an output yisubscript𝑦𝑖y_{i} by focusing on the relevant elements of hℎh: αi=A​t​t​e​n​d​(si−1,αi−1,h)subscript𝛼𝑖𝐴𝑡𝑡𝑒𝑛𝑑subscript𝑠𝑖1subscript𝛼𝑖1ℎ\\displaystyle\\alpha_{i}=Attend(s_{i-1},\\alpha_{i-1},h) (1) gi=∑j=1Lαi,j​hjsubscript𝑔𝑖superscriptsubscript𝑗1𝐿subscript𝛼𝑖𝑗subscriptℎ𝑗\\displaystyle g_{i}=\\sum\\limits_{j=1}^{L}\\alpha_{i,j}h_{j} (2) yi∼G​e​n​e​r​a​t​e​(si−1,gi),similar-tosubscript𝑦𝑖𝐺𝑒𝑛𝑒𝑟𝑎𝑡𝑒subscript𝑠𝑖1subscript𝑔𝑖\\displaystyle y_{i}\\sim Generate(s_{i-1},g_{i}), (3) where si−1subscript𝑠𝑖1s_{i-1} is the (i−1)𝑖1(i-1)-th state of the recurrent neural network to which we refer as the generator, αi∈ℝLsubscript𝛼𝑖superscriptℝ𝐿\\alpha_{i}\\in\\mathbb{R}^{L} is a vector of the attention weights, also often called the alignment . Using the terminology from , we call gisubscript𝑔𝑖g_{i} a glimpse. The step is completed by computing a new generator state: si=R​e​c​u​r​r​e​n​c​y​(si−1,gi,yi)subscript𝑠𝑖𝑅𝑒𝑐𝑢𝑟𝑟𝑒𝑛𝑐𝑦subscript𝑠𝑖1subscript𝑔𝑖subscript𝑦𝑖\\displaystyle s_{i}=Recurrency(s_{i-1},g_{i},y_{i}) (4) Long short-term memory units (LSTM, ) and gated recurrent units (GRU, ) are typically used as a recurrent activation, to which we refer as a recurrency. The process is graphically illustrated in Fig. 1. ", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_10", "text": " Inspired by we distinguish between location-based, content-based and hybrid attention mechanisms. A​t​t​e​n​d𝐴𝑡𝑡𝑒𝑛𝑑Attend in Eq. (1) describes the most generic, hybrid attention. If the term αi−1subscript𝛼𝑖1\\alpha_{i-1} is dropped from A​t​t​e​n​d𝐴𝑡𝑡𝑒𝑛𝑑Attend arguments, i.e., αi=A​t​t​e​n​d​(si−1,h)subscript𝛼𝑖𝐴𝑡𝑡𝑒𝑛𝑑subscript𝑠𝑖1ℎ\\alpha_{i}=Attend(s_{i-1},h), we call it content-based (see, e.g., or ). In this case, A​t​t​e​n​d𝐴𝑡𝑡𝑒𝑛𝑑Attend is often implemented by scoring each element in hℎh separately and normalizing the scores: ei,j=S​c​o​r​e​(si−1,hj),subscript𝑒𝑖𝑗𝑆𝑐𝑜𝑟𝑒subscript𝑠𝑖1subscriptℎ𝑗\\displaystyle e_{i,j}=Score(s_{i-1},h_{j}), (5) αi,j=exp⁡(ei,j)/∑j=1Lexp⁡(ei,j).subscript𝛼𝑖𝑗/subscript𝑒𝑖𝑗superscriptsubscript𝑗1𝐿subscript𝑒𝑖𝑗\\displaystyle\\alpha_{i,j}=\\exp(e_{i,j})\\left/\\sum\\limits_{j=1}^{L}\\exp(e_{i,j})\\right.. (6) ", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_11", "text": " The main limitation of such scheme is that identical or very similar elements of hℎh are scored equally regardless of their position in the sequence. This is the issue of “similar speech fragments” raised above. Often this issue is partially alleviated by an encoder such as e.g. a BiRNN  or a deep convolutional network  that encode contextual information into every element of hℎh . However, capacity of hℎh elements is always limited, and thus disambiguation by context is only possible to a limited extent. ", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_12", "text": " Alternatively, a location-based attention mechanism computes the alignment from the generator state and the previous alignment only such that αi=A​t​t​e​n​d​(si−1,αi−1)subscript𝛼𝑖𝐴𝑡𝑡𝑒𝑛𝑑subscript𝑠𝑖1subscript𝛼𝑖1\\alpha_{i}=Attend(s_{i-1},\\alpha_{i-1}). For instance, Graves used the location-based attention mechanism using a Gaussian mixture model in his handwriting synthesis model. 
In the case of speech recognition, this type of location-based attention mechanism would have to predict the distance between consequent phonemes using $s_{i-1}$ only, which we expect to be hard due to large variance of this quantity. ", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_13", "text": " For these limitations associated with both content-based and location-based mechanisms, we argue that a hybrid attention mechanism is a natural candidate for speech recognition. Informally, we would like an attention model that uses the previous alignment $\alpha_{i-1}$ to select a short list of elements from $h$, from which the content-based attention, in Eqs. (5)–(6), will select the relevant ones without confusion. ", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_14", "text": " We start from the ARSG-based model with the content-based attention mechanism proposed in . This model can be described by Eqs. (5)–(6), where $e_{i,j} = w^{\top}\tanh(W s_{i-1} + V h_j + b)$ (7). $w$ and $b$ are vectors, $W$ and $V$ are matrices. ", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_15", "text": " We extend this content-based attention mechanism of the original model to be location-aware by making it take into account the alignment produced at the previous step. First, we extract $k$ vectors $f_{i,j} \in \mathbb{R}^k$ for every position $j$ of the previous alignment $\alpha_{i-1}$ by convolving it with a matrix $F \in \mathbb{R}^{k \times r}$: $f_i = F * \alpha_{i-1}$ (8). These additional vectors $f_{i,j}$ are then used by the scoring mechanism $e_{i,j}$: $e_{i,j} = w^{\top}\tanh(W s_{i-1} + V h_j + U f_{i,j} + b)$ (9) ", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_16", "text": " There are three potential issues with the normalization in Eq. (6). ", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_17", "text": " First, when the input sequence $h$ is long, the glimpse $g_i$ is likely to contain noisy information from many irrelevant feature vectors $h_j$, as the normalized scores $\alpha_{i,j}$ are all positive and sum to 1. This makes it difficult for the proposed ARSG to focus clearly on a few relevant frames at each time $i$. Second, the attention mechanism is required to consider all the $L$ frames each time it decodes a single output $y_i$ while decoding the output of length $T$, leading to a computational complexity of $O(LT)$. This may easily become prohibitively expensive when input utterances are long (an issue that is less serious for machine translation, because in that case the input sequence is made of words, not of 20ms acoustic frames). ", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_18", "text": " The other side of the coin is that the use of softmax normalization in Eq. (6) prefers to mostly focus on only a single feature vector $h_j$. 
This prevents the model from aggregating multiple top-scored frames to form a glimpse $g_i$. ", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_19", "text": " There is a straightforward way to address the first issue of a noisy glimpse by “sharpening” the scores $\alpha_{i,j}$. One way to sharpen the weights is to introduce an inverse temperature $\beta > 1$ into the softmax function such that $a_{i,j} = \exp(\beta e_{i,j}) / \sum_{j=1}^{L} \exp(\beta e_{i,j})$, or to keep only the top-$k$ frames according to the scores and re-normalize them. These sharpening methods, however, still require us to compute the score of every frame each time ($O(LT)$), and they worsen the second issue, of overly narrow focus. ", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_20", "text": " We also propose and investigate a windowing technique. At each time $i$, the attention mechanism considers only a subsequence $\tilde{h} = (h_{p_i - w}, \ldots, h_{p_i + w - 1})$ of the whole sequence $h$, where $w \ll L$ is the predefined window width and $p_i$ is the median of the alignment $\alpha_{i-1}$. The scores for $h_j \notin \tilde{h}$ are not computed, resulting in a lower complexity of $O(L+T)$. This windowing technique is similar to taking the top-$k$ frames, and similarly, has the effect of sharpening. ", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_21", "text": " The proposed sharpening based on windowing can be used both during training and evaluation. Later, in the experiments, we only consider the case where it is used during evaluation. ", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_22", "text": " We observed that the proposed sharpening methods indeed helped with long utterances. However, all of them, and especially selecting the frame with the highest score, negatively affected the model’s performance on the standard development set, which mostly consists of short utterances. These observations let us hypothesize that it is helpful for the model to aggregate selections from multiple top-scored frames. In a sense this brings more diversity, i.e., more effective training examples, to the output part of the model, as more input locations are considered. To facilitate this effect, we replace the unbounded exponential function of the softmax in Eq. (6) with the bounded logistic sigmoid $\sigma$, such that $a_{i,j} = \sigma(e_{i,j}) / \sum_{j=1}^{L} \sigma(e_{i,j})$. This has the effect of smoothing the focus found by the attention mechanism. ", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_23", "text": " Speech recognizers based on the connectionist temporal classification (CTC, ) and its extension, RNN Transducer , are the closest to the ARSG model considered in this paper. They follow earlier work on end-to-end trainable deep learning over sequences with gradient signals flowing through the alignment process . They have been shown to perform well on the phoneme recognition task . 
Furthermore, the CTC was recently found to be able to directly transcribe text from speech without any intermediate phonetic representation . ", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_24", "text": " The considered ARSG is different from both the CTC and RNN Transducer in two ways. First, whereas the attention mechanism deterministically aligns the input and the output sequences, the CTC and RNN Transducer treat the alignment as a latent random variable over which MAP (maximum a posteriori) inference is performed. This deterministic nature of the ARSG’s alignment mechanism allows beam search procedure to be simpler. Furthermore, we empirically observe that a much smaller beam width can be used with the deterministic mechanism, which allows faster decoding (see Sec. 4.2 and Fig. 2). Second, the alignment mechanism of both the CTC and RNN Transducer is constrained to be “monotonic” to keep marginalization of the alignment tractable. On the other hand, the proposed attention mechanism can result in non-monotonic alignment, which makes it suitable for a larger variety of tasks other than speech recognition. ", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_25", "text": " A hybrid attention model using a convolution operation was also proposed in for neural Turing machines (NTM). At each time step, the NTM computes content-based attention weights which are then convolved with a predicted shifting distribution. Unlike the NTM’s approach, the hybrid mechanism proposed here lets learning figure out how the content-based and location-based addressing be combined by a deep, parametric function (see Eq. (9).) ", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_26", "text": " Sukhbaatar et al. describes a similar hybrid attention mechanism, where location embeddings are used as input to the attention model. This approach has an important disadvantage that the model cannot work with an input sequence longer than those seen during training. Our approach, on the other hand, works well on sequences many times longer than those seen during training (see Sec. 5.) ", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_27", "text": " We closely followed the procedure in . All experiments were performed on the TIMIT corpus . We used the train-dev-test split from the Kaldi TIMIT s5 recipe. We trained on the standard 462 speaker set with all SA utterances removed and used the 50 speaker dev set for early stopping. We tested on the 24 speaker core test set. All networks were trained on 40 mel-scale filter-bank features together with the energy in each frame, and first and second temporal differences, yielding in total 123 features per frame. Each feature was rescaled to have zero mean and unit variance over the training set. Networks were trained on the full 61-phone set extended with an extra “end-of-sequence” token that was appended to each target sequence. Similarly, we appended an all-zero frame at the end of each input sequence to indicate the end of the utterance. Decoding was performed using the 61+1 phoneme set, while scoring was done on the 39 phoneme set. 
", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_28", "text": " One property of ARSG models is that different subsets of parameters are reused different number of times; L𝐿L times for those of the encoder, L​T𝐿𝑇LT for the attention weights and T𝑇T times for all the other parameters of the ARSG. This makes the scales of derivatives w.r.t. parameters vary significantly, and we handle it by using an adaptive learning rate algorithm, AdaDelta  which has two hyperparameters ϵitalic-ϵ\\epsilon and ρ𝜌\\rho. All the weight matrices were initialized from a normal Gaussian distribution with its standard deviation set to 0.010.010.01. Recurrent weights were furthermore orthogonalized. ", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_29", "text": " As TIMIT is a relatively small dataset, proper regularization is crucial. We used the adaptive weight noise as a main regularizer . We first trained our models with a column norm constraint  with the maximum norm 111 until the lowest development negative log-likelihood is achieved.333 Applying the weight noise from the beginning of training caused severe underfitting. During this time, ϵitalic-ϵ\\epsilon and ρ𝜌\\rho are set to 10−8superscript10810^{-8} and 0.950.950.95, respectively. At this point, we began using the adaptive weight noise, and scaled down the model complexity cost LCsubscript𝐿𝐶L_{C} by a factor of 10, while disabling the column norm constraints. Once the new lowest development log-likelihood was reached, we fine-tuned the model with a smaller ϵ=10−10italic-ϵsuperscript1010\\epsilon=10^{-10}, until we did not observe the improvement in the development phoneme error rate (PER) for 100K weight updates. Batch size 1 was used throughout the training. ", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_30", "text": " We evaluated the ARSGs with different attention mechanisms. The encoder was a 3-layer BiRNN with 256 GRU units in each direction, and the activations of the 512 top-layer units were used as the representation hℎh. The generator had a single recurrent layer of 256 GRU units. G​e​n​e​r​a​t​e𝐺𝑒𝑛𝑒𝑟𝑎𝑡𝑒Generate in Eq. (3) had a hidden layer of 64 maxout units. The initial states of both the encoder and generator were treated as additional parameters. ", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_31", "text": " Our baseline model is the one with a purely content-based attention mechanism (See Eqs. (5)–(7).) The scoring network in Eq. (7) had 512 hidden units. The other two models use the convolutional features in Eq. (8) with k=10𝑘10k=10 and r=201𝑟201r=201. One of them uses the smoothing from Sec. 2.3. ", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_32", "text": " A left-to-right beam search over phoneme sequences was used during decoding . Beam search was stopped when the “end-of-sequence” token ⟨eos⟩delimited-⟨⟩eos\\left<\\text{eos}\\right> was emitted. We started with a beam width of 10, increasing it up to 40 when the network failed to produce ⟨eos⟩delimited-⟨⟩eos\\left<\\text{eos}\\right> with the narrower beam. As shown in Fig. 2, decoding with a wider beam gives little-to-none benefit. ", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_33", "text": " All the models achieved competitive PERs (see Table 1). 
With the convolutional features, we see 3.7% relative improvement over the baseline and further 5.9% with the smoothing. ", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_34", "text": " To our surprise (see Sec. 2.1.), the baseline model learned to align properly. An alignment produced by the baseline model on a sequence with repeated phonemes (utterance FDHC0_SX209) is presented in Fig. 3 which demonstrates that the baseline model is not confused by short-range repetitions. We can also see from the figure that it prefers to select frames that are near the beginning or even slightly before the phoneme location provided as a part of the dataset. The alignments produced by the other models were very similar visually. ", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_35", "text": " The good performance of the baseline model led us to the question of how it distinguishes between repetitions of similar phoneme sequences and how reliably it decodes longer sequences with more repetitions. We created two datasets of long utterances; one by repeating each test utterance, and the other by concatenating randomly chosen utterances. In both cases, the waveforms were cross-faded with a 0.05s silence inserted as the “pau” phone. We concatenated up to 151515 utterances. ", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_36", "text": " First, we checked the forced alignment with these longer utterances by forcing the generator to emit the correct phonemes. Each alignment was considered correct if 90% of the alignment weight lies inside the ground-truth phoneme window extended by 20 frames on each side. Under this definition, all phones but the ⟨eos⟩delimited-⟨⟩eos\\left<\\text{eos}\\right> shown in Fig. 3 are properly aligned. ", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_37", "text": " The first column of Fig. 4 shows the number of correctly aligned frames w.r.t. the utterance length (in frames) for some of the considered models. One can see that the baseline model was able to decode sequences up to about 120 phones when a single utterance was repeated, and up to about 150 phones when different utterances were concatenated. Even when it failed, it correctly aligned about 50 phones. On the other hand, the model with the hybrid attention mechanism with convolutional features was able to align sequences up to 200 phones long. However, once it began to fail, the model was not able to align almost all phones. The model with the smoothing behaved similarly to the one with convolutional features only. ", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_38", "text": " We examined failed alignments to understand these two different modes of failure. Some of the examples are shown in the Supplementary Materials. ", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_39", "text": " We found that the baseline model properly aligns about 40 first phones, then makes a jump to the end of the recording and cycles over the last 10 phones. This behavior suggests that it learned to track its approximate location in the source sequence. However, the tracking capability is limited to the lengths observed during training. Once the tracker saturates, it jumps to the end of the recording. 
", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_40", "text": " In contrast, when the location-aware network failed it just stopped aligning – no particular frames were selected for each phone. We attribute this behavior to the issue of noisy glimpse discussed in Sec. 2.3. With a long utterance there are many irrelevant frames negatively affecting the weight assigned to the correct frames. In line with this conjecture, the location-aware network works slightly better on the repetition of the same utterance, where all frames are somehow relevant, than on the concatenation of different utterances, where each misaligned frame is irrelevant. ", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_41", "text": " To gain more insight we applied the alignment sharpening schemes described in Sec. 2.3. In the remaining columns of Fig. 4, we see that the sharpening methods help the location-aware network to find proper alignments, while they show little effect on the baseline network. The windowing technique helps both the baseline and location-aware networks, with the location-aware network properly aligning nearly all sequences. ", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_42", "text": " During visual inspection, we noticed that in the middle of very long utterances the baseline model was confused by repetitions of similar content within the window, and that such confusions did not happen in the beginning. This supports our conjecture above. ", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_43", "text": " We evaluated the models on long sequences. Each model was decoded using the alignment sharpening techniques that helped to obtain proper forced alignments. The results are presented in Fig. 5. The baseline model fails to decode long utterances, even when a narrow window is used to constrain the alignments it produces. The two other location-aware networks are able to decode utterances formed by concatenating up to 11 test utterances. Better results were obtained with a wider window, presumably because it resembles more the training conditions when at each step the attention mechanism was seeing the whole input sequence. With the wide window, both of the networks scored about 20% PER on the long utterances, indicating that the proposed location-aware attention mechanism can scale to sequences much longer than those in the training set with only minor modifications required at the decoding stage. ", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_44", "text": " We proposed and evaluated a novel end-to-end trainable speech recognition architecture based on a hybrid attention mechanism which combines both content and location information in order to select the next position in the input sequence for decoding. One desirable property of the proposed model is that it can recognize utterances much longer than the ones it was trained on. In the future, we expect this model to be used to directly recognize text from speech (10, 17), in which case it may become important to incorporate a monolingual language model to the ARSG architecture . 
", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_45", "text": " This work has contributed two novel ideas for attention mechanisms: a better normalization approach yielding smoother alignments and a generic principle for extracting and using features from the previous alignments. Both of these can potentially be applied beyond speech recognition. For instance, the proposed attention can be used without modification in neural Turing machines, or by using 2–D convolution instead of 1–D, for improving image caption generation . ", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_46", "text": " All experiments were conducted using Theano (27, 28), PyLearn2 , and Blocks libraries. ", "title": "Attention-Based Models for Speech Recognition" }, { "id": "1506.07503_all_47", "text": " The authors would like to acknowledge the support of the following agencies for research funding and computing support: National Science Center (Poland), NSERC, Calcul Québec, Compute Canada, the Canada Research Chairs and CIFAR. Bahdanau also thanks Planet Intelligent Systems GmbH and Yandex. ", "title": "Attention-Based Models for Speech Recognition" } ]
What is the motivation behind choosing TREC-COVID for the analysis of annotation bias?
TREC-COVID could be made less biased by manually annotating the missing relevance judgements; doing so revealed a significant performance improvement for non-lexical approaches [5].
[ 5 ]
[ { "id": "2104.08663_all_0", "text": " Major natural language processing (NLP) problems rely on a practical and efficient retrieval component as a first step to find relevant information. Challenging problems include open-domain question-answering , claim-verification , duplicate question detection , and many more. Traditionally, retrieval has been dominated by lexical approaches like TF-IDF or BM25 . However, these approaches suffer from lexical gap and are able to only retrieve documents containing keywords present within the query. Further, lexical approaches treat queries and documents as bag-of-words by not taking word ordering into consideration. ", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_1", "text": " Recently, deep learning and in particular pre-trained Transformer models like BERT have become popular in information retrieval . These neural retrieval systems can be used in many fundamentally different ways to improve retrieval performance. We provide an brief overview of the systems in Section 2.1. Many prior work train neural retrieval systems on large datasets like Natural Questions (NQ) (133k training examples) or MS MARCO (533k training examples), which both focus on passage retrieval given a question or short keyword-based query. In most prior work, approaches are afterward evaluated on the same dataset, where significant performance gains over lexical approaches like BM25 are demonstrated (15, 31, 46). ", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_2", "text": " However, creating a large training corpus is often time-consuming and expensive and hence many retrieval systems are applied in a zero-shot setup, with no available training data to train the system. So far, it is unclear how well existing trained neural models will perform for other text domains or textual retrieval tasks. Even more important, it is unclear how well different approaches, like sparse embeddings vs. dense embeddings, generalize to out-of-distribution data. ", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_3", "text": " In this work, we present a novel robust and heterogeneous benchmark called beir (Benchmarking IR), comprising of 18 retrieval datasets for comparison and evaluation of model generalization. Prior retrieval benchmarks (19, 50) have issues of a comparatively narrow evaluation focusing either only on a single task, like question-answering, or on a certain domain. In beir, we focus on Diversity, we include nine different retrieval tasks: Fact checking, citation prediction, duplicate question retrieval, argument retrieval, news retrieval, question answering, tweet retrieval, bio-medical IR, and entity retrieval. Further, we include datasets from diverse text domains, datasets that cover broad topics (like Wikipedia) and specialized topics (like COVID-19 publications), different text types (news articles vs. Tweets), datasets of various sizes (3.6k - 15M documents), and datasets with different query lengths (average query length between 3 and 192 words) and document lengths (average document length between 11 and 635 words). 
", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_4", "text": " We use beir to evaluate ten diverse retrieval methods from five broad architectures: lexical, sparse, dense, late interaction, and re-ranking. From our analysis, we find that no single approach consistently outperforms other approaches on all datasets. Further, we notice that the in-domain performance of a model does not correlate well with its generalization capabilities: models fine-tuned with identical training data might generalize differently. In terms of efficiency, we find a trade-off between the performances and the computational cost: computationally expensive models, like re-ranking models and late interaction model perform the best. More efficient approaches e.g. based on dense or sparse embeddings can substantially underperform traditional lexical models like BM25. Overall, BM25 remains a strong baseline for zero-shot text retrieval. ", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_5", "text": " Finally, we notice that there can be a strong lexical bias present in datasets included within the benchmark, likely as lexical models are pre-dominantly used during the annotation or creation of datasets. This can give an unfair disadvantage to non-lexical approaches. We analyze this for the TREC-COVID dataset: We manually annotate the missing relevance judgements for the tested systems and see a significant performance improvement for non-lexical approaches. Hence, future work requires better unbiased datasets that allow a fair comparison for all types of retrieval systems. ", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_6", "text": " With beir, we take an important step towards a single and unified benchmark to evaluate the zero-shot capabilities of retrieval systems. It allows to study when and why certain approaches perform well, and hopefully steers innovation to more robust retrieval systems. We release beir and an integration of diverse retrieval systems and datasets in a well-documented, easy to use and extensible open-source package. beir is model-agnostic, welcomes methods of all kinds, and also allows easy integration of new tasks and datasets. More details are available at https://github.com/UKPLab/beir. ", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_7", "text": " To our knowledge, beir is the first broad, zero-shot information retrieval benchmark. Existing works (19, 50) do not evaluate retrieval in a zero-shot setting in depth, they either focus over a single task, small corpora or on a certain domain. This setting hinders for investigation of model generalization across diverse set of domains and task types. MultiReQA consists of eight Question-Answering (QA) datasets and evaluates sentence-level answer retrieval given a question. It only tests a single task and five out of eight datasets are from Wikipedia. Further, MultiReQA evaluates retrieval over rather small corpora: six out of eight tasks have less than 100k candidate sentences, which benefits dense retrieval over lexical as previously shown . KILT consists of five knowledge-intensive tasks including a total of eleven datasets. The tasks involve retrieval, but it is not the primary task. 
Further, KILT retrieves documents only from Wikipedia. ", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_8", "text": " Information retrieval is the process of searching and returning relevant documents for a query from a collection. In our paper, we focus on text retrieval and use document as a cover term for text of any length in the given collection and query for the user input, which can be of any length as well. Traditionally, lexical approaches like TF-IDF and BM25 have dominated textual information retrieval. Recently, there is a strong interest in using neural networks to improve or replace these lexical approaches. In this section, we highlight a few neural-based approaches and we refer the reader to Lin et al. for a recent survey in neural retrieval. ", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_9", "text": " Retriever-based  Lexical approaches suffer from the lexical gap . To overcome this, earlier techniques proposed to improve lexical retrieval systems with neural networks. Sparse methods such as docT5query identified document expansion terms using a sequence-to-sequence model that generated possible queries for which the given document would be relevant. DeepCT on the other hand used a BERT model to learn relevant term weights in a document and generated a pseudo-document representation. Both methods still rely on BM25 for the remaining parts. Similarly, SPARTA learned token-level contextualized representations with BERT and converted the document into an efficient inverse index. More recently, dense retrieval approaches were proposed. They are capable of capturing semantic matches and try to overcome the (potential) lexical gap. Dense retrievers map queries and documents in a shared, dense vector space . This allowed the document representation to be pre-computed and indexed. A bi-encoder neural architecture based on pre-trained Transformers has shown strong performance for various open-domain question-answering tasks (19, 31, 35, 43). This dense approach was recently extended by hybrid lexical-dense approaches which aims to combine the strengths of both approaches (17, 57, 42). Another parallel line of work proposed an unsupervised domain-adaption approach (35, 43) for training dense retrievers by generating synthetic queries on a target domain. Lastly, ColBERT (Contextualized late interaction over BERT) computes multiple contextualized embeddings on a token level for queries and documents and uses an maximum-similarity function for retrieving relevant documents. ", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_10", "text": " Re-ranking-based  Neural re-ranking approaches use the output of a first-stage retrieval system, often BM25, and re-ranks the documents to create a better comparison of the retrieved documents. Significant improvement in performance was achieved with the cross-attention mechanism of BERT . However, at a disadvantage of a high computational overhead . ", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_11", "text": " beir aims to provide a one-stop zero-shot evaluation benchmark for all diverse retrieval tasks. 
To construct a comprehensive evaluation benchmark, the selection methodology is crucial to collect tasks and datasets with desired properties. For beir, the methodology is motivated by the following three factors: (i) Diverse tasks: Information retrieval is a versatile task and the lengths of queries and indexed documents can differ between tasks. Sometimes, queries are short, like a keyword, while in other cases, they can be long like a news article. Similarly, indexed documents can sometimes be long, and for other tasks, short like a tweet. (ii) Diverse domains: Retrieval systems should be evaluated in various types of domains. From broad ones like News or Wikipedia, to highly specialized ones such as scientific publications in one particular field. Hence, we include domains which provide a representation of real-world problems and are diverse ranging from generic to specialized. (iii) Task difficulties: Our benchmark is challenging and the difficulty of a task included has to be sufficient. If a task is easily solved by any algorithm, it will not be useful to compare various models used for evaluation. We evaluated several tasks based on existing literature and selected popular tasks which we believe are recently developed, challenging and are not yet fully solved with existing approaches. (iv) Diverse annotation strategies: Creating retrieval datasets are inherently complex and are subject to annotation biases (see Section 6 for details), which hinders a fair comparison of approaches. To reduce the impact of such biases, we selected datasets which have been created in many different ways: Some where annotated by crowd-workers, others by experts, and others are based on the feedback from large online communities. ", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_12", "text": " In total, we include 18 English zero-shot evaluation datasets from 9 heterogeneous retrieval tasks. As the majority of the evaluated approaches are trained on the MS MARCO dataset, we also report performances on this dataset, but don’t include the outcome in our zero-shot comparison. We would like to refer the reader to Appendix D where we motivate each one of the 9 retrieval tasks and 18 datasets in depth. Examples for each dataset are listed in Table 8. We additionally provide dataset licenses in Appendix E, and links to the datasets in Table 5. ", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_13", "text": " Table 1 summarizes the statistics of the datasets provided in beir. A majority of datasets contain binary relevancy judgements, i.e. relevant or non-relevant, and a few contain fine-grained relevancy judgements. Some datasets contain few relevant documents for a query (< 2), while other datasets like TREC-COVID can contain up to even 500 relevant documents for a query. Only 8 out of 19 datasets (including MS MARCO) have training data denoting the practical importance for zero-shot retrieval benchmarking. All datasets except ArguAna have short queries (either a single sentence or 2-3 keywords). Figure 1 shows an overview of the tasks and datasets in the beir benchmark. ", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_14", "text": " Information Retrieval (IR) is ubiquitous, there are lots of datasets available within each task and further even more tasks with retrieval. 
However, it is not feasible to include all datasets within the benchmark for evaluation. We tried to cover a balanced mixture of a wide range of tasks and datasets and paid importance not to overweight a specific task like question-answering. Future datasets can easily be integrated in beir, and existing models can be evaluated on any new dataset quickly. The beir website will host an actively maintained leaderboard111beir Leaderboard: https://tinyurl.com/beir-leaderboard with all datasets and models. ", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_15", "text": " The datasets present in beir are selected from diverse domains ranging from Wikipedia, scientific publications, Twitter, news, to online user communities, and many more. To measure the diversity in domains, we compute the domain overlap between the pairwise datasets using a pairwise weighted Jaccard similarity score on unigram word overlap between all dataset pairs. For more details on the theoretical formulation of the similarity score, please refer to Appendix F. Figure 2 shows a heatmap denoting the pairwise weighted jaccard scores and the clustered force-directed placement diagram. Nodes (or datasets) close in this graph have a high word overlap, while nodes far away in the graph have a low overlap. From Figure 2, we observe a rather low weighted Jaccard word overlap across different domains, indicating that beir is a challenging benchmark where approaches must generalize well to diverse out-of-distribution domains. ", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_16", "text": " The beir software222beir Code & documentation: https://github.com/UKPLab/beir provides an is an easy to use Python framework (pip install beir) for model evaluation. It contains extensive wrappers to replicate experiments and evaluate models from well-known repositories including Sentence-Transformers , Transformers , Anserini , DPR , Elasticsearch, ColBERT , and Universal Sentence Encoder . This makes the software useful for both academia and industry. The software also provides you with all IR-based metrics from Precision, Recall, MAP (Mean Average Precision), MRR (Mean Reciprocal Rate) to nDCG (Normalised Cumulative Discount Gain) for any top-k hits. One can use the beir benchmark for evaluating existing models on new retrieval datasets and for evaluating new models on the included datasets. ", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_17", "text": " Datasets are often scattered online and are provided in various file-formats, making the evaluation of models on various datasets difficult. beir introduces a standard format (corpus, queries and qrels) and converts existing datasets in this easy universal data format, allowing to evaluate faster on an increasing number of datasets. ", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_18", "text": " Depending upon the nature and requirements of real-world applications, retrieval tasks can be either be precision or recall focused. To obtain comparable results across models and datasets in beir, we argue that it is important to leverage a single evaluation metric that can be computed comparably across all tasks. 
Decision support metrics such as Precision and Recall which are both rank unaware are not suitable. Binary rank-aware metrics such as MRR (Mean Reciprocal Rate) and MAP (Mean Average Precision) fail to evaluate tasks with graded relevance judgements. We find that Normalised Cumulative Discount Gain (nDCG@k) provides a good balance suitable for both tasks involving binary and graded relevance judgements. We refer the reader to Wang et al. for understanding the theoretical advantages of the metric. For our experiments, we utilize the Python interface of the official TREC evaluation tool and compute nDCG@10 for all datasets. ", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_19", "text": " We use beir to compare diverse, recent, state-of-the-art retrieval architectures with a focus on transformer-based neural approaches. We evaluate on publicly available pre-trained checkpoints, which we provide in Table 6. Due to the length limitations of transformer-based networks, we use only the first 512 word pieces within all documents in our experiments across all neural architectures. ", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_20", "text": " We group the models based on their architecture: (i) lexical, (ii) sparse, (iii) dense, (iv) late-interaction, and (v) re-ranking. Besides the included models, the beir benchmark is model agnostic and in future different model configurations can be easily incorporated within the benchmark. ", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_21", "text": " (i) Lexical Retrieval: (a) BM25 is a commonly-used bag-of-words retrieval function based on token-matching between two high-dimensional sparse vectors with TF-IDF token weights. We use Anserini with the default Lucene parameters (k=0.9 and b=0.4). We index the title (if available) and passage as separate fields for documents. In our leaderboard, we also tested Elasticsearch BM25 and Anserini + RM3 expansion, but found Anserini BM25 to perform the best. ", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_22", "text": " (ii) Sparse Retrieval: (a) DeepCT uses a bert-base-uncased model trained on MS MARCO to learn the term weight frequencies (tf). It generates a pseudo-document with keywords multiplied with the learnt term-frequencies. We use the original setup of Dai and Callan in combination with BM25 with default Anserini parameters which we empirically found to perform better over the tuned MS MARCO parameters. (b) SPARTA computes similarity scores between the non-contextualized query embeddings from BERT with the contextualized document embeddings. These scores can be pre-computed for a given document, which results in a 30k dimensional sparse vector. As the original implementation is not publicly available, we re-implemented the approach. We fine-tune a DistilBERT model on the MS MARCO dataset and use sparse-vectors with 2,000 non-zero entries. (c) DocT5query is a popular document expansion technique using a T5 (base) model trained on MS MARCO to generate synthetic queries and append them to the original document for lexical search. We replicate the setup of Nogueira and Lin and generate 40 queries for each document and use BM25 with default Anserini parameters. 
", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_23", "text": " (iii) Dense Retrieval: (a) DPR is a two-tower bi-encoder trained with a single BM25 hard negative and in-batch negatives. We found the open-sourced Multi model to perform better over the single NQ model in our setting. The Multi-DPR model is a bert-base-uncased model trained on four QA datasets (including titles): NQ , TriviaQA , WebQuestions and CuratedTREC . (b) ANCE is a bi-encoder constructing hard negatives from an Approximate Nearest Neighbor (ANN) index of the corpus, which in parallel updates to select hard negative training instances during fine-tuning of the model. We use the publicly available RoBERTa model trained on MS MARCO for 600K steps for our experiments. (c) TAS-B is a bi-encoder trained with Balanced Topic Aware Sampling using dual supervision from a cross-encoder and a ColBERT model. The model was trained with a combination of both a pairwise Margin-MSE loss and an in-batch negative loss function. (d) GenQ: is an unsupervised domain-adaption approach for dense retrieval models by training on synthetically generated data. First, we fine-tune a T5 (base) model on MS MARCO for 2 epochs. Then, for a target dataset we generate 5 queries for each document using a combination of top-k and nucleus-sampling (top-k: 25; top-p: 0.95). Due to resource constraints, we cap the maximum number of target documents in each dataset to 100K. For retrieval, we continue to fine-tune the TAS-B model using in-batch negatives on the synthetic queries and document pair data. Note, GenQ creates an independent model for each task. ", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_24", "text": " (iv) Late-Interaction: (a) ColBERT encodes and represents the query and passage into a bag of multiple contextualized token embeddings. The late-interactions are aggregated with sum of the max-pooling query term and a dot-product across all passage terms. We use the ColBERT model as a dense-retriever (end-to-end retrieval as defined ): first top-k candidates are retrieved using ANN with faiss (faiss depth = 100) and ColBERT re-ranks by computing the late aggregated interactions. We train a bert-base-uncased model, with maximum sequence length of 300 on the MS MARCO dataset for 300K steps. ", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_25", "text": " (v) Re-ranking model: (a) BM25 + CE reranks the top-100 retrieved hits from a first-stage BM25 (Anserini) model. We evaluated 14 different cross-attentional re-ranking models that are publicly available on the HuggingFace model hub and found that a 6-layer, 384-h MiniLM cross-encoder model offers the best performance on MS MARCO. The model was trained on MS MARCO using a knowledge distillation setup with an ensemble of three teacher models: BERT-base, BERT-large, and ALBERT-large models following the setup in Hofstätter et al. . ", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_26", "text": " In this section, we evaluate and analyze how retrieval models perform on the beir benchmark. Table 2 reports the results of all evaluated systems on the selected benchmark datasets. As a baseline, we compare our retrieval systems against BM25. 
Figure 4 shows, on how many datasets a respective model is able to perform better or worse than BM25. ", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_27", "text": " 1. In-domain performance is not a good indicator for out-of-domain generalization. We observe BM25 heavily underperforms neural approaches by 7-18 points on in-domain MS MARCO. However, beir reveals it to be a strong baseline for generalization and generally outperforming many other, more complex approaches. This stresses the point, that retrieval methods must be evaluated on a broad range of datasets. ", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_28", "text": " 2. Term-weighting fails, document expansion captures out-of-domain keyword vocabulary. DeepCT and SPARTA both use a transformer network to learn term weighting. While both methods perform well in-domain on MS MARCO, they completely fail to generalize well by under performing BM25 on nearly all datasets. In contrast, document expansion based docT5query is able to add new relevant keywords to a document and performs strong on the beir datasets. It outperforms BM25 on 11/18 datasets while providing a competitive performance on the remaining datasets. ", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_29", "text": " 3. Dense retrieval models with issues for out-of-distribution data. Dense retrieval models (esp. ANCE and TAS-B), that map queries and documents independently to vector spaces, perform strongly on certain datasets, while on many other datasets perform significantly worse than BM25. For example, dense retrievers are observed to underperform on datasets with a large domain shift compared from what they have been trained on, like in BioASQ, or task-shifts like in Touché-2020. DPR, the only non-MSMARCO trained dataset overall performs the worst in generalization on the benchmark. ", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_30", "text": " 4. Re-ranking and Late-Interaction models generalize well to out-of-distribution data. The cross-attentional re-ranking model (BM25+CE) performs the best and is able to outperform BM25 on almost all (16/18) datasets. It only fails on ArguAna and Touché-2020, two retrieval tasks that are extremely different to the MS MARCO training dataset. The late-interaction model ColBERT computes token embeddings independently for the query and document, and scores (query, document)-pairs by a cross-attentional like MaxSim operation. It performs a bit weaker than the cross-attentional re-ranking model, but is still able to outperform BM25 on 9/18 datasets. It appears that cross-attention and cross-attentional like operations are important for a good out-of-distribution generalization. ", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_31", "text": " 5. Strong training losses for dense retrieval leads to better out-of-distribution performances. TAS-B provides the best zero-shot generalization performance among its dense counterparts. It outperforms ANCE on 14/18 and DPR on 17/18 datasets respectively. 
We speculate that the reason lies in a strong training setup in combination of both in-domain batch negatives and Margin-MSE losses for the TAS-B model. This training loss function (with strong ensemble teachers in a Knowledge Distillation setup) shows strong generalization performances. ", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_32", "text": " 6. TAS-B model prefers to retrieve documents with shorter lengths. TAS-B underperforms ANCE on two datasets: TREC-COVID by 17.3 points and Touché-2020 by 7.8 points. We observed that these models retrieve documents with vastly different lengths as shown in Figure 4. On TREC-COVID, TAS-B retrieves documents with a median length of mere 10 words versus ANCE with 160 words. Similarly on Touché-2020, 14 words vs. 89 words with TAS-B and ANCE respectively. As discussed in Appendix H, this preference for shorter or longer documents is due to the used loss function. ", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_33", "text": " 7. Does domain adaptation help improve generalization of dense-retrievers? We evaluated GenQ, which further fine-tunes the TAS-B model on synthetic query data. It outperforms the TAS-B model on specialized domains like scientific publications, finance or StackExchange. On broader and more generic domains, like Wikipedia, it performs weaker than the original TAS-B model. ", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_34", "text": " Models need to potentially compare a single query against millions of documents at inference, hence, a high computational speed for retrieving results in real-time is desired. Besides speed, index sizes are vital and are often stored entirely in memory. We randomly sample 1 million documents from DBPedia and evaluate latency. For dense models, we use exact search, while for ColBERT we follow the original setup and use approximate nearest neighbor search. Performances on CPU were measured with an 8 core Intel Xeon Platinum 8168 CPU @ 2.70GHz and on GPU using a single Nvidia Tesla V100, CUDA 11.0. ", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_35", "text": " Tradeoff between performance and retrieval latency  The best out-of-distribution generalization performances by re-ranking top-100 BM25 documents and with late-interaction models come at the cost of high latency (> 350 ms), being slowest at inference. In contrast, dense retrievers are 20-30x faster (< 20ms) compared to the re-ranking models and follow a low-latency pattern. On CPU, the sparse models dominate in terms of speed (20-25ms). ", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_36", "text": " Tradeoff between performance and index sizes  Lexical, re-ranking and dense methods have the smallest index sizes (< 3GB) to store 1M documents from DBPedia. SPARTA requires the second largest index to store a 30k dim sparse vector while ColBERT requires the largest index as it stores multiple 128 dim dense vectors for a single document. Index sizes are especially relevant when document sizes scale higher: ColBERT requires ~900GB to store the BioASQ (~15M documents) index, whereas BM25 only requires 18GB. 
", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_37", "text": " Creating a perfectly unbiased evaluation dataset for retrieval is inherently complex and is subject to multiple biases induced by the: (i) annotation guidelines, (ii) annotation setup, and by the (iii) human annotators. Further, it is impossible to manually annotate the relevance for all (query, document)-pairs. Instead, existing retrieval methods are used to get a pool of candidate documents which are then marked for their relevance. All other unseen documents are assumed to be irrelevant. This is a source for selection bias : A new retrieval system might retrieve vastly different results than the system used for the annotation. These hits are automatically assumed to be irrelevant. ", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_38", "text": " Many beir datasets are found to be subject to a lexical bias, i.e. a lexical based retrieval system like TF-IDF or BM25 has been used to retrieve the candidates for annotation. For example, in BioASQ, candidates have been retrieved for annotation via term-matching with boosting tags . Creation of Signal-1M (RT) involved retrieving tweets for a query with 7 out of these 8 techniques relying upon lexical term-matching signals . Such a lexical bias disfavours approaches that don’t rely on lexical matching, like dense retrieval methods, as retrieved hits without lexical overlap are automatically assumed to be irrelevant, even though the hits might be relevant for a query. ", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_39", "text": " In order to study the impact of this particular type of bias, we conducted a study on the recent TREC-COVID dataset. TREC-COVID used a pooling method (38, 40) to reduce the impact of the aforementioned bias: The annotation set was constructed by using the search results from the various systems participating in the challenge. Table 4 shows the Hole@10 rate for the tested systems, i.e., how many top-10 hits is each system retrieving that have not been seen by annotators. ", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_40", "text": " The results reveal large differences between approaches: Lexical approaches like BM25 and docT5query have a rather low Hole@10 value of 6.4% and 2.8%, indicating that the annotation pool contained the top-hits from lexical retrieval systems. In contrast, dense retrieval systems like ANCE and TAS-B have a much higher Hole@10 of 14.4% and 31.8%, indicating that a large fraction of hits found by these systems have not been judged by annotators. Next, we manually added for all systems, the missing annotation (or holes) following the original annotation guidelines. During annotation, we were unaware of the system who retrieved the missing annotation to avoid a preference bias. In total, we annotated 980 query-document pairs in TREC-COVID. We then re-computed nDCG@10 for all systems with this additional annotations. ", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_41", "text": " As shown in Table 4, we observe that lexical approaches improves only slightly, e.g. 
for docT5query just from 0.713 to 0.714 after adding the missing relevance judgements. In contrast, for the dense retrieval system ANCE, the performance improves from 0.654 (slightly below BM25) to 0.735, which is 6.7 points above the BM25 performance. Similar improvements are noticed in ColBERT (5.8 points). Even though many systems contributed to the TREC-COVID annotation pool, the annotation pool is still biased towards lexical approaches. ", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_42", "text": " In this work, we presented beir: a heterogeneous benchmark for information retrieval. We provided a broader selection of target tasks ranging from narrow expert domains to open domain datasets. We included nine different retrieval tasks spanning 18 diverse datasets. ", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_43", "text": " By open-sourcing beir, with a standardized data format and easy-to-adapt code examples for many different retrieval strategies, we take an important steps towards a unified benchmark to evaluate the zero-shot capabilities of retrieval systems. It hopefully steers innovation towards more robust retrieval systems and to new insights which retrieval architectures perform well across tasks and domains. ", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_44", "text": " We studied the effectiveness of ten different retrieval models and demonstrate, that in-domain performance cannot predict how well an approach will generalize in a zero-shot setup. Many approaches that outperform BM25 on an in-domain evaluation, perform poorly on the beir datasets. Cross-attentional re-ranking, late-interaction ColBERT, and the document expansion technique docT5query performed overall well across the evaluated tasks. ", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" }, { "id": "2104.08663_all_45", "text": " Our study on annotation selection bias highlights the challenge of evaluating new models on existing datasets: Even though TREC-COVID is based on the predictions from many systems, contributed by a diverse set of teams, we found largely different Hole@10 rates for the tested systems, negatively affecting non-lexical approaches. Better datasets, that use diverse pooling strategies, are needed for a fair evaluation of retrieval approaches. By integrate a large number of diverse retrieval systems into BEIR, creating such diverse pools becomes significantly simplified. ", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models" } ]
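The two evaluation quantities the BEIR passages above rely on, nDCG@k for ranking quality and Hole@k for unjudged top hits, can be written down in a few lines. The sketch below is a simplified stand-in (the paper itself uses the official TREC evaluation tool); `qrels` and `judged` are assumed inputs mapping document ids to graded relevance labels and to the judged pool, respectively.

```python
import math

def ndcg_at_k(ranking, qrels, k=10):
    """nDCG@k with graded relevance; qrels maps doc_id -> relevance grade."""
    gains = [qrels.get(doc_id, 0) for doc_id in ranking[:k]]
    dcg = sum(g / math.log2(rank + 2) for rank, g in enumerate(gains))
    ideal = sorted(qrels.values(), reverse=True)[:k]
    idcg = sum(g / math.log2(rank + 2) for rank, g in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

def hole_at_k(ranking, judged, k=10):
    """Fraction of the top-k retrieved documents never seen by annotators."""
    top = ranking[:k]
    return sum(1 for doc_id in top if doc_id not in judged) / len(top)

# Toy example with hypothetical document ids.
qrels = {"d1": 2, "d3": 1}                 # graded judgements for one query
judged = {"d1", "d2", "d3"}                # pool of judged documents
ranking = ["d1", "d5", "d3", "d2", "d4"]   # system output, best first
print(ndcg_at_k(ranking, qrels), hole_at_k(ranking, judged))
```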
Why did the authors adopt an attention mechanism as the base architecture of their model?
The attention mechanism has been one of the key factors behind recent advances in machine comprehension and question answering, as it enables the system to focus on the targeted area within the context paragraph (or image) that is most relevant to answering the question [0].
[ 0 ]
[ { "id": "1611.01603_all_0", "text": " The tasks of machine comprehension (MC) and question answering (QA) have gained significant popularity over the past few years within the natural language processing and computer vision communities. Systems trained end-to-end now achieve promising results on a variety of tasks in the text and image domains. One of the key factors to the advancement has been the use of neural attention mechanism, which enables the system to focus on a targeted area within a context paragraph (for MC) or within an image (for Visual QA), that is most relevant to answer the question (Weston et al., 2015; Antol et al., 2015; Xiong et al., 2016a). Attention mechanisms in previous works typically have one or more of the following characteristics. First, the computed attention weights are often used to extract the most relevant information from the context for answering the question by summarizing the context into a fixed-size vector. Second, in the text domain, they are often temporally dynamic, whereby the attention weights at the current time step are a function of the attended vector at the previous time step. Third, they are usually uni-directional, wherein the query attends on the context paragraph or the image. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_1", "text": " In this paper, we introduce the Bi-Directional Attention Flow  (BiDAF) network, a hierarchical multi-stage architecture for modeling the representations of the context paragraph at different levels of granularity (Figure 1). BiDAF includes character-level, word-level, and contextual embeddings, and uses bi-directional attention flow to obtain a query-aware context representation. Our attention mechanism offers following improvements to the previously popular attention paradigms. First, our attention layer is not used to summarize the context paragraph into a fixed-size vector. Instead, the attention is computed for every time step, and the attended vector at each time step, along with the representations from previous layers, is allowed to flow through to the subsequent modeling layer. This reduces the information loss caused by early summarization. Second, we use a memory-less attention mechanism. That is, while we iteratively compute attention through time as in Bahdanau et al. (2015), the attention at each time step is a function of only the query and the context paragraph at the current time step and does not directly depend on the attention at the previous time step. We hypothesize that this simplification leads to the division of labor between the attention layer and the modeling layer. It forces the attention layer to focus on learning the attention between the query and the context, and enables the modeling layer to focus on learning the interaction within the query-aware context representation (the output of the attention layer). It also allows the attention at each time step to be unaffected from incorrect attendances at previous time steps. Our experiments show that memory-less attention gives a clear advantage over dynamic attention. Third, we use attention mechanisms in both directions, query-to-context and context-to-query, which provide complimentary information to each other. 
", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_2", "text": " Our BiDAF model111Our code and interactive demo are available at: allenai.github.io/bi-att-flow/ outperforms all previous approaches on the highly-competitive Stanford Question Answering Dataset (SQuAD) test set leaderboard at the time of submission. With a modification to only the output layer, BiDAF achieves the state-of-the-art results on the CNN/DailyMail cloze test. We also provide an in-depth ablation study of our model on the SQuAD development set, visualize the intermediate feature spaces in our model, and analyse its performance as compared to a more traditional language model for machine comprehension (Rajpurkar et al., 2016). ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_3", "text": " Our machine comprehension model is a hierarchical multi-stage process and consists of six layers (Figure 1): 1. Character Embedding Layer maps each word to a vector space using character-level CNNs. 2. Word Embedding Layer maps each word to a vector space using a pre-trained word embedding model. 3. Contextual Embedding Layer utilizes contextual cues from surrounding words to refine the embedding of the words. These first three layers are applied to both the query and context. 4. Attention Flow Layer couples the query and context vectors and produces a set of query-aware feature vectors for each word in the context. 5. Modeling Layer employs a Recurrent Neural Network to scan the context. 6. Output Layer provides an answer to the query. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_4", "text": " Character embedding layer is responsible for mapping each word to a high-dimensional vector space. Let {𝒙1,…​𝒙T}subscript𝒙1…subscript𝒙𝑇\\{\\bm{x}_{1},\\dots\\bm{x}_{T}\\} and {𝒒1,…​𝒒J}subscript𝒒1…subscript𝒒𝐽\\{\\bm{q}_{1},\\dots\\bm{q}_{J}\\} represent the words in the input context paragraph and query, respectively. Following Kim (2014), we obtain the character-level embedding of each word using Convolutional Neural Networks (CNN). Characters are embedded into vectors, which can be considered as 1D inputs to the CNN, and whose size is the input channel size of the CNN. The outputs of the CNN are max-pooled over the entire width to obtain a fixed-size vector for each word. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_5", "text": " Word embedding layer also maps each word to a high-dimensional vector space. We use pre-trained word vectors, GloVe (Pennington et al., 2014), to obtain the fixed word embedding of each word. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_6", "text": " The concatenation of the character and word embedding vectors is passed to a two-layer Highway Network (Srivastava et al., 2015). The outputs of the Highway Network are two sequences of d𝑑d-dimensional vectors, or more conveniently, two matrices: 𝐗∈ℝd×T𝐗superscriptℝ𝑑𝑇{\\bf X}\\in\\mathbb{R}^{d\\times T} for the context and 𝐐∈ℝd×J𝐐superscriptℝ𝑑𝐽{\\bf Q}\\in\\mathbb{R}^{d\\times J} for the query. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_7", "text": " We use a Long Short-Term Memory Network (LSTM) (Hochreiter & Schmidhuber, 1997) on top of the embeddings provided by the previous layers to model the temporal interactions between words. 
We place an LSTM in both directions, and concatenate the outputs of the two LSTMs. Hence we obtain 𝐇∈ℝ2​d×T𝐇superscriptℝ2𝑑𝑇{\\bf H}\\in\\mathbb{R}^{2d\\times T} from the context word vectors 𝐗𝐗{\\bf X}, and 𝐔∈ℝ2​d×J𝐔superscriptℝ2𝑑𝐽{\\bf U}\\in\\mathbb{R}^{2d\\times J} from query word vectors 𝐐𝐐{\\bf Q}. Note that each column vector of 𝐇𝐇{\\bf H} and 𝐔𝐔{\\bf U} is 2​d2𝑑2d-dimensional because of the concatenation of the outputs of the forward and backward LSTMs, each with d𝑑d-dimensional output. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_8", "text": " It is worth noting that the first three layers of the model are computing features from the query and context at different levels of granularity, akin to the multi-stage feature computation of convolutional neural networks in the computer vision field. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_9", "text": " Attention flow layer is responsible for linking and fusing information from the context and the query words. Unlike previously popular attention mechanisms (Weston et al., 2015; Hill et al., 2016; Sordoni et al., 2016; Shen et al., 2016), the attention flow layer is not used to summarize the query and context into single feature vectors. Instead, the attention vector at each time step, along with the embeddings from previous layers, are allowed to flow through to the subsequent modeling layer. This reduces the information loss caused by early summarization. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_10", "text": " The inputs to the layer are contextual vector representations of the context 𝐇𝐇{\\bf H} and the query 𝐔𝐔{\\bf U}. The outputs of the layer are the query-aware vector representations of the context words, 𝐆𝐆{\\bf G}, along with the contextual embeddings from the previous layer. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_11", "text": " In this layer, we compute attentions in two directions: from context to query as well as from query to context. Both of these attentions, which will be discussed below, are derived from a shared similarity matrix, 𝐒∈ℝT×J𝐒superscriptℝ𝑇𝐽{\\bf S}\\in\\mathbb{R}^{T\\times J}, between the contextual embeddings of the context (𝐇𝐇{\\bf H}) and the query (𝐔𝐔{\\bf U}), where 𝐒t​jsubscript𝐒𝑡𝑗{\\bf S}_{tj} indicates the similarity between t𝑡t-th context word and j𝑗j-th query word. The similarity matrix is computed by 𝐒t​j=α​(𝐇:t,𝐔:j)∈ℝsubscript𝐒𝑡𝑗𝛼subscript𝐇:absent𝑡subscript𝐔:absent𝑗ℝ{\\bf S}_{tj}=\\alpha({\\bf H}_{:t},{\\bf U}_{:j})\\in\\mathbb{R} (1) where α𝛼\\alpha is a trainable scalar function that encodes the similarity between its two input vectors, 𝐇:tsubscript𝐇:absent𝑡{\\bf H}_{:t} is t𝑡t-th column vector of 𝐇𝐇{\\bf H}, and 𝐔:jsubscript𝐔:absent𝑗{\\bf U}_{:j} is j𝑗j-th column vector of 𝐔𝐔{\\bf U}, We choose α​(𝐡,𝐮)=𝐰(𝐒)⊤​(𝐡;𝐮;𝐡∘𝐮)𝛼𝐡𝐮subscriptsuperscript𝐰top𝐒𝐡𝐮𝐡𝐮\\alpha({\\bf h},{\\bf u})={\\bf w}^{\\top}_{({\\bf S})}({\\bf h};{\\bf u};{\\bf h}\\circ{\\bf u}), where 𝐰(𝐒)∈ℝ6​dsubscript𝐰𝐒superscriptℝ6𝑑{\\bf w}_{({\\bf S})}\\in\\mathbb{R}^{6d} is a trainable weight vector, ∘\\circ is elementwise multiplication, (;)(;) is vector concatenation across row, and implicit multiplication is matrix multiplication. Now we use 𝐒𝐒{\\bf S} to obtain the attentions and the attended vectors in both directions. 
", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_12", "text": " Context-to-query Attention. Context-to-query (C2Q) attention signifies which query words are most relevant to each context word. Let 𝐚t∈ℝJsubscript𝐚𝑡superscriptℝ𝐽{\\bf a}_{t}\\in\\mathbb{R}^{J} represent the attention weights on the query words by t𝑡t-th context word, ∑𝐚t​j=1subscript𝐚𝑡𝑗1\\sum{\\bf a}_{tj}=1 for all t𝑡t. The attention weight is computed by 𝐚t=softmax​(𝐒t:)∈ℝJsubscript𝐚𝑡softmaxsubscript𝐒:𝑡absentsuperscriptℝ𝐽{\\bf a}_{t}=\\mathrm{softmax}({\\bf S}_{t:})\\in\\mathbb{R}^{J}, and subsequently each attended query vector is 𝐔~:t=∑j𝐚t​j​𝐔:jsubscript~𝐔:absent𝑡subscript𝑗subscript𝐚𝑡𝑗subscript𝐔:absent𝑗\\tilde{{\\bf U}}_{:t}=\\sum_{j}{\\bf a}_{tj}{\\bf U}_{:j}. Hence 𝐔~~𝐔\\tilde{{\\bf U}} is a 2​d2𝑑2d-by-T𝑇T matrix containing the attended query vectors for the entire context. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_13", "text": " Query-to-context Attention. Query-to-context (Q2C) attention signifies which context words have the closest similarity to one of the query words and are hence critical for answering the query. We obtain the attention weights on the context words by 𝐛=softmax​(maxc​o​l⁡(𝐒))∈ℝT𝐛softmaxsubscript𝑐𝑜𝑙𝐒superscriptℝ𝑇{\\bf b}=\\mathrm{softmax}(\\max_{col}({\\bf S}))\\in\\mathbb{R}^{T}, where the maximum function (maxc​o​lsubscript𝑐𝑜𝑙\\max_{col}) is performed across the column. Then the attended context vector is 𝐡~=∑t𝐛t​𝐇:t∈ℝ2​d~𝐡subscript𝑡subscript𝐛𝑡subscript𝐇:absent𝑡superscriptℝ2𝑑\\tilde{\\bf h}=\\sum_{t}{\\bf b}_{t}{\\bf H}_{:t}\\in\\mathbb{R}^{2d}. This vector indicates the weighted sum of the most important words in the context with respect to the query. 𝐡~~𝐡\\tilde{\\bf h} is tiled T𝑇T times across the column, thus giving 𝐇~∈ℝ2​d×T~𝐇superscriptℝ2𝑑𝑇\\tilde{\\bf H}\\in\\mathbb{R}^{2d\\times T}. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_14", "text": " Finally, the contextual embeddings and the attention vectors are combined together to yield 𝐆𝐆{\\bf G}, where each column vector can be considered as the query-aware representation of each context word. We define 𝐆𝐆{\\bf G} by 𝐆:t=𝜷​(𝐇:t,𝐔~:t,𝐇~:t)∈ℝd𝐆subscript𝐆:absent𝑡𝜷subscript𝐇:absent𝑡subscript~𝐔:absent𝑡subscript~𝐇:absent𝑡superscriptℝsubscript𝑑𝐆{\\bf G}_{:t}={\\bm{\\beta}}({\\bf H}_{:t},\\tilde{\\bf U}_{:t},\\tilde{\\bf H}_{:t})\\in\\mathbb{R}^{d_{\\bf G}} (2) where 𝐆:tsubscript𝐆:absent𝑡{\\bf G}_{:t} is the t𝑡t-th column vector (corresponding to t𝑡t-th context word), 𝜷𝜷{\\bm{\\beta}} is a trainable vector function that fuses its (three) input vectors, and d𝐆subscript𝑑𝐆d_{\\bf G} is the output dimension of the 𝜷𝜷{\\bm{\\beta}} function. While the 𝜷𝜷{\\bm{\\beta}} function can be an arbitrary trainable neural network, such as multi-layer perceptron, a simple concatenation as following still shows good performance in our experiments: 𝜷​(𝐡,𝐮~,𝐡~)=(𝐡;𝐮~;𝐡∘𝐮~;𝐡∘𝐡~)∈ℝ8​d×T𝜷𝐡~𝐮~𝐡𝐡~𝐮𝐡~𝐮𝐡~𝐡superscriptℝ8𝑑𝑇{\\bm{\\beta}}({\\bf h},\\tilde{\\bf u},\\tilde{\\bf h})=({\\bf h};\\tilde{\\bf u};{\\bf h}\\circ\\tilde{\\bf u};{\\bf h}\\circ\\tilde{\\bf h})\\in\\mathbb{R}^{8d\\times T} (i.e., d𝐆=8​dsubscript𝑑𝐆8𝑑d_{\\bf G}=8d). ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_15", "text": " The input to the modeling layer is 𝐆𝐆{\\bf G}, which encodes the query-aware representations of context words. 
{ "id": "1611.01603_all_15", "text": " The input to the modeling layer is {\\bf G}, which encodes the query-aware representations of context words. The output of the modeling layer captures the interaction among the context words conditioned on the query. This is different from the contextual embedding layer, which captures the interaction among context words independent of the query. We use two layers of bi-directional LSTM, with an output size of d for each direction. Hence we obtain a matrix {\\bf M} \\in \\mathbb{R}^{2d \\times T}, which is passed onto the output layer to predict the answer. Each column vector of {\\bf M} is expected to contain contextual information about the word with respect to the entire context paragraph and the query. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_16", "text": " The output layer is application-specific. The modular nature of BiDAF allows us to easily swap out the output layer based on the task, with the rest of the architecture remaining exactly the same. Here, we describe the output layer for the QA task. In Section 5, we use a slight modification of this output layer for cloze-style comprehension. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_17", "text": " The QA task requires the model to find a sub-phrase of the paragraph to answer the query. The phrase is derived by predicting the start and the end indices of the phrase in the paragraph. We obtain the probability distribution of the start index over the entire paragraph by {\\bf p}^{1} = \\mathrm{softmax}({\\bf w}_{({\\bf p}^{1})}^{\\top}({\\bf G}; {\\bf M})) (3), where {\\bf w}_{({\\bf p}^{1})} \\in \\mathbb{R}^{10d} is a trainable weight vector. For the end index of the answer phrase, we pass {\\bf M} to another bidirectional LSTM layer and obtain {\\bf M}^{2} \\in \\mathbb{R}^{2d \\times T}. Then we use {\\bf M}^{2} to obtain the probability distribution of the end index in a similar manner: {\\bf p}^{2} = \\mathrm{softmax}({\\bf w}_{({\\bf p}^{2})}^{\\top}({\\bf G}; {\\bf M}^{2})) (4). ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_18", "text": " Training. We define the training loss (to be minimized) as the sum of the negative log probabilities of the true start and end indices by the predicted distributions, averaged over all examples: L(\\theta) = -\\frac{1}{N} \\sum^{N}_{i} \\left[ \\log({\\bf p}^{1}_{y^{1}_{i}}) + \\log({\\bf p}^{2}_{y^{2}_{i}}) \\right] (5), where \\theta is the set of all trainable weights in the model (the weights and biases of CNN filters and LSTM cells, {\\bf w}_{({\\bf S})}, {\\bf w}_{({\\bf p}^{1})} and {\\bf w}_{({\\bf p}^{2})}), N is the number of examples in the dataset, y^{1}_{i} and y^{2}_{i} are the true start and end indices of the i-th example, respectively, and {\\bf p}_{k} indicates the k-th value of the vector {\\bf p}. ", "title": "Bidirectional Attention Flow for Machine Comprehension" },
{ "id": "1611.01603_all_19", "text": " Test. The answer span (k, l) where k \\leq l with the maximum value of {\\bf p}^{1}_{k} {\\bf p}^{2}_{l} is chosen, which can be computed in linear time with dynamic programming. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_20", "text": " A significant contributor to the advancement of MC models has been the availability of large datasets. Early datasets such as MCTest (Richardson et al., 2013) were too small to train end-to-end neural models. Massive cloze test datasets (CNN/DailyMail by Hermann et al. (2015) and Children's Book Test by Hill et al. (2016)) enabled the application of deep neural architectures to this task. More recently, Rajpurkar et al. (2016) released the Stanford Question Answering (SQuAD) dataset with over 100,000 questions. We evaluate the performance of our comprehension system on both SQuAD and CNN/DailyMail datasets. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_21", "text": " Previous works in end-to-end machine comprehension use attention mechanisms in three distinct ways. The first group (largely inspired by Bahdanau et al. (2015)) uses a dynamic attention mechanism, in which the attention weights are updated dynamically given the query and the context as well as the previous attention. Hermann et al. (2015) argue that the dynamic attention model performs better than using a single fixed query vector to attend on context words on the CNN & DailyMail datasets. Chen et al. (2016) show that simply using a bilinear term for computing the attention weights in the same model drastically improves the accuracy. Wang & Jiang (2016) reverse the direction of the attention (attending on query words as the context RNN progresses) for SQuAD. In contrast to these models, BiDAF uses a memory-less attention mechanism. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_22", "text": " The second group computes the attention weights once, which are then fed into an output layer for final prediction (e.g., Kadlec et al. (2016)). The attention-over-attention model (Cui et al., 2016) uses a 2D similarity matrix between the query and context words (similar to Equation 1) to compute the weighted average of query-to-context attention. In contrast to these models, BiDAF does not summarize the two modalities in the attention layer and instead lets the attention vectors flow into the modeling (RNN) layer. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_23", "text": " The third group (considered as variants of Memory Network (Weston et al., 2015)) repeats computing an attention vector between the query and the context through multiple layers, typically referred to as multi-hop (Sordoni et al., 2016; Dhingra et al., 2016). Shen et al. (2016) combine Memory Networks with Reinforcement Learning in order to dynamically control the number of hops. One can also extend our BiDAF model to incorporate multiple hops. ", "title": "Bidirectional Attention Flow for Machine Comprehension" },
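The test-time span selection described at the start of this passage (the argmax over p^1_k p^2_l with k ≤ l) needs only a running maximum over the start distribution, which is the linear-time dynamic program the text refers to. A toy sketch with made-up distributions:

```python
# Linear-time answer span selection: pick (k, l), k <= l, maximizing p1[k] * p2[l].
import numpy as np

def best_span(p1, p2):
    best_k, best = 0, (0, 0, -1.0)        # (start, end, score)
    for l in range(len(p2)):
        if p1[l] > p1[best_k]:
            best_k = l                     # best start index seen so far
        score = p1[best_k] * p2[l]
        if score > best[2]:
            best = (best_k, l, score)
    return best

p1 = np.array([0.1, 0.6, 0.2, 0.1])        # toy start-index distribution
p2 = np.array([0.1, 0.1, 0.7, 0.1])        # toy end-index distribution
print(best_span(p1, p2))                    # (1, 2, ~0.42)
```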
{ "id": "1611.01603_all_24", "text": " The task of question answering has also gained a lot of interest in the computer vision community. Early works on visual question answering (VQA) involved encoding the question using an RNN, encoding the image using a CNN and combining them to answer the question (Antol et al., 2015; Malinowski et al., 2015). Attention mechanisms have also been successfully employed for the VQA task and can be broadly clustered based on the granularity of their attention and the approach to construct the attention matrix. At the coarse level of granularity, the question attends to different patches in the image (Zhu et al., 2016; Xiong et al., 2016a). At a finer level, each question word attends to each image patch and the highest attention value for each spatial location (Xu & Saenko, 2016) is adopted. A hybrid approach is to combine question representations at multiple levels of granularity (unigrams, bigrams, trigrams) (Yang et al., 2015). Several approaches to constructing the attention matrix have been used, including element-wise product, element-wise sum, concatenation and Multimodal Compact Bilinear Pooling (Fukui et al., 2016). ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_25", "text": " Lu et al. (2016) have recently shown that in addition to attending from the question to image patches, attending from the image back to the question words provides an improvement on the VQA task. This finding in the visual domain is consistent with our finding in the language domain, where our bi-directional attention between the query and context provides improved results. Their model, however, uses the attention weights directly in the output layer and does not take advantage of the attention flow to the modeling layer. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_26", "text": " In this section, we evaluate our model on the task of question answering using the recently released SQuAD dataset (Rajpurkar et al., 2016), which has gained significant attention over the past few months. In the next section, we evaluate our model on the task of cloze-style reading comprehension. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_27", "text": " SQuAD is a machine comprehension dataset on a large set of Wikipedia articles, with more than 100,000 questions. The answer to each question is always a span in the context. The model is given credit if its answer matches one of the human-written answers. Two metrics are used to evaluate models: Exact Match (EM) and a softer metric, F1 score, which measures the weighted average of the precision and recall rate at the character level. The dataset consists of 90k/10k train/dev question-context tuples with a large hidden test set. It is one of the largest available MC datasets with human-written questions and serves as a great test bed for our model. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_28", "text": " The model architecture used for this task is depicted in Figure 1. Each paragraph and question is tokenized by a regular-expression-based word tokenizer (PTB Tokenizer) and fed into the model. We use 100 1D filters for the CNN char embedding, each with a width of 5. The hidden state size (d) of the model is 100. The model has about 2.6 million parameters. We use the AdaDelta (Zeiler, 2012) optimizer, with a minibatch size of 60 and an initial learning rate of 0.5, for 12 epochs. A dropout (Srivastava et al., 2014) rate of 0.2 is used for the CNN, all LSTM layers, and the linear transformation before the softmax for the answers. During training, the moving averages of all weights of the model are maintained with an exponential decay rate of 0.999.
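As a sketch of the weight moving average just described (decay 0.999, swapped in at test time as the text continues below), a shadow copy of the parameters can be updated after each optimizer step and copied into the model for evaluation. This is a generic PyTorch helper, not the authors' training code; `model` stands for any torch.nn.Module.

```python
# Generic exponential moving average of model weights (illustrative).
import torch

class WeightEMA:
    def __init__(self, model, decay=0.999):
        self.decay = decay
        self.shadow = {name: p.detach().clone() for name, p in model.named_parameters()}

    def update(self, model):
        # call after each optimizer step: shadow <- decay * shadow + (1 - decay) * weights
        with torch.no_grad():
            for name, p in model.named_parameters():
                self.shadow[name].mul_(self.decay).add_(p, alpha=1.0 - self.decay)

    def copy_to(self, model):
        # call before evaluation to use the averaged weights instead of the raw ones
        with torch.no_grad():
            for name, p in model.named_parameters():
                p.copy_(self.shadow[name])
```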
At test time, the moving averages instead of the raw weights are used. The training process takes roughly 20 hours on a single Titan X GPU. We also train an ensemble model consisting of 12 training runs with the identical architecture and hyper-parameters. At test time, we choose the answer with the highest sum of confidence scores amongst the 12 runs for each question. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_29", "text": " The results of our model and competing approaches on the hidden test set are summarized in Table 2(a). BiDAF (ensemble) achieves an EM score of 73.3 and an F1 score of 81.1, outperforming all previous approaches. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_30", "text": " Table 2(b) shows the performance of our model and its ablations on the SQuAD dev set. Both char-level and word-level embeddings contribute towards the model's performance. We conjecture that word-level embedding is better at representing the semantics of each word as a whole, while char-level embedding can better handle out-of-vocab (OOV) or rare words. To evaluate bi-directional attention, we remove C2Q and Q2C attentions. For ablating C2Q attention, we replace the attended question vector \\tilde{\\bf U} with the average of the output vectors of the question's contextual embedding layer (LSTM). C2Q attention proves to be critical with a drop of more than 10 points on both metrics. For ablating Q2C attention, the output of the attention layer, {\\bf G}, does not include terms that have the attended Q2C vectors, \\tilde{\\bf H}. To evaluate the attention flow, we study a dynamic attention model, where the attention is dynamically computed within the modeling layer's LSTM, following previous work (Bahdanau et al., 2015; Wang & Jiang, 2016). This is in contrast with our approach, where the attention is pre-computed before flowing to the modeling layer. Despite being a simpler attention mechanism, our proposed static attention outperforms the dynamically computed attention by more than 3 points. We conjecture that separating out the attention layer results in a richer set of features computed in the first 4 layers, which are then incorporated by the modeling layer. We also show the performance of BiDAF with several different definitions of the \\alpha and \\beta functions (Equations 1 and 2) in Appendix B. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_31", "text": " We now provide a qualitative analysis of our model on the SQuAD dev set. First, we visualize the feature spaces after the word and contextual embedding layers. These two layers are responsible for aligning the embeddings between the query and context words, which are the inputs to the subsequent attention layer. To visualize the embeddings, we choose a few frequent query words in the dev data and look at the context words that have the highest cosine similarity to the query words (Table 2). At the word embedding layer, query words such as When, Where and Who are not well aligned to possible answers in the context, but this dramatically changes in the contextual embedding layer, which has access to context from surrounding words and is just 1 layer below the attention layer. When begins to match years, Where matches locations, and Who matches names. ", "title": "Bidirectional Attention Flow for Machine Comprehension" },
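The ensemble rule quoted earlier in this passage (12 runs, choose the answer with the highest summed confidence) amounts to grouping candidate spans and summing their scores. A toy sketch; the (span, confidence) representation is an assumption made only for illustration.

```python
# Toy ensemble decision rule: sum confidences per candidate span, keep the best.
from collections import defaultdict

def ensemble_answer(run_predictions):
    """run_predictions: list of (span, confidence) pairs, one per training run."""
    totals = defaultdict(float)
    for span, confidence in run_predictions:
        totals[span] += confidence
    return max(totals.items(), key=lambda kv: kv[1])

runs = [((12, 15), 0.8), ((12, 15), 0.7), ((3, 5), 0.9)]
print(ensemble_answer(runs))   # ((12, 15), 1.5): two weaker runs outvote one confident run
```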
", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_32", "text": " We also visualize these two feature spaces using t-SNE in Figure 2. t-SNE is performed on a large fraction of dev data but we only plot data points corresponding to the months of the year. An interesting pattern emerges in the Word space, where May is separated from the rest of the months because May has multiple meanings in the English language. The contextual embedding layer uses contextual cues from surrounding words and is able to separate the usages of the word May. Finally we visualize the attention matrices for some question-context tuples in the dev data in Figure 3. In the first example, Where matches locations and in the second example, many matches quantities and numerical symbols. Also, entities in the question typically attend to the same entities in the context, thus providing a feature for the model to localize possible answers. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_33", "text": " We analyse the performance of our our model with a traditional language-feature-based baseline (Rajpurkar et al., 2016). Figure 2b shows a Venn diagram of the dev set questions correctly answered by the models. Our model is able to answer more than 86% of the questions correctly answered by the baseline. The 14% that are incorrectly answered does not have a clear pattern. This suggests that neural architectures are able to exploit much of the information captured by the language features. We also break this comparison down by the first words in the questions (Figure 2c). Our model outperforms the traditional baseline comfortably in every category. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_34", "text": " We randomly select 50 incorrect questions (based on EM) and categorize them into 6 classes. 50% of errors are due to the imprecise boundaries of the answers, 28% involve syntactic complications and ambiguities, 14% are paraphrase problems, 4% require external knowledge, 2% need multiple sentences to answer, and 2% are due to mistakes during tokenization. See Appendix A for the examples of the error modes. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_35", "text": " We also evaluate our model on the task of cloze-style reading comprehension using the CNN and Daily Mail datasets (Hermann et al., 2015). ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_36", "text": " In a cloze test, the reader is asked to fill in words that have been removed from a passage, for measuring one’s ability to comprehend text. Hermann et al. (2015) have recently compiled a massive Cloze-style comprehension dataset, consisting of 300k/4k/3k and 879k/65k/53k (train/dev/test) examples from CNN and DailyMail news articles, respectively. Each example has a news article and an incomplete sentence extracted from the human-written summary of the article. To distinguish this task from language modeling and force one to refer to the article to predict the correct missing word, the missing word is always a named entity, anonymized with a random ID. Also, the IDs must be shuffled constantly during test, which is also critical for full anonymization. 
", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_37", "text": " The model architecture used for this task is very similar to that for SQuAD (Section 4) with only a few small changes to adapt it to the cloze test. Since each answer in the CNN/DailyMail datasets is always a single word (entity), we only need to predict the start index (𝐩1superscript𝐩1{\\bf p}^{1}); the prediction for the end index (𝐩2superscript𝐩2{\\bf p}^{2}) is omitted from the loss function. Also, we mask out all non-entity words in the final classification layer so that they are forced to be excluded from possible answers. Another important difference from SQuAD is that the answer entity might appear more than once in the context paragraph. To address this, we follow a similar strategy from Kadlec et al. (2016). During training, after we obtain 𝐩1superscript𝐩1{\\bf p}^{1}, we sum all probability values of the entity instances in the context that correspond to the correct answer. Then the loss function is computed from the summed probability. We use a minibatch size of 48 and train for 8 epochs, with early stop when the accuracy on validation data starts to drop. Inspired by the window-based method (Hill et al., 2016), we split each article into short sentences where each sentence is a 19-word window around each entity (hence the same word might appear in multiple sentences). The RNNs in BiDAF are not feed-forwarded or back-propagated across sentences, which speed up the training process by parallelization. The entire training process takes roughly 60 hours on eight Titan X GPUs. The other hyper-parameters are identical to the model described in Section 4. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_38", "text": " The results of our single-run models and competing approaches on the CNN/DailyMail datasets are summarized in Table 3. ∗ indicates ensemble methods. BiDAF outperforms previous single-run models on both datasets for both val and test data. On the DailyMail test, our single-run model even outperforms the best ensemble method. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_39", "text": " In this paper, we introduce BiDAF, a multi-stage hierarchical process that represents the context at different levels of granularity and uses a bi-directional attention flow mechanism to achieve a query-aware context representation without early summarization. The experimental evaluations show that our model achieves the state-of-the-art results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze test. The ablation analyses demonstrate the importance of each component in our model. The visualizations and discussions show that our model is learning a suitable representation for MC and is capable of answering complex questions by attending to correct locations in the given paragraph. Future work involves extending our approach to incorporate multiple hops of the attention layer. ", "title": "Bidirectional Attention Flow for Machine Comprehension" }, { "id": "1611.01603_all_40", "text": " This research was supported by the NSF (IIS 1616112), NSF (III 1703166), Allen Institute for AI (66-9175), Allen Distinguished Investigator Award, Google Research Faculty Award, and Samsung GRO Award. We thank the anonymous reviewers for their helpful comments. ", "title": "Bidirectional Attention Flow for Machine Comprehension" } ]