v20.2.0 Tentacle released
Tentacle is the 20th stable release of Ceph.
This is the first stable release of Ceph Tentacle.
Contents:
- Major Changes from Squid
- Upgrading from Reef or Squid
- Upgrading from pre-Reef releases (like Quincy)
- Thank You to Our Contributors
Major Changes from Squid ¶
Highlights ¶
See the sections below for more details on these items.
CephFS
- Directories may now be configured with case-insensitive or normalized directory entry names.
- Modifying the FS setting variable max_mds when a cluster is unhealthy now requires users to pass the confirmation flag (--yes-i-really-mean-it).
- EOPNOTSUPP (Operation not supported) is now returned by the CephFS FUSE client for fallocate for the default case (i.e. mode == 0).
Crimson
- SeaStore Tech Preview: SeaStore object store is now deployable alongside Crimson-OSD, mainly for early testing and experimentation. Community feedback is encouraged to help with future improvements.
Dashboard
- Support has been added for NVMe/TCP gateway groups and multiple namespaces, multi-cluster management, OAuth 2.0 integration, and enhanced RGW/SMB features including multi-site automation, tiering, policies, lifecycles, notifications, and granular replication.
Integrated SMB support
- Ceph clusters now offer an SMB Manager module that works like the existing NFS subsystem. The new SMB support allows the Ceph cluster to automatically create Samba-backed SMB file shares connected to CephFS. The smb module can configure either basic Active Directory domain authentication or standalone user authentication. The Ceph cluster can host one or more virtual SMB clusters, which can be truly clustered using Samba's CTDB technology. The smb module requires a cephadm-enabled Ceph cluster and deploys container images provided by the samba-container project. The Ceph dashboard can be used to configure SMB clusters and shares. A new cephfs-proxy daemon is automatically deployed to improve scalability and memory usage when connecting Samba to CephFS. (See the sketch below for an illustration of the workflow.)
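As a hedged illustration of that workflow (the command names follow the smb module's imperative interface, but the exact arguments, the cluster/share identifiers, and the user/password string used here are assumptions; consult the smb module documentation for the authoritative syntax):
$ ceph mgr module enable smb                                                      # enable the SMB Manager module
$ ceph smb cluster create mycluster user --define-user-pass=user1%examplepass    # standalone-auth cluster; names illustrative
$ ceph smb share create mycluster myshare cephfs /shared                         # export a CephFS path as an SMB share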
MGR
- Users now have the ability to force-disable always-on modules.
- The restful and zabbix modules (deprecated since 2020) have been officially removed.
RADOS
- FastEC: Long-anticipated performance and space amplification optimizations are added for erasure-coded pools.
- BlueStore: Improved compression and a new, faster WAL (write-ahead-log).
- Data Availability Score: Users can now track a data availability score for each pool in their cluster.
- OMAP: All components have been switched to the faster OMAP iteration interface, which improves RGW bucket listing and scrub operations.
RBD
- New live migration features: RBD images can now be instantly imported from another Ceph cluster (native format) or from a wide variety of external sources/formats.
- There is now support for RBD namespace remapping while mirroring between Ceph clusters.
- Several commands related to group and group snap info were added or improved, and the rbd device map command now defaults to msgr2.
RGW
- Added support for S3 GetObjectAttributes.
- For compatibility with AWS S3, LastModified timestamps are now truncated to the second. Note that during upgrade, users may observe these timestamps moving backwards as a result.
- Bucket resharding now does most of its processing before it starts to block write operations. This should significantly reduce the client-visible impact of resharding on large buckets.
- The User Account feature introduced in Squid provides first-class support for IAM APIs and policy. Our preliminary STS support was based on tenants, and exposed some IAM APIs to admins only. This tenant-level IAM functionality is now deprecated in favor of accounts. While we'll continue to support the tenant feature itself for namespace isolation, the following features will be removed no sooner than the V release:
  - Tenant-level IAM APIs including CreateRole, PutRolePolicy and PutUserPolicy,
  - Use of tenant names instead of accounts in IAM policy documents,
  - Interpretation of IAM policy without cross-account policy evaluation,
  - S3 API support for cross-tenant names such as Bucket='tenant:bucketname',
  - STS Lite and sts:GetSessionToken.
Cephadm ¶
A new cephadm-managed mgmt-gateway service provides a single, TLS-terminated entry point for Ceph management endpoints such as the Dashboard and the monitoring stack. The gateway is implemented as an nginx-based reverse proxy that fronts Prometheus, Grafana, and Alertmanager, so users no longer need to connect to those daemons directly or know which hosts they run on. When combined with the new oauth2-proxy service, which integrates with external identity providers using the OpenID Connect (OIDC) / OAuth 2.0 protocols, the gateway can enforce centralized authentication and single sign-on (SSO) for both the Ceph Dashboard and the rest of the monitoring stack. (See the deployment sketch below.)
High availability for the Ceph Dashboard and the Prometheus-based monitoring stack is now provided via the cephadm-managed mgmt-gateway. nginx high-availability mechanisms allow the mgmt-gateway to detect healthy instances of the Dashboard, Prometheus, Grafana, and Alertmanager, route traffic accordingly, and handle manager failover transparently. When deployed with a virtual IP and multiple mgmt-gateway instances, this architecture keeps management access available even during daemon or host failures.
A new certmgr cephadm subsystem centralizes certificate lifecycle management for cephadm-managed services. certmgr acts as a cluster-internal root CA for cephadm-signed certificates; it can also consume user-provided certificates, and it tracks how each certificate was provisioned. It standardizes HTTPS configuration for services such as RGW and the mgmt-gateway, automates renewal and rotation of cephadm-signed certificates, and raises health warnings when certificates are invalid, expiring, or misconfigured. With certmgr, cephadm-signed certificates are available across all cephadm-managed services, providing secure defaults out of the box.
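As a minimal, hedged sketch of deploying the gateway with the orchestrator (the placement host name is illustrative; production deployments typically supply a full service spec, certificates, and an accompanying oauth2-proxy service for SSO):
$ ceph orch apply mgmt-gateway --placement="host1"   # front the Dashboard and monitoring stack with the gateway
$ ceph orch ls mgmt-gateway                           # verify the service has been scheduled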
CephFS ¶
Directories may now be configured with case-insensitive or normalized directory entry names. This is an inheritable configuration, making it apply to an entire directory tree.
For more information, see https://docs.ceph.com/en/tentacle/cephfs/charmap/.
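A hedged sketch of how a directory tree might be configured, assuming the feature is driven through extended attributes as described in the charmap documentation (the virtual xattr names and values below are assumptions, not confirmed syntax):
$ setfattr -n ceph.dir.casesensitive -v 0 /mnt/cephfs/ci-dir      # assumed vxattr: make entry names case-insensitive
$ setfattr -n ceph.dir.normalization -v nfd /mnt/cephfs/ci-dir    # assumed vxattr: normalize entry names
$ getfattr -n ceph.dir.charmap /mnt/cephfs/ci-dir                 # assumed read-back of the combined charmap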
It is now possible to pause the threads that asynchronously purge deleted subvolumes by using the config option mgr/volumes/pause_purging.
It is now possible to pause the threads that asynchronously clone subvolume snapshots by using the config option mgr/volumes/pause_cloning.
Modifying the setting max_mds when a cluster is unhealthy now requires users to pass the confirmation flag (--yes-i-really-mean-it). This has been added as a precaution to inform users that modifying max_mds may not help with troubleshooting or recovery efforts. Instead, it might further destabilize the cluster.
EOPNOTSUPP (Operation not supported) is now returned by the CephFS FUSE client for fallocate in the default case (i.e., mode == 0) since CephFS does not support disk space reservation. The only flags supported are FALLOC_FL_KEEP_SIZE and FALLOC_FL_PUNCH_HOLE.
The ceph fs subvolume snapshot getpath command now allows users to get the path of a snapshot of a subvolume. If the snapshot is not present, ENOENT is returned. (See the example at the end of this section.)
The ceph fs volume create command now allows users to pass metadata and data pool names to be used for creating the volume. If either is not passed, or if either is a non-empty pool, the command will abort.
The format of the pool namespace name for CephFS volumes has been changed from fsvolumens__<subvol-name> to fsvolumens__<subvol-grp-name>_<subvol-name> to avoid namespace collisions when two subvolumes located in different subvolume groups have the same name. Even with namespace collisions, there were no security issues, since the MDS auth cap is restricted to the subvolume path. Now, with this change, the namespaces are completely isolated.
If the subvolume name passed to the command ceph fs subvolume info is a clone, the output will now also contain a "source" field that tells the user the name of the source snapshot along with the name of the volume, subvolume group, and subvolume in which the source snapshot is located. For clones created with Tentacle or an earlier release, the value of this field will be N/A. Regular subvolumes do not have a source subvolume and therefore the output for them will not contain a "source" field regardless of the release.
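As referenced above, a hedged example of the new snapshot path lookup (the volume, subvolume, and snapshot names are illustrative):
$ ceph fs subvolume snapshot getpath myvol mysubvol mysnap   # prints the snapshot's path; ENOENT if the snapshot does not exist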
Crimson / SeaStore ¶
The Crimson project continues to progress, with the Squid release marking the first technical preview available for Crimson. The Tentacle release introduces a host of improvements and new functionalities that enhance the robustness, performance, and usability of both Crimson-OSD and the SeaStore object store. In this release, SeaStore can now be deployed alongside Crimson-OSD! Early testing and experimentation are highly encouraged, and we'd greatly appreciate initial feedback from the community to help guide future improvements. Check out the Crimson project updates blog post for Tentacle, where we highlight some of the work included in the latest release, moving us closer to fully replacing the existing Classical OSD in the future: https://ceph.io/en/news/blog/2025/crimson-T-release/
If you're new to the Crimson project, please visit the project page for more information and resources: https://ceph.io/en/news/crimson
Dashboard ¶
- Support has been added for NVMe/TCP gateway groups and multiple namespaces, multi-cluster management, OAuth 2.0 integration, and enhanced RGW/SMB features including multi-site automation, tiering, policies, lifecycles, notifications, and granular replication.
MGR ¶
The Ceph Manager's always-on modules/plugins can now be force-disabled. This can be necessary in cases where we wish to prevent the manager from being flooded by module commands when Ceph services are down or degraded.
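A hedged sketch of what force-disabling an always-on module might look like (the module name is illustrative, and the exact subcommand and confirmation flag are assumptions; check ceph mgr module -h on your cluster for the authoritative syntax):
$ ceph mgr module ls | grep -A5 always_on                               # inspect the always-on module set
$ ceph mgr module force disable devicehealth --yes-i-really-mean-it     # assumed syntax for force-disabling an always-on module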
mgr/restful, mgr/zabbix: both modules, deprecated since 2020, have finally been removed. They have not been actively maintained in recent years and started suffering from vulnerabilities in their dependency chain (e.g. CVE-2023-46136). An alternative to the restful module is the dashboard module, which provides a richer and better-maintained RESTful API. As for the zabbix module, there are alternative monitoring solutions, such as prometheus, which is the most widely adopted among the Ceph user community.
RADOS ¶
Long-anticipated performance and space amplification optimizations (FastEC) are added for erasure-coded pools, including partial reads and partial writes.
A new implementation of the Erasure Coding I/O code provides substantial performance improvements and some capacity improvements. The new code is designed to optimize performance when using Erasure Coding with block storage (RBD) and file storage (CephFS) but will have benefits for object storage (RGW), in particular when using smaller sized objects. A new flag allow_ec_optimizations must be set on each pool to switch to using the new code. Existing pools can be upgraded once the OSD and Monitor daemons have been updated. There is no need to update the clients. (See the example below.)
The default plugin for erasure coded pools has been changed from Jerasure to ISA-L. Clusters created on Tentacle or later releases will use ISA-L as the default plugin when creating a new pool. Clusters that upgrade to the T release will continue to use their existing default values. The default values can be overridden by creating a new erasure code profile and selecting it when creating a new pool. ISA-L is recommended for new pools because the Jerasure library is no longer maintained.
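A hedged sketch of opting an existing pool into the optimized EC code and of creating a new pool with an explicit ISA-L profile (pool and profile names are illustrative; the assumption here is that allow_ec_optimizations is set like other pool flags via ceph osd pool set):
$ ceph osd pool set ecpool allow_ec_optimizations true          # opt an existing EC pool into the new I/O path
$ ceph osd erasure-code-profile set isa-4-2 plugin=isa k=4 m=2  # explicit ISA-L erasure code profile
$ ceph osd pool create ecpool-new erasure isa-4-2               # new pool using that profile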
BlueStore now has better compression and a new, faster WAL (write-ahead-log).
All components have been switched to the faster OMAP iteration interface, which improves RGW bucket listing and scrub operations.
It is now possible to bypass ceph_assert() in extreme cases to help with disaster recovery.
Testing improvements for dencoding verification were added.
A new command, ceph osd pool availability-status, has been added that allows users to view the availability score for each pool in a cluster. A pool is considered unavailable if any PG in the pool is not active or if there are unfound objects. Otherwise the pool is considered available. The score is updated every second by default; this interval can be changed using the new config option pool_availability_update_interval. The feature is off by default. A new config option enable_availability_tracking can be used to turn on the feature if required. Another command has been added to clear the availability status for a specific pool:
$ ceph osd pool clear-availability-status <pool-name>
This feature is in tech preview. (See the example after the related links below.)
Related links:
- Feature ticket: https://tracker.ceph.com/issues/67777
- Documentation: https://docs.ceph.com/en/tentacle/rados/operations/monitoring/
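As a hedged sketch of turning the tech-preview feature on and inspecting the scores (the config target shown for enable_availability_tracking is an assumption; consult the linked documentation for the correct level):
$ ceph config set mon enable_availability_tracking true   # assumed target daemon; the option itself is described above
$ ceph osd pool availability-status                        # list per-pool availability scores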
Leader monitor and stretch mode status are now included in the ceph status output.
Related tracker: https://tracker.ceph.com/issues/70406
The ceph df command reports incorrect MAX AVAIL for stretch mode pools when CRUSH rules use multiple take steps for datacenters. PGMap::get_rule_avail incorrectly calculates available space from only one datacenter. As a workaround, define CRUSH rules with take default and choose firstn 0 type datacenter. See https://tracker.ceph.com/issues/56650#note-6 for details. Upgrading a cluster configured with a CRUSH rule with multiple take steps can lead to data shuffling, as the new CRUSH changes may necessitate data redistribution. In contrast, a stretch rule with a single-take configuration will not cause any data movement during the upgrade process.
Added convenience function librados::AioCompletion::cancel() with the same behavior as librados::IoCtx::aio_cancel().
The configuration parameter osd_repair_during_recovery has been removed. That configuration flag used to control whether an operator-initiated "repair scrub" would be allowed to start on an OSD that is performing a recovery. In this Ceph version, operator-initiated scrubs and repair scrubs are never blocked by an ongoing recovery.
Fixed an issue where recovery/backfill could hang due to improper handling of items in dmclock's background clean-up thread.
Related tracker: https://tracker.ceph.com/issues/61594
The OSD's IOPS capacity used by the mClock scheduler is now also checked to determine if it's below a configured threshold value defined by:
- osd_mclock_iops_capacity_low_threshold_hdd – set to 50 IOPS
- osd_mclock_iops_capacity_low_threshold_ssd – set to 1000 IOPS
The check is intended to handle cases where the measured IOPS is unrealistically low. If such a case is detected, the IOPS capacity is either set to the last valid value or the configured default to avoid affecting cluster performance (slow or stalled ops).
Documentation has been updated with steps to override OSD IOPS capacity configuration.
Related links:
- Tracker ticket: https://tracker.ceph.com/issues/70774
- Documentation: https://docs.ceph.com/en/tentacle/rados/configuration/mclock-config-ref/
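A hedged sketch of overriding the measured IOPS capacity for a single OSD, as described in the linked documentation (the OSD id and the value are illustrative):
$ ceph config set osd.0 osd_mclock_max_capacity_iops_hdd 400    # override the measured capacity for osd.0
$ ceph config show osd.0 osd_mclock_max_capacity_iops_hdd        # confirm the value in effect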
pybind/rados: Fixed WriteOp.zero(), whose offset and length arguments were originally reversed. When pybind called WriteOp.zero(), the arguments passed did not match rados_write_op_zero: offset and length were swapped, which resulted in an unexpected response.
RBD ¶
RBD images can now be instantly imported from another Ceph cluster. The migration source spec for the native format has grown cluster_name and client_name optional fields for connecting to the source cluster after parsing the respective ceph.conf-like configuration file.
With the help of the new NBD stream ("type": "nbd"), RBD images can now be instantly imported from a wide variety of external sources/formats. The exact set of supported formats and their features depends on the capabilities of the NBD server.
While mirroring between Ceph clusters, the local and remote RBD namespaces don't need to be the same anymore (but the pool names still do). Using the new --remote-namespace option of the rbd mirror pool enable command, it's now possible to pair a local namespace with an arbitrary remote namespace in the respective pool, including mapping a default namespace to a non-default namespace and vice versa, at the time mirroring is configured.
All Python APIs that produce timestamps now return "aware" datetime objects instead of "naive" ones (i.e., those including time zone information instead of those not including it). All timestamps remain in UTC, but including timezone.utc makes it explicit and avoids the potential of the returned timestamp getting misinterpreted. In Python 3, many datetime methods treat "naive" datetime objects as local times.
rbd group info and rbd group snap info commands are introduced to show information about a group and a group snapshot respectively.
rbd group snap ls output now includes the group snapshot IDs. The header of the column showing the state of a group snapshot in the unformatted CLI output is changed from STATUS to STATE. The state of a group snapshot that was shown as ok is now shown as complete, which is more descriptive.
In rbd mirror image status and rbd mirror pool status --verbose outputs, the mirror_uuids field has been renamed to mirror_uuid to highlight that the value is always a single UUID and never a list of any kind.
Moving an image that is a member of a group to trash is no longer allowed. The rbd trash mv command now behaves the same way as rbd rm in this scenario.
The rbd device map command now defaults to msgr2 for all device types. -o ms_mode=legacy can be passed to continue using msgr1 with krbd. (See the example at the end of this section.)
The family of diff-iterate APIs has been extended to allow diffing from or between non-user type snapshots which can only be referred to by their IDs.
Fetching the mirroring mode of an image is invalid if the image is disabled for mirroring. The public APIs -- C++ mirror_image_get_mode(), C rbd_mirror_image_get_mode(), and Python Image.mirror_image_get_mode() -- will return EINVAL when mirroring is disabled.
Promoting an image is invalid if the image is not enabled for mirroring. The public APIs -- C++ mirror_image_promote(), C rbd_mirror_image_promote(), and Python Image.mirror_image_promote() -- will return EINVAL instead of ENOENT when mirroring is not enabled.
Requesting a resync on an image is invalid if the image is not enabled for mirroring. The public APIs -- C++ mirror_image_resync(), C rbd_mirror_image_resync(), and Python Image.mirror_image_resync() -- will return EINVAL instead of ENOENT when mirroring is not enabled.
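A short, hedged illustration of the new rbd device map default mentioned above (the pool and image names are illustrative):
$ rbd device map mypool/myimage                     # krbd map; now negotiates msgr2 by default
$ rbd device map -o ms_mode=legacy mypool/myimage   # fall back to msgr1 if needed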
RGW ¶
Multiple fixes: Lua scripts will no longer run uselessly against health checks; properly quoted ETag values are returned by S3 CopyPart, PostObject, and CompleteMultipartUpload responses.
IAM policy evaluation now supports the conditions ArnEquals and ArnLike, along with their Not and IfExists variants.
Added the BEAST frontend option so_reuseport, which facilitates running multiple RGW instances on the same host by sharing a single TCP port. (See the example at the end of this section.)
Replication policies now validate permissions using s3:ReplicateObject, s3:ReplicateDelete, and s3:ReplicateTags for destination buckets. For source buckets, both s3:GetObjectVersionForReplication and s3:GetObject(Version) are supported. Actions like s3:GetObjectAcl, s3:GetObjectLegalHold, and s3:GetObjectRetention are also considered when fetching the source object. Replication of tags is controlled by the s3:GetObject(Version)Tagging permission.
Added missing quotes to the ETag values returned by S3 CopyPart, PostObject, and CompleteMultipartUpload responses.
PutObjectLockConfiguration can now be used to enable S3 Object Lock on an existing versioning-enabled bucket that was not created with Object Lock enabled.
The x-amz-confirm-remove-self-bucket-access header is now supported by PutBucketPolicy. Additionally, the root user will always have access to modify the bucket policy, even if the current policy explicitly denies access.
Added support for the RestrictPublicBuckets property of the S3 PublicAccessBlock configuration.
The HeadBucket API now reports the X-RGW-Bytes-Used and X-RGW-Object-Count headers only when the read-stats querystring is explicitly included in the API request.
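A hedged sketch of enabling the new BEAST option via the frontend configuration (the port and the exact so_reuseport value syntax are assumptions; the rest of the rgw_frontends string follows the usual beast option format):
$ ceph config set client.rgw rgw_frontends "beast port=8080 so_reuseport=1"   # allow multiple RGW instances to share one TCP port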
Telemetry ¶
- The basic channel in telemetry now captures the ec_optimizations flag, which will allow us to gauge feature adoption for the new FastEC improvements. To opt into telemetry, run ceph telemetry on.
Upgrading from Reef or Squid ¶
Before starting, ensure that your cluster is stable and healthy, with no down, recovering, incomplete, undersized, or backfilling PGs. You can temporarily disable the PG autoscaler for all pools during the upgrade by running ceph osd pool set noautoscale before beginning; if the autoscaler is still desired after completion, run ceph osd pool unset noautoscale once upgrade success is confirmed. (See the commands below.)
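For convenience, the same flag commands as explicit command lines:
$ ceph osd pool set noautoscale      # disable the PG autoscaler on all pools before upgrading
$ ceph osd pool unset noautoscale    # re-enable it after the upgrade has been confirmed successful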
Note:
You can monitor the progress of your upgrade at each stage with the ceph versions command, which will tell you what Ceph version(s) are running for each type of daemon.
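For example, partway through an upgrade the output might look roughly like the following (abridged and illustrative only; the version hashes and daemon counts are placeholders):
$ ceph versions
{
    "mon": { "ceph version 20.2.0 (...) tentacle (stable)": 3 },
    "osd": { "ceph version 19.2.1 (...) squid (stable)": 8,
             "ceph version 20.2.0 (...) tentacle (stable)": 4 },
    "overall": { ... }
}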
Upgrading Cephadm Clusters ¶
If your cluster is deployed with cephadm (first introduced in Octopus), then the upgrade process is entirely automated. To initiate the upgrade,
$ ceph orch upgrade start --image quay.io/ceph/ceph:v20.2.0
The same process is used to upgrade to future minor releases.
Upgrade progress can be monitored with
$ ceph orch upgrade status
Upgrade progress can also be monitored with ceph -s (which provides a simple progress bar) or more verbosely with
$ ceph -W cephadm
The upgrade can be paused or resumed with
$ ceph orch upgrade pause # to pause
$ ceph orch upgrade resume # to resume
or canceled with
$ ceph orch upgrade stop
Note that canceling the upgrade simply stops the process. There is no ability to downgrade back to Reef or Squid.
Upgrading Non-cephadm Clusters ¶
Note:
If your cluster is running Reef (18.2.x) or later, you might choose to first convert it to use cephadm so that the upgrade to Tentacle is automated (see above). For more information, see https://docs.ceph.com/en/tentacle/cephadm/adoption/.
If your cluster is running Reef (18.2.x) or later, systemd unit file names have changed to include the cluster fsid. To find the correct systemd unit file name for your cluster, run the following command:
$ systemctl -l | grep <daemon type>
Example:
$ systemctl -l | grep mon | grep active ceph-6ce0347c-314a-11ee-9b52-000af7995d6c@mon.f28-h21-000-r630.service loaded active running Ceph mon.f28-h21-000-r630 for 6ce0347c-314a-11ee-9b52-000af7995d6c
1. Set the noout flag for the duration of the upgrade. (Optional, but recommended.)
$ ceph osd set noout
2. Upgrade Monitors by installing the new packages and restarting the Monitor daemons. For example, on each Monitor host:
$ systemctl restart ceph-mon.target
Once all Monitors are up, verify that the Monitor upgrade is complete by looking for the tentacle string in the mon map. The command:
$ ceph mon dump | grep min_mon_release
should report:
min_mon_release 20 (tentacle)
If it does not, that implies that one or more Monitors haven't been upgraded and restarted and/or the quorum does not include all Monitors.
3. Upgrade ceph-mgr daemons by installing the new packages and restarting all Manager daemons. For example, on each Manager host:
$ systemctl restart ceph-mgr.target
Verify the ceph-mgr daemons are running by checking ceph -s:
$ ceph -s
...
  services:
    mon: 3 daemons, quorum foo,bar,baz
    mgr: foo(active), standbys: bar, baz
...
4. Upgrade all OSDs by installing the new packages and restarting the ceph-osd daemons on all OSD hosts:
$ systemctl restart ceph-osd.target
5. Upgrade all CephFS MDS daemons. For each CephFS file system:
5.1. Disable standby_replay:
$ ceph fs set <fs_name> allow_standby_replay false
5.2. Reduce the number of ranks to 1. (Make note of the original number of MDS daemons first if you plan to restore it later.)
$ ceph status
$ ceph fs set <fs_name> max_mds 1
5.3. Wait for the cluster to deactivate any non-zero ranks by periodically checking the status:
$ ceph status
5.4. Take all standby MDS daemons offline on the appropriate hosts with:
$ systemctl stop ceph-mds@<daemon_name>
5.5. Confirm that only one MDS is online and is rank 0 for your FS:
$ ceph status
5.6. Upgrade the last remaining MDS daemon by installing the new packages and restarting the daemon:
$ systemctl restart ceph-mds.target
5.7. Restart all standby MDS daemons that were taken offline:
$ systemctl start ceph-mds.target
5.8. Restore the original value of max_mds for the volume:
$ ceph fs set <fs_name> max_mds <original_max_mds>
6. Upgrade all radosgw daemons by upgrading packages and restarting daemons on all hosts:
$ systemctl restart ceph-radosgw.target
7. Complete the upgrade by disallowing pre-Tentacle OSDs and enabling all new Tentacle-only functionality:
$ ceph osd require-osd-release tentacle
8. If you set noout at the beginning, be sure to clear it with:
$ ceph osd unset noout
9. Consider transitioning your cluster to use the cephadm deployment and orchestration framework to simplify cluster management and future upgrades. For more information on converting an existing cluster to cephadm, see https://docs.ceph.com/en/tentacle/cephadm/adoption/.
Post-upgrade ¶
Verify the cluster is healthy with ceph health.
Consider enabling telemetry to send anonymized usage statistics and crash information to Ceph upstream developers. To see what would be reported without actually sending any information to anyone:
$ ceph telemetry preview-all
If you are comfortable with the data that is reported, you can opt-in to automatically report high-level cluster metadata with:
$ ceph telemetry on
The public dashboard that aggregates Ceph telemetry can be found at https://telemetry-public.ceph.com/.
Upgrading from Pre-Reef Releases (like Quincy) ¶
You must first upgrade to Reef (18.2.z) or Squid (19.2.z) before upgrading to Tentacle.
Thank You to Our Contributors ¶
We express our gratitude to all members of the Ceph community who contributed by proposing pull requests, testing this release, providing feedback, and offering valuable suggestions.
If you are interested in helping test the next release, Umbrella, please join us at the #ceph-at-scale Slack channel.
The Tentacle release would not be possible without the contributions of the community:
Aashish Sharma ▪ Abhishek Desai ▪ Abhishek Kane ▪ Abhishek Lekshmanan ▪ Achint Kaur ▪ Achintk1491 ▪ Adam C. Emerson ▪ Adam King ▪ Adam Kupczyk ▪ Adam Lyon-Jones ▪ Adarsh Ashokan ▪ Afreen Misbah ▪ Aishwarya Mathuria ▪ Alex Ainscow ▪ Alex Kershaw ▪ Alex Wojno ▪ Alexander Indenbaum ▪ Alexey Odinokov ▪ Alexon Oliveira ▪ Ali Maredia ▪ Ali Masarwa ▪ Aliaksei Makarau ▪ Anatoly Scheglov ▪ Andrei Ivashchenko ▪ Ankit Kumar ▪ Ankush Behl ▪ Anmol Babu ▪ Anoop C S ▪ Anthony D Atri ▪ Anuradha Gadge ▪ Anushruti Sharma ▪ arm7star ▪ Artem Vasilev ▪ Avan Thakkar ▪ Aviv Caro ▪ Benedikt Heine ▪ Bernard Landon ▪ Bill Scales ▪ Brad Hubbard ▪ Brian P ▪ bugwz ▪ cailianchun ▪ Casey Bodley ▪ Chanyoung Park ▪ Chen Yuanrun ▪ Chengen Du ▪ Christian Rohmann ▪ Christopher Hoffman ▪ chungfengz ▪ Chunmei Liu ▪ Connor Fawcett ▪ Cory Snyder ▪ Cybertinus ▪ daijufang ▪ Dan Mick ▪ Dan van der Ster ▪ Daniel Gryniewicz ▪ Danny Al-Gaaf ▪ DanWritesCode ▪ David Galloway ▪ Deepika Upadhyay ▪ Dhairya Parmar ▪ Divyansh Kamboj ▪ Dnyaneshwari ▪ Dominique Leuenberger ▪ Dongdong Tao ▪ Doug Whitfield ▪ Drunkard Zhang ▪ Effi Ofer ▪ Emin ▪ Emin Mert Sunacoglu ▪ Enrico Bocchi ▪ Enrico De Fent ▪ er0k ▪ Erik Sjölund ▪ Ernesto Puerta ▪ Ethan Wu ▪ Feng, Hualong ▪ Florent Carli ▪ Gabriel BenHanokh ▪ Gal Salomon ▪ Garry Drankovich ▪ Gil Bregman ▪ Gilad Sid ▪ gitkenan ▪ Gregory O'Neill ▪ Guillaume Abrioux ▪ gukaifeng ▪ Hannes Baum ▪ haoyixing ▪ hejindong ▪ Hezko ▪ Hoai-Thu Vuong ▪ Hualong Feng ▪ Hyun Jin Kim ▪ igomon ▪ Igor Fedotov ▪ Igor Golikov ▪ Ilya Dryomov ▪ imtzw ▪ Indira Sawant ▪ Ivo Almeida ▪ J. Eric Ivancich ▪ Jakob Haufe ▪ James Oakley ▪ Jamie Pryde ▪ Jane Zhu ▪ Janne Heß ▪ Jannis Speer ▪ Jared Yu ▪ Jaya Prakash ▪ Jayaprakash-ibm ▪ Jesse F. Williamson ▪ Jesse Williamson ▪ Jianwei Zhang ▪ Jianxin Li ▪ jiawd ▪ Jiffin Tony Thottan ▪ Joao Eduardo Luis ▪ Joel Davidow ▪ John Agombar ▪ John Mulligan ▪ Jon Bailey ▪ Jos Collin ▪ Jose J Palacios-Perez ▪ Joshua Baergen ▪ Joshua Blanch ▪ Juan Ferrer Toribio ▪ Juan Miguel Olmo Martínez ▪ julpark ▪ junxiang Mu ▪ Kalpesh Pandya ▪ Kamoltat Sirivadhna ▪ kchheda3 ▪ Kefu Chai ▪ Ken Dreyer ▪ Kevin Niederwanger ▪ Kevin Zhao ▪ Kotresh Hiremath Ravishankar ▪ Kritik Sachdeva ▪ Kushal Deb ▪ Kushal Jyoti Deb ▪ Kyrylo Shatskyy ▪ Laimis Juzeliūnas ▪ Laura Flores ▪ Lee Sanders ▪ Leo Mylonas ▪ Leonid Chernin ▪ Leonid Usov ▪ lightmelodies ▪ Linjing Li ▪ liubingrun ▪ lizhipeng ▪ Lorenz Bausch ▪ Luc Ritchie ▪ Lucian Petrut ▪ Luo Rixin ▪ Ma Jianpeng ▪ Marc Singer ▪ Marcel Lauhoff ▪ Mark Kogan ▪ Mark Nelson ▪ Martin Nowak ▪ Matan Breizman ▪ Matt Benjamin ▪ Matt Vandermeulen ▪ Matteo Paramatti ▪ Matthew Vernon ▪ Max Carrara ▪ Max Kellermann ▪ Md Mahamudur Rahaman Sajib ▪ Michael J. Kidd ▪ Michal Nasiadka ▪ Mike Perez ▪ Miki Patel ▪ Milind Changire ▪ Mindy Preston ▪ Mingyuan Liang ▪ Mohit Agrawal ▪ molpako ▪ mosayyebzadeh ▪ Mouratidis Theofilos ▪ Mykola Golub ▪ Myoungwon Oh ▪ N Balachandran ▪ Naman Munet ▪ Naveen Naidu ▪ nbalacha ▪ Neeraj Pratap Singh ▪ Neha Ojha ▪ Niklas Hambüchen ▪ Nithya Balachandran ▪ Nitzan Mordechai ▪ Nizamudeen A ▪ Oguzhan Ozmen ▪ Omid Yoosefi ▪ Omri Zeneva ▪ Or Ozeri ▪ Orit Wasserman ▪ Oshrey Avraham ▪ Patrick Donnelly ▪ Paul Cuzner ▪ Paul Stemmet ▪ Paulo E. 
Castro ▪ Pedro Gonzalez Gomez ▪ Pere Diaz Bou ▪ Peter Sabaini ▪ Pierre Riteau ▪ Piotr Parczewski ▪ Piyush Agarwal ▪ Ponnuvel Palaniyappan ▪ Prachi Goel ▪ Prashant D ▪ prik73 ▪ Pritha Srivastava ▪ Puja Shahu ▪ pujashahu ▪ qn2060 ▪ Radoslaw Zarzynski ▪ Raja Sharma ▪ Ramana Raja ▪ Redouane Kachach ▪ rhkelson ▪ Richard Poole ▪ Rishabh Dave ▪ Robin Geuze ▪ Ronen Friedman ▪ Rongqi Sun ▪ Rostyslav Khudov ▪ Roy Sahar ▪ Ryotaro Banno ▪ Sachin Prabhu ▪ Sachin Punadikar ▪ Sam Goyal ▪ Samarah Uriarte ▪ Samuel Just ▪ Satoru Takeuchi ▪ Seena Fallah ▪ Shachar Sharon ▪ Shasha Lu ▪ Shawn Edwards ▪ Shen Jiatong ▪ Shilpa Jagannath ▪ shimin ▪ Shinya Hayashi ▪ Shraddha Agrawal ▪ Shreya Sapale ▪ Shreyansh Sancheti ▪ Shrish0098 ▪ Shua Lv ▪ Shweta Bhosale ▪ Shweta Sodani ▪ Shwetha K Acharya ▪ Sidharth Anupkrishnan ▪ Silent ▪ Simon Jürgensmeyer ▪ Soumya Koduri ▪ Sridhar Seshasayee ▪ Stellios Williams ▪ Steven Chien ▪ Sun Lan ▪ Sungjoon Koh ▪ Sungmin Lee ▪ Sunil Angadi ▪ Sunnat Samadov ▪ Surya Kumari Jangala ▪ Suyash Dongre ▪ T K Chandra Hasan ▪ Taha Jahangir ▪ Tan Changzhi ▪ Teng Jie ▪ Teoman Onay ▪ Thomas Lamprecht ▪ Tobias Fischer ▪ Tobias Urdin ▪ Tod Chen ▪ Tomer Haskalovitch ▪ TomNewChao ▪ Toshikuni Fukaya ▪ Trang Tran ▪ TruongSinh Tran-Nguyen ▪ Tyler Brekke ▪ Tyler Stachecki ▪ Umesh Muthuvara ▪ Vallari Agrawal ▪ Venky Shankar ▪ Victoria Mackie ▪ Ville Ojamo ▪ Vinay Bhaskar Varada ▪ Wang Chao ▪ wanglinke ▪ Xavi Hernandez ▪ Xiubo Li ▪ Xuehan Xu ▪ XueYu Bai ▪ Yaarit Hatuka ▪ Yan, Zheng ▪ Yantao Xue ▪ Yao guotao ▪ Yehuda Sadeh ▪ Yingxin Cheng ▪ Yite Gu ▪ Yonatan Zaken ▪ Yuri Weinstein ▪ Yuval Lifshitz ▪ Zac Dover ▪ Zack Cerza ▪ Zaken ▪ Zhang Song ▪ zhangjianwei2 ▪ Zhansong Gao ▪ Zhipeng Li ▪ 胡玮文