[PATCH v39 11/24] x86/sgx: Add SGX enclave driver
Matthew Wilcox
willy at infradead.org
Mon Oct 5 01:30:53 UTC 2020
On Mon, Oct 05, 2020 at 02:41:53AM +0300, Jarkko Sakkinen wrote:
> On Sun, Oct 04, 2020 at 11:27:50PM +0100, Matthew Wilcox wrote:
> > 	int ret = 0;
> >
> > 	mutex_lock(&encl->lock);
> > 	rcu_read_lock();
>
> Right, so xa_*() take the RCU lock implicitly and xas_*() do not.
Not necessarily the RCU lock ... I did document all this in xarray.rst:
https://www.kernel.org/doc/html/latest/core-api/xarray.html
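To make the distinction concrete, here's a rough sketch (against a
hypothetical xarray 'xa' and index, not the enclave code): the normal
xa_* call locks for you, while the advanced xas_* call lets the caller
choose between the RCU read lock and the array's spinlock:

	XA_STATE(xas, &xa, index);
	void *entry;

	/* Normal API: xa_load() takes care of the locking itself. */
	entry = xa_load(&xa, index);

	/* Advanced API: the caller provides the protection, which may be
	 * the RCU read lock ... */
	rcu_read_lock();
	entry = xas_load(&xas);
	rcu_read_unlock();

	/* ... or the array's spinlock. */
	xas_lock(&xas);
	entry = xas_load(&xas);
	xas_unlock(&xas);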
> > 	while (xas.index < idx_end) {
> > 		page = xas_next(&xas);
>
> It should iterate through every possible page index within the range,
> even the ones that do not have an entry, i.e. this loop also checks
> that there are no empty slots.
>
> Does xas_next() go through every possible index, or skip to the
> non-empty ones?
xas_next(), as its documentation says, will move to the next array
index:
https://www.kernel.org/doc/html/latest/core-api/xarray.html#c.xas_next
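So it visits every index in turn, returning NULL for an empty slot rather
than skipping over it.  A tiny sketch (hypothetical array, starting at
index 5):

	XA_STATE(xas, &xa, 5);
	void *entry;

	rcu_read_lock();
	entry = xas_next(&xas);	/* entry at index 5 (the first call loads in place) */
	entry = xas_next(&xas);	/* entry at index 6, NULL if that slot is empty */
	entry = xas_next(&xas);	/* entry at index 7, and so on, one index per call */
	rcu_read_unlock();

xas_find() / xas_for_each(), by contrast, jump straight from one present
entry to the next and never see the holes.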
> > 		if (!page || (~page->vm_max_prot_bits & vm_prot_bits)) {
> > 			ret = -EACCES;
> > 			break;
> > 		}
> > 	}
> > 	rcu_read_unlock();
> > 	mutex_unlock(&encl->lock);
>
> On my Geminilake NUC the maximum size of the address space is 64GB for
> an enclave, and it is not fixed but can grow in microarchitectures
> beyond that.
>
> That means that in the (*artificial*) worst case the locks would be held
> for 64*1024*1024*1024/4096 = 16777216 iterations.
Oh, there's support for that in the XArray API too.
	xas_lock_irq(&xas);
	xas_for_each_marked(&xas, page, end, PAGECACHE_TAG_DIRTY) {
		xas_set_mark(&xas, PAGECACHE_TAG_TOWRITE);
		if (++tagged % XA_CHECK_SCHED)
			continue;
		xas_pause(&xas);
		xas_unlock_irq(&xas);
		cond_resched();
		xas_lock_irq(&xas);
	}
	xas_unlock_irq(&xas);
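Applied to your check loop above, the shape would be something like this.
Only a sketch: it reuses the names from the snippet above, and assumes an
encl->page_array xarray, a [idx_start, idx_end) page-index range and the
XA_CHECK_SCHED batch size; pick whatever names and batching you prefer.

	XA_STATE(xas, &encl->page_array, idx_start);
	struct sgx_encl_page *page;
	unsigned long idx, count = 0;
	int ret = 0;

	mutex_lock(&encl->lock);
	xas_lock(&xas);
	for (idx = idx_start; idx < idx_end; idx++) {
		page = xas_next(&xas);

		/* A hole in the range fails the check, as you wanted. */
		if (!page || (~page->vm_max_prot_bits & vm_prot_bits)) {
			ret = -EACCES;
			break;
		}

		/* Drop the locks every XA_CHECK_SCHED entries so a 16M-page
		 * walk cannot hog them; xas_pause() lets the iteration
		 * resume safely once they are retaken. */
		if (!(++count % XA_CHECK_SCHED)) {
			xas_pause(&xas);
			xas_unlock(&xas);
			mutex_unlock(&encl->lock);
			cond_resched();
			mutex_lock(&encl->lock);
			xas_lock(&xas);
		}
	}
	xas_unlock(&xas);
	mutex_unlock(&encl->lock);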