/*
 * Asterisk -- An open source telephony toolkit.
 *
 * Copyright (C) 1999 - 2010, Digium, Inc.
 *
 * Mark Spencer <markster@digium.com>
 *
 * See http://www.asterisk.org for more information about
 * the Asterisk project. Please do not directly contact
 * any of the maintainers of this project for assistance;
 * the project provides a web site, mailing lists and IRC
 * channels for your use.
 *
 * This program is free software, distributed under the terms of
 * the GNU General Public License Version 2. See the LICENSE file
 * at the top of the source tree.
 */

/*! \file
 * \brief Asterisk locking-related definitions:
 * - ast_mutex_t, ast_rwlock_t and related functions;
 * - atomic arithmetic instructions;
 * - wrappers for channel locking.
 *
 * - See \ref LockDef
 */

/*! \page LockDef Asterisk thread locking models
 *
 * This file provides different implementations of the functions,
 * depending on the platform, the use of DEBUG_THREADS, and the way
 * module-level mutexes are initialized.
 *
 * - \b static: the mutex is assigned the value AST_MUTEX_INIT_VALUE;
 *   this is done at compile time, and is the way used on Linux.
 *   This method is not applicable to all platforms, e.g. when
 *   initialization requires that some code be run.
 *
 * - \b through constructors: for each mutex, a constructor function is
 *   defined, which then runs when the program (or the module)
 *   starts. The problem with this approach is that there is a
 *   lot of code duplication (a new block of code is created for
 *   each mutex). Also, it does not prevent a user from declaring
 *   a global mutex without going through the wrapper macros,
 *   so sane programming practices are still required.
 */
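
/*!
 * As a rough illustration of the model hidden behind the wrapper macros, a
 * module-level mutex is normally declared and used along these lines (a
 * minimal sketch; the lock, counter and function names are hypothetical,
 * while AST_MUTEX_DEFINE_STATIC, ast_mutex_lock and ast_mutex_unlock are the
 * wrappers declared later in this header):
 *
 * \code
 * AST_MUTEX_DEFINE_STATIC(my_module_lock);
 *
 * static int my_module_counter;
 *
 * static void bump_counter(void)
 * {
 *     ast_mutex_lock(&my_module_lock);
 *     my_module_counter++;
 *     ast_mutex_unlock(&my_module_lock);
 * }
 * \endcode
 *
 * Whichever initialization model the platform uses, it is selected inside
 * AST_MUTEX_DEFINE_STATIC, so the calling code looks the same everywhere.
 */
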
#ifndef _ASTERISK_LOCK_H
#define _ASTERISK_LOCK_H

#include <pthread.h>
#include <time.h>
#include <sys/param.h>
#ifdef HAVE_BKTR
#include <execinfo.h>
#endif

#ifdef DEBUG_THREADS
#include <string.h>
#endif

#ifndef HAVE_PTHREAD_RWLOCK_TIMEDWRLOCK
#include "asterisk/time.h"
#endif

#include "asterisk/backtrace.h"
#include "asterisk/logger.h"
#include "asterisk/compiler.h"

#if defined(__cplusplus) || defined(c_plusplus)
extern "C" {
#endif

#define AST_PTHREADT_NULL (pthread_t) -1
#define AST_PTHREADT_STOP (pthread_t) -2

#if (defined(SOLARIS) || defined(BSD))
#define AST_MUTEX_INIT_W_CONSTRUCTORS
#endif /* SOLARIS || BSD */

/* Asterisk REQUIRES recursive (not error checking) mutexes
   and will not run without them. */
#if defined(HAVE_PTHREAD_RECURSIVE_MUTEX_INITIALIZER_NP) && defined(HAVE_PTHREAD_MUTEX_RECURSIVE_NP)
#define PTHREAD_MUTEX_INIT_VALUE PTHREAD_RECURSIVE_MUTEX_INITIALIZER_NP
#define AST_MUTEX_KIND PTHREAD_MUTEX_RECURSIVE_NP
#else
#define PTHREAD_MUTEX_INIT_VALUE PTHREAD_MUTEX_INITIALIZER
#define AST_MUTEX_KIND PTHREAD_MUTEX_RECURSIVE
#endif /* PTHREAD_RECURSIVE_MUTEX_INITIALIZER_NP */

#ifdef HAVE_PTHREAD_RWLOCK_INITIALIZER
#define __AST_RWLOCK_INIT_VALUE PTHREAD_RWLOCK_INITIALIZER
#else  /* HAVE_PTHREAD_RWLOCK_INITIALIZER */
#define __AST_RWLOCK_INIT_VALUE {0}
#endif /* HAVE_PTHREAD_RWLOCK_INITIALIZER */

#ifdef HAVE_BKTR
#define AST_LOCK_TRACK_INIT_VALUE { { NULL }, { 0 }, 0, { NULL }, { 0 }, {{{ 0 }}}, PTHREAD_MUTEX_INIT_VALUE }
#else
#define AST_LOCK_TRACK_INIT_VALUE { { NULL }, { 0 }, 0, { NULL }, { 0 }, PTHREAD_MUTEX_INIT_VALUE }
#endif

#define AST_MUTEX_INIT_VALUE { PTHREAD_MUTEX_INIT_VALUE, NULL, {1, 0} }
#define AST_MUTEX_INIT_VALUE_NOTRACKING { PTHREAD_MUTEX_INIT_VALUE, NULL, {0, 0} }

#define AST_RWLOCK_INIT_VALUE { __AST_RWLOCK_INIT_VALUE, NULL, {1, 0} }
#define AST_RWLOCK_INIT_VALUE_NOTRACKING { __AST_RWLOCK_INIT_VALUE, NULL, {0, 0} }

#define AST_MAX_REENTRANCY 10

struct ast_channel;

/*!
 * \brief Lock tracking information.
 *
 * \note Any changes to this struct MUST be reflected in the
 * lock.c:restore_lock_tracking() function.
 */
struct ast_lock_track {
	const char *file[AST_MAX_REENTRANCY];
	int lineno[AST_MAX_REENTRANCY];
	int reentrancy;
	const char *func[AST_MAX_REENTRANCY];
	pthread_t thread_id[AST_MAX_REENTRANCY];
#ifdef HAVE_BKTR
	struct ast_bt backtrace[AST_MAX_REENTRANCY];
#endif
	pthread_mutex_t reentr_mutex;
};

struct ast_lock_track_flags {
	/*! non-zero if lock tracking is enabled */
	unsigned int tracking:1;
	/*! non-zero if track is setup */
	volatile unsigned int setup:1;
};

/*! \brief Structure for mutex and tracking information.
 *
 * We have tracking information in this structure regardless of DEBUG_THREADS being enabled.
 * The information will just be ignored in the core if a module does not request it.
 */
struct ast_mutex_info {
	pthread_mutex_t mutex;
#if !defined(DEBUG_THREADS) && !defined(DEBUG_THREADS_LOOSE_ABI) && \
	!defined(DETECT_DEADLOCKS)
	/*!
	 * These fields are renamed to ensure they are never used when
	 * DEBUG_THREADS is not defined.
	 */
	struct ast_lock_track *_track;
	struct ast_lock_track_flags _flags;
#elif defined(DEBUG_THREADS) || defined(DETECT_DEADLOCKS)
	/*! Track which thread holds this mutex. */
	struct ast_lock_track *track;
	struct ast_lock_track_flags flags;
#endif
};

/*! \brief Structure for rwlock and tracking information.
 *
 * We have tracking information in this structure regardless of DEBUG_THREADS being enabled.
 * The information will just be ignored in the core if a module does not request it.
 */
struct ast_rwlock_info {
	pthread_rwlock_t lock;
#if !defined(DEBUG_THREADS) && !defined(DEBUG_THREADS_LOOSE_ABI) && \
	!defined(DETECT_DEADLOCKS)
	/*!
	 * These fields are renamed to ensure they are never used when
	 * DEBUG_THREADS is not defined.
	 */
	struct ast_lock_track *_track;
	struct ast_lock_track_flags _flags;
#elif defined(DEBUG_THREADS) || defined(DETECT_DEADLOCKS)
	/*! Track which thread holds this lock */
	struct ast_lock_track *track;
	struct ast_lock_track_flags flags;
#endif
};

typedef struct ast_mutex_info ast_mutex_t;
typedef struct ast_rwlock_info ast_rwlock_t;

typedef pthread_cond_t ast_cond_t;

int __ast_pthread_mutex_init(int tracking, const char *filename, int lineno, const char *func, const char *mutex_name, ast_mutex_t *t);
int __ast_pthread_mutex_destroy(const char *filename, int lineno, const char *func, const char *mutex_name, ast_mutex_t *t);
int __ast_pthread_mutex_lock(const char *filename, int lineno, const char *func, const char *mutex_name, ast_mutex_t *t);
int __ast_pthread_mutex_trylock(const char *filename, int lineno, const char *func, const char *mutex_name, ast_mutex_t *t);
int __ast_pthread_mutex_unlock(const char *filename, int lineno, const char *func, const char *mutex_name, ast_mutex_t *t);

#define ast_mutex_init(pmutex) __ast_pthread_mutex_init(1, __FILE__, __LINE__, __PRETTY_FUNCTION__, #pmutex, pmutex)
#define ast_mutex_init_notracking(pmutex) __ast_pthread_mutex_init(0, __FILE__, __LINE__, __PRETTY_FUNCTION__, #pmutex, pmutex)
#define ast_mutex_destroy(a) __ast_pthread_mutex_destroy(__FILE__, __LINE__, __PRETTY_FUNCTION__, #a, a)
#define ast_mutex_lock(a) __ast_pthread_mutex_lock(__FILE__, __LINE__, __PRETTY_FUNCTION__, #a, a)
#define ast_mutex_unlock(a) __ast_pthread_mutex_unlock(__FILE__, __LINE__, __PRETTY_FUNCTION__, #a, a)
#define ast_mutex_trylock(a) __ast_pthread_mutex_trylock(__FILE__, __LINE__, __PRETTY_FUNCTION__, #a, a)

int __ast_cond_init(const char *filename, int lineno, const char *func, const char *cond_name, ast_cond_t *cond, pthread_condattr_t *cond_attr);
int __ast_cond_signal(const char *filename, int lineno, const char *func, const char *cond_name, ast_cond_t *cond);
int __ast_cond_broadcast(const char *filename, int lineno, const char *func, const char *cond_name, ast_cond_t *cond);
int __ast_cond_destroy(const char *filename, int lineno, const char *func, const char *cond_name, ast_cond_t *cond);
int __ast_cond_wait(const char *filename, int lineno, const char *func, const char *cond_name, const char *mutex_name, ast_cond_t *cond, ast_mutex_t *t);
int __ast_cond_timedwait(const char *filename, int lineno, const char *func, const char *cond_name, const char *mutex_name, ast_cond_t *cond, ast_mutex_t *t, const struct timespec *abstime);

#define ast_cond_init(cond, attr) __ast_cond_init(__FILE__, __LINE__, __PRETTY_FUNCTION__, #cond, cond, attr)
#define ast_cond_destroy(cond) __ast_cond_destroy(__FILE__, __LINE__, __PRETTY_FUNCTION__, #cond, cond)
#define ast_cond_signal(cond) __ast_cond_signal(__FILE__, __LINE__, __PRETTY_FUNCTION__, #cond, cond)
#define ast_cond_broadcast(cond) __ast_cond_broadcast(__FILE__, __LINE__, __PRETTY_FUNCTION__, #cond, cond)
#define ast_cond_wait(cond, mutex) __ast_cond_wait(__FILE__, __LINE__, __PRETTY_FUNCTION__, #cond, #mutex, cond, mutex)
#define ast_cond_timedwait(cond, mutex, time) __ast_cond_timedwait(__FILE__, __LINE__, __PRETTY_FUNCTION__, #cond, #mutex, cond, mutex, time)
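
/*!
 * A minimal sketch of the usual condition-variable pattern with these
 * wrappers (the mutex, condition, flag and function names are hypothetical;
 * only the ast_mutex_* and ast_cond_* wrappers come from this header):
 *
 * \code
 * static ast_mutex_t queue_lock;
 * static ast_cond_t queue_cond;
 * static int queue_ready;
 *
 * static void wait_until_ready(void)
 * {
 *     ast_mutex_lock(&queue_lock);
 *     while (!queue_ready) {
 *         ast_cond_wait(&queue_cond, &queue_lock);
 *     }
 *     ast_mutex_unlock(&queue_lock);
 * }
 *
 * static void mark_ready(void)
 * {
 *     ast_mutex_lock(&queue_lock);
 *     queue_ready = 1;
 *     ast_cond_signal(&queue_cond);
 *     ast_mutex_unlock(&queue_lock);
 * }
 * \endcode
 *
 * ast_mutex_init() and ast_cond_init() (plus the matching destroy calls) are
 * still expected at load/unload time, exactly as with the pthread primitives
 * they wrap.
 */
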
int __ast_rwlock_init(int tracking, const char *filename, int lineno, const char *func, const char *rwlock_name, ast_rwlock_t *t);
int __ast_rwlock_destroy(const char *filename, int lineno, const char *func, const char *rwlock_name, ast_rwlock_t *t);
int __ast_rwlock_unlock(const char *filename, int lineno, const char *func, ast_rwlock_t *t, const char *name);
int __ast_rwlock_rdlock(const char *filename, int lineno, const char *func, ast_rwlock_t *t, const char *name);
int __ast_rwlock_wrlock(const char *filename, int lineno, const char *func, ast_rwlock_t *t, const char *name);
int __ast_rwlock_timedrdlock(const char *filename, int lineno, const char *func, ast_rwlock_t *t, const char *name, const struct timespec *abs_timeout);
int __ast_rwlock_timedwrlock(const char *filename, int lineno, const char *func, ast_rwlock_t *t, const char *name, const struct timespec *abs_timeout);
int __ast_rwlock_tryrdlock(const char *filename, int lineno, const char *func, ast_rwlock_t *t, const char *name);
int __ast_rwlock_trywrlock(const char *filename, int lineno, const char *func, ast_rwlock_t *t, const char *name);

/*!
 * \brief wrapper for rwlock with tracking enabled
 * \return 0 on success, non zero for error
 * \since 1.6.1
 */
#define ast_rwlock_init(rwlock) __ast_rwlock_init(1, __FILE__, __LINE__, __PRETTY_FUNCTION__, #rwlock, rwlock)

/*!
 * \brief wrapper for ast_rwlock_init with tracking disabled
 * \return 0 on success, non zero for error
 * \since 1.6.1
 */
#define ast_rwlock_init_notracking(rwlock) __ast_rwlock_init(0, __FILE__, __LINE__, __PRETTY_FUNCTION__, #rwlock, rwlock)

#define ast_rwlock_destroy(rwlock) __ast_rwlock_destroy(__FILE__, __LINE__, __PRETTY_FUNCTION__, #rwlock, rwlock)
#define ast_rwlock_unlock(a) __ast_rwlock_unlock(__FILE__, __LINE__, __PRETTY_FUNCTION__, a, #a)
#define ast_rwlock_rdlock(a) __ast_rwlock_rdlock(__FILE__, __LINE__, __PRETTY_FUNCTION__, a, #a)
#define ast_rwlock_wrlock(a) __ast_rwlock_wrlock(__FILE__, __LINE__, __PRETTY_FUNCTION__, a, #a)
#define ast_rwlock_tryrdlock(a) __ast_rwlock_tryrdlock(__FILE__, __LINE__, __PRETTY_FUNCTION__, a, #a)
#define ast_rwlock_trywrlock(a) __ast_rwlock_trywrlock(__FILE__, __LINE__, __PRETTY_FUNCTION__, a, #a)
#define ast_rwlock_timedrdlock(a, b) __ast_rwlock_timedrdlock(__FILE__, __LINE__, __PRETTY_FUNCTION__, a, #a, b)
#define ast_rwlock_timedwrlock(a, b) __ast_rwlock_timedwrlock(__FILE__, __LINE__, __PRETTY_FUNCTION__, a, #a, b)
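
/*!
 * A minimal sketch of typical usage of the rwlock wrappers (the lock name,
 * shared_config_value and the helper functions are hypothetical):
 *
 * \code
 * AST_RWLOCK_DEFINE_STATIC(config_rwlock);
 *
 * static int shared_config_value;
 *
 * static int read_config_value(void)
 * {
 *     int value;
 *
 *     ast_rwlock_rdlock(&config_rwlock);
 *     value = shared_config_value;
 *     ast_rwlock_unlock(&config_rwlock);
 *
 *     return value;
 * }
 *
 * static void write_config_value(int value)
 * {
 *     ast_rwlock_wrlock(&config_rwlock);
 *     shared_config_value = value;
 *     ast_rwlock_unlock(&config_rwlock);
 * }
 * \endcode
 *
 * The timed variants take an absolute timeout (a struct timespec), as the
 * abs_timeout parameter name in the declarations above indicates.
 */
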
#define ROFFSET ((lt->reentrancy > 0) ? (lt->reentrancy-1) : 0)

#ifdef DEBUG_THREADS

#ifdef THREAD_CRASH
#define DO_THREAD_CRASH do { *((int *)(0)) = 1; } while(0)
#else
#define DO_THREAD_CRASH do { } while (0)
#endif

#include <errno.h>

enum ast_lock_type {
	AST_MUTEX,
	AST_RDLOCK,
	AST_WRLOCK,
};

/*!
 * \brief Store lock info for the current thread
 *
 * This function gets called in ast_mutex_lock() and ast_mutex_trylock() so
 * that information about this lock can be stored in this thread's
 * lock info struct. The lock is marked as pending as the thread is waiting
 * on the lock. ast_mark_lock_acquired() will mark it as held by this thread.
 */
void ast_store_lock_info(enum ast_lock_type type, const char *filename,
	int line_num, const char *func, const char *lock_name, void *lock_addr, struct ast_bt *bt);

/*!
 * \brief Mark the last lock as acquired
 */
void ast_mark_lock_acquired(void *lock_addr);

/*!
 * \brief Mark the last lock as failed (trylock)
 */
void ast_mark_lock_failed(void *lock_addr);

/*!
 * \brief remove lock info for the current thread
 *
 * this gets called by ast_mutex_unlock so that information on the lock can
 * be removed from the current thread's lock info struct.
 */
void ast_remove_lock_info(void *lock_addr, struct ast_bt *bt);

void ast_suspend_lock_info(void *lock_addr);
void ast_restore_lock_info(void *lock_addr);

/*!
 * \brief log info for the current lock with ast_log().
 *
 * this function would be mostly for debug. If you come across a lock
 * that is unexpectedly but momentarily locked, and you wonder who
 * you are fighting with for the lock, this routine could be called, IF
 * you have the thread debugging stuff turned on.
 * \param this_lock_addr lock address to return lock information
 * \since 1.6.1
 */
void ast_log_show_lock(void *this_lock_addr);

/*!
 * \brief Generate a lock dump equivalent to "core show locks".
 *
 * The lock dump generated is generally too large to be output by a
 * single ast_verbose/log/debug/etc. call. Only ast_cli() handles it
 * properly without changing BUFSIZ in logger.c.
 *
 * Note: This must be ast_free()d when you're done with it.
 *
 * \retval An ast_str containing the lock dump
 * \retval NULL on error
 * \since 12
 */
struct ast_str *ast_dump_locks(void);
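
/*!
 * A minimal sketch of how the dump is normally consumed, for example from a
 * CLI handler (assumes the ast_cli() and ast_str_buffer() helpers declared
 * in cli.h and strings.h; the fd variable is hypothetical):
 *
 * \code
 * struct ast_str *lock_dump = ast_dump_locks();
 *
 * if (lock_dump) {
 *     ast_cli(fd, "%s", ast_str_buffer(lock_dump));
 *     ast_free(lock_dump);
 * }
 * \endcode
 */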

/*!
 * \brief retrieve lock info for the specified mutex
 *
 * this gets called during deadlock avoidance, so that the information may
 * be preserved as to what location originally acquired the lock.
 */
int ast_find_lock_info(void *lock_addr, char *filename, size_t filename_size, int *lineno, char *func, size_t func_size, char *mutex_name, size_t mutex_name_size);

/*!
 * \brief Unlock a lock briefly
 *
 * used during deadlock avoidance, to preserve the original location where
 * a lock was originally acquired.
 */
#define AO2_DEADLOCK_AVOIDANCE(obj) \
	do { \
		char __filename[80], __func[80], __mutex_name[80]; \
		int __lineno; \
		int __res = ast_find_lock_info(ao2_object_get_lockaddr(obj), __filename, sizeof(__filename), &__lineno, __func, sizeof(__func), __mutex_name, sizeof(__mutex_name)); \
		int __res2 = ao2_unlock(obj); \
		usleep(1); \
		if (__res < 0) { /* Could happen if the ao2 object does not have a mutex. */ \
			if (__res2) { \
				ast_log(LOG_WARNING, "Could not unlock ao2 object '%s': %s and no lock info found! I will NOT try to relock.\n", #obj, strerror(__res2)); \
			} else { \
				ao2_lock(obj); \
			} \
		} else { \
			if (__res2) { \
				ast_log(LOG_WARNING, "Could not unlock ao2 object '%s': %s. {{{Originally locked at %s line %d: (%s) '%s'}}} I will NOT try to relock.\n", #obj, strerror(__res2), __filename, __lineno, __func, __mutex_name); \
			} else { \
				__ao2_lock(obj, AO2_LOCK_REQ_MUTEX, __filename, __func, __lineno, __mutex_name); \
			} \
		} \
	} while (0)

#define CHANNEL_DEADLOCK_AVOIDANCE(chan) \
	do { \
		char __filename[80], __func[80], __mutex_name[80]; \
		int __lineno; \
		int __res = ast_find_lock_info(ao2_object_get_lockaddr(chan), __filename, sizeof(__filename), &__lineno, __func, sizeof(__func), __mutex_name, sizeof(__mutex_name)); \
		int __res2 = ast_channel_unlock(chan); \
		usleep(1); \
		if (__res < 0) { /* Shouldn't ever happen, but just in case... */ \
			if (__res2) { \
				ast_log(LOG_WARNING, "Could not unlock channel '%s': %s and no lock info found! I will NOT try to relock.\n", #chan, strerror(__res2)); \
			} else { \
				ast_channel_lock(chan); \
			} \
		} else { \
			if (__res2) { \
				ast_log(LOG_WARNING, "Could not unlock channel '%s': %s. {{{Originally locked at %s line %d: (%s) '%s'}}} I will NOT try to relock.\n", #chan, strerror(__res2), __filename, __lineno, __func, __mutex_name); \
			} else { \
				__ao2_lock(chan, AO2_LOCK_REQ_MUTEX, __filename, __func, __lineno, __mutex_name); \
			} \
		} \
	} while (0)

#define DEADLOCK_AVOIDANCE(lock) \
	do { \
		char __filename[80], __func[80], __mutex_name[80]; \
		int __lineno; \
		int __res = ast_find_lock_info(lock, __filename, sizeof(__filename), &__lineno, __func, sizeof(__func), __mutex_name, sizeof(__mutex_name)); \
		int __res2 = ast_mutex_unlock(lock); \
		usleep(1); \
		if (__res < 0) { /* Shouldn't ever happen, but just in case... */ \
			if (__res2 == 0) { \
				ast_mutex_lock(lock); \
			} else { \
				ast_log(LOG_WARNING, "Could not unlock mutex '%s': %s and no lock info found! I will NOT try to relock.\n", #lock, strerror(__res2)); \
			} \
		} else { \
			if (__res2 == 0) { \
				__ast_pthread_mutex_lock(__filename, __lineno, __func, __mutex_name, lock); \
			} else { \
				ast_log(LOG_WARNING, "Could not unlock mutex '%s': %s. {{{Originally locked at %s line %d: (%s) '%s'}}} I will NOT try to relock.\n", #lock, strerror(__res2), __filename, __lineno, __func, __mutex_name); \
			} \
		} \
	} while (0)

/*!
 * \brief Deadlock avoidance unlock
 *
 * In certain deadlock avoidance scenarios, there is more than one lock to be
 * unlocked and relocked. Therefore, this pair of macros is provided for that
 * purpose. Note that every DLA_UNLOCK _MUST_ be paired with a matching
 * DLA_LOCK. The intent of this pair of macros is to be used around another
 * set of deadlock avoidance code, mainly CHANNEL_DEADLOCK_AVOIDANCE, as the
 * locking order specifies that we may safely lock a channel, followed by its
 * pvt, with no worries about a deadlock. In any other scenario, this macro
 * may not be safe to use.
 */
#define DLA_UNLOCK(lock) \
	do { \
		char __filename[80], __func[80], __mutex_name[80]; \
		int __lineno; \
		int __res = ast_find_lock_info(lock, __filename, sizeof(__filename), &__lineno, __func, sizeof(__func), __mutex_name, sizeof(__mutex_name)); \
		int __res2 = ast_mutex_unlock(lock);

/*!
 * \brief Deadlock avoidance lock
 *
 * In certain deadlock avoidance scenarios, there is more than one lock to be
 * unlocked and relocked. Therefore, this pair of macros is provided for that
 * purpose. Note that every DLA_UNLOCK _MUST_ be paired with a matching
 * DLA_LOCK. The intent of this pair of macros is to be used around another
 * set of deadlock avoidance code, mainly CHANNEL_DEADLOCK_AVOIDANCE, as the
 * locking order specifies that we may safely lock a channel, followed by its
 * pvt, with no worries about a deadlock. In any other scenario, this macro
 * may not be safe to use.
 */
#define DLA_LOCK(lock) \
	if (__res < 0) { /* Shouldn't ever happen, but just in case... */ \
		if (__res2) { \
			ast_log(LOG_WARNING, "Could not unlock mutex '%s': %s and no lock info found! I will NOT try to relock.\n", #lock, strerror(__res2)); \
		} else { \
			ast_mutex_lock(lock); \
		} \
	} else { \
		if (__res2) { \
			ast_log(LOG_WARNING, "Could not unlock mutex '%s': %s. {{{Originally locked at %s line %d: (%s) '%s'}}} I will NOT try to relock.\n", #lock, strerror(__res2), __filename, __lineno, __func, __mutex_name); \
		} else { \
			__ast_pthread_mutex_lock(__filename, __lineno, __func, __mutex_name, lock); \
		} \
	} \
	} while (0)
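
/*!
 * A minimal sketch of the intended pairing, following the channel-then-pvt
 * locking order described above (chan is a locked ast_channel; the pvt
 * structure and its lock are hypothetical):
 *
 * \code
 * // Both chan and pvt->lock are held here.
 * DLA_UNLOCK(&pvt->lock);
 * CHANNEL_DEADLOCK_AVOIDANCE(chan);
 * DLA_LOCK(&pvt->lock);
 * // Both locks are held again without violating the locking order.
 * \endcode
 */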

static inline void ast_reentrancy_lock(struct ast_lock_track *lt)
{
	int res;
	if ((res = pthread_mutex_lock(&lt->reentr_mutex))) {
		fprintf(stderr, "ast_reentrancy_lock failed: '%s' (%d)\n", strerror(res), res);
#if defined(DO_CRASH) || defined(THREAD_CRASH)
		abort();
#endif
	}
}

static inline void ast_reentrancy_unlock(struct ast_lock_track *lt)
{
	int res;
	if ((res = pthread_mutex_unlock(&lt->reentr_mutex))) {
		fprintf(stderr, "ast_reentrancy_unlock failed: '%s' (%d)\n", strerror(res), res);
#if defined(DO_CRASH) || defined(THREAD_CRASH)
		abort();
#endif
	}
}

#else /* !DEBUG_THREADS */

#define AO2_DEADLOCK_AVOIDANCE(obj) \
	ao2_unlock(obj); \
	usleep(1); \
	ao2_lock(obj);

#define CHANNEL_DEADLOCK_AVOIDANCE(chan) \
	ast_channel_unlock(chan); \
	usleep(1); \
	ast_channel_lock(chan);

#define DEADLOCK_AVOIDANCE(lock) \
	do { \
		int __res; \
		if (!(__res = ast_mutex_unlock(lock))) { \
			usleep(1); \
			ast_mutex_lock(lock); \
		} else { \
			ast_log(LOG_WARNING, "Failed to unlock mutex '%s' (%s). I will NOT try to relock. {{{ THIS IS A BUG. }}}\n", #lock, strerror(__res)); \
		} \
	} while (0)

#define DLA_UNLOCK(lock) ast_mutex_unlock(lock)

#define DLA_LOCK(lock) ast_mutex_lock(lock)

#endif /* !DEBUG_THREADS */
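
/*!
 * A minimal sketch of the usual way DEADLOCK_AVOIDANCE() is employed: while
 * holding one lock and failing to trylock another, briefly yield the held
 * lock so the competing thread can make progress (the container lock and the
 * entry structure are hypothetical):
 *
 * \code
 * ast_mutex_lock(&container_lock);
 * while (ast_mutex_trylock(&entry->lock)) {
 *     DEADLOCK_AVOIDANCE(&container_lock);
 * }
 * // Both container_lock and entry->lock are now held.
 * \endcode
 *
 * The same pattern applies with CHANNEL_DEADLOCK_AVOIDANCE() and
 * AO2_DEADLOCK_AVOIDANCE() when the held lock is a channel or an ao2 object.
 */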

#if defined(AST_MUTEX_INIT_W_CONSTRUCTORS)
/*
 * If AST_MUTEX_INIT_W_CONSTRUCTORS is defined, use file scope constructors
 * and destructors to create/destroy global mutexes.
 */
#define __AST_MUTEX_DEFINE(scope, mutex, init_val, track) \
	scope ast_mutex_t mutex = init_val; \
	static void __attribute__((constructor)) init_##mutex(void) \
	{ \
		if (track) \
			ast_mutex_init(&mutex); \
		else \
			ast_mutex_init_notracking(&mutex); \
	} \
	\
	static void __attribute__((destructor)) fini_##mutex(void) \
	{ \
		ast_mutex_destroy(&mutex); \
	}
#else /* !AST_MUTEX_INIT_W_CONSTRUCTORS */
/* By default, use static initialization of mutexes. */
#define __AST_MUTEX_DEFINE(scope, mutex, init_val, track) scope ast_mutex_t mutex = init_val
#endif /* AST_MUTEX_INIT_W_CONSTRUCTORS */

#define AST_MUTEX_DEFINE_STATIC(mutex) __AST_MUTEX_DEFINE(static, mutex, AST_MUTEX_INIT_VALUE, 1)
#define AST_MUTEX_DEFINE_STATIC_NOTRACKING(mutex) __AST_MUTEX_DEFINE(static, mutex, AST_MUTEX_INIT_VALUE_NOTRACKING, 0)

/* Statically declared read/write locks */
#ifdef AST_MUTEX_INIT_W_CONSTRUCTORS
#define __AST_RWLOCK_DEFINE(scope, rwlock, init_val, track) \
	scope ast_rwlock_t rwlock = init_val; \
	static void __attribute__((constructor)) init_##rwlock(void) \
	{ \
		if (track) \
			ast_rwlock_init(&rwlock); \
		else \
			ast_rwlock_init_notracking(&rwlock); \
	} \
	static void __attribute__((destructor)) fini_##rwlock(void) \
	{ \
		ast_rwlock_destroy(&rwlock); \
	}
#else
#define __AST_RWLOCK_DEFINE(scope, rwlock, init_val, track) scope ast_rwlock_t rwlock = init_val
#endif

#define AST_RWLOCK_DEFINE_STATIC(rwlock) __AST_RWLOCK_DEFINE(static, rwlock, AST_RWLOCK_INIT_VALUE, 1)
#define AST_RWLOCK_DEFINE_STATIC_NOTRACKING(rwlock) __AST_RWLOCK_DEFINE(static, rwlock, AST_RWLOCK_INIT_VALUE_NOTRACKING, 0)

/*!
 * \brief Scoped Locks
 *
 * Scoped locks provide a way to use RAII locks. In other words,
 * declaration of a scoped lock will automatically define and lock
 * the lock. When the lock goes out of scope, it will automatically
 * be unlocked.
 *
 * \code
 * int some_function(struct ast_channel *chan)
 * {
 *     SCOPED_LOCK(lock, chan, ast_channel_lock, ast_channel_unlock);
 *
 *     if (!strcmp(ast_channel_name(chan), "foo")) {
 *         return 0;
 *     }
 *
 *     return -1;
 * }
 * \endcode
 *
 * In the above example, neither return path requires explicit unlocking
 * of the channel.
 *
 * \note
 * Care should be taken when using SCOPED_LOCKS in conjunction with ao2 objects.
 * ao2 objects should be unlocked before they are unreffed. Since SCOPED_LOCK runs
 * once the variable goes out of scope, this can easily lead to situations where the
 * variable gets unlocked after it is unreffed.
 *
 * \param varname The unique name to give to the scoped lock. You are not likely to reference
 * this outside of the SCOPED_LOCK invocation.
 * \param lock The variable to lock. This can be anything that can be passed to a locking
 * or unlocking function.
 * \param lockfunc The function to call to lock the lock
 * \param unlockfunc The function to call to unlock the lock
 */
#define SCOPED_LOCK(varname, lock, lockfunc, unlockfunc) \
	RAII_VAR(typeof((lock)), varname, ({ lockfunc((lock)); (lock); }), unlockfunc)

/*!
 * \brief scoped lock specialization for mutexes
 */
#define SCOPED_MUTEX(varname, lock) SCOPED_LOCK(varname, (lock), ast_mutex_lock, ast_mutex_unlock)

/*!
 * \brief scoped lock specialization for read locks
 */
#define SCOPED_RDLOCK(varname, lock) SCOPED_LOCK(varname, (lock), ast_rwlock_rdlock, ast_rwlock_unlock)

/*!
 * \brief scoped lock specialization for write locks
 */
#define SCOPED_WRLOCK(varname, lock) SCOPED_LOCK(varname, (lock), ast_rwlock_wrlock, ast_rwlock_unlock)

/*!
 * \brief scoped lock specialization for ao2 mutexes.
 */
#define SCOPED_AO2LOCK(varname, obj) SCOPED_LOCK(varname, (obj), ao2_lock, ao2_unlock)

/*!
 * \brief scoped lock specialization for ao2 read locks.
 */
#define SCOPED_AO2RDLOCK(varname, obj) SCOPED_LOCK(varname, (obj), ao2_rdlock, ao2_unlock)

/*!
 * \brief scoped lock specialization for ao2 write locks.
 */
#define SCOPED_AO2WRLOCK(varname, obj) SCOPED_LOCK(varname, (obj), ao2_wrlock, ao2_unlock)

/*!
 * \brief scoped lock specialization for channels.
 */
#define SCOPED_CHANNELLOCK(varname, chan) SCOPED_LOCK(varname, (chan), ast_channel_lock, ast_channel_unlock)
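
/*!
 * A minimal sketch of one of the specializations, SCOPED_MUTEX; it behaves
 * like the SCOPED_LOCK example above but for an ast_mutex_t (the lock, list
 * and helper names are hypothetical):
 *
 * \code
 * static int find_entry(int key)
 * {
 *     SCOPED_MUTEX(lock, &list_lock);
 *
 *     if (!list_head) {
 *         return -1;
 *     }
 *
 *     return lookup_in_list(list_head, key);
 * }
 * \endcode
 *
 * Both return paths leave the function with list_lock automatically unlocked.
 */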

#ifndef __CYGWIN__ /* temporarily disabled for cygwin */
#define pthread_mutex_t use_ast_mutex_t_instead_of_pthread_mutex_t
#define pthread_cond_t use_ast_cond_t_instead_of_pthread_cond_t
#endif
#define pthread_mutex_lock use_ast_mutex_lock_instead_of_pthread_mutex_lock
#define pthread_mutex_unlock use_ast_mutex_unlock_instead_of_pthread_mutex_unlock
#define pthread_mutex_trylock use_ast_mutex_trylock_instead_of_pthread_mutex_trylock
#define pthread_mutex_init use_ast_mutex_init_instead_of_pthread_mutex_init
#define pthread_mutex_destroy use_ast_mutex_destroy_instead_of_pthread_mutex_destroy
#define pthread_cond_init use_ast_cond_init_instead_of_pthread_cond_init
#define pthread_cond_destroy use_ast_cond_destroy_instead_of_pthread_cond_destroy
#define pthread_cond_signal use_ast_cond_signal_instead_of_pthread_cond_signal
#define pthread_cond_broadcast use_ast_cond_broadcast_instead_of_pthread_cond_broadcast
#define pthread_cond_wait use_ast_cond_wait_instead_of_pthread_cond_wait
#define pthread_cond_timedwait use_ast_cond_timedwait_instead_of_pthread_cond_timedwait

#define AST_MUTEX_INITIALIZER __use_AST_MUTEX_DEFINE_STATIC_rather_than_AST_MUTEX_INITIALIZER__

#define gethostbyname __gethostbyname__is__not__reentrant__use__ast_gethostbyname__instead__

#ifndef __linux__
#define pthread_create __use_ast_pthread_create_instead__
#endif

/*!
 * \brief Support for atomic instructions.
 *
 * These macros implement a uniform interface to use built-in atomic functionality.
 * If available, __atomic built-ins are preferred. Legacy __sync built-ins are used
 * as a fallback for older compilers.
 *
 * Detailed documentation can be found in the GCC manual. All APIs are modeled after
 * the __atomic interfaces but use the namespace ast_atomic.
 *
 * The memorder argument is always ignored by legacy __sync functions. Invalid
 * memorder arguments do not produce errors unless __atomic functions are supported,
 * as the argument is erased by the preprocessor.
 *
 * \note ast_atomic_fetch_nand and ast_atomic_nand_fetch purposely do not exist.
 * Their implementation was broken prior to gcc-4.4.
 *
 * @{
 */
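
/*!
 * A minimal sketch of the interface (the counter variable and functions are
 * hypothetical; the __ATOMIC_* memory-order constants are the ones described
 * in the GCC manual):
 *
 * \code
 * static int active_calls;
 *
 * static void call_started(void)
 * {
 *     // Returns the previous value, like __atomic_fetch_add().
 *     int previously_active = ast_atomic_fetch_add(&active_calls, 1, __ATOMIC_RELAXED);
 *
 *     if (previously_active == 0) {
 *         ast_log(LOG_NOTICE, "First concurrent call\n");
 *     }
 * }
 *
 * static void call_ended(void)
 * {
 *     // Returns the new value, like __atomic_sub_fetch().
 *     if (ast_atomic_sub_fetch(&active_calls, 1, __ATOMIC_RELAXED) == 0) {
 *         ast_log(LOG_NOTICE, "Last concurrent call ended\n");
 *     }
 * }
 * \endcode
 *
 * With the legacy __sync fallback the memorder argument is simply discarded,
 * as noted above.
 */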

#include "asterisk/inline_api.h"

#if defined(HAVE_C_ATOMICS)

/*! Atomic += */
#define ast_atomic_fetch_add(ptr, val, memorder) __atomic_fetch_add((ptr), (val), (memorder))
#define ast_atomic_add_fetch(ptr, val, memorder) __atomic_add_fetch((ptr), (val), (memorder))
/*! Atomic -= */
#define ast_atomic_fetch_sub(ptr, val, memorder) __atomic_fetch_sub((ptr), (val), (memorder))
#define ast_atomic_sub_fetch(ptr, val, memorder) __atomic_sub_fetch((ptr), (val), (memorder))
/*! Atomic &= */
#define ast_atomic_fetch_and(ptr, val, memorder) __atomic_fetch_and((ptr), (val), (memorder))
#define ast_atomic_and_fetch(ptr, val, memorder) __atomic_and_fetch((ptr), (val), (memorder))
/*! Atomic |= */
#define ast_atomic_fetch_or(ptr, val, memorder) __atomic_fetch_or((ptr), (val), (memorder))
#define ast_atomic_or_fetch(ptr, val, memorder) __atomic_or_fetch((ptr), (val), (memorder))
/*! Atomic xor= */
#define ast_atomic_fetch_xor(ptr, val, memorder) __atomic_fetch_xor((ptr), (val), (memorder))
#define ast_atomic_xor_fetch(ptr, val, memorder) __atomic_xor_fetch((ptr), (val), (memorder))

#if 0
/* Atomic compare and swap
 *
 * See comments near the __sync implementation for why this is disabled.
 */
#define ast_atomic_compare_exchange_n(ptr, expected, desired, success_memorder, failure_memorder) \
	__atomic_compare_exchange_n((ptr), (expected), (desired), 0, success_memorder, failure_memorder)
#define ast_atomic_compare_exchange(ptr, expected, desired, success_memorder, failure_memorder) \
	__atomic_compare_exchange((ptr), (expected), (desired), 0, success_memorder, failure_memorder)
#endif

#elif defined(HAVE_GCC_ATOMICS)

/*! Atomic += */
#define ast_atomic_fetch_add(ptr, val, memorder) __sync_fetch_and_add((ptr), (val))
#define ast_atomic_add_fetch(ptr, val, memorder) __sync_add_and_fetch((ptr), (val))
/*! Atomic -= */
#define ast_atomic_fetch_sub(ptr, val, memorder) __sync_fetch_and_sub((ptr), (val))
#define ast_atomic_sub_fetch(ptr, val, memorder) __sync_sub_and_fetch((ptr), (val))
/*! Atomic &= */
#define ast_atomic_fetch_and(ptr, val, memorder) __sync_fetch_and_and((ptr), (val))
#define ast_atomic_and_fetch(ptr, val, memorder) __sync_and_and_fetch((ptr), (val))
/*! Atomic |= */
#define ast_atomic_fetch_or(ptr, val, memorder) __sync_fetch_and_or((ptr), (val))
#define ast_atomic_or_fetch(ptr, val, memorder) __sync_or_and_fetch((ptr), (val))
/*! Atomic xor= */
#define ast_atomic_fetch_xor(ptr, val, memorder) __sync_fetch_and_xor((ptr), (val))
#define ast_atomic_xor_fetch(ptr, val, memorder) __sync_xor_and_fetch((ptr), (val))

#if 0
/* Atomic compare and swap
 *
 * The \a expected argument is a pointer. I'm guessing __atomic built-ins
 * perform all memory reads/writes in a single atomic operation. I don't
 * believe this is possible to exactly replicate using __sync built-ins.
 * Will need to determine potential use cases of this feature and write a
 * wrapper which provides consistent behavior between __sync and __atomic
 * implementations.
 */
#define ast_atomic_compare_exchange_n(ptr, expected, desired, success_memorder, failure_memorder) \
	__sync_bool_compare_and_swap((ptr), *(expected), (desired))
#define ast_atomic_compare_exchange(ptr, expected, desired, success_memorder, failure_memorder) \
	__sync_bool_compare_and_swap((ptr), *(expected), *(desired))
#endif

#else
#error "Atomics not available."
#endif

/*! Atomic flag set */
#define ast_atomic_flag_set(ptr, val, memorder) ast_atomic_fetch_or((ptr), (val), (memorder))
/*! Atomic flag clear */
#define ast_atomic_flag_clear(ptr, val, memorder) ast_atomic_fetch_and((ptr), ~(val), (memorder))

/*!
 * \brief Atomically add v to *p and return the previous value of *p.
 *
 * This can be used to handle reference counts, and the return value
 * can be used to generate unique identifiers.
 */
AST_INLINE_API(int ast_atomic_fetchadd_int(volatile int *p, int v),
{
	return ast_atomic_fetch_add(p, v, __ATOMIC_RELAXED);
})
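
/*!
 * A minimal sketch of the unique-identifier use mentioned above (the counter
 * and function names are hypothetical):
 *
 * \code
 * static int next_id;
 *
 * static int allocate_id(void)
 * {
 *     return ast_atomic_fetchadd_int(&next_id, 1);
 * }
 * \endcode
 *
 * Each caller gets a distinct value because the read and the increment happen
 * as a single atomic operation.
 */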

/*!
 * \brief decrement *p by 1 and return true if the variable has reached 0.
 *
 * Useful e.g. to check if a refcount has reached 0.
 */
AST_INLINE_API(int ast_atomic_dec_and_test(volatile int *p),
{
	return ast_atomic_sub_fetch(p, 1, __ATOMIC_RELAXED) == 0;
})
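
/*!
 * A minimal sketch of the refcount use mentioned above (struct my_obj, its
 * refcount field and my_obj_destroy() are hypothetical):
 *
 * \code
 * static void my_obj_ref(struct my_obj *obj)
 * {
 *     ast_atomic_fetchadd_int(&obj->refcount, 1);
 * }
 *
 * static void my_obj_unref(struct my_obj *obj)
 * {
 *     if (ast_atomic_dec_and_test(&obj->refcount)) {
 *         my_obj_destroy(obj);
 *     }
 * }
 * \endcode
 */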

#if defined(__cplusplus) || defined(c_plusplus)
}
#endif

/*! @} */

#endif /* _ASTERISK_LOCK_H */